Compare commits


1 Commit

Author: Juan Font
SHA1: 0d98493360
Message: Reduced the number of containers in integration tests
Date: 2022-04-30 21:14:56 +00:00
274 changed files with 19484 additions and 42355 deletions

.github/FUNDING.yml (5 lines changed)

@@ -1,3 +1,2 @@
# These are supported funding model platforms
ko_fi: headscale
ko_fi: kradalby
github: [kradalby]


@@ -6,24 +6,19 @@ labels: ["bug"]
assignees: ""
---
<!--
Before posting a bug report, discuss the behaviour you are expecting with the Discord community
to make sure that it is truly a bug.
The issue tracker is not the place to ask for support or how to set up Headscale.
<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
Bug reports without sufficient information will be closed.
Headscale is a multinational community across the globe. Our language is English.
All bug reports need to be in English.
-->
## Bug description
**Bug description**
<!-- A clear and concise description of what the bug is. Describe the expected behavior
and how it is currently different. If you are unsure if it is a bug, consider discussing
it on our Discord server first. -->
## Environment
**To Reproduce**
<!-- Steps to reproduce the behavior. -->
**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
@@ -33,33 +28,3 @@ All bug reports needs to be in English.
- The relevant config parameters you used
- Log output
-->
- OS:
- Headscale version:
- Tailscale version:
<!--
We do not support running Headscale in a container nor behind a (reverse) proxy.
If either of these are true for your environment, ask the community in Discord
instead of filing a bug report.
-->
- [ ] Headscale is behind a (reverse) proxy
- [ ] Headscale runs in a container
## To Reproduce
<!-- Steps to reproduce the behavior. -->
## Logs and attachments
<!-- Please attach files with:
- Client netmap dump (see below)
- ACL configuration
- Headscale configuration
Dump the netmap of tailscale clients:
`tailscale debug netmap > DESCRIPTIVE_NAME.json`
Please provide information describing the netmap, which client, which headscale version etc.
-->


@@ -6,21 +6,12 @@ labels: ["enhancement"]
assignees: ""
---
<!--
We typically have a clear roadmap for what we want to improve and reserve the right
to close feature requests that do not fit the roadmap or the scope
of the project, or that we actually want to implement ourselves.
<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
Headscale is a multinational community across the globe. Our language is English.
All bug reports need to be in English.
-->
## Why
<!-- Include the reason why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->
## Description
**Feature request**
<!-- A clear and precise description of what new or changed feature you want. -->
<!-- Please include the reason why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->

.github/ISSUE_TEMPLATE/other_issue.md (30 lines changed)

@@ -0,0 +1,30 @@
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---
<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->
<!-- If you have a question, please consider using our Discord for asking questions -->
**Issue description**
<!-- Please add your issue description. -->
**To Reproduce**
<!-- Steps to reproduce the behavior. -->
**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->


@@ -1,18 +1,6 @@
<!--
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.
Headscale is open to code contributions for bug fixes without discussion.
If you find mistakes in the documentation, please submit a fix to the documentation.
-->
<!-- Please tick if the following things apply. You… -->
- [ ] read the [CONTRIBUTING guidelines](README.md#contributing)
- [ ] read the [CONTRIBUTING guidelines](README.md#user-content-contributing)
- [ ] raised a GitHub issue or discussed it on the projects chat beforehand
- [ ] added unit tests
- [ ] added integration tests

.github/renovate.json (26 lines changed)

@@ -6,27 +6,31 @@
"onboarding": false,
"extends": ["config:base", ":rebaseStalePrs"],
"ignorePresets": [":prHourlyLimit2"],
"enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
"enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
"includeForks": true,
"repositories": ["juanfont/headscale"],
"platform": "github",
"packageRules": [
{
"matchDatasources": ["go"],
"groupName": "Go modules",
"groupSlug": "gomod",
"separateMajorMinor": false
"matchDatasources": ["go"],
"groupName": "Go modules",
"groupSlug": "gomod",
"separateMajorMinor": false
},
{
"matchDatasources": ["docker"],
"groupName": "Dockerfiles",
"groupSlug": "dockerfiles"
}
"matchDatasources": ["docker"],
"groupName": "Dockerfiles",
"groupSlug": "dockerfiles"
}
],
"regexManagers": [
{
"fileMatch": [".github/workflows/.*.yml$"],
"matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
"fileMatch": [
".github/workflows/.*.yml$"
],
"matchStrings": [
"\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
],
"datasourceTemplate": "golang-version",
"depNameTemplate": "actions/go-version"
}


@@ -8,64 +8,35 @@ on:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
permissions: write-all
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
uses: tj-actions/changed-files@v14.1
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run build
id: build
if: steps.changed-files.outputs.files == 'true'
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
if: steps.changed-files.outputs.any_changed == 'true'
run: nix build
OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
exit $BUILD_STATUS
- name: Nix gosum diverging
uses: actions/github-script@v6
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
github.rest.pulls.createReviewComment({
pull_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
- uses: actions/upload-artifact@v4
if: steps.changed-files.outputs.files == 'true'
- uses: actions/upload-artifact@v2
if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux
path: result/bin/headscale


@@ -1,41 +0,0 @@
name: Check integration tests workflow
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- name: Generate and check integration tests
if: steps.changed-files.outputs.files == 'true'
run: |
nix develop --command bash -c "cd cmd/gh-action-integration-generator/ && go generate"
git diff --exit-code .github/workflows/test-integration.yaml
- name: Show missing tests
if: failure()
run: |
git diff .github/workflows/test-integration.yaml

.github/workflows/contributors.yml (35 lines changed)

@@ -0,0 +1,35 @@
name: Contributors
on:
push:
branches:
- main
workflow_dispatch:
jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Delete upstream contributor branch
# Allow continue on failure to account for when the
# upstream branch is deleted or does not exist.
continue-on-error: true
run: git push origin --delete update-contributors
- name: Create up-to-date contributors branch
run: git checkout -B update-contributors
- name: Push empty contributors branch
run: git push origin update-contributors
- name: Switch back to main
run: git checkout main
- uses: BobAnkh/add-contributors@v0.2.2
with:
CONTRIBUTOR: "## Contributors"
COLUMN_PER_ROW: "6"
ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
IMG_WIDTH: "100"
FONT_SIZE: "14"
PATH: "/README.md"
COMMIT_MESSAGE: "docs(README): update contributors"
AVATAR_SHAPE: "round"
BRANCH: "update-contributors"
PULL_REQUEST: "main"


@@ -1,27 +0,0 @@
name: Test documentation build
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict


@@ -1,52 +0,0 @@
name: Build documentation
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: ./site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
permissions:
pages: write
id-token: write
runs-on: ubuntu-latest
needs: build
steps:
- name: Configure Pages
uses: actions/configure-pages@v4
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4


@@ -1,22 +0,0 @@
name: GitHub Actions Version Updater
on:
schedule:
# Automatically run on every Sunday
- cron: "0 0 * * 0"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
- name: Run GitHub Actions Version Updater
uses: saadmk11/github-actions-version-updater@v0.8.1
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}


@@ -1,75 +1,76 @@
name: Lint
---
name: CI
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
on: [push, pull_request]
jobs:
golangci-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
uses: tj-actions/changed-files@v14.1
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: golangci-lint
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- golangci-lint run --new-from-rev=${{github.event.pull_request.base.sha}} --out-format=github-actions .
if: steps.changed-files.outputs.any_changed == 'true'
uses: golangci/golangci-lint-action@v2
with:
version: latest
# Only block PRs on new problems.
# If this is not enabled, we will end up having PRs
# blocked because new linters have appeared and other
# parts of the code are affected.
only-new-issues: true
prettier-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
uses: tj-actions/changed-files@v14.1
with:
filters: |
files:
- '*.nix'
- '**/*.md'
- '**/*.yml'
- '**/*.yaml'
- '**/*.ts'
- '**/*.js'
- '**/*.sass'
- '**/*.css'
- '**/*.scss'
- '**/*.html'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
files: |
*.nix
**/*.md
**/*.yml
**/*.yaml
**/*.ts
**/*.js
**/*.sass
**/*.css
**/*.scss
**/*.html
- name: Prettify code
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- prettier --no-error-on-unmatched-pattern --ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
if: steps.changed-files.outputs.any_changed == 'true'
uses: creyD/prettier_action@v4.0
with:
prettier_options: >-
--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
only_changed: false
dry: true
proto-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- name: Buf lint
run: nix develop --command -- buf lint proto
- uses: actions/checkout@v2
- uses: bufbuild/buf-setup-action@v0.7.0
- uses: bufbuild/buf-lint-action@v1
with:
input: "proto"


@@ -1,5 +1,5 @@
---
name: Release
name: release
on:
push:
@@ -9,30 +9,215 @@ on:
jobs:
goreleaser:
runs-on: ubuntu-18.04 # due to CGO we need to use an older version
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.18.0
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y gcc-aarch64-linux-gnu
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v2
with:
distribution: goreleaser
version: latest
args: release --rm-dist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=raw,value=latest
type=sha
- name: Login to DockerHub
uses: docker/login-action@v3
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v3
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
latest=false
tags: |
type=semver,pattern={{version}}-debug
type=semver,pattern={{major}}.{{minor}}-debug
type=semver,pattern={{major}}-debug
type=raw,value=latest-debug
type=sha,suffix=-debug
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug
- name: Run goreleaser
run: nix develop --command -- goreleaser release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
docker-alpine-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-alpine
key: ${{ runner.os }}-buildx-alpine-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-alpine-
- name: Docker meta
id: meta-alpine
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
latest=false
tags: |
type=semver,pattern={{version}}-alpine
type=semver,pattern={{major}}.{{minor}}-alpine
type=semver,pattern={{major}}-alpine
type=raw,value=latest-alpine
type=sha,suffix=-alpine
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.alpine
tags: ${{ steps.meta-alpine.outputs.tags }}
labels: ${{ steps.meta-alpine.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-alpine
cache-to: type=local,dest=/tmp/.buildx-cache-alpine-new
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-alpine
mv /tmp/.buildx-cache-alpine-new /tmp/.buildx-cache-alpine

.github/workflows/renovatebot.yml (27 lines changed)

@@ -0,0 +1,27 @@
---
name: Renovate
on:
schedule:
- cron: "* * 5,20 * *" # Every 5th and 20th of the month
workflow_dispatch:
jobs:
renovate:
runs-on: ubuntu-latest
steps:
- name: Get token
id: get_token
uses: machine-learning-apps/actions-app-token@master
with:
APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}
- name: Checkout
uses: actions/checkout@v2.0.0
- name: Self-hosted Renovate
uses: renovatebot/github-action@v31.81.3
with:
configurationFile: .github/renovate.json
token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
# env:
# LOG_LEVEL: "debug"


@@ -1,23 +0,0 @@
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:
days-before-issue-stale: 90
days-before-issue-close: 7
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 90 days with no activity."
close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -1,114 +0,0 @@
name: Integration Tests
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
integration-test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
test:
- TestACLHostsInNetMapTable
- TestACLAllowUser80Dst
- TestACLDenyAllPort80
- TestACLAllowUserDst
- TestACLAllowStarDst
- TestACLNamedHostsCanReachBySubnet
- TestACLNamedHostsCanReach
- TestACLDevice1CanAccessDevice2
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndRelogin
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
- TestPreAuthKeyCommandReusableEphemeral
- TestApiKeyCommand
- TestNodeTagCommand
- TestNodeAdvertiseTagNoACLCommand
- TestNodeAdvertiseTagWithACLCommand
- TestNodeCommand
- TestNodeExpireCommand
- TestNodeRenameCommand
- TestNodeMoveCommand
- TestDERPServerScenario
- TestPingAllByIP
- TestPingAllByIPPublicDERP
- TestAuthKeyLogoutAndRelogin
- TestEphemeral
- TestPingAllByHostname
- TestTaildrop
- TestResolveMagicDNS
- TestExpireNode
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- TestEnablingRoutes
- TestHASubnetRouterFailover
- TestEnableDisableAutoApprovedRoute
- TestSubnetRouteACL
- TestHeadscale
- TestCreateTailscale
- TestTailscaleNodesJoiningHeadcale
- TestSSHOneUserToAll
- TestSSHMultipleUsersAllToAll
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
database: [postgres, sqlite]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: satackey/action-docker-layer-caching@main
if: steps.changed-files.outputs.files == 'true'
continue-on-error: true
- name: Run Integration Test
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.files == 'true'
env:
USE_POSTGRES: ${{ matrix.database == 'postgres' && '1' || '0' }}
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
--env HEADSCALE_INTEGRATION_POSTGRES=${{env.USE_POSTGRES}} \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^${{ matrix.test }}$"
- uses: actions/upload-artifact@v4
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ matrix.test }}-${{matrix.database}}-logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v4
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ matrix.test }}-${{matrix.database}}-pprof
path: "control_logs/*.pprof.tar"

.github/workflows/test-integration.yml (30 lines changed)

@@ -0,0 +1,30 @@
name: CI
on: [pull_request]
jobs:
integration-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run Integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --command -- make test_integration


@@ -1,37 +1,30 @@
name: Tests
name: CI
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
uses: tj-actions/changed-files@v14.1
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run tests
if: steps.changed-files.outputs.files == 'true'
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --check


@@ -1,18 +0,0 @@
name: update-flake-lock
on:
workflow_dispatch: # allows manual triggering
schedule:
- cron: "0 0 * * 0" # runs weekly on Sunday at 00:00
jobs:
lockfile:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
- name: Update flake.lock
uses: DeterminateSystems/update-flake-lock@main
with:
pr-title: "Update flake.lock"

.gitignore (18 lines changed)

@@ -1,7 +1,3 @@
ignored/
tailscale/
.vscode/
# Binaries for programs and plugins
*.exe
*.exe~
@@ -16,9 +12,8 @@ tailscale/
*.out
# Dependency directories (remove the comment below to include it)
vendor/
# vendor/
dist/
/headscale
config.json
config.yaml
@@ -31,17 +26,8 @@ derp.yaml
# Exclude Jetbrains Editors
.idea
test_output/
control_logs/
test_output/
# Nix build output
result
.direnv/
integration_test/etc/config.dump.yaml
# MkDocs
.cache
/site
__debug_bin


@@ -1,8 +1,6 @@
---
run:
timeout: 10m
build-tags:
- ts2019
issues:
skip-dirs:
@@ -10,8 +8,6 @@ issues:
linters:
enable-all: true
disable:
- depguard
- exhaustivestruct
- revive
- lll
@@ -28,18 +24,6 @@ linters:
- tagliatelle
- godox
- ireturn
- execinquery
- exhaustruct
- nolintlint
- musttag # causes issues with imported libs
- depguard
# deprecated
- structcheck # replaced by unused
- ifshort # deprecated by the owner
- varcheck # replaced by unused
- nosnakecase # replaced by revive
- deadcode # replaced by unused
# We should strive to enable these:
- wrapcheck
@@ -47,9 +31,6 @@ linters:
- makezero
- maintidx
# Limits the methods of an interface to 10. We have more in integration tests
- interfacebloat
# We might want to enable this, but it might be a lot of work
- cyclop
- nestif


@@ -1,186 +1,76 @@
---
before:
hooks:
- go mod tidy -compat=1.22
- go mod vendor
- go mod tidy -compat=1.17
release:
prerelease: auto
builds:
- id: headscale
main: ./cmd/headscale
- id: darwin-amd64
main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
targets:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_386
- linux_amd64
- linux_arm64
- linux_arm_5
- linux_arm_6
- linux_arm_7
goos:
- darwin
goarch:
- amd64
flags:
- -mod=readonly
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
- id: linux-armhf
main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- arm
goarm:
- "7"
flags:
- -mod=readonly
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
- id: linux-amd64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- amd64
main: ./cmd/headscale/headscale.go
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
- id: linux-arm64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- arm64
main: ./cmd/headscale/headscale.go
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
archives:
- id: golang-cross
name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
builds:
- darwin-amd64
- linux-armhf
- linux-amd64
- linux-arm64
name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
format: binary
source:
enabled: true
name_template: "{{ .ProjectName }}_{{ .Version }}"
format: tar.gz
files:
- "vendor/"
nfpms:
# Configure nFPM for .deb and .rpm releases
#
# See https://nfpm.goreleaser.com/configuration/
# and https://goreleaser.com/customization/nfpm/
#
# Useful tools for debugging .debs:
# List file contents: dpkg -c dist/headscale...deb
# Package metadata: dpkg --info dist/headscale....deb
#
- builds:
- headscale
package_name: headscale
priority: optional
vendor: headscale
maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
homepage: https://github.com/juanfont/headscale
license: BSD
bindir: /usr/bin
formats:
- deb
contents:
- src: ./config-example.yaml
dst: /etc/headscale/config.yaml
type: config|noreplace
file_info:
mode: 0644
- src: ./docs/packaging/headscale.systemd.service
dst: /usr/lib/systemd/system/headscale.service
- dst: /var/lib/headscale
type: dir
- dst: /var/run/headscale
type: dir
scripts:
postinstall: ./docs/packaging/postinstall.sh
postremove: ./docs/packaging/postremove.sh
kos:
- id: ghcr
repository: ghcr.io/juanfont/headscale
# bare tells KO to only use the repository
# for tagging and naming the container.
bare: true
base_image: gcr.io/distroless/base-debian12
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
- id: dockerhub
build: headscale
base_image: gcr.io/distroless/base-debian12
repository: headscale/headscale
bare: true
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
- id: ghcr-debug
repository: ghcr.io/juanfont/headscale
bare: true
base_image: "debian:12"
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable-debug{{ end }}"
- "{{ .Tag }}-debug"
- '{{ trimprefix .Tag "v" }}-debug'
- "sha-{{ .ShortCommit }}-debug"
- id: dockerhub-debug
build: headscale
base_image: "debian:12"
repository: headscale/headscale
bare: true
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable-debug{{ end }}"
- "{{ .Tag }}-debug"
- '{{ trimprefix .Tag "v" }}-debug'
- "sha-{{ .ShortCommit }}-debug"
checksum:
name_template: "checksums.txt"
snapshot:


@@ -1,6 +0,0 @@
.github/workflows/test-integration-v2*
docs/dns-records.md
docs/running-headscale-container.md
docs/running-headscale-linux-manual.md
docs/running-headscale-linux.md
docs/running-headscale-openbsd.md


@@ -1,256 +1,13 @@
# CHANGELOG
## 0.23.0 (2023-XX-XX)
This release is mainly a code reorganisation and refactoring, significantly improving the maintainability of the codebase. This should allow us to improve further and make it easier for the maintainers to keep on top of the project.
**Please remember to always back up your database between versions**
#### Here is a short summary of the broad topics of changes:
Code has been organised into modules, reducing use of global variables/objects, isolating concerns and “putting the right things in the logical place”.
The new [policy](https://github.com/juanfont/headscale/tree/main/hscontrol/policy) and [mapper](https://github.com/juanfont/headscale/tree/main/hscontrol/mapper) packages, containing the ACL/Policy logic and the logic for creating the data served to clients (the network "map"), have been rewritten and improved. This change has allowed us to finish SSH support and add additional tests throughout the code to ensure correctness.
The ["poller", or streaming logic](https://github.com/juanfont/headscale/blob/main/hscontrol/poll.go) has been rewritten: instead of keeping track of the latest updates and checking at a fixed interval, it now uses Go channels, implemented in our new [notifier](https://github.com/juanfont/headscale/tree/main/hscontrol/notifier) package, which allows us to send updates to connected clients immediately. This should improve both performance and the latency before a client picks up an update.
Headscale now supports sending “delta” updates, thanks to the new mapper and poller logic, allowing us to only inform nodes about new nodes, changed nodes and removed nodes. Previously we sent the entire state of the network every time an update was due.
While we have a pretty good [test harness](https://github.com/search?q=repo%3Ajuanfont%2Fheadscale+path%3A_test.go&type=code) for validating our changes, we have rewritten over [10000 lines of code](https://github.com/juanfont/headscale/compare/b01f1f1867136d9b2d7b1392776eb363b482c525...main) and bugs are expected. We need help testing this release. In addition, while we think the performance should in general be better, there might be regressions in parts of the platform, particularly where we prioritised correctness over speed.
There are also several bugfixes that have been encountered and fixed as part of implementing these changes, particularly
after improving the test harness as part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460).
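As a rough illustration of the channel-based notifier described above, here is a minimal, hypothetical Go sketch of the fan-out pattern; the names (`Notifier`, `AddNode`, `NotifyAll`) and the `Update` payload are illustrative assumptions, not the actual headscale API.

```go
package main

import (
	"fmt"
	"sync"
)

// Update is a stand-in for the map/delta payload sent to a node
// (hypothetical; in headscale the real payload comes from the mapper).
type Update struct {
	Changed []string // names of nodes that changed
}

// Notifier fans updates out to every connected node over a channel,
// so streaming handlers can push new maps immediately instead of
// polling shared state on a fixed interval.
type Notifier struct {
	mu    sync.Mutex
	nodes map[string]chan Update
}

func NewNotifier() *Notifier {
	return &Notifier{nodes: make(map[string]chan Update)}
}

// AddNode registers a connected node and returns its update channel.
func (n *Notifier) AddNode(id string) <-chan Update {
	n.mu.Lock()
	defer n.mu.Unlock()
	ch := make(chan Update, 1)
	n.nodes[id] = ch
	return ch
}

// RemoveNode unregisters a node when its stream ends.
func (n *Notifier) RemoveNode(id string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	if ch, ok := n.nodes[id]; ok {
		close(ch)
		delete(n.nodes, id)
	}
}

// NotifyAll pushes an update to every connected node without blocking
// on slow receivers; a receiver that misses one picks up the next state.
func (n *Notifier) NotifyAll(u Update) {
	n.mu.Lock()
	defer n.mu.Unlock()
	for _, ch := range n.nodes {
		select {
		case ch <- u:
		default:
		}
	}
}

func main() {
	notif := NewNotifier()
	ch := notif.AddNode("node-1")
	notif.NotifyAll(Update{Changed: []string{"node-2"}})
	fmt.Println(<-ch) // {[node-2]}
	notif.RemoveNode("node-1")
}
```

In this sketch a long-poll handler would call `AddNode` when a client connects, stream whatever arrives on the returned channel, and call `RemoveNode` on disconnect; the non-blocking send in `NotifyAll` keeps one slow client from stalling updates to the rest.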
### BREAKING
- Code reorganisation, a lot of code has moved, please review the following PRs accordingly [#1473](https://github.com/juanfont/headscale/pull/1473)
- Change the structure of database configuration, see [config-example.yaml](./config-example.yaml) for the new structure. [#1700](https://github.com/juanfont/headscale/pull/1700)
- Old structure has been removed and the configuration _must_ be converted.
- Adds additional configuration for PostgreSQL for setting max open connections, idle connections and idle connection lifetime.
- API: Machine is now Node [#1553](https://github.com/juanfont/headscale/pull/1553)
- Remove support for older Tailscale clients [#1611](https://github.com/juanfont/headscale/pull/1611)
- The latest supported client is 1.38
- Headscale checks that _at least_ one DERP is defined at start [#1564](https://github.com/juanfont/headscale/pull/1564)
- If no DERP is configured, the server will fail to start, this can be because it cannot load the DERPMap from file or url.
- Embedded DERP server requires a private key [#1611](https://github.com/juanfont/headscale/pull/1611)
- Add a filepath entry to [`derp.server.private_key_path`](https://github.com/juanfont/headscale/blob/b35993981297e18393706b2c963d6db882bba6aa/config-example.yaml#L95)
- Docker images are now built with goreleaser (ko) [#1716](https://github.com/juanfont/headscale/pull/1716) [#1763](https://github.com/juanfont/headscale/pull/1763)
- Entrypoint of container image has changed from shell to headscale, requiring a change from `headscale serve` to `serve`
- `/var/lib/headscale` and `/var/run/headscale` are no longer created automatically, see [container docs](./docs/running-headscale-container.md)
- Prefixes are now defined per v4 and v6 range. [#1756](https://github.com/juanfont/headscale/pull/1756)
- `ip_prefixes` option is now `prefixes.v4` and `prefixes.v6`
- `prefixes.allocation` can be set to assign IPs at `sequential` or `random`. [#1869](https://github.com/juanfont/headscale/pull/1869)
## 0.16.0 (2022-xx-xx)
### Changes
- Use versioned migrations [#1644](https://github.com/juanfont/headscale/pull/1644)
- Make the OIDC callback page better [#1484](https://github.com/juanfont/headscale/pull/1484)
- SSH support [#1487](https://github.com/juanfont/headscale/pull/1487)
- State management has been improved [#1492](https://github.com/juanfont/headscale/pull/1492)
- Use error group handling to ensure tests actually pass [#1535](https://github.com/juanfont/headscale/pull/1535) based on [#1460](https://github.com/juanfont/headscale/pull/1460)
- Fix hang on SIGTERM [#1492](https://github.com/juanfont/headscale/pull/1492) taken from [#1480](https://github.com/juanfont/headscale/pull/1480)
- Send logs to stderr by default [#1524](https://github.com/juanfont/headscale/pull/1524)
- Fix [TS-2023-006](https://tailscale.com/security-bulletins/#ts-2023-006) security UPnP issue [#1563](https://github.com/juanfont/headscale/pull/1563)
- Turn off gRPC logging [#1640](https://github.com/juanfont/headscale/pull/1640) fixes [#1259](https://github.com/juanfont/headscale/issues/1259)
- Added the possibility to manually create a DERP-map entry which can be customized, instead of automatically creating it. [#1565](https://github.com/juanfont/headscale/pull/1565)
- Add support for deleting api keys [#1702](https://github.com/juanfont/headscale/pull/1702)
- Add command to backfill IP addresses for nodes missing IPs from configured prefixes. [#1869](https://github.com/juanfont/headscale/pull/1869)
- Log available update as warning [#1877](https://github.com/juanfont/headscale/pull/1877)
## 0.22.3 (2023-05-12)
### Changes
- Added missing ca-certificates in Docker image [#1463](https://github.com/juanfont/headscale/pull/1463)
## 0.22.2 (2023-05-10)
### Changes
- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
- Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
- Replace node filter logic, ensuring nodes with access can see each other [#1381](https://github.com/juanfont/headscale/pull/1381)
- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
- Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)
## 0.22.1 (2023-04-20)
### Changes
- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
## 0.22.0 (2023-04-20)
### Changes
- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
## 0.21.0 (2023-03-20)
### Changes
- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
## 0.20.0 (2023-02-03)
### Changes
- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
- defaults to 180 days like Tailscale SaaS
- adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
## 0.19.0 (2023-01-29)
### BREAKING
- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
- **BACKUP your database before upgrading**
- Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
## 0.18.0 (2023-01-14)
### Changes
- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
- Added an OIDC AllowGroups configuration option and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
## 0.17.1 (2022-12-05)
### Changes
- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
## 0.17.0 (2022-11-26)
### BREAKING
- `noise.private_key_path` has been added and is required for the new noise protocol.
- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)
### Important Changes
- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
- Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847)
- Please note that this support should be considered _partially_ implemented
- SSH ACLs status:
- Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
- Rejecting connections **is not supported**, meaning that if you enable SSH, then assume that _all_ `ssh` connections **will be allowed**.
- If you decide to try this feature, please carefully manage permissions by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your clients.
- We are currently improving our testing of the SSH ACLs, help us get an overview by testing and giving feedback.
- This feature should be considered dangerous and it is disabled by default. Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
### Changes
- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)
- Give a warning when running Headscale with reverse proxy improperly configured for WebSockets [#788](https://github.com/juanfont/headscale/pull/788)
- Fix subnet routers with Primary Routes [#811](https://github.com/juanfont/headscale/pull/811)
- Added support for JSON logs [#653](https://github.com/juanfont/headscale/issues/653)
- Sanitise the node key passed to registration url [#823](https://github.com/juanfont/headscale/pull/823)
- Add support for generating pre-auth keys with tags [#767](https://github.com/juanfont/headscale/pull/767)
- Add support for evaluating `autoApprovers` ACL entries when a machine is registered [#763](https://github.com/juanfont/headscale/pull/763)
- Add config flag to allow Headscale to start if OIDC provider is down [#829](https://github.com/juanfont/headscale/pull/829)
- Fix prefix length comparison bug in AutoApprovers route evaluation [#862](https://github.com/juanfont/headscale/pull/862)
- Random node DNS suffix only applied if names collide in namespace. [#766](https://github.com/juanfont/headscale/issues/766)
- Remove `ip_prefix` configuration option and warning [#899](https://github.com/juanfont/headscale/pull/899)
- Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
- Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
- Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971)
- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927)
## 0.16.4 (2022-08-21)
### Changes
- Add ability to connect to PostgreSQL over TLS/SSL [#745](https://github.com/juanfont/headscale/pull/745)
- Fix CLI registration of expired machines [#754](https://github.com/juanfont/headscale/pull/754)
## 0.16.3 (2022-08-17)
### Changes
- Fix issue with OIDC authentication [#747](https://github.com/juanfont/headscale/pull/747)
## 0.16.2 (2022-08-14)
### Changes
- Fixed bugs in the client registration process after migration to NodeKey [#735](https://github.com/juanfont/headscale/pull/735)
## 0.16.1 (2022-08-12)
### Changes
- Updated dependencies (including the library that lacked armhf support) [#722](https://github.com/juanfont/headscale/pull/722)
- Fix missing group expansion in function `excludeCorretlyTaggedNodes` [#563](https://github.com/juanfont/headscale/issues/563)
- Improve registration protocol implementation and switch to NodeKey as main identifier [#725](https://github.com/juanfont/headscale/pull/725)
- Add ability to connect to PostgreSQL via unix socket [#734](https://github.com/juanfont/headscale/pull/734)
## 0.16.0 (2022-07-25)
**Note:** Take a backup of your database before upgrading.
### BREAKING
- Old ACL syntax is no longer supported ("users" & "ports" -> "src" & "dst"). Please check [the new syntax](https://tailscale.com/kb/1018/acls/).
### Changes
- **Drop** armhf (32-bit ARM) support. [#609](https://github.com/juanfont/headscale/pull/609)
- Headscale fails to serve if the ACL policy file cannot be parsed [#537](https://github.com/juanfont/headscale/pull/537)
- Fix labels cardinality error when registering unknown pre-auth key [#519](https://github.com/juanfont/headscale/pull/519)
- Fix send on closed channel crash in polling [#542](https://github.com/juanfont/headscale/pull/542)
- Fixed spurious calls to setLastStateChangeToNow from ephemeral nodes [#566](https://github.com/juanfont/headscale/pull/566)
- Add command for moving nodes between namespaces [#362](https://github.com/juanfont/headscale/issues/362)
- Added more configuration parameters for OpenID Connect (scopes, free-form parameters, domain and user allowlist)
- Add command to set tags on a node [#525](https://github.com/juanfont/headscale/issues/525)
- Add command to view tags of nodes [#356](https://github.com/juanfont/headscale/issues/356)
- Add --all (-a) flag to enable routes command [#360](https://github.com/juanfont/headscale/issues/360)
- Fix issue where nodes were not updated across namespaces [#560](https://github.com/juanfont/headscale/pull/560)
- Add the ability to rename a node [#560](https://github.com/juanfont/headscale/pull/560)
- Node DNS names are now unique, a random suffix will be added when a node joins
- This change contains database changes, remember to **backup** your database before upgrading
- Add option to enable/disable logtail (Tailscale's logging infrastructure) [#596](https://github.com/juanfont/headscale/pull/596)
- This change disables the logs by default
- Use [Prometheus]'s duration parser, supporting days (`d`), weeks (`w`) and years (`y`) [#598](https://github.com/juanfont/headscale/pull/598)
- Add support for reloading ACLs with SIGHUP [#601](https://github.com/juanfont/headscale/pull/601)
- Use new ACL syntax [#618](https://github.com/juanfont/headscale/pull/618)
- Add -c option to specify config file from command line [#285](https://github.com/juanfont/headscale/issues/285) [#612](https://github.com/juanfont/headscale/pull/601)
- Add configuration option to allow Tailscale clients to use a random WireGuard port. [kb/1181/firewalls](https://tailscale.com/kb/1181/firewalls) [#624](https://github.com/juanfont/headscale/pull/624)
- Improve obtuse UX regarding missing configuration (`ephemeral_node_inactivity_timeout` not set) [#639](https://github.com/juanfont/headscale/pull/639)
- Fix nodes being shown as 'offline' in `tailscale status` [#648](https://github.com/juanfont/headscale/pull/648)
- Improve shutdown behaviour [#651](https://github.com/juanfont/headscale/pull/651)
- Drop Gin as web framework in Headscale [648](https://github.com/juanfont/headscale/pull/648) [677](https://github.com/juanfont/headscale/pull/677)
- Make tailnet node updates check interval configurable [#675](https://github.com/juanfont/headscale/pull/675)
- Fix regression with HTTP API [#684](https://github.com/juanfont/headscale/pull/684)
- nodes ls now prints both Hostname and Name (Issue [#647](https://github.com/juanfont/headscale/issues/647) PR [#687](https://github.com/juanfont/headscale/pull/687))
## 0.15.0 (2022-03-20)
@@ -321,7 +78,7 @@ This is a part of aligning `headscale`'s behaviour with Tailscale's upstream beh
- OpenID Connect users will be mapped per namespaces
- Each user will get its own namespace, created if it does not exist
- `oidc.domain_map` option has been removed
- `strip_email_domain` option has been added (see [config-example.yaml](./config-example.yaml))
- `strip_email_domain` option has been added (see [config-example.yaml](./config_example.yaml))
### Changes


@@ -1,134 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation
in our community a harassment-free experience for everyone, regardless
of age, body size, visible or invisible disability, ethnicity, sex
characteristics, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance,
race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open,
welcoming, diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for
our community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our
mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or
political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in
a professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our
standards of acceptable behavior and will take appropriate and fair
corrective action in response to any behavior that they deem
inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit,
or reject comments, commits, code, wiki edits, issues, and other
contributions that are not aligned to this Code of Conduct, and will
communicate reasons for moderation decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also
applies when an individual is officially representing the community in
public spaces. Examples of representing our community include using an
official e-mail address, posting via an official social media account,
or acting as an appointed representative at an online or offline
event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at our Discord channel. All complaints
will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and
security of the reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in
determining the consequences for any action they deem in violation of
this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior
deemed unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders,
providing clarity around the nature of the violation and an
explanation of why the behavior was inappropriate. A public apology
may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued
behavior. No interaction with the people involved, including
unsolicited interaction with those enforcing the Code of Conduct, for
a specified period of time. This includes avoiding interactions in
community spaces as well as external channels like social
media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards,
including sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or
public communication with the community for a specified period of
time. No public or private interaction with the people involved,
including unsolicited interaction with those enforcing the Code of
Conduct, is allowed during this period. Violating these terms may lead
to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of
community standards, including sustained inappropriate behavior,
harassment of an individual, or aggression toward or disparagement of
classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction
within the community.
## Attribution
This Code of Conduct is adapted from the [Contributor
Covenant][homepage], version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of
conduct enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the
FAQ at https://www.contributor-covenant.org/faq. Translations are
available at https://www.contributor-covenant.org/translations.

22
Dockerfile Normal file
View File

@@ -0,0 +1,22 @@
# Builder image
FROM docker.io/golang:1.18.0-bullseye AS build
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale
# Production image
FROM gcr.io/distroless/base-debian11
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp
CMD ["headscale"]

23
Dockerfile.alpine Normal file
View File

@@ -0,0 +1,23 @@
# Builder image
FROM docker.io/golang:1.18.0-alpine AS build
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN apk add gcc musl-dev
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale
# Production image
FROM docker.io/alpine:latest
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp
CMD ["headscale"]

View File

@@ -1,9 +1,5 @@
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official or supported release or distribution.
FROM docker.io/golang:1.22-bookworm AS build
ARG VERSION=dev
# Builder image
FROM docker.io/golang:1.18.0-bullseye AS build
ENV GOPATH /go
WORKDIR /go/src/headscale
@@ -12,21 +8,15 @@ RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN GGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale
RUN test -e /go/bin/headscale
# Debug image
FROM docker.io/golang:1.22-bookworm
FROM gcr.io/distroless/base-debian11:debug
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
RUN apt-get update \
&& apt-get install --no-install-recommends --yes less jq \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
RUN mkdir -p /var/run/headscale
# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp

17
Dockerfile.tailscale Normal file
View File

@@ -0,0 +1,17 @@
FROM ubuntu:latest
ARG TAILSCALE_VERSION=*
ARG TAILSCALE_CHANNEL=stable
RUN apt-get update \
&& apt-get install -y gnupg curl \
&& curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
&& curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
&& apt-get update \
&& apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
&& rm -rf /var/lib/apt/lists/*
ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt
RUN update-ca-certificates

View File

@@ -1,21 +1,21 @@
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official or supported release or distribution.
FROM golang:latest
RUN apt-get update \
&& apt-get install -y dnsutils git iptables ssh ca-certificates \
&& apt-get install -y ca-certificates dnsutils git iptables \
&& rm -rf /var/lib/apt/lists/*
RUN useradd --shell=/bin/bash --create-home ssh-it-user
RUN git clone https://github.com/tailscale/tailscale.git
WORKDIR /go/tailscale
WORKDIR tailscale
RUN git checkout main \
&& sh build_dist.sh tailscale.com/cmd/tailscale \
&& sh build_dist.sh tailscale.com/cmd/tailscaled \
&& cp tailscale /usr/local/bin/ \
&& cp tailscaled /usr/local/bin/
RUN sh build_dist.sh tailscale.com/cmd/tailscale
RUN sh build_dist.sh tailscale.com/cmd/tailscaled
RUN cp tailscale /usr/local/bin/
RUN cp tailscaled /usr/local/bin/
ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt
RUN update-ca-certificates

View File

@@ -1,15 +1,8 @@
# Calculate version
version ?= $(shell git describe --always --tags --dirty)
version = $(git describe --always --tags --dirty)
rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
# Determine if OS supports pie
GOOS ?= $(shell uname | tr '[:upper:]' '[:lower:]')
ifeq ($(filter $(GOOS), openbsd netbsd solaris plan9), )
pieflags = -buildmode=pie
else
endif
# GO_SOURCES = $(wildcard *.go)
# PROTO_SOURCES = $(wildcard **/*.proto)
GO_SOURCES = $(call rwildcard,,*.go)
@@ -17,22 +10,27 @@ PROTO_SOURCES = $(call rwildcard,,*.proto)
build:
nix build
CGO_ENABLED=0 go build -trimpath -buildmode=pie -mod=readonly -ldflags "-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$(version)" cmd/headscale/headscale.go
dev: lint test build
test:
gotestsum -- -short -coverprofile=coverage.out ./...
@go test -coverprofile=coverage.out ./...
test_integration:
docker run \
-t --rm \
-v ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
-v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \
golang:1 \
go run gotest.tools/gotestsum@latest -- -failfast ./... -timeout 120m -parallel 8
go test -failfast -tags integration -timeout 30m -count=1 ./...
test_integration_cli:
go test -tags integration -v integration_cli_test.go integration_common_test.go
test_integration_derp:
go test -tags integration -v integration_embedded_derp_test.go integration_common_test.go
coverprofile_func:
go tool cover -func=coverage.out
coverprofile_html:
go tool cover -html=coverage.out
lint:
golangci-lint run --fix --timeout 10m
@@ -50,4 +48,11 @@ compress: build
generate:
rm -rf gen
buf generate proto
go run github.com/bufbuild/buf/cmd/buf generate proto
install-protobuf-plugins:
go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc

490
README.md
View File

@@ -1,4 +1,4 @@
![headscale logo](./docs/logo/headscale3_header_stacked_left.png)
# headscale
![ci](https://github.com/juanfont/headscale/actions/workflows/test.yml/badge.svg)
@@ -28,22 +28,26 @@ and exposes the advertised routes of your nodes.
A [Tailscale network (tailnet)](https://tailscale.com/kb/1136/tailnet/) is a private
network which Tailscale assigns to a user, be it a private user or an
organisation.
organisations.
## Design goal
Headscale aims to implement a self-hosted, open source alternative to the Tailscale
control server.
Headscale's goal is to provide self-hosters and hobbyists with an open-source
server they can use for their projects and labs.
It implements a narrow scope, a single Tailnet, suitable for personal use or a small
open-source organisation.
`headscale` aims to implement a self-hosted, open source alternative to the Tailscale
control server. `headscale` has a narrower scope and an instance of `headscale`
implements a _single_ Tailnet, which is typically what a single organisation, or
home/personal setup would use.
## Supporting Headscale
`headscale` uses terms that map to Tailscale's control server; consult the
[glossary](./docs/glossary.md) for explanations.
## Support
If you like `headscale` and find it useful, there are sponsorship and donation
buttons available in the repo.
If you would like to sponsor features, bugs or prioritisation, reach out to
one of the maintainers.
## Features
- Full "base" support of Tailscale's features
@@ -63,50 +67,27 @@ buttons available in the repo.
## Client OS support
| OS | Supports headscale |
| ------- | --------------------------------------------------------- |
| Linux | Yes |
| OpenBSD | Yes |
| FreeBSD | Yes |
| macOS | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes [docs](./docs/windows-client.md) |
| Android | Yes [docs](./docs/android-client.md) |
| iOS | Yes [docs](./docs/iOS-client.md) |
| OS | Supports headscale |
| ------- | ----------------------------------------------------------------------------------------------------------------- |
| Linux | Yes |
| OpenBSD | Yes |
| FreeBSD | Yes |
| macOS | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes [docs](./docs/windows-client.md) |
| Android | [You need to compile the client yourself](https://github.com/juanfont/headscale/issues/58#issuecomment-885255270) |
| iOS | Not yet |
## Running headscale
**Please note that we do not support or encourage the use of reverse proxies
and containers to run Headscale.**
Please have a look at the [`documentation`](https://headscale.net/).
## Talks
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
- presented by Juan Font Alonso and Kristoffer Dalby
Please have a look at the documentation under [`docs/`](docs/).
## Disclaimer
This project is not associated with Tailscale Inc.
However, one of the active maintainers for Headscale [is employed by Tailscale](https://tailscale.com/blog/opensource) and he is allowed to spend work hours contributing to the project. Contributions from this maintainer are reviewed by other maintainers.
The maintainers work together on setting the direction for the project. The underlying principle is to serve the community of self-hosters, enthusiasts and hobbyists - while having a sustainable project.
1. We have nothing to do with Tailscale, or Tailscale Inc.
2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.
## Contributing
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.
Headscale is open to code contributions for bug fixes without discussion.
If you find mistakes in the documentation, please submit a fix to the documentation.
### Requirements
To contribute to headscale you need the latest version of [Go](https://golang.org)
and [Buf](https://buf.build) (Protobuf generator).
@@ -114,6 +95,8 @@ We recommend using [Nix](https://nixos.org/) to setup a development environment.
be done with `nix develop`, which will install the tools and give you a shell.
This guarantees that you will have the same dev env as `headscale` maintainers.
PRs and suggestions are welcome.
### Code style
To ensure we have some consistency with a growing number of contributions,
@@ -175,8 +158,417 @@ make build
## Contributors
<a href="https://github.com/juanfont/headscale/graphs/contributors">
<img src="https://contrib.rocks/image?repo=juanfont/headscale" />
</a>
Made with [contrib.rocks](https://contrib.rocks).
<table>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kradalby>
<img src=https://avatars.githubusercontent.com/u/98431?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kristoffer Dalby/>
<br />
<sub style="font-size:14px"><b>Kristoffer Dalby</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/juanfont>
<img src=https://avatars.githubusercontent.com/u/181059?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Juan Font/>
<br />
<sub style="font-size:14px"><b>Juan Font</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cure>
<img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/>
<br />
<sub style="font-size:14px"><b>Ward Vandewege</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
<br />
<sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/e-zk>
<img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/>
<br />
<sub style="font-size:14px"><b>e-zk</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/arch4ngel>
<img src=https://avatars.githubusercontent.com/u/11574161?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Justin Angel/>
<br />
<sub style="font-size:14px"><b>Justin Angel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ItalyPaleAle>
<img src=https://avatars.githubusercontent.com/u/43508?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Alessandro (Ale) Segala/>
<br />
<sub style="font-size:14px"><b>Alessandro (Ale) Segala</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/reynico>
<img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/>
<br />
<sub style="font-size:14px"><b>Nico</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/unreality>
<img src=https://avatars.githubusercontent.com/u/352522?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=unreality/>
<br />
<sub style="font-size:14px"><b>unreality</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
<br />
<sub style="font-size:14px"><b>Moritz Poldrack</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Niek>
<img src=https://avatars.githubusercontent.com/u/213140?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Niek van der Maas/>
<br />
<sub style="font-size:14px"><b>Niek van der Maas</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/negbie>
<img src=https://avatars.githubusercontent.com/u/20154956?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Eugen Biegler/>
<br />
<sub style="font-size:14px"><b>Eugen Biegler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/qbit>
<img src=https://avatars.githubusercontent.com/u/68368?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aaron Bieber/>
<br />
<sub style="font-size:14px"><b>Aaron Bieber</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fdelucchijr>
<img src=https://avatars.githubusercontent.com/u/69133647?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Fernando De Lucchi/>
<br />
<sub style="font-size:14px"><b>Fernando De Lucchi</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hdhoang>
<img src=https://avatars.githubusercontent.com/u/12537?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Hoàng Đức Hiếu/>
<br />
<sub style="font-size:14px"><b>Hoàng Đức Hiếu</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mevansam>
<img src=https://avatars.githubusercontent.com/u/403630?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mevan Samaratunga/>
<br />
<sub style="font-size:14px"><b>Mevan Samaratunga</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dragetd>
<img src=https://avatars.githubusercontent.com/u/3639577?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael G./>
<br />
<sub style="font-size:14px"><b>Michael G.</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ptman>
<img src=https://avatars.githubusercontent.com/u/24669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Paul Tötterman/>
<br />
<sub style="font-size:14px"><b>Paul Tötterman</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/artemklevtsov>
<img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/>
<br />
<sub style="font-size:14px"><b>Artem Klevtsov</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cmars>
<img src=https://avatars.githubusercontent.com/u/23741?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Casey Marshall/>
<br />
<sub style="font-size:14px"><b>Casey Marshall</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/SilverBut>
<img src=https://avatars.githubusercontent.com/u/6560655?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Silver Bullet/>
<br />
<sub style="font-size:14px"><b>Silver Bullet</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lachy2849>
<img src=https://avatars.githubusercontent.com/u/98844035?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lachy2849/>
<br />
<sub style="font-size:14px"><b>lachy2849</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/t56k>
<img src=https://avatars.githubusercontent.com/u/12165422?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=thomas/>
<br />
<sub style="font-size:14px"><b>thomas</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aberoham>
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
<br />
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aofei>
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
<br />
<sub style="font-size:14px"><b>Aofei Sheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/awoimbee>
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
<br />
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stensonb>
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
<br />
<sub style="font-size:14px"><b>Bryan Stenson</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/yangchuansheng>
<img src=https://avatars.githubusercontent.com/u/15308462?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt= Carson Yang/>
<br />
<sub style="font-size:14px"><b> Carson Yang</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fkr>
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
<br />
<sub style="font-size:14px"><b>Felix Kronlage-Dammers</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/felixonmars>
<img src=https://avatars.githubusercontent.com/u/1006477?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Yan/>
<br />
<sub style="font-size:14px"><b>Felix Yan</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JJGadgets>
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
<br />
<sub style="font-size:14px"><b>JJGadgets</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/madjam002>
<img src=https://avatars.githubusercontent.com/u/679137?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jamie Greeff/>
<br />
<sub style="font-size:14px"><b>Jamie Greeff</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimt>
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
<br />
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/piec>
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
<br />
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/rcursaru>
<img src=https://avatars.githubusercontent.com/u/16259641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=rcursaru/>
<br />
<sub style="font-size:14px"><b>rcursaru</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/renovate-bot>
<img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=WhiteSource Renovate/>
<br />
<sub style="font-size:14px"><b>WhiteSource Renovate</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ryanfowler>
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
<br />
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/shaananc>
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
<br />
<sub style="font-size:14px"><b>Shaanan Cohney</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/m-tanner-dev0>
<img src=https://avatars.githubusercontent.com/u/97977342?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tanner/>
<br />
<sub style="font-size:14px"><b>Tanner</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Teteros>
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
<br />
<sub style="font-size:14px"><b>Teteros</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gitter-badger>
<img src=https://avatars.githubusercontent.com/u/8518239?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=The Gitter Badger/>
<br />
<sub style="font-size:14px"><b>The Gitter Badger</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tianon>
<img src=https://avatars.githubusercontent.com/u/161631?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tianon Gravi/>
<br />
<sub style="font-size:14px"><b>Tianon Gravi</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/woudsma>
<img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/>
<br />
<sub style="font-size:14px"><b>Tjerk Woudsma</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/y0ngb1n>
<img src=https://avatars.githubusercontent.com/u/25719408?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yang Bin/>
<br />
<sub style="font-size:14px"><b>Yang Bin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zekker6>
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
<br />
<sub style="font-size:14px"><b>Zakhar Bessarab</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Bpazy>
<img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ZiYuan/>
<br />
<sub style="font-size:14px"><b>ZiYuan</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/bravechamp>
<img src=https://avatars.githubusercontent.com/u/48980452?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=bravechamp/>
<br />
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/derelm>
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
<br />
<sub style="font-size:14px"><b>derelm</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nning>
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
<br />
<sub style="font-size:14px"><b>henning mueller</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ignoramous>
<img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/>
<br />
<sub style="font-size:14px"><b>ignoramous</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lion24>
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lion24/>
<br />
<sub style="font-size:14px"><b>lion24</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pernila>
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
<br />
<sub style="font-size:14px"><b>pernila</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Wakeful-Cloud>
<img src=https://avatars.githubusercontent.com/u/38930607?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Wakeful-Cloud/>
<br />
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/xpzouying>
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
<br />
<sub style="font-size:14px"><b>zy</b></sub>
</a>
</td>
</tr>
</table>

459
acls.go Normal file
View File

@@ -0,0 +1,459 @@
package headscale
import (
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/rs/zerolog/log"
"github.com/tailscale/hujson"
"gopkg.in/yaml.v3"
"inet.af/netaddr"
"tailscale.com/tailcfg"
)
const (
errEmptyPolicy = Error("empty policy")
errInvalidAction = Error("invalid action")
errInvalidGroup = Error("invalid group")
errInvalidTag = Error("invalid tag")
errInvalidPortFormat = Error("invalid port format")
)
const (
Base8 = 8
Base10 = 10
BitSize16 = 16
BitSize32 = 32
BitSize64 = 64
portRangeBegin = 0
portRangeEnd = 65535
expectedTokenItems = 2
)
// LoadACLPolicy loads the ACL policy from the specified path and generates the ACL rules.
func (h *Headscale) LoadACLPolicy(path string) error {
log.Debug().
Str("func", "LoadACLPolicy").
Str("path", path).
Msg("Loading ACL policy from path")
policyFile, err := os.Open(path)
if err != nil {
return err
}
defer policyFile.Close()
var policy ACLPolicy
policyBytes, err := io.ReadAll(policyFile)
if err != nil {
return err
}
switch filepath.Ext(path) {
case ".yml", ".yaml":
log.Debug().
Str("path", path).
Bytes("file", policyBytes).
Msg("Loading ACLs from YAML")
err := yaml.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
log.Trace().
Interface("policy", policy).
Msg("Loaded policy from YAML")
default:
ast, err := hujson.Parse(policyBytes)
if err != nil {
return err
}
ast.Standardize()
policyBytes = ast.Pack()
err = json.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
}
if policy.IsZero() {
return errEmptyPolicy
}
h.aclPolicy = &policy
return h.UpdateACLRules()
}
func (h *Headscale) UpdateACLRules() error {
rules, err := h.generateACLRules()
if err != nil {
return err
}
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
h.aclRules = rules
return nil
}
func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
rules := []tailcfg.FilterRule{}
if h.aclPolicy == nil {
return nil, errEmptyPolicy
}
machines, err := h.ListMachines()
if err != nil {
return nil, err
}
for index, acl := range h.aclPolicy.ACLs {
if acl.Action != "accept" {
return nil, errInvalidAction
}
srcIPs := []string{}
for innerIndex, user := range acl.Users {
srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, user)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, User %d", index, innerIndex)
return nil, err
}
srcIPs = append(srcIPs, srcs...)
}
destPorts := []tailcfg.NetPortRange{}
for innerIndex, ports := range acl.Ports {
dests, err := h.generateACLPolicyDestPorts(machines, *h.aclPolicy, ports)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Port %d", index, innerIndex)
return nil, err
}
destPorts = append(destPorts, dests...)
}
rules = append(rules, tailcfg.FilterRule{
SrcIPs: srcIPs,
DstPorts: destPorts,
})
}
return rules, nil
}
func (h *Headscale) generateACLPolicySrcIP(
machines []Machine,
aclPolicy ACLPolicy,
u string,
) ([]string, error) {
return expandAlias(machines, aclPolicy, u, h.cfg.OIDC.StripEmaildomain)
}
func (h *Headscale) generateACLPolicyDestPorts(
machines []Machine,
aclPolicy ACLPolicy,
d string,
) ([]tailcfg.NetPortRange, error) {
tokens := strings.Split(d, ":")
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
return nil, errInvalidPortFormat
}
var alias string
// We can have here stuff like:
// git-server:*
// 192.168.1.0/24:22
// tag:montreal-webserver:80,443
// tag:api-server:443
// example-host-1:*
if len(tokens) == expectedTokenItems {
alias = tokens[0]
} else {
alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
}
expanded, err := expandAlias(
machines,
aclPolicy,
alias,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
return nil, err
}
ports, err := expandPorts(tokens[len(tokens)-1])
if err != nil {
return nil, err
}
dests := []tailcfg.NetPortRange{}
for _, d := range expanded {
for _, p := range *ports {
pr := tailcfg.NetPortRange{
IP: d,
Ports: p,
}
dests = append(dests, pr)
}
}
return dests, nil
}
// expandAlias takes as input either
// - a namespace
// - a group
// - a tag
// and transforms it into a list of IP addresses.
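// Editor's note (illustrative, not part of the original file): given the logic
// below, "group:engineering" expands to the IPs of all machines in the group's
// namespaces, "tag:webserver" to the IPs of machines advertising that tag (per
// TagOwners), a bare namespace name to that namespace's untagged machines, and
// a host alias, plain IP or CIDR from the policy resolves to itself. "*" matches
// everything.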
func expandAlias(
machines []Machine,
aclPolicy ACLPolicy,
alias string,
stripEmailDomain bool,
) ([]string, error) {
ips := []string{}
if alias == "*" {
return []string{"*"}, nil
}
log.Debug().
Str("alias", alias).
Msg("Expanding")
if strings.HasPrefix(alias, "group:") {
namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
for _, n := range namespaces {
nodes := filterMachinesByNamespace(machines, n)
for _, node := range nodes {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
}
return ips, nil
}
if strings.HasPrefix(alias, "tag:") {
owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
for _, namespace := range owners {
machines := filterMachinesByNamespace(machines, namespace)
for _, machine := range machines {
hi := machine.GetHostInfo()
for _, t := range hi.RequestTags {
if alias == t {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
}
}
}
return ips, nil
}
// if alias is a namespace
nodes := filterMachinesByNamespace(machines, alias)
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias)
for _, n := range nodes {
ips = append(ips, n.IPAddresses.ToStringSlice()...)
}
if len(ips) > 0 {
return ips, nil
}
// if alias is a host
if h, ok := aclPolicy.Hosts[alias]; ok {
return []string{h.String()}, nil
}
// if alias is an IP
ip, err := netaddr.ParseIP(alias)
if err == nil {
return []string{ip.String()}, nil
}
// if alias is a CIDR
cidr, err := netaddr.ParseIPPrefix(alias)
if err == nil {
return []string{cidr.String()}, nil
}
log.Warn().Msgf("No IPs found with the alias %v", alias)
return ips, nil
}
// excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
// that are correctly tagged, since they should not be listed as being in the namespace.
// We assume in this function that we only have nodes from one namespace.
func excludeCorrectlyTaggedNodes(
aclPolicy ACLPolicy,
nodes []Machine,
namespace string,
) []Machine {
out := []Machine{}
tags := []string{}
for tag, ns := range aclPolicy.TagOwners {
if containsString(ns, namespace) {
tags = append(tags, tag)
}
}
// For each machine, if its tag is in the tags list, don't append it.
for _, machine := range nodes {
hi := machine.GetHostInfo()
found := false
for _, t := range hi.RequestTags {
if containsString(tags, t) {
found = true
break
}
}
if !found {
out = append(out, machine)
}
}
return out
}
func expandPorts(portsStr string) (*[]tailcfg.PortRange, error) {
if portsStr == "*" {
return &[]tailcfg.PortRange{
{First: portRangeBegin, Last: portRangeEnd},
}, nil
}
ports := []tailcfg.PortRange{}
for _, portStr := range strings.Split(portsStr, ",") {
rang := strings.Split(portStr, "-")
switch len(rang) {
case 1:
port, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(port),
Last: uint16(port),
})
case expectedTokenItems:
start, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
last, err := strconv.ParseUint(rang[1], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(start),
Last: uint16(last),
})
default:
return nil, errInvalidPortFormat
}
}
return &ports, nil
}
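// Editor's note (illustrative, not part of the original file): with the parsing
// above, expandPorts("*") yields the single range 0-65535,
// expandPorts("22,80,8000-8010") yields the ranges {22,22}, {80,80} and
// {8000,8010}, and anything else (e.g. "http" or "1-2-3") fails with a parse
// error or errInvalidPortFormat.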
func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
out := []Machine{}
for _, machine := range machines {
if machine.Namespace.Name == namespace {
out = append(out, machine)
}
}
return out
}
// expandTagOwners will return a list of namespaces. An owner can be either a namespace or a group;
// a group cannot be composed of groups.
func expandTagOwners(
aclPolicy ACLPolicy,
tag string,
stripEmailDomain bool,
) ([]string, error) {
var owners []string
ows, ok := aclPolicy.TagOwners[tag]
if !ok {
return []string{}, fmt.Errorf(
"%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
errInvalidTag,
tag,
)
}
for _, owner := range ows {
if strings.HasPrefix(owner, "group:") {
gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
if err != nil {
return []string{}, err
}
owners = append(owners, gs...)
} else {
owners = append(owners, owner)
}
}
return owners, nil
}
// expandGroup will return the list of namespaces inside the group
// after some validation.
func expandGroup(
aclPolicy ACLPolicy,
group string,
stripEmailDomain bool,
) ([]string, error) {
outGroups := []string{}
aclGroups, ok := aclPolicy.Groups[group]
if !ok {
return []string{}, fmt.Errorf(
"group %v isn't registered. %w",
group,
errInvalidGroup,
)
}
for _, group := range aclGroups {
if strings.HasPrefix(group, "group:") {
return []string{}, fmt.Errorf(
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
errInvalidGroup,
)
}
grp, err := NormalizeToFQDNRules(group, stripEmailDomain)
if err != nil {
return []string{}, fmt.Errorf(
"failed to normalize group %q, err: %w",
group,
errInvalidGroup,
)
}
outGroups = append(outGroups, grp)
}
return outGroups, nil
}
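For reference, `LoadACLPolicy` above handles non-YAML policies by parsing the bytes as HuJSON, calling `Standardize` to strip comments and trailing commas, and then unmarshalling the packed bytes with `encoding/json`. Below is a minimal, self-contained sketch of that pipeline; the policy text and the `main` package are made up for illustration and are not taken from the repository.
// Illustrative sketch of the HuJSON parsing pipeline used by LoadACLPolicy.
package main
import (
	"encoding/json"
	"fmt"
	"github.com/tailscale/hujson"
)
func main() {
	// HuJSON accepts comments and trailing commas, unlike plain JSON.
	raw := []byte(`{
		// hypothetical example policy
		"ACLs": [
			{ "Action": "accept", "Users": ["*"], "Ports": ["*:*"] },
		],
	}`)
	ast, err := hujson.Parse(raw)
	if err != nil {
		panic(err)
	}
	ast.Standardize() // strips comments and trailing commas in place
	var policy map[string]interface{}
	if err := json.Unmarshal(ast.Pack(), &policy); err != nil {
		panic(err)
	}
	fmt.Printf("parsed policy: %+v\n", policy)
}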

1228
acls_test.go Normal file

File diff suppressed because it is too large Load Diff

101
acls_types.go Normal file
View File

@@ -0,0 +1,101 @@
package headscale
import (
"encoding/json"
"strings"
"github.com/tailscale/hujson"
"gopkg.in/yaml.v3"
"inet.af/netaddr"
)
// ACLPolicy represents a Tailscale ACL Policy.
type ACLPolicy struct {
Groups Groups `json:"Groups" yaml:"Groups"`
Hosts Hosts `json:"Hosts" yaml:"Hosts"`
TagOwners TagOwners `json:"TagOwners" yaml:"TagOwners"`
ACLs []ACL `json:"ACLs" yaml:"ACLs"`
Tests []ACLTest `json:"Tests" yaml:"Tests"`
}
// ACL is a basic rule for the ACL Policy.
type ACL struct {
Action string `json:"Action" yaml:"Action"`
Users []string `json:"Users" yaml:"Users"`
Ports []string `json:"Ports" yaml:"Ports"`
}
// Groups references a series of aliases in the ACL rules.
type Groups map[string][]string
// Hosts are aliases for IP addresses or subnets.
type Hosts map[string]netaddr.IPPrefix
// TagOwners specify which users (namespaces?) are allowed to use certain tags.
type TagOwners map[string][]string
// ACLTest is not implemented, but should be used to check if a certain rule is allowed.
type ACLTest struct {
User string `json:"User" yaml:"User"`
Allow []string `json:"Allow" yaml:"Allow"`
Deny []string `json:"Deny,omitempty" yaml:"Deny,omitempty"`
}
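// Editor's note (illustrative, not part of the original file): a policy document
// these types would parse could look like the following HuJSON; the names and
// addresses are made up.
//
//	{
//		"Groups":    { "group:admins": ["alice", "bob"] },
//		"Hosts":     { "example-host-1": "100.64.0.1" },
//		"TagOwners": { "tag:webserver": ["group:admins"] },
//		"ACLs": [
//			{ "Action": "accept", "Users": ["group:admins"], "Ports": ["example-host-1:80,443"] },
//		],
//	}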
// UnmarshalJSON allows parsing the Hosts directly into netaddr objects.
func (hosts *Hosts) UnmarshalJSON(data []byte) error {
newHosts := Hosts{}
hostIPPrefixMap := make(map[string]string)
ast, err := hujson.Parse(data)
if err != nil {
return err
}
ast.Standardize()
data = ast.Pack()
err = json.Unmarshal(data, &hostIPPrefixMap)
if err != nil {
return err
}
for host, prefixStr := range hostIPPrefixMap {
if !strings.Contains(prefixStr, "/") {
prefixStr += "/32"
}
prefix, err := netaddr.ParseIPPrefix(prefixStr)
if err != nil {
return err
}
newHosts[host] = prefix
}
*hosts = newHosts
return nil
}
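// Editor's note (illustrative, not part of the original file): with the logic
// above, {"example-host-1": "100.64.0.1"} parses to the prefix 100.64.0.1/32,
// while {"lab-net": "192.168.1.0/24"} is kept as-is.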
// UnmarshalYAML allows parsing the Hosts directly into netaddr objects.
func (hosts *Hosts) UnmarshalYAML(data []byte) error {
newHosts := Hosts{}
hostIPPrefixMap := make(map[string]string)
err := yaml.Unmarshal(data, &hostIPPrefixMap)
if err != nil {
return err
}
for host, prefixStr := range hostIPPrefixMap {
prefix, err := netaddr.ParseIPPrefix(prefixStr)
if err != nil {
return err
}
newHosts[host] = prefix
}
*hosts = newHosts
return nil
}
// IsZero is perhaps a bit naive here.
func (policy ACLPolicy) IsZero() bool {
if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
return true
}
return false
}

661
api.go Normal file
View File

@@ -0,0 +1,661 @@
package headscale
import (
"bytes"
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"html/template"
"io"
"net/http"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/klauspost/compress/zstd"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
const (
reservedResponseHeaderSize = 4
RegisterMethodAuthKey = "authkey"
RegisterMethodOIDC = "oidc"
RegisterMethodCLI = "cli"
ErrRegisterMethodCLIDoesNotSupportExpire = Error(
"machines registered with CLI does not support expire",
)
)
// KeyHandler provides the Headscale public key.
// Listens on /key.
func (h *Headscale) KeyHandler(ctx *gin.Context) {
ctx.Data(
http.StatusOK,
"text/plain; charset=utf-8",
[]byte(MachinePublicKeyStripPrefix(h.privateKey.Public())),
)
}
type registerWebAPITemplateConfig struct {
Key string
}
var registerWebAPITemplate = template.Must(
template.New("registerweb").Parse(`
<html>
<head>
<title>Registration - Headscale</title>
</head>
<body>
<h1>headscale</h1>
<h2>Machine registration</h2>
<p>
Run the command below in the headscale server to add this machine to your network:
</p>
<pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
</body>
</html>
`))
// RegisterWebAPI shows a simple message in the browser to point to the CLI.
// Listens on /register.
func (h *Headscale) RegisterWebAPI(ctx *gin.Context) {
machineKeyStr := ctx.Query("key")
if machineKeyStr == "" {
ctx.String(http.StatusBadRequest, "Wrong params")
return
}
var content bytes.Buffer
if err := registerWebAPITemplate.Execute(&content, registerWebAPITemplateConfig{
Key: machineKeyStr,
}); err != nil {
log.Error().
Str("func", "RegisterWebAPI").
Err(err).
Msg("Could not render register web API template")
ctx.Data(
http.StatusInternalServerError,
"text/html; charset=utf-8",
[]byte("Could not render register web API template"),
)
}
ctx.Data(http.StatusOK, "text/html; charset=utf-8", content.Bytes())
}
// RegistrationHandler handles the actual registration process of a machine
// Endpoint /machine/:id.
func (h *Headscale) RegistrationHandler(ctx *gin.Context) {
body, _ := io.ReadAll(ctx.Request.Body)
machineKeyStr := ctx.Param("id")
var machineKey key.MachinePublic
err := machineKey.UnmarshalText([]byte(MachinePublicKeyEnsurePrefix(machineKeyStr)))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot parse machine key")
machineRegistrations.WithLabelValues("unknown", "web", "error", "unknown").Inc()
ctx.String(http.StatusInternalServerError, "Sad!")
return
}
req := tailcfg.RegisterRequest{}
err = decode(body, &req, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot decode message")
machineRegistrations.WithLabelValues("unknown", "web", "error", "unknown").Inc()
ctx.String(http.StatusInternalServerError, "Very sad!")
return
}
now := time.Now().UTC()
machine, err := h.GetMachineByMachineKey(machineKey)
if errors.Is(err, gorm.ErrRecordNotFound) {
log.Info().Str("machine", req.Hostinfo.Hostname).Msg("New machine")
machineKeyStr := MachinePublicKeyStripPrefix(machineKey)
// If the machine has AuthKey set, handle registration via PreAuthKeys
if req.Auth.AuthKey != "" {
h.handleAuthKey(ctx, machineKey, req)
return
}
hname, err := NormalizeToFQDNRules(
req.Hostinfo.Hostname,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
log.Error().
Caller().
Str("func", "RegistrationHandler").
Str("hostinfo.name", req.Hostinfo.Hostname).
Err(err)
return
}
// The machine did not have a key to authenticate, which means
// that we rely on a method that calls back somehow (OpenID or CLI).
// We create the machine and then keep it around until a callback
// happens.
newMachine := Machine{
MachineKey: machineKeyStr,
Name: hname,
NodeKey: NodePublicKeyStripPrefix(req.NodeKey),
LastSeen: &now,
Expiry: &time.Time{},
}
if !req.Expiry.IsZero() {
log.Trace().
Caller().
Str("machine", req.Hostinfo.Hostname).
Time("expiry", req.Expiry).
Msg("Non-zero expiry time requested")
newMachine.Expiry = &req.Expiry
}
h.registrationCache.Set(
machineKeyStr,
newMachine,
registerCacheExpiration,
)
h.handleMachineRegistrationNew(ctx, machineKey, req)
return
}
// The machine is already registered, so we need to pass through reauth or key update.
if machine != nil {
// If the NodeKey stored in headscale is the same as the key presented in a registration
// request, then we have a node that is either:
// - Trying to log out (sending an expiry in the past)
// - A valid, registered machine, looking for the node map
// - Expired machine wanting to reauthenticate
if machine.NodeKey == NodePublicKeyStripPrefix(req.NodeKey) {
// The client sends an Expiry in the past if the client is requesting to expire the key (aka logout)
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L648
if !req.Expiry.IsZero() && req.Expiry.UTC().Before(now) {
h.handleMachineLogOut(ctx, machineKey, *machine)
return
}
// If the machine is not expired and is registered, we have already accepted this machine;
// let it proceed with a valid registration.
if !machine.isExpired() {
h.handleMachineValidRegistration(ctx, machineKey, *machine)
return
}
}
// The NodeKey we have matches OldNodeKey, which means this is a refresh after a key expiration
if machine.NodeKey == NodePublicKeyStripPrefix(req.OldNodeKey) &&
!machine.isExpired() {
h.handleMachineRefreshKey(ctx, machineKey, req, *machine)
return
}
// The machine has expired
h.handleMachineExpired(ctx, machineKey, req, *machine)
return
}
}
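// Editor's note: a summary of RegistrationHandler's decision tree above, added
// for readability (not part of the original file).
//
//	unknown machine key, auth key present    -> handleAuthKey
//	unknown machine key, no auth key         -> cache machine, handleMachineRegistrationNew (auth URL)
//	known machine, same NodeKey, expiry<now  -> handleMachineLogOut
//	known machine, same NodeKey, not expired -> handleMachineValidRegistration
//	known machine, OldNodeKey matches        -> handleMachineRefreshKey
//	otherwise (known machine, expired)       -> handleMachineExpired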
func (h *Headscale) getMapResponse(
machineKey key.MachinePublic,
req tailcfg.MapRequest,
machine *Machine,
) ([]byte, error) {
log.Trace().
Str("func", "getMapResponse").
Str("machine", req.Hostinfo.Hostname).
Msg("Creating Map response")
node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Cannot convert to node")
return nil, err
}
peers, err := h.getValidPeers(machine)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Cannot fetch peers")
return nil, err
}
profiles := getMapResponseUserProfiles(*machine, peers)
nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Failed to convert peers to Tailscale nodes")
return nil, err
}
dnsConfig := getMapResponseDNSConfig(
h.cfg.DNSConfig,
h.cfg.BaseDomain,
*machine,
peers,
)
resp := tailcfg.MapResponse{
KeepAlive: false,
Node: node,
Peers: nodePeers,
DNSConfig: dnsConfig,
Domain: h.cfg.BaseDomain,
PacketFilter: h.aclRules,
DERPMap: h.DERPMap,
UserProfiles: profiles,
}
log.Trace().
Str("func", "getMapResponse").
Str("machine", req.Hostinfo.Hostname).
// Interface("payload", resp).
Msgf("Generated map response: %s", tailMapResponseToString(resp))
var respBody []byte
if req.Compress == "zstd" {
src, err := json.Marshal(resp)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Failed to marshal response for the client")
return nil, err
}
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody = h.privateKey.SealTo(machineKey, srcCompressed)
} else {
respBody, err = encode(resp, &machineKey, h.privateKey)
if err != nil {
return nil, err
}
}
// Declare the payload size in the first 4 bytes (little-endian).
data := make([]byte, reservedResponseHeaderSize)
binary.LittleEndian.PutUint32(data, uint32(len(respBody)))
data = append(data, respBody...)
return data, nil
}
func (h *Headscale) getMapKeepAliveResponse(
machineKey key.MachinePublic,
mapRequest tailcfg.MapRequest,
) ([]byte, error) {
mapResponse := tailcfg.MapResponse{
KeepAlive: true,
}
var respBody []byte
var err error
if mapRequest.Compress == "zstd" {
src, err := json.Marshal(mapResponse)
if err != nil {
log.Error().
Caller().
Str("func", "getMapKeepAliveResponse").
Err(err).
Msg("Failed to marshal keepalive response for the client")
return nil, err
}
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody = h.privateKey.SealTo(machineKey, srcCompressed)
} else {
respBody, err = encode(mapResponse, &machineKey, h.privateKey)
if err != nil {
return nil, err
}
}
data := make([]byte, reservedResponseHeaderSize)
binary.LittleEndian.PutUint32(data, uint32(len(respBody)))
data = append(data, respBody...)
return data, nil
}
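// Editor's note (illustrative, not part of the original file): both functions
// above frame the (optionally zstd-compressed, then sealed) payload with a
// 4-byte little-endian length prefix. A client-side reader for this framing
// could look roughly like the following; the function name is hypothetical.
//
//	func readFramedResponse(r io.Reader) ([]byte, error) {
//		header := make([]byte, reservedResponseHeaderSize)
//		if _, err := io.ReadFull(r, header); err != nil {
//			return nil, err
//		}
//		payload := make([]byte, binary.LittleEndian.Uint32(header))
//		if _, err := io.ReadFull(r, payload); err != nil {
//			return nil, err
//		}
//		return payload, nil
//	}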
func (h *Headscale) handleMachineLogOut(
ctx *gin.Context,
machineKey key.MachinePublic,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
log.Info().
Str("machine", machine.Name).
Msg("Client requested logout")
h.ExpireMachine(&machine)
resp.AuthURL = ""
resp.MachineAuthorized = false
resp.User = *machine.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "")
return
}
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineValidRegistration(
ctx *gin.Context,
machineKey key.MachinePublic,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
// The machine registration is valid, respond with redirect to /map
log.Debug().
Str("machine", machine.Name).
Msg("Client is registered and we have the current NodeKey. All clear to /map")
resp.AuthURL = ""
resp.MachineAuthorized = true
resp.User = *machine.Namespace.toUser()
resp.Login = *machine.Namespace.toLogin()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("update", "web", "error", machine.Namespace.Name).
Inc()
ctx.String(http.StatusInternalServerError, "")
return
}
machineRegistrations.WithLabelValues("update", "web", "success", machine.Namespace.Name).
Inc()
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineExpired(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
// The client has registered before, but has expired
log.Debug().
Str("machine", machine.Name).
Msg("Machine registration has expired. Sending a authurl to register")
if registerRequest.Auth.AuthKey != "" {
h.handleAuthKey(ctx, machineKey, registerRequest)
return
}
if h.cfg.OIDC.Issuer != "" {
resp.AuthURL = fmt.Sprintf("%s/oidc/register/%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), machineKey.String())
} else {
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), machineKey.String())
}
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("reauth", "web", "error", machine.Namespace.Name).
Inc()
ctx.String(http.StatusInternalServerError, "")
return
}
machineRegistrations.WithLabelValues("reauth", "web", "success", machine.Namespace.Name).
Inc()
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineRefreshKey(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
log.Debug().
Str("machine", machine.Name).
Msg("We have the OldNodeKey in the database. This is a key refresh")
machine.NodeKey = NodePublicKeyStripPrefix(registerRequest.NodeKey)
h.db.Save(&machine)
resp.AuthURL = ""
resp.User = *machine.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "Extremely sad!")
return
}
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineRegistrationNew(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
) {
resp := tailcfg.RegisterResponse{}
// The machine registration is new, redirect the client to the registration URL
log.Debug().
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("The node is sending us a new NodeKey, sending auth url")
if h.cfg.OIDC.Issuer != "" {
resp.AuthURL = fmt.Sprintf(
"%s/oidc/register/%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"),
machineKey.String(),
)
} else {
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), MachinePublicKeyStripPrefix(machineKey))
}
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "")
return
}
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
// TODO: check if any locks are needed around IP allocation.
func (h *Headscale) handleAuthKey(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
) {
machineKeyStr := MachinePublicKeyStripPrefix(machineKey)
log.Debug().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msgf("Processing auth key for %s", registerRequest.Hostinfo.Hostname)
resp := tailcfg.RegisterResponse{}
pak, err := h.checkKeyValidity(registerRequest.Auth.AuthKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Failed authentication via AuthKey")
resp.MachineAuthorized = false
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
return
}
ctx.Data(http.StatusUnauthorized, "application/json; charset=utf-8", respBody)
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("Failed authentication via AuthKey")
if pak != nil {
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
} else {
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", "unknown").Inc()
}
return
}
log.Debug().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("Authentication key was valid, proceeding to acquire IP addresses")
nodeKey := NodePublicKeyStripPrefix(registerRequest.NodeKey)
// retrieve machine information if it exists
// The error is not important, because if it does not
// exist, then this is a new machine and we will move
// on to registration.
machine, _ := h.GetMachineByMachineKey(machineKey)
if machine != nil {
log.Trace().
Caller().
Str("machine", machine.Name).
Msg("machine already registered, refreshing with new auth key")
machine.NodeKey = nodeKey
machine.AuthKeyID = uint(pak.ID)
h.RefreshMachine(machine, registerRequest.Expiry)
} else {
now := time.Now().UTC()
machineToRegister := Machine{
Name: registerRequest.Hostinfo.Hostname,
NamespaceID: pak.Namespace.ID,
MachineKey: machineKeyStr,
RegisterMethod: RegisterMethodAuthKey,
Expiry: &registerRequest.Expiry,
NodeKey: nodeKey,
LastSeen: &now,
AuthKeyID: uint(pak.ID),
}
machine, err = h.RegisterMachine(
machineToRegister,
)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("could not register machine")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
ctx.String(
http.StatusInternalServerError,
"could not register machine",
)
return
}
}
h.UsePreAuthKey(pak)
resp.MachineAuthorized = true
resp.User = *pak.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
ctx.String(http.StatusInternalServerError, "Extremely sad!")
return
}
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "success", pak.Namespace.Name).
Inc()
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
log.Info().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Str("ips", strings.Join(machine.IPAddresses.ToStringSlice(), ", ")).
Msg("Successfully authenticated via AuthKey")
}

api_key.go (new file, 164 lines)
@@ -0,0 +1,164 @@
package headscale
import (
"fmt"
"strings"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"golang.org/x/crypto/bcrypt"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
apiPrefixLength = 7
apiKeyLength = 32
apiKeyParts = 2
errAPIKeyFailedToParse = Error("Failed to parse ApiKey")
)
// APIKey describes the datamodel for API keys used to remotely authenticate with
// headscale.
type APIKey struct {
ID uint64 `gorm:"primary_key"`
Prefix string `gorm:"uniqueIndex"`
Hash []byte
CreatedAt *time.Time
Expiration *time.Time
LastSeen *time.Time
}
// CreateAPIKey creates a new ApiKey and returns the plain-text key along with the stored record.
func (h *Headscale) CreateAPIKey(
expiration *time.Time,
) (string, *APIKey, error) {
prefix, err := GenerateRandomStringURLSafe(apiPrefixLength)
if err != nil {
return "", nil, err
}
toBeHashed, err := GenerateRandomStringURLSafe(apiKeyLength)
if err != nil {
return "", nil, err
}
// Key to return to user, this will only be visible _once_
keyStr := prefix + "." + toBeHashed
hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost)
if err != nil {
return "", nil, err
}
key := APIKey{
Prefix: prefix,
Hash: hash,
Expiration: expiration,
}
h.db.Save(&key)
return keyStr, &key, nil
}
// ListAPIKeys returns all ApiKeys.
func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
keys := []APIKey{}
if err := h.db.Find(&keys).Error; err != nil {
return nil, err
}
return keys, nil
}
// GetAPIKey returns an ApiKey for a given prefix.
func (h *Headscale) GetAPIKey(prefix string) (*APIKey, error) {
key := APIKey{}
if result := h.db.First(&key, "prefix = ?", prefix); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// GetAPIKeyByID returns an ApiKey for a given ID.
func (h *Headscale) GetAPIKeyByID(id uint64) (*APIKey, error) {
key := APIKey{}
if result := h.db.Find(&APIKey{ID: id}).First(&key); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// DestroyAPIKey destroys an ApiKey. Returns an error if the ApiKey
// does not exist.
func (h *Headscale) DestroyAPIKey(key APIKey) error {
if result := h.db.Unscoped().Delete(key); result.Error != nil {
return result.Error
}
return nil
}
// ExpireAPIKey marks an ApiKey as expired.
func (h *Headscale) ExpireAPIKey(key *APIKey) error {
if err := h.db.Model(&key).Update("Expiration", time.Now()).Error; err != nil {
return err
}
return nil
}
func (h *Headscale) ValidateAPIKey(keyStr string) (bool, error) {
prefix, hash, err := splitAPIKey(keyStr)
if err != nil {
return false, fmt.Errorf("failed to validate api key: %w", err)
}
key, err := h.GetAPIKey(prefix)
if err != nil {
return false, fmt.Errorf("failed to validate api key: %w", err)
}
if key.Expiration.Before(time.Now()) {
return false, nil
}
if err := bcrypt.CompareHashAndPassword(key.Hash, []byte(hash)); err != nil {
return false, err
}
return true, nil
}
func splitAPIKey(key string) (string, string, error) {
parts := strings.Split(key, ".")
if len(parts) != apiKeyParts {
return "", "", errAPIKeyFailedToParse
}
return parts[0], parts[1], nil
}
func (key *APIKey) toProto() *v1.ApiKey {
protoKey := v1.ApiKey{
Id: key.ID,
Prefix: key.Prefix,
}
if key.Expiration != nil {
protoKey.Expiration = timestamppb.New(*key.Expiration)
}
if key.CreatedAt != nil {
protoKey.CreatedAt = timestamppb.New(*key.CreatedAt)
}
if key.LastSeen != nil {
protoKey.LastSeen = timestamppb.New(*key.LastSeen)
}
return &protoKey
}
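// Illustrative usage sketch (not part of api_key.go): the full key handed to the
// user is "<prefix>.<secret>"; only the bcrypt hash of the secret is stored, so
// the key string can never be recovered later. The expiry value and the
// surrounding function are hypothetical.
func exampleAPIKeyLifecycle(h *Headscale) error {
	expiration := time.Now().Add(90 * 24 * time.Hour)
	keyStr, _, err := h.CreateAPIKey(&expiration)
	if err != nil {
		return err
	}
	// Later, an incoming "Authorization: Bearer <keyStr>" header is checked with:
	valid, err := h.ValidateAPIKey(keyStr)
	if err != nil {
		return err
	}
	if !valid {
		return fmt.Errorf("api key expired or rejected")
	}
	return nil
}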


@@ -1,4 +1,4 @@
package db
package headscale
import (
"time"
@@ -7,7 +7,7 @@ import (
)
func (*Suite) TestCreateAPIKey(c *check.C) {
apiKeyStr, apiKey, err := db.CreateAPIKey(nil)
apiKeyStr, apiKey, err := app.CreateAPIKey(nil)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
@@ -16,74 +16,74 @@ func (*Suite) TestCreateAPIKey(c *check.C) {
c.Assert(apiKey.Hash, check.NotNil)
c.Assert(apiKeyStr, check.Not(check.Equals), "")
_, err = db.ListAPIKeys()
_, err = app.ListAPIKeys()
c.Assert(err, check.IsNil)
keys, err := db.ListAPIKeys()
keys, err := app.ListAPIKeys()
c.Assert(err, check.IsNil)
c.Assert(len(keys), check.Equals, 1)
}
func (*Suite) TestAPIKeyDoesNotExist(c *check.C) {
key, err := db.GetAPIKey("does-not-exist")
key, err := app.GetAPIKey("does-not-exist")
c.Assert(err, check.NotNil)
c.Assert(key, check.IsNil)
}
func (*Suite) TestValidateAPIKeyOk(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
valid, err := db.ValidateAPIKey(apiKeyStr)
valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
}
func (*Suite) TestValidateAPIKeyNotOk(c *check.C) {
nowMinus2 := time.Now().Add(time.Duration(-2) * time.Hour)
apiKeyStr, apiKey, err := db.CreateAPIKey(&nowMinus2)
apiKeyStr, apiKey, err := app.CreateAPIKey(&nowMinus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
valid, err := db.ValidateAPIKey(apiKeyStr)
valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, false)
now := time.Now()
apiKeyStrNow, apiKey, err := db.CreateAPIKey(&now)
apiKeyStrNow, apiKey, err := app.CreateAPIKey(&now)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
validNow, err := db.ValidateAPIKey(apiKeyStrNow)
validNow, err := app.ValidateAPIKey(apiKeyStrNow)
c.Assert(err, check.IsNil)
c.Assert(validNow, check.Equals, false)
validSilly, err := db.ValidateAPIKey("nota.validkey")
validSilly, err := app.ValidateAPIKey("nota.validkey")
c.Assert(err, check.NotNil)
c.Assert(validSilly, check.Equals, false)
validWithErr, err := db.ValidateAPIKey("produceerrorkey")
validWithErr, err := app.ValidateAPIKey("produceerrorkey")
c.Assert(err, check.NotNil)
c.Assert(validWithErr, check.Equals, false)
}
func (*Suite) TestExpireAPIKey(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
valid, err := db.ValidateAPIKey(apiKeyStr)
valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
err = db.ExpireAPIKey(apiKey)
err = app.ExpireAPIKey(apiKey)
c.Assert(err, check.IsNil)
c.Assert(apiKey.Expiration, check.NotNil)
notValid, err := db.ValidateAPIKey(apiKeyStr)
notValid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(notValid, check.Equals, false)
}

app.go (new file, 871 lines)
@@ -0,0 +1,871 @@
package headscale
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"io/fs"
"net"
"net/http"
"net/url"
"os"
"os/signal"
"sort"
"strings"
"sync"
"syscall"
"time"
"github.com/coreos/go-oidc/v3/oidc"
"github.com/gin-gonic/gin"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/patrickmn/go-cache"
zerolog "github.com/philip-bui/grpc-zerolog"
zl "github.com/rs/zerolog"
"github.com/rs/zerolog/log"
ginprometheus "github.com/zsais/go-gin-prometheus"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
"golang.org/x/oauth2"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/reflection"
"google.golang.org/grpc/status"
"gorm.io/gorm"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
"tailscale.com/types/key"
)
const (
errSTUNAddressNotSet = Error("STUN address not set")
errUnsupportedDatabase = Error("unsupported DB")
errUnsupportedLetsEncryptChallengeType = Error(
"unknown value for Lets Encrypt challenge type",
)
)
const (
AuthPrefix = "Bearer "
Postgres = "postgres"
Sqlite = "sqlite3"
updateInterval = 5000
HTTPReadTimeout = 30 * time.Second
privateKeyFileMode = 0o600
registerCacheExpiration = time.Minute * 15
registerCacheCleanup = time.Minute * 20
DisabledClientAuth = "disabled"
RelaxedClientAuth = "relaxed"
EnforcedClientAuth = "enforced"
)
// Config contains the initial Headscale configuration.
type Config struct {
ServerURL string
Addr string
MetricsAddr string
GRPCAddr string
GRPCAllowInsecure bool
EphemeralNodeInactivityTimeout time.Duration
IPPrefixes []netaddr.IPPrefix
PrivateKeyPath string
BaseDomain string
DERP DERPConfig
DBtype string
DBpath string
DBhost string
DBport int
DBname string
DBuser string
DBpass string
TLSLetsEncryptListen string
TLSLetsEncryptHostname string
TLSLetsEncryptCacheDir string
TLSLetsEncryptChallengeType string
TLSCertPath string
TLSKeyPath string
TLSClientAuthMode tls.ClientAuthType
ACMEURL string
ACMEEmail string
DNSConfig *tailcfg.DNSConfig
UnixSocket string
UnixSocketPermission fs.FileMode
OIDC OIDCConfig
CLI CLIConfig
}
type OIDCConfig struct {
Issuer string
ClientID string
ClientSecret string
StripEmaildomain bool
}
type DERPConfig struct {
ServerEnabled bool
ServerRegionID int
ServerRegionCode string
ServerRegionName string
STUNAddr string
URLs []url.URL
Paths []string
AutoUpdate bool
UpdateFrequency time.Duration
}
type CLIConfig struct {
Address string
APIKey string
Timeout time.Duration
Insecure bool
}
// Headscale represents the base app of the service.
type Headscale struct {
cfg Config
db *gorm.DB
dbString string
dbType string
dbDebug bool
privateKey *key.MachinePrivate
DERPMap *tailcfg.DERPMap
DERPServer *DERPServer
aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
lastStateChange sync.Map
oidcProvider *oidc.Provider
oauth2Config *oauth2.Config
registrationCache *cache.Cache
ipAllocationMutex sync.Mutex
}
// Look up the TLS constant relative to user-supplied TLS client
// authentication mode. If an unknown mode is supplied, the default
// value, tls.RequireAnyClientCert, is returned. The returned boolean
// indicates if the supplied mode was valid.
func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
switch mode {
case DisabledClientAuth:
// Client cert is _not_ required.
return tls.NoClientCert, true
case RelaxedClientAuth:
// Client cert required, but _not verified_.
return tls.RequireAnyClientCert, true
case EnforcedClientAuth:
// Client cert is _required and verified_.
return tls.RequireAndVerifyClientCert, true
default:
// Return the default when an unknown value is supplied.
return tls.RequireAnyClientCert, false
}
}
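// Illustrative sketch (not part of app.go): how a config loader might translate
// the user-facing string into the tls.ClientAuthType stored in
// Config.TLSClientAuthMode. The clientAuthModeStr parameter and the function
// name are hypothetical; in headscale this value comes from the configuration file.
func exampleClientAuthLookup(clientAuthModeStr string) tls.ClientAuthType {
	mode, valid := LookupTLSClientAuthMode(clientAuthModeStr)
	if !valid {
		log.Warn().
			Str("tls_client_auth_mode", clientAuthModeStr).
			Msg("Unknown client auth mode, falling back to the default (relaxed)")
	}
	return mode
}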
func NewHeadscale(cfg Config) (*Headscale, error) {
privKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
if err != nil {
return nil, fmt.Errorf("failed to read or create private key: %w", err)
}
var dbString string
switch cfg.DBtype {
case Postgres:
dbString = fmt.Sprintf(
"host=%s port=%d dbname=%s user=%s password=%s sslmode=disable",
cfg.DBhost,
cfg.DBport,
cfg.DBname,
cfg.DBuser,
cfg.DBpass,
)
case Sqlite:
dbString = cfg.DBpath
default:
return nil, errUnsupportedDatabase
}
registrationCache := cache.New(
registerCacheExpiration,
registerCacheCleanup,
)
app := Headscale{
cfg: cfg,
dbType: cfg.DBtype,
dbString: dbString,
privateKey: privKey,
aclRules: tailcfg.FilterAllowAll, // default allowall
registrationCache: registrationCache,
}
err = app.initDB()
if err != nil {
return nil, err
}
if cfg.OIDC.Issuer != "" {
err = app.initOIDC()
if err != nil {
return nil, err
}
}
if app.cfg.DNSConfig != nil && app.cfg.DNSConfig.Proxied { // if MagicDNS
magicDNSDomains := generateMagicDNSRootDomains(app.cfg.IPPrefixes)
// we might have routes already from Split DNS
if app.cfg.DNSConfig.Routes == nil {
app.cfg.DNSConfig.Routes = make(map[string][]dnstype.Resolver)
}
for _, d := range magicDNSDomains {
app.cfg.DNSConfig.Routes[d.WithoutTrailingDot()] = nil
}
}
if cfg.DERP.ServerEnabled {
embeddedDERPServer, err := app.NewDERPServer()
if err != nil {
return nil, err
}
app.DERPServer = embeddedDERPServer
}
return &app, nil
}
// Redirect to our TLS url.
func (h *Headscale) redirect(w http.ResponseWriter, req *http.Request) {
target := h.cfg.ServerURL + req.URL.RequestURI()
http.Redirect(w, req, target, http.StatusFound)
}
// expireEphemeralNodes deletes ephemeral machine records that have not been
// seen for longer than h.cfg.EphemeralNodeInactivityTimeout.
func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
h.expireEphemeralNodesWorker()
}
}
func (h *Headscale) expireEphemeralNodesWorker() {
namespaces, err := h.ListNamespaces()
if err != nil {
log.Error().Err(err).Msg("Error listing namespaces")
return
}
for _, namespace := range namespaces {
machines, err := h.ListMachinesInNamespace(namespace.Name)
if err != nil {
log.Error().
Err(err).
Str("namespace", namespace.Name).
Msg("Error listing machines in namespace")
return
}
expiredFound := false
for _, machine := range machines {
if machine.AuthKey != nil && machine.LastSeen != nil &&
machine.AuthKey.Ephemeral &&
time.Now().
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
expiredFound = true
log.Info().
Str("machine", machine.Name).
Msg("Ephemeral client removed from database")
err = h.db.Unscoped().Delete(machine).Error
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Name).
Msg("🤮 Cannot delete ephemeral machine from the database")
}
}
}
if expiredFound {
h.setLastStateChangeToNow(namespace.Name)
}
}
}
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
req interface{},
info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler) (interface{}, error) {
// Check if the request is coming from the on-server client.
// This is not secure, but it is kept to stay compatible
// with the "legacy" database-based client.
// It is also needed for grpc-gateway to be able to connect to
// the server.
client, _ := peer.FromContext(ctx)
log.Trace().
Caller().
Str("client_address", client.Addr.String()).
Msg("Client is trying to authenticate")
meta, ok := metadata.FromIncomingContext(ctx)
if !ok {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg("Retrieving metadata is failed")
return ctx, status.Errorf(
codes.InvalidArgument,
"Retrieving metadata is failed",
)
}
authHeader, ok := meta["authorization"]
if !ok {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg("Authorization token is not supplied")
return ctx, status.Errorf(
codes.Unauthenticated,
"Authorization token is not supplied",
)
}
token := authHeader[0]
if !strings.HasPrefix(token, AuthPrefix) {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg(`missing "Bearer " prefix in "Authorization" header`)
return ctx, status.Error(
codes.Unauthenticated,
`missing "Bearer " prefix in "Authorization" header`,
)
}
valid, err := h.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix))
if err != nil {
log.Error().
Caller().
Err(err).
Str("client_address", client.Addr.String()).
Msg("failed to validate token")
return ctx, status.Error(codes.Internal, "failed to validate token")
}
if !valid {
log.Info().
Str("client_address", client.Addr.String()).
Msg("invalid token")
return ctx, status.Error(codes.Unauthenticated, "invalid token")
}
return handler(ctx, req)
}
func (h *Headscale) httpAuthenticationMiddleware(ctx *gin.Context) {
log.Trace().
Caller().
Str("client_address", ctx.ClientIP()).
Msg("HTTP authentication invoked")
authHeader := ctx.GetHeader("authorization")
if !strings.HasPrefix(authHeader, AuthPrefix) {
log.Error().
Caller().
Str("client_address", ctx.ClientIP()).
Msg(`missing "Bearer " prefix in "Authorization" header`)
ctx.AbortWithStatus(http.StatusUnauthorized)
return
}
valid, err := h.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix))
if err != nil {
log.Error().
Caller().
Err(err).
Str("client_address", ctx.ClientIP()).
Msg("failed to validate token")
ctx.AbortWithStatus(http.StatusInternalServerError)
return
}
if !valid {
log.Info().
Str("client_address", ctx.ClientIP()).
Msg("invalid token")
ctx.AbortWithStatus(http.StatusUnauthorized)
return
}
ctx.Next()
}
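// Illustrative sketch (not part of app.go): calling the HTTP API protected by the
// middleware above. The exact path depends on the generated grpc-gateway routes;
// "/api/v1/apikey" (ListApiKeys) is an assumption, as are serverURL, apiKey and
// the function name.
func exampleAuthenticatedRequest(serverURL, apiKey string) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodGet, serverURL+"/api/v1/apikey", nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", AuthPrefix+apiKey)
	return http.DefaultClient.Do(req)
}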
// ensureUnixSocketIsAbsent will check if the given path for headscale's unix socket is clear
// and will remove it if it is not.
func (h *Headscale) ensureUnixSocketIsAbsent() error {
// File does not exist, all fine
if _, err := os.Stat(h.cfg.UnixSocket); errors.Is(err, os.ErrNotExist) {
return nil
}
return os.Remove(h.cfg.UnixSocket)
}
func (h *Headscale) createPrometheusRouter() *gin.Engine {
promRouter := gin.Default()
prometheus := ginprometheus.NewPrometheus("gin")
prometheus.Use(promRouter)
return promRouter
}
func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *gin.Engine {
router := gin.Default()
router.GET(
"/health",
func(c *gin.Context) { c.JSON(http.StatusOK, gin.H{"healthy": "ok"}) },
)
router.GET("/key", h.KeyHandler)
router.GET("/register", h.RegisterWebAPI)
router.POST("/machine/:id/map", h.PollNetMapHandler)
router.POST("/machine/:id", h.RegistrationHandler)
router.GET("/oidc/register/:mkey", h.RegisterOIDC)
router.GET("/oidc/callback", h.OIDCCallback)
router.GET("/apple", h.AppleConfigMessage)
router.GET("/apple/:platform", h.ApplePlatformConfig)
router.GET("/windows", h.WindowsConfigMessage)
router.GET("/windows/tailscale.reg", h.WindowsRegConfig)
router.GET("/swagger", SwaggerUI)
router.GET("/swagger/v1/openapiv2.json", SwaggerAPIv1)
if h.cfg.DERP.ServerEnabled {
router.Any("/derp", h.DERPHandler)
router.Any("/derp/probe", h.DERPProbeHandler)
router.Any("/bootstrap-dns", h.DERPBootstrapDNSHandler)
}
api := router.Group("/api")
api.Use(h.httpAuthenticationMiddleware)
{
api.Any("/v1/*any", gin.WrapF(grpcMux.ServeHTTP))
}
router.NoRoute(stdoutHandler)
return router
}
// Serve launches a GIN server with the Headscale API.
func (h *Headscale) Serve() error {
var err error
// Fetch an initial DERP Map before we start serving
h.DERPMap = GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled {
// When embedded DERP is enabled we always need a STUN server
if h.cfg.DERP.STUNAddr == "" {
return errSTUNAddressNotSet
}
h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
go h.ServeSTUN()
}
if h.cfg.DERP.AutoUpdate {
derpMapCancelChannel := make(chan struct{})
defer func() { derpMapCancelChannel <- struct{}{} }()
go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel)
}
go h.expireEphemeralNodes(updateInterval)
if zl.GlobalLevel() == zl.TraceLevel {
zerolog.RespLog = true
} else {
zerolog.RespLog = false
}
// Prepare group for running listeners
errorGroup := new(errgroup.Group)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
//
//
// Set up LOCAL listeners
//
err = h.ensureUnixSocketIsAbsent()
if err != nil {
return fmt.Errorf("unable to remove old socket file: %w", err)
}
socketListener, err := net.Listen("unix", h.cfg.UnixSocket)
if err != nil {
return fmt.Errorf("failed to set up gRPC socket: %w", err)
}
// Change socket permissions
if err := os.Chmod(h.cfg.UnixSocket, h.cfg.UnixSocketPermission); err != nil {
return fmt.Errorf("failed change permission of gRPC socket: %w", err)
}
// Handle common process-killing signals so we can gracefully shut down:
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, os.Interrupt, syscall.SIGTERM)
go func(c chan os.Signal) {
// Wait for a SIGINT or SIGTERM:
sig := <-c
log.Printf("Caught signal %s: shutting down.", sig)
// Stop listening (and unlink the socket if unix type):
socketListener.Close()
// And we're done:
os.Exit(0)
}(sigc)
grpcGatewayMux := runtime.NewServeMux()
// Make the grpc-gateway connect to grpc over socket
grpcGatewayConn, err := grpc.Dial(
h.cfg.UnixSocket,
[]grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(GrpcSocketDialer),
}...,
)
if err != nil {
return err
}
// Connect to the gRPC server over the unix socket to skip
// the authentication.
err = v1.RegisterHeadscaleServiceHandler(ctx, grpcGatewayMux, grpcGatewayConn)
if err != nil {
return err
}
// Start the local gRPC server without TLS and without authentication
grpcSocket := grpc.NewServer(zerolog.UnaryInterceptor())
v1.RegisterHeadscaleServiceServer(grpcSocket, newHeadscaleV1APIServer(h))
reflection.Register(grpcSocket)
errorGroup.Go(func() error { return grpcSocket.Serve(socketListener) })
//
//
// Set up REMOTE listeners
//
tlsConfig, err := h.getTLSSettings()
if err != nil {
log.Error().Err(err).Msg("Failed to set up TLS configuration")
return err
}
//
//
// gRPC setup
//
// We are sadly not able to run gRPC and HTTPS (2.0) on the same
// port because the connection mux does not support matching them
// since they are so similar. There are multiple issues open and we
// can revisit this if that changes:
// https://github.com/soheilhy/cmux/issues/68
// https://github.com/soheilhy/cmux/issues/91
if tlsConfig != nil || h.cfg.GRPCAllowInsecure {
log.Info().Msgf("Enabling remote gRPC at %s", h.cfg.GRPCAddr)
grpcOptions := []grpc.ServerOption{
grpc.UnaryInterceptor(
grpc_middleware.ChainUnaryServer(
h.grpcAuthenticationInterceptor,
zerolog.NewUnaryServerInterceptor(),
),
),
}
if tlsConfig != nil {
grpcOptions = append(grpcOptions,
grpc.Creds(credentials.NewTLS(tlsConfig)),
)
} else {
log.Warn().Msg("gRPC is running without security")
}
grpcServer := grpc.NewServer(grpcOptions...)
v1.RegisterHeadscaleServiceServer(grpcServer, newHeadscaleV1APIServer(h))
reflection.Register(grpcServer)
grpcListener, err := net.Listen("tcp", h.cfg.GRPCAddr)
if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err)
}
errorGroup.Go(func() error { return grpcServer.Serve(grpcListener) })
log.Info().
Msgf("listening and serving gRPC on: %s", h.cfg.GRPCAddr)
}
//
//
// HTTP setup
//
router := h.createRouter(grpcGatewayMux)
httpServer := &http.Server{
Addr: h.cfg.Addr,
Handler: router,
ReadTimeout: HTTPReadTimeout,
// Go does not handle timeouts in HTTP very well, and there is
// no good way to handle streaming timeouts, therefore we need to
// keep this at unlimited and be careful to clean up connections
// https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/#aboutstreaming
WriteTimeout: 0,
}
var httpListener net.Listener
if tlsConfig != nil {
httpServer.TLSConfig = tlsConfig
httpListener, err = tls.Listen("tcp", h.cfg.Addr, tlsConfig)
} else {
httpListener, err = net.Listen("tcp", h.cfg.Addr)
}
if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err)
}
errorGroup.Go(func() error { return httpServer.Serve(httpListener) })
log.Info().
Msgf("listening and serving HTTP on: %s", h.cfg.Addr)
promRouter := h.createPrometheusRouter()
promHTTPServer := &http.Server{
Addr: h.cfg.MetricsAddr,
Handler: promRouter,
ReadTimeout: HTTPReadTimeout,
WriteTimeout: 0,
}
var promHTTPListener net.Listener
promHTTPListener, err = net.Listen("tcp", h.cfg.MetricsAddr)
if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err)
}
errorGroup.Go(func() error { return promHTTPServer.Serve(promHTTPListener) })
log.Info().
Msgf("listening and serving metrics on: %s", h.cfg.MetricsAddr)
return errorGroup.Wait()
}
func (h *Headscale) getTLSSettings() (*tls.Config, error) {
var err error
if h.cfg.TLSLetsEncryptHostname != "" {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().
Msg("Listening with TLS but ServerURL does not start with https://")
}
certManager := autocert.Manager{
Prompt: autocert.AcceptTOS,
HostPolicy: autocert.HostWhitelist(h.cfg.TLSLetsEncryptHostname),
Cache: autocert.DirCache(h.cfg.TLSLetsEncryptCacheDir),
Client: &acme.Client{
DirectoryURL: h.cfg.ACMEURL,
},
Email: h.cfg.ACMEEmail,
}
switch h.cfg.TLSLetsEncryptChallengeType {
case "TLS-ALPN-01":
// Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737)
// The RFC requires that the validation is done on port 443; in other words, headscale
// must be reachable on port 443.
return certManager.TLSConfig(), nil
case "HTTP-01":
// Configuration via autocert with HTTP-01. This requires listening on
// port 80 for the certificate validation in addition to the headscale
// service, which can be configured to run on any other port.
go func() {
log.Fatal().
Caller().
Err(http.ListenAndServe(h.cfg.TLSLetsEncryptListen, certManager.HTTPHandler(http.HandlerFunc(h.redirect)))).
Msg("failed to set up a HTTP server")
}()
return certManager.TLSConfig(), nil
default:
return nil, errUnsupportedLetsEncryptChallengeType
}
} else if h.cfg.TLSCertPath == "" {
if !strings.HasPrefix(h.cfg.ServerURL, "http://") {
log.Warn().Msg("Listening without TLS but ServerURL does not start with http://")
}
return nil, err
} else {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
}
log.Info().Msg(fmt.Sprintf(
"Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
h.cfg.TLSClientAuthMode))
tlsConfig := &tls.Config{
ClientAuth: h.cfg.TLSClientAuthMode,
NextProtos: []string{"http/1.1"},
Certificates: make([]tls.Certificate, 1),
MinVersion: tls.VersionTLS12,
}
tlsConfig.Certificates[0], err = tls.LoadX509KeyPair(h.cfg.TLSCertPath, h.cfg.TLSKeyPath)
return tlsConfig, err
}
}
func (h *Headscale) setLastStateChangeToNow(namespace string) {
now := time.Now().UTC()
lastStateUpdate.WithLabelValues("", "headscale").Set(float64(now.Unix()))
h.lastStateChange.Store(namespace, now)
}
func (h *Headscale) getLastStateChange(namespaces ...string) time.Time {
times := []time.Time{}
for _, namespace := range namespaces {
if wrapped, ok := h.lastStateChange.Load(namespace); ok {
lastChange, _ := wrapped.(time.Time)
times = append(times, lastChange)
}
}
sort.Slice(times, func(i, j int) bool {
return times[i].After(times[j])
})
log.Trace().Msgf("Latest times %#v", times)
if len(times) == 0 {
return time.Now().UTC()
} else {
return times[0]
}
}
func stdoutHandler(ctx *gin.Context) {
body, _ := io.ReadAll(ctx.Request.Body)
log.Trace().
Interface("header", ctx.Request.Header).
Interface("proto", ctx.Request.Proto).
Interface("url", ctx.Request.URL).
Bytes("body", body).
Msg("Request did not match")
}
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
privateKey, err := os.ReadFile(path)
if errors.Is(err, os.ErrNotExist) {
log.Info().Str("path", path).Msg("No private key file at path, creating...")
machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText()
if err != nil {
return nil, fmt.Errorf(
"failed to convert private key to string for saving: %w",
err,
)
}
err = os.WriteFile(path, machineKeyStr, privateKeyFileMode)
if err != nil {
return nil, fmt.Errorf(
"failed to save private key to disk: %w",
err,
)
}
return &machineKey, nil
} else if err != nil {
return nil, fmt.Errorf("failed to read private key file: %w", err)
}
trimmedPrivateKey := strings.TrimSpace(string(privateKey))
privateKeyEnsurePrefix := PrivateKeyEnsurePrefix(trimmedPrivateKey)
var machineKey key.MachinePrivate
if err = machineKey.UnmarshalText([]byte(privateKeyEnsurePrefix)); err != nil {
log.Info().
Str("path", path).
Msg("This might be due to a legacy (headscale pre-0.12) private key. " +
"If the key is in WireGuard format, delete the key and restart headscale. " +
"A new key will automatically be generated. All Tailscale clients will have to be restarted")
return nil, fmt.Errorf("failed to parse private key: %w", err)
}
return &machineKey, nil
}

app_test.go (new file, 79 lines)
@@ -0,0 +1,79 @@
package headscale
import (
"io/ioutil"
"os"
"testing"
"gopkg.in/check.v1"
"inet.af/netaddr"
)
func Test(t *testing.T) {
check.TestingT(t)
}
var _ = check.Suite(&Suite{})
type Suite struct{}
var (
tmpDir string
app Headscale
)
func (s *Suite) SetUpTest(c *check.C) {
s.ResetDB(c)
}
func (s *Suite) TearDownTest(c *check.C) {
os.RemoveAll(tmpDir)
}
func (s *Suite) ResetDB(c *check.C) {
if len(tmpDir) != 0 {
os.RemoveAll(tmpDir)
}
var err error
tmpDir, err = ioutil.TempDir("", "autoygg-client-test")
if err != nil {
c.Fatal(err)
}
cfg := Config{
IPPrefixes: []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("10.27.0.0/23"),
},
}
app = Headscale{
cfg: cfg,
dbType: "sqlite3",
dbString: tmpDir + "/headscale_test.db",
}
err = app.initDB()
if err != nil {
c.Fatal(err)
}
db, err := app.openDB()
if err != nil {
c.Fatal(err)
}
app.db = db
}
// Ensure the mode is reported as invalid when an unknown auth mode
// is supplied.
func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
_, isValid := LookupTLSClientAuthMode("invalid")
c.Assert(isValid, check.Equals, false)
}
// Ensure that all known client auth modes are reported as valid.
func (s *Suite) TestAuthModes(c *check.C) {
modes := []string{"disabled", "relaxed", "enforced"}
for _, v := range modes {
_, isValid := LookupTLSClientAuthMode(v)
c.Assert(isValid, check.Equals, true)
}
}


@@ -1,69 +0,0 @@
package main
//go:generate go run ./main.go
import (
"bytes"
"fmt"
"log"
"os/exec"
"strings"
)
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
cmd := exec.Command(rgBin, args...)
var out bytes.Buffer
cmd.Stdout = &out
err = cmd.Run()
if err != nil {
log.Fatalf("failed to run command: %s", err)
}
tests := strings.Split(strings.TrimSpace(out.String()), "\n")
return tests
}
func updateYAML(tests []string) {
testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", "))
yqCommand := fmt.Sprintf(
"yq eval '.jobs.integration-test.strategy.matrix.test = %s' ../../.github/workflows/test-integration.yaml -i",
testsForYq,
)
cmd := exec.Command("bash", "-c", yqCommand)
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
log.Fatalf("failed to run yq command: %s", err)
}
fmt.Println("YAML file updated successfully")
}
func main() {
tests := findTests()
quotedTests := make([]string, len(tests))
for i, test := range tests {
quotedTests[i] = fmt.Sprintf("\"%s\"", test)
}
updateYAML(quotedTests)
}


@@ -5,9 +5,8 @@ import (
"strconv"
"time"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
@@ -16,7 +15,7 @@ import (
const (
// 90 days.
DefaultAPIKeyExpiry = "90d"
DefaultAPIKeyExpiry = 90 * 24 * time.Hour
)
func init() {
@@ -24,21 +23,16 @@ func init() {
apiKeysCmd.AddCommand(listAPIKeys)
createAPIKeyCmd.Flags().
StringP("expiration", "e", DefaultAPIKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
DurationP("expiration", "e", DefaultAPIKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
apiKeysCmd.AddCommand(createAPIKeyCmd)
expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
err := expireAPIKeyCmd.MarkFlagRequired("prefix")
if err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(expireAPIKeyCmd)
deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(deleteAPIKeyCmd)
}
var apiKeysCmd = &cobra.Command{
@@ -72,7 +66,7 @@ var listAPIKeys = &cobra.Command{
}
if output != "" {
SuccessOutput(response.GetApiKeys(), "", output)
SuccessOutput(response.ApiKeys, "", output)
return
}
@@ -80,15 +74,15 @@ var listAPIKeys = &cobra.Command{
tableData := pterm.TableData{
{"ID", "Prefix", "Expiration", "Created"},
}
for _, key := range response.GetApiKeys() {
for _, key := range response.ApiKeys {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
expiration = ColourTime(key.Expiration.AsTime())
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10),
strconv.FormatUint(key.GetId(), headscale.Base10),
key.GetPrefix(),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
@@ -124,24 +118,10 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
request := &v1.CreateApiKeyRequest{}
durationStr, _ := cmd.Flags().GetString("expiration")
duration, _ := cmd.Flags().GetDuration("expiration")
expiration := time.Now().UTC().Add(duration)
duration, err := model.ParseDuration(durationStr)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Could not parse duration: %s\n", err),
output,
)
return
}
expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace().
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
log.Trace().Dur("expiration", duration).Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration)
@@ -160,7 +140,7 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
return
}
SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
SuccessOutput(response.ApiKey, response.ApiKey, output)
},
}
@@ -204,44 +184,3 @@ var expireAPIKeyCmd = &cobra.Command{
SuccessOutput(response, "Key expired", output)
},
}
var deleteAPIKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete an ApiKey",
Aliases: []string{"remove", "del"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
prefix, err := cmd.Flags().GetString("prefix")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.DeleteApiKeyRequest{
Prefix: prefix,
}
response, err := client.DeleteApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response, "Key deleted", output)
},
}


@@ -1,22 +0,0 @@
package cli
import (
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(configTestCmd)
}
var configTestCmd = &cobra.Command{
Use: "configtest",
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
Run: func(cmd *cobra.Command, args []string) {
_, err := getHeadscaleApp()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
},
}


@@ -7,11 +7,11 @@ import (
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
"tailscale.com/types/key"
)
const (
errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix")
keyLength = 64
errPreAuthKeyTooShort = Error("key too short, must be 64 hexadecimal characters")
)
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
@@ -27,14 +27,8 @@ func init() {
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().StringP("user", "u", "", "User")
createNodeCmd.Flags().StringP("namespace", "n", "", "User")
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
createNodeNamespaceFlag.Hidden = true
err = createNodeCmd.MarkFlagRequired("user")
createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err = createNodeCmd.MarkFlagRequired("namespace")
if err != nil {
log.Fatal().Err(err).Msg("")
}
@@ -57,13 +51,13 @@ var debugCmd = &cobra.Command{
var createNodeCmd = &cobra.Command{
Use: "create-node",
Short: "Create a node that can be registered with `nodes register <>` command",
Short: "Create a node (machine) that can be registered with `nodes register <>` command",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
@@ -93,13 +87,11 @@ var createNodeCmd = &cobra.Command{
return
}
var mkey key.MachinePublic
err = mkey.UnmarshalText([]byte(machineKey))
if err != nil {
if len(machineKey) != keyLength {
err = errPreAuthKeyTooShort
ErrorOutput(
err,
fmt.Sprintf("Failed to parse machine key from flag: %s", err),
fmt.Sprintf("Error: %s", err),
output,
)
@@ -117,24 +109,24 @@ var createNodeCmd = &cobra.Command{
return
}
request := &v1.DebugCreateNodeRequest{
Key: machineKey,
Name: name,
User: user,
Routes: routes,
request := &v1.DebugCreateMachineRequest{
Key: machineKey,
Name: name,
Namespace: namespace,
Routes: routes,
}
response, err := client.DebugCreateNode(ctx, request)
response, err := client.DebugCreateMachine(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()),
fmt.Sprintf("Cannot create machine: %s", status.Convert(err).Message()),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node created", output)
SuccessOutput(response.Machine, "Machine created", output)
},
}


@@ -1,28 +0,0 @@
package cli
import (
"fmt"
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func init() {
rootCmd.AddCommand(dumpConfigCmd)
}
var dumpConfigCmd = &cobra.Command{
Use: "dumpConfig",
Short: "dump current config to /etc/headscale/config.dump.yaml, integration test only",
Hidden: true,
Args: func(cmd *cobra.Command, args []string) error {
return nil
},
Run: func(cmd *cobra.Command, args []string) {
err := viper.WriteConfigAs("/etc/headscale/config.dump.yaml")
if err != nil {
//nolint
fmt.Println("Failed to dump config")
}
},
}

View File

@@ -1,115 +0,0 @@
package cli
import (
"fmt"
"net"
"os"
"strconv"
"time"
"github.com/oauth2-proxy/mockoidc"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
refreshTTL = 60 * time.Minute
)
var accessTTL = 2 * time.Minute
func init() {
rootCmd.AddCommand(mockOidcCmd)
}
var mockOidcCmd = &cobra.Command{
Use: "mockoidc",
Short: "Runs a mock OIDC server for testing",
Long: "This internal command runs a OpenID Connect for testing purposes",
Run: func(cmd *cobra.Command, args []string) {
err := mockOIDC()
if err != nil {
log.Error().Err(err).Msgf("Error running mock OIDC server")
os.Exit(1)
}
},
}
func mockOIDC() error {
clientID := os.Getenv("MOCKOIDC_CLIENT_ID")
if clientID == "" {
return errMockOidcClientIDNotDefined
}
clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
if clientSecret == "" {
return errMockOidcClientSecretNotDefined
}
addrStr := os.Getenv("MOCKOIDC_ADDR")
if addrStr == "" {
return errMockOidcPortNotDefined
}
portStr := os.Getenv("MOCKOIDC_PORT")
if portStr == "" {
return errMockOidcPortNotDefined
}
accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
if accessTTLOverride != "" {
newTTL, err := time.ParseDuration(accessTTLOverride)
if err != nil {
return err
}
accessTTL = newTTL
}
log.Info().Msgf("Access token TTL: %s", accessTTL)
port, err := strconv.Atoi(portStr)
if err != nil {
return err
}
mock, err := getMockOIDC(clientID, clientSecret)
if err != nil {
return err
}
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", addrStr, port))
if err != nil {
return err
}
err = mock.Start(listener, nil)
if err != nil {
return err
}
log.Info().Msgf("Mock OIDC server listening on %s", listener.Addr().String())
log.Info().Msgf("Issuer: %s", mock.Issuer())
c := make(chan struct{})
<-c
return nil
}
func getMockOIDC(clientID string, clientSecret string) (*mockoidc.MockOIDC, error) {
keypair, err := mockoidc.NewKeypair(nil)
if err != nil {
return nil, err
}
mock := mockoidc.MockOIDC{
ClientID: clientID,
ClientSecret: clientSecret,
AccessTTL: accessTTL,
RefreshTTL: refreshTTL,
CodeChallengeMethodsSupported: []string{"plain", "S256"},
Keypair: keypair,
SessionStore: mockoidc.NewSessionStore(),
UserQueue: &mockoidc.UserQueue{},
ErrorQueue: &mockoidc.ErrorQueue{},
}
return &mock, nil
}


@@ -1,10 +1,10 @@
package cli
import (
"errors"
"fmt"
survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
@@ -13,24 +13,26 @@ import (
)
func init() {
rootCmd.AddCommand(userCmd)
userCmd.AddCommand(createUserCmd)
userCmd.AddCommand(listUsersCmd)
userCmd.AddCommand(destroyUserCmd)
userCmd.AddCommand(renameUserCmd)
rootCmd.AddCommand(namespaceCmd)
namespaceCmd.AddCommand(createNamespaceCmd)
namespaceCmd.AddCommand(listNamespacesCmd)
namespaceCmd.AddCommand(destroyNamespaceCmd)
namespaceCmd.AddCommand(renameNamespaceCmd)
}
var errMissingParameter = errors.New("missing parameters")
const (
errMissingParameter = headscale.Error("missing parameters")
)
var userCmd = &cobra.Command{
Use: "users",
Short: "Manage the users of Headscale",
Aliases: []string{"user", "namespace", "namespaces", "ns"},
var namespaceCmd = &cobra.Command{
Use: "namespaces",
Short: "Manage the namespaces of Headscale",
Aliases: []string{"namespace", "ns", "user", "users"},
}
var createUserCmd = &cobra.Command{
var createNamespaceCmd = &cobra.Command{
Use: "create NAME",
Short: "Creates a new user",
Short: "Creates a new namespace",
Aliases: []string{"c", "new"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
@@ -42,7 +44,7 @@ var createUserCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
userName := args[0]
namespaceName := args[0]
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
@@ -50,15 +52,15 @@ var createUserCmd = &cobra.Command{
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
request := &v1.CreateUserRequest{Name: userName}
request := &v1.CreateNamespaceRequest{Name: namespaceName}
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
response, err := client.CreateUser(ctx, request)
log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
response, err := client.CreateNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot create user: %s",
"Cannot create namespace: %s",
status.Convert(err).Message(),
),
output,
@@ -67,13 +69,13 @@ var createUserCmd = &cobra.Command{
return
}
SuccessOutput(response.GetUser(), "User created", output)
SuccessOutput(response.Namespace, "Namespace created", output)
},
}
var destroyUserCmd = &cobra.Command{
var destroyNamespaceCmd = &cobra.Command{
Use: "destroy NAME",
Short: "Destroys a user",
Short: "Destroys a namespace",
Aliases: []string{"delete"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
@@ -85,17 +87,17 @@ var destroyUserCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
userName := args[0]
namespaceName := args[0]
request := &v1.GetUserRequest{
Name: userName,
request := &v1.GetNamespaceRequest{
Name: namespaceName,
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
_, err := client.GetUser(ctx, request)
_, err := client.GetNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -111,8 +113,8 @@ var destroyUserCmd = &cobra.Command{
if !force {
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the user '%s' and any associated preauthkeys?",
userName,
"Do you want to remove the namespace '%s' and any associated preauthkeys?",
namespaceName,
),
}
err := survey.AskOne(prompt, &confirm)
@@ -122,14 +124,14 @@ var destroyUserCmd = &cobra.Command{
}
if confirm || force {
request := &v1.DeleteUserRequest{Name: userName}
request := &v1.DeleteNamespaceRequest{Name: namespaceName}
response, err := client.DeleteUser(ctx, request)
response, err := client.DeleteNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot destroy user: %s",
"Cannot destroy namespace: %s",
status.Convert(err).Message(),
),
output,
@@ -137,16 +139,16 @@ var destroyUserCmd = &cobra.Command{
return
}
SuccessOutput(response, "User destroyed", output)
SuccessOutput(response, "Namespace destroyed", output)
} else {
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
}
},
}
var listUsersCmd = &cobra.Command{
var listNamespacesCmd = &cobra.Command{
Use: "list",
Short: "List all the users",
Short: "List all the namespaces",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -155,13 +157,13 @@ var listUsersCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.ListUsersRequest{}
request := &v1.ListNamespacesRequest{}
response, err := client.ListUsers(ctx, request)
response, err := client.ListNamespaces(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
output,
)
@@ -169,19 +171,19 @@ var listUsersCmd = &cobra.Command{
}
if output != "" {
SuccessOutput(response.GetUsers(), "", output)
SuccessOutput(response.Namespaces, "", output)
return
}
tableData := pterm.TableData{{"ID", "Name", "Created"}}
for _, user := range response.GetUsers() {
for _, namespace := range response.GetNamespaces() {
tableData = append(
tableData,
[]string{
user.GetId(),
user.GetName(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
namespace.GetId(),
namespace.GetName(),
namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
}
@@ -198,9 +200,9 @@ var listUsersCmd = &cobra.Command{
},
}
var renameUserCmd = &cobra.Command{
var renameNamespaceCmd = &cobra.Command{
Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a user",
Short: "Renames a namespace",
Aliases: []string{"mv"},
Args: func(cmd *cobra.Command, args []string) error {
expectedArguments := 2
@@ -217,17 +219,17 @@ var renameUserCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.RenameUserRequest{
request := &v1.RenameNamespaceRequest{
OldName: args[0],
NewName: args[1],
}
response, err := client.RenameUser(ctx, request)
response, err := client.RenameNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot rename user: %s",
"Cannot rename namespace: %s",
status.Convert(err).Message(),
),
output,
@@ -236,6 +238,6 @@ var renameUserCmd = &cobra.Command{
return
}
SuccessOutput(response.GetUser(), "User renamed", output)
SuccessOutput(response.Namespace, "Namespace renamed", output)
},
}


@@ -3,14 +3,13 @@ package cli
import (
"fmt"
"log"
"net/netip"
"strconv"
"strings"
"time"
survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
@@ -19,24 +18,11 @@ import (
func init() {
rootCmd.AddCommand(nodeCmd)
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
listNodesNamespaceFlag.Hidden = true
listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
nodeCmd.AddCommand(listNodesCmd)
registerNodeCmd.Flags().StringP("user", "u", "", "User")
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
registerNodeNamespaceFlag.Hidden = true
err := registerNodeCmd.MarkFlagRequired("user")
registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err := registerNodeCmd.MarkFlagRequired("namespace")
if err != nil {
log.Fatalf(err.Error())
}
@@ -54,51 +40,12 @@ func init() {
}
nodeCmd.AddCommand(expireNodeCmd)
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = renameNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(renameNodeCmd)
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = deleteNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(deleteNodeCmd)
moveNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = moveNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
moveNodeCmd.Flags().StringP("user", "u", "", "New user")
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
moveNodeNamespaceFlag.Hidden = true
err = moveNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(moveNodeCmd)
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = tagCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
tagCmd.Flags().
StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
nodeCmd.AddCommand(tagCmd)
nodeCmd.AddCommand(backfillNodeIPsCmd)
}
var nodeCmd = &cobra.Command{
@@ -109,12 +56,12 @@ var nodeCmd = &cobra.Command{
var registerNodeCmd = &cobra.Command{
Use: "register",
Short: "Registers a node to your network",
Short: "Registers a machine to your network",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
@@ -127,24 +74,24 @@ var registerNodeCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting node key from flag: %s", err),
fmt.Sprintf("Error getting machine key from flag: %s", err),
output,
)
return
}
request := &v1.RegisterNodeRequest{
Key: machineKey,
User: user,
request := &v1.RegisterMachineRequest{
Key: machineKey,
Namespace: namespace,
}
response, err := client.RegisterNode(ctx, request)
response, err := client.RegisterMachine(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot register node: %s\n",
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
output,
@@ -153,9 +100,7 @@ var registerNodeCmd = &cobra.Command{
return
}
SuccessOutput(
response.GetNode(),
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()), output)
SuccessOutput(response.Machine, "Machine register", output)
},
}
@@ -165,15 +110,9 @@ var listNodesCmd = &cobra.Command{
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
showTags, err := cmd.Flags().GetBool("tags")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
@@ -182,11 +121,11 @@ var listNodesCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.ListNodesRequest{
User: user,
request := &v1.ListMachinesRequest{
Namespace: namespace,
}
response, err := client.ListNodes(ctx, request)
response, err := client.ListMachines(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -198,12 +137,12 @@ var listNodesCmd = &cobra.Command{
}
if output != "" {
SuccessOutput(response.GetNodes(), "", output)
SuccessOutput(response.Machines, "", output)
return
}
tableData, err := nodesToPtables(user, showTags, response.GetNodes())
tableData, err := nodesToPtables(namespace, response.Machines)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -225,7 +164,7 @@ var listNodesCmd = &cobra.Command{
var expireNodeCmd = &cobra.Command{
Use: "expire",
Short: "Expire (log out) a node in your network",
Short: "Expire (log out) a machine in your network",
Long: "Expiring a node will keep the node in the database and force it to reauthenticate.",
Aliases: []string{"logout", "exp", "e"},
Run: func(cmd *cobra.Command, args []string) {
@@ -246,16 +185,16 @@ var expireNodeCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.ExpireNodeRequest{
NodeId: identifier,
request := &v1.ExpireMachineRequest{
MachineId: identifier,
}
response, err := client.ExpireNode(ctx, request)
response, err := client.ExpireMachine(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot expire node: %s\n",
"Cannot expire machine: %s\n",
status.Convert(err).Message(),
),
output,
@@ -264,55 +203,7 @@ var expireNodeCmd = &cobra.Command{
return
}
SuccessOutput(response.GetNode(), "Node expired", output)
},
}
var renameNodeCmd = &cobra.Command{
Use: "rename NEW_NAME",
Short: "Renames a node in your network",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
newName := ""
if len(args) > 0 {
newName = args[0]
}
request := &v1.RenameNodeRequest{
NodeId: identifier,
NewName: newName,
}
response, err := client.RenameNode(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot rename node: %s\n",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node renamed", output)
SuccessOutput(response.Machine, "Machine expired", output)
},
}
@@ -338,11 +229,11 @@ var deleteNodeCmd = &cobra.Command{
defer cancel()
defer conn.Close()
getRequest := &v1.GetNodeRequest{
NodeId: identifier,
getRequest := &v1.GetMachineRequest{
MachineId: identifier,
}
getResponse, err := client.GetNode(ctx, getRequest)
getResponse, err := client.GetMachine(ctx, getRequest)
if err != nil {
ErrorOutput(
err,
@@ -356,8 +247,8 @@ var deleteNodeCmd = &cobra.Command{
return
}
deleteRequest := &v1.DeleteNodeRequest{
NodeId: identifier,
deleteRequest := &v1.DeleteMachineRequest{
MachineId: identifier,
}
confirm := false
@@ -366,7 +257,7 @@ var deleteNodeCmd = &cobra.Command{
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetNode().GetName(),
getResponse.GetMachine().Name,
),
}
err = survey.AskOne(prompt, &confirm)
@@ -376,7 +267,7 @@ var deleteNodeCmd = &cobra.Command{
}
if confirm || force {
response, err := client.DeleteNode(ctx, deleteRequest)
response, err := client.DeleteMachine(ctx, deleteRequest)
if output != "" {
SuccessOutput(response, "", output)
@@ -405,199 +296,54 @@ var deleteNodeCmd = &cobra.Command{
},
}
var moveNodeCmd = &cobra.Command{
Use: "move",
Short: "Move node to another user",
Aliases: []string{"mv"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting user: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
getRequest := &v1.GetNodeRequest{
NodeId: identifier,
}
_, err = client.GetNode(ctx, getRequest)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Error getting node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
moveRequest := &v1.MoveNodeRequest{
NodeId: identifier,
User: user,
}
moveResponse, err := client.MoveNode(ctx, moveRequest)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Error moving node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output)
},
}
var backfillNodeIPsCmd = &cobra.Command{
Use: "backfillips",
Short: "Backfill IPs missing from nodes",
Long: `
Backfill IPs can be used to add/remove IPs from nodes
based on the current configuration of Headscale.
If there are nodes that do not have an IPv4 or IPv6 address
even though prefixes for both are configured in the config,
this command can be used to assign the missing addresses to
all affected nodes.
If you remove IPv4 or IPv6 prefixes from the config,
it can be run to remove the IPs that should no longer
be assigned to nodes.`,
Run: func(cmd *cobra.Command, args []string) {
var err error
output, _ := cmd.Flags().GetString("output")
confirm := false
prompt := &survey.Confirm{
Message: "Are you sure that you want to assign/remove IPs to/from nodes?",
}
err = survey.AskOne(prompt, &confirm)
if err != nil {
return
}
if confirm {
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Error backfilling IPs: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(changes, "Node IPs backfilled successfully", output)
}
},
}
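A minimal usage sketch for the backfillips command defined above. It assumes the command is reached through the nodes subcommand it is registered under in the init function (the exact parent command name is an assumption), and the command prompts for confirmation before assigning or removing any addresses:
    headscale nodes backfillips
    headscale nodes backfillips --output json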
func nodesToPtables(
currentUser string,
showTags bool,
nodes []*v1.Node,
currentNamespace string,
machines []*v1.Machine,
) (pterm.TableData, error) {
tableHeader := []string{
"ID",
"Hostname",
"Name",
"MachineKey",
"NodeKey",
"User",
"IP addresses",
"Ephemeral",
"Last seen",
"Expiration",
"Connected",
"Expired",
tableData := pterm.TableData{
{
"ID",
"Name",
"NodeKey",
"Namespace",
"IP addresses",
"Ephemeral",
"Last seen",
"Online",
"Expired",
},
}
if showTags {
tableHeader = append(tableHeader, []string{
"ForcedTags",
"InvalidTags",
"ValidTags",
}...)
}
tableData := pterm.TableData{tableHeader}
for _, node := range nodes {
for _, machine := range machines {
var ephemeral bool
if node.GetPreAuthKey() != nil && node.GetPreAuthKey().GetEphemeral() {
if machine.PreAuthKey != nil && machine.PreAuthKey.Ephemeral {
ephemeral = true
}
var lastSeen time.Time
var lastSeenTime string
if node.GetLastSeen() != nil {
lastSeen = node.GetLastSeen().AsTime()
if machine.LastSeen != nil {
lastSeen = machine.LastSeen.AsTime()
lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
}
var expiry time.Time
var expiryTime string
if node.GetExpiry() != nil {
expiry = node.GetExpiry().AsTime()
expiryTime = expiry.Format("2006-01-02 15:04:05")
} else {
expiryTime = "N/A"
}
var machineKey key.MachinePublic
err := machineKey.UnmarshalText(
[]byte(node.GetMachineKey()),
)
if err != nil {
machineKey = key.MachinePublic{}
if machine.Expiry != nil {
expiry = machine.Expiry.AsTime()
}
var nodeKey key.NodePublic
err = nodeKey.UnmarshalText(
[]byte(node.GetNodeKey()),
err := nodeKey.UnmarshalText(
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
)
if err != nil {
return nil, err
}
var online string
if node.GetOnline() {
if lastSeen.After(
time.Now().Add(-5 * time.Minute),
) { // TODO: Find a better way to reliably show if online
online = pterm.LightGreen("online")
} else {
online = pterm.LightRed("offline")
@@ -610,124 +356,28 @@ func nodesToPtables(
expired = pterm.LightRed("yes")
}
var forcedTags string
for _, tag := range node.GetForcedTags() {
forcedTags += "," + tag
}
forcedTags = strings.TrimLeft(forcedTags, ",")
var invalidTags string
for _, tag := range node.GetInvalidTags() {
if !contains(node.GetForcedTags(), tag) {
invalidTags += "," + pterm.LightRed(tag)
}
}
invalidTags = strings.TrimLeft(invalidTags, ",")
var validTags string
for _, tag := range node.GetValidTags() {
if !contains(node.GetForcedTags(), tag) {
validTags += "," + pterm.LightGreen(tag)
}
}
validTags = strings.TrimLeft(validTags, ",")
var user string
if currentUser == "" || (currentUser == node.GetUser().GetName()) {
user = pterm.LightMagenta(node.GetUser().GetName())
var namespace string
if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
namespace = pterm.LightMagenta(machine.Namespace.Name)
} else {
// Shared into this user
user = pterm.LightYellow(node.GetUser().GetName())
}
var IPV4Address string
var IPV6Address string
for _, addr := range node.GetIpAddresses() {
if netip.MustParseAddr(addr).Is4() {
IPV4Address = addr
} else {
IPV6Address = addr
}
}
nodeData := []string{
strconv.FormatUint(node.GetId(), util.Base10),
node.GetName(),
node.GetGivenName(),
machineKey.ShortString(),
nodeKey.ShortString(),
user,
strings.Join([]string{IPV4Address, IPV6Address}, ", "),
strconv.FormatBool(ephemeral),
lastSeenTime,
expiryTime,
online,
expired,
}
if showTags {
nodeData = append(nodeData, []string{forcedTags, invalidTags, validTags}...)
// Shared into this namespace
namespace = pterm.LightYellow(machine.Namespace.Name)
}
tableData = append(
tableData,
nodeData,
[]string{
strconv.FormatUint(machine.Id, headscale.Base10),
machine.Name,
nodeKey.ShortString(),
namespace,
strings.Join(machine.IpAddresses, ", "),
strconv.FormatBool(ephemeral),
lastSeenTime,
online,
expired,
},
)
}
return tableData, nil
}
var tagCmd = &cobra.Command{
Use: "tag",
Short: "Manage the tags of a node",
Aliases: []string{"tags", "t"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
// retrieve flags from CLI
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
tagsToSet, err := cmd.Flags().GetStringSlice("tags")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error retrieving list of tags to add to node, %v", err),
output,
)
return
}
// Sending tags to node
request := &v1.SetTagsRequest{
NodeId: identifier,
Tags: tagsToSet,
}
resp, err := client.SetTags(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error while sending tags to headscale: %s", err),
output,
)
return
}
if resp != nil {
SuccessOutput(
resp.GetNode(),
"Node updated",
output,
)
}
},
}

View File

@@ -3,11 +3,9 @@ package cli
import (
"fmt"
"strconv"
"strings"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/prometheus/common/model"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
@@ -15,19 +13,13 @@ import (
)
const (
DefaultPreAuthKeyExpiry = "1h"
DefaultPreAuthKeyExpiry = 1 * time.Hour
)
func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
pakNamespaceFlag.Hidden = true
err := preauthkeysCmd.MarkPersistentFlagRequired("user")
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
if err != nil {
log.Fatal().Err(err).Msg("")
}
@@ -39,9 +31,7 @@ func init() {
createPreAuthKeyCmd.PersistentFlags().
Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
createPreAuthKeyCmd.Flags().
StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
createPreAuthKeyCmd.Flags().
StringSlice("tags", []string{}, "Tags to automatically assign to node")
DurationP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
}
var preauthkeysCmd = &cobra.Command{
@@ -52,14 +42,14 @@ var preauthkeysCmd = &cobra.Command{
var listPreAuthKeys = &cobra.Command{
Use: "list",
Short: "List the preauthkeys for this user",
Short: "List the preauthkeys for this namespace",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
@@ -69,7 +59,7 @@ var listPreAuthKeys = &cobra.Command{
defer conn.Close()
request := &v1.ListPreAuthKeysRequest{
User: user,
Namespace: namespace,
}
response, err := client.ListPreAuthKeys(ctx, request)
@@ -84,46 +74,35 @@ var listPreAuthKeys = &cobra.Command{
}
if output != "" {
SuccessOutput(response.GetPreAuthKeys(), "", output)
SuccessOutput(response.PreAuthKeys, "", output)
return
}
tableData := pterm.TableData{
{
"ID",
"Key",
"Reusable",
"Ephemeral",
"Used",
"Expiration",
"Created",
"Tags",
},
{"ID", "Key", "Reusable", "Ephemeral", "Used", "Expiration", "Created"},
}
for _, key := range response.GetPreAuthKeys() {
for _, key := range response.PreAuthKeys {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
expiration = ColourTime(key.Expiration.AsTime())
}
aclTags := ""
for _, tag := range key.GetAclTags() {
aclTags += "," + tag
var reusable string
if key.GetEphemeral() {
reusable = "N/A"
} else {
reusable = fmt.Sprintf("%v", key.GetReusable())
}
aclTags = strings.TrimLeft(aclTags, ",")
tableData = append(tableData, []string{
key.GetId(),
key.GetKey(),
strconv.FormatBool(key.GetReusable()),
reusable,
strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()),
expiration,
key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
aclTags,
})
}
@@ -142,53 +121,37 @@ var listPreAuthKeys = &cobra.Command{
var createPreAuthKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new preauthkey in the specified user",
Short: "Creates a new preauthkey in the specified namespace",
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
tags, _ := cmd.Flags().GetStringSlice("tags")
log.Trace().
Bool("reusable", reusable).
Bool("ephemeral", ephemeral).
Str("user", user).
Str("namespace", namespace).
Msg("Preparing to create preauthkey")
request := &v1.CreatePreAuthKeyRequest{
User: user,
Namespace: namespace,
Reusable: reusable,
Ephemeral: ephemeral,
AclTags: tags,
}
durationStr, _ := cmd.Flags().GetString("expiration")
duration, _ := cmd.Flags().GetDuration("expiration")
expiration := time.Now().UTC().Add(duration)
duration, err := model.ParseDuration(durationStr)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Could not parse duration: %s\n", err),
output,
)
return
}
expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace().
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
log.Trace().Dur("expiration", duration).Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration)
@@ -207,7 +170,7 @@ var createPreAuthKeyCmd = &cobra.Command{
return
}
SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
SuccessOutput(response.PreAuthKey, response.PreAuthKey.Key, output)
},
}
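A hedged example of the create command above; the parent command name "preauthkeys" is assumed from the variable name, the selector flag is either --namespace or --user depending on which side of this diff is in use, and "default" is only an illustrative name. The expiration value works for both variants, whether parsed as a duration flag or via model.ParseDuration:
    headscale preauthkeys create --namespace default --reusable --expiration 24h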
@@ -224,9 +187,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
@@ -236,8 +199,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
defer conn.Close()
request := &v1.ExpirePreAuthKeyRequest{
User: user,
Key: args[0],
Namespace: namespace,
Key: args[0],
}
response, err := client.ExpirePreAuthKey(ctx, request)

View File

@@ -3,91 +3,17 @@ package cli
import (
"fmt"
"os"
"runtime"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"github.com/tcnksm/go-latest"
)
const (
deprecateNamespaceMessage = "use --user"
)
var cfgFile string = ""
func init() {
if len(os.Args) > 1 &&
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
return
}
cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().
StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
rootCmd.PersistentFlags().
StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
rootCmd.PersistentFlags().
Bool("force", false, "Disable prompts and forces the execution")
}
func initConfig() {
if cfgFile == "" {
cfgFile = os.Getenv("HEADSCALE_CONFIG")
}
if cfgFile != "" {
err := types.LoadConfig(cfgFile, true)
if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile)
}
} else {
err := types.LoadConfig("", false)
if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config")
}
}
cfg, err := types.GetHeadscaleConfig()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Failed to get headscale configuration")
}
machineOutput := HasMachineOutputFlag()
zerolog.SetGlobalLevel(cfg.Log.Level)
// If the user has requested a "node" readable format,
// then disable login so the output remains valid.
if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled)
}
if cfg.Log.Format == types.JSONLogFormat {
log.Logger = log.Output(os.Stdout)
}
if !cfg.DisableUpdateCheck && !machineOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
Version != "dev" {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
}
res, err := latest.Check(githubTag, Version)
if err == nil && res.Outdated {
//nolint
log.Warn().Msgf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
Version,
)
}
}
}
}
var rootCmd = &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",

View File

@@ -3,45 +3,35 @@ package cli
import (
"fmt"
"log"
"net/netip"
"strconv"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
const (
Base10 = 10
)
func init() {
rootCmd.AddCommand(routesCmd)
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err := listRoutesCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(listRoutesCmd)
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err := enableRouteCmd.MarkFlagRequired("route")
enableRouteCmd.Flags().
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = enableRouteCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(enableRouteCmd)
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = disableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(disableRouteCmd)
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = deleteRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(deleteRouteCmd)
nodeCmd.AddCommand(routesCmd)
}
var routesCmd = &cobra.Command{
@@ -52,7 +42,7 @@ var routesCmd = &cobra.Command{
var listRoutesCmd = &cobra.Command{
Use: "list",
Short: "List all routes",
Short: "List routes advertised and enabled by a given node",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -72,51 +62,28 @@ var listRoutesCmd = &cobra.Command{
defer cancel()
defer conn.Close()
var routes []*v1.Route
if machineID == 0 {
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetRoutes(), "", output)
return
}
routes = response.GetRoutes()
} else {
response, err := client.GetNodeRoutes(ctx, &v1.GetNodeRoutesRequest{
NodeId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get routes for node %d: %s", machineID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetRoutes(), "", output)
return
}
routes = response.GetRoutes()
request := &v1.GetMachineRouteRequest{
MachineId: machineID,
}
tableData := routesToPtables(routes)
response, err := client.GetMachineRoute(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
tableData := routesToPtables(response.Routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -138,12 +105,16 @@ var listRoutesCmd = &cobra.Command{
var enableRouteCmd = &cobra.Command{
Use: "enable",
Short: "Set a route as enabled",
Long: `This command will mark a given route as enabled.`,
Short: "Set the enabled routes for a given node",
Long: `This command will take a list of routes that will _replace_
the current set of routes on a given node.
If you would like to disable a route, simply run the command again, but
omit the route you do not want to enable.
`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
machineID, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
@@ -154,43 +125,11 @@ var enableRouteCmd = &cobra.Command{
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
RouteId: routeID,
})
routes, err := cmd.Flags().GetStringSlice("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
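A usage sketch for the enable command following the replace-semantics description above. The -i/--identifier and -r/--route flags come from the init function; the parent command path (headscale routes ... versus headscale nodes routes ...) depends on which side of this diff registered routesCmd, so treat it as an assumption:
    headscale routes enable -i 3 -r 10.0.0.0/24 -r 192.168.1.0/24
    headscale routes enable -i 3 -r 10.0.0.0/24   # run again without a route to disable it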
var disableRouteCmd = &cobra.Command{
Use: "disable",
Short: "Set as disabled a given route",
Long: `This command will make as disabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
@@ -201,13 +140,19 @@ var disableRouteCmd = &cobra.Command{
defer cancel()
defer conn.Close()
response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
RouteId: routeID,
})
request := &v1.EnableMachineRoutesRequest{
MachineId: machineID,
Routes: routes,
}
response, err := client.EnableMachineRoutes(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
fmt.Sprintf(
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
output,
)
@@ -215,84 +160,50 @@ var disableRouteCmd = &cobra.Command{
}
if output != "" {
SuccessOutput(response, "", output)
SuccessOutput(response.Routes, "", output)
return
}
},
}
var deleteRouteCmd = &cobra.Command{
Use: "delete",
Short: "Delete a given route",
Long: `This command will delete a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
tableData := routesToPtables(response.Routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
routeID, err := cmd.Flags().GetUint64("route")
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
// routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes []*v1.Route) pterm.TableData {
tableData := pterm.TableData{{"ID", "Node", "Prefix", "Advertised", "Enabled", "Primary"}}
func routesToPtables(routes *v1.Routes) pterm.TableData {
tableData := pterm.TableData{{"Route", "Enabled"}}
for _, route := range routes {
var isPrimaryStr string
prefix, err := netip.ParsePrefix(route.GetPrefix())
if err != nil {
log.Printf("Error parsing prefix %s: %s", route.GetPrefix(), err)
for _, route := range routes.GetAdvertisedRoutes() {
enabled := isStringInSlice(routes.EnabledRoutes, route)
continue
}
if prefix == types.ExitRouteV4 || prefix == types.ExitRouteV6 {
isPrimaryStr = "-"
} else {
isPrimaryStr = strconv.FormatBool(route.GetIsPrimary())
}
tableData = append(tableData,
[]string{
strconv.FormatUint(route.GetId(), Base10),
route.GetNode().GetGivenName(),
route.GetPrefix(),
strconv.FormatBool(route.GetAdvertised()),
strconv.FormatBool(route.GetEnabled()),
isPrimaryStr,
})
tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
}
return tableData
}
func isStringInSlice(strs []string, s string) bool {
for _, s2 := range strs {
if s == s2 {
return true
}
}
return false
}

View File

@@ -16,12 +16,12 @@ var serveCmd = &cobra.Command{
return nil
},
Run: func(cmd *cobra.Command, args []string) {
app, err := getHeadscaleApp()
h, err := getHeadscaleApp()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
err = app.Serve()
err = h.Serve()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error starting server")
}

View File

@@ -4,68 +4,422 @@ import (
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io/fs"
"net/url"
"os"
"reflect"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v3"
"gopkg.in/yaml.v2"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
const (
PermissionFallback = 0o700
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
SocketWritePermissions = 0o666
)
func getHeadscaleApp() (*hscontrol.Headscale, error) {
cfg, err := types.GetHeadscaleConfig()
if err != nil {
return nil, fmt.Errorf(
"failed to load configuration while creating headscale instance: %w",
err,
)
func LoadConfig(path string) error {
viper.SetConfigName("config")
if path == "" {
viper.AddConfigPath("/etc/headscale/")
viper.AddConfigPath("$HOME/.headscale")
viper.AddConfigPath(".")
} else {
// For testing
viper.AddConfigPath(path)
}
app, err := hscontrol.NewHeadscale(cfg)
viper.SetEnvPrefix("headscale")
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
viper.AutomaticEnv()
viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
viper.SetDefault("tls_letsencrypt_challenge_type", "HTTP-01")
viper.SetDefault("tls_client_auth_mode", "relaxed")
viper.SetDefault("log_level", "info")
viper.SetDefault("dns_config", nil)
viper.SetDefault("derp.server.enabled", false)
viper.SetDefault("derp.server.stun.enabled", true)
viper.SetDefault("unix_socket", "/var/run/headscale.sock")
viper.SetDefault("unix_socket_permission", "0o770")
viper.SetDefault("grpc_listen_addr", ":50443")
viper.SetDefault("grpc_allow_insecure", false)
viper.SetDefault("cli.timeout", "5s")
viper.SetDefault("cli.insecure", false)
viper.SetDefault("oidc.strip_email_domain", true)
if err := viper.ReadInConfig(); err != nil {
return fmt.Errorf("fatal error reading config file: %w", err)
}
// Collect any validation errors and return them all at once
var errorText string
if (viper.GetString("tls_letsencrypt_hostname") != "") &&
((viper.GetString("tls_cert_path") != "") || (viper.GetString("tls_key_path") != "")) {
errorText += "Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both\n"
}
if (viper.GetString("tls_letsencrypt_hostname") != "") &&
(viper.GetString("tls_letsencrypt_challenge_type") == "TLS-ALPN-01") &&
(!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) {
// this is only a warning because there could be something sitting in front of headscale that redirects the traffic (e.g. an iptables rule)
log.Warn().
Msg("Warning: when using tls_letsencrypt_hostname with TLS-ALPN-01 as challenge type, headscale must be reachable on port 443, i.e. listen_addr should probably end in :443")
}
if (viper.GetString("tls_letsencrypt_challenge_type") != "HTTP-01") &&
(viper.GetString("tls_letsencrypt_challenge_type") != "TLS-ALPN-01") {
errorText += "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are HTTP-01 and TLS-ALPN-01\n"
}
if !strings.HasPrefix(viper.GetString("server_url"), "http://") &&
!strings.HasPrefix(viper.GetString("server_url"), "https://") {
errorText += "Fatal config error: server_url must start with https:// or http://\n"
}
_, authModeValid := headscale.LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)
if !authModeValid {
errorText += fmt.Sprintf(
"Invalid tls_client_auth_mode supplied: %s. Accepted values: %s, %s, %s.",
viper.GetString("tls_client_auth_mode"),
headscale.DisabledClientAuth,
headscale.RelaxedClientAuth,
headscale.EnforcedClientAuth)
}
if errorText != "" {
//nolint
return errors.New(strings.TrimSuffix(errorText, "\n"))
} else {
return nil
}
}
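Because LoadConfig sets the env prefix to "headscale", replaces "." with "_" in key names and calls AutomaticEnv, any configuration key can be overridden from the environment. A sketch using keys that appear in the defaults above:
    HEADSCALE_LOG_LEVEL=debug HEADSCALE_CLI_TIMEOUT=10s headscale serve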
func GetDERPConfig() headscale.DERPConfig {
serverEnabled := viper.GetBool("derp.server.enabled")
serverRegionID := viper.GetInt("derp.server.region_id")
serverRegionCode := viper.GetString("derp.server.region_code")
serverRegionName := viper.GetString("derp.server.region_name")
stunAddr := viper.GetString("derp.server.stun_listen_addr")
if serverEnabled && stunAddr == "" {
log.Fatal().Msg("derp.server.stun_listen_addr must be set if derp.server.enabled is true")
}
urlStrs := viper.GetStringSlice("derp.urls")
urls := make([]url.URL, len(urlStrs))
for index, urlStr := range urlStrs {
urlAddr, err := url.Parse(urlStr)
if err != nil {
log.Error().
Str("url", urlStr).
Err(err).
Msg("Failed to parse url, ignoring...")
}
urls[index] = *urlAddr
}
paths := viper.GetStringSlice("derp.paths")
autoUpdate := viper.GetBool("derp.auto_update_enabled")
updateFrequency := viper.GetDuration("derp.update_frequency")
return headscale.DERPConfig{
ServerEnabled: serverEnabled,
ServerRegionID: serverRegionID,
ServerRegionCode: serverRegionCode,
ServerRegionName: serverRegionName,
STUNAddr: stunAddr,
URLs: urls,
Paths: paths,
AutoUpdate: autoUpdate,
UpdateFrequency: updateFrequency,
}
}
func GetDNSConfig() (*tailcfg.DNSConfig, string) {
if viper.IsSet("dns_config") {
dnsConfig := &tailcfg.DNSConfig{}
if viper.IsSet("dns_config.nameservers") {
nameserversStr := viper.GetStringSlice("dns_config.nameservers")
nameservers := make([]netaddr.IP, len(nameserversStr))
resolvers := make([]dnstype.Resolver, len(nameserversStr))
for index, nameserverStr := range nameserversStr {
nameserver, err := netaddr.ParseIP(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse nameserver IP: %s", nameserverStr)
}
nameservers[index] = nameserver
resolvers[index] = dnstype.Resolver{
Addr: nameserver.String(),
}
}
dnsConfig.Nameservers = nameservers
dnsConfig.Resolvers = resolvers
}
if viper.IsSet("dns_config.restricted_nameservers") {
if len(dnsConfig.Nameservers) > 0 {
dnsConfig.Routes = make(map[string][]dnstype.Resolver)
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]dnstype.Resolver,
len(restrictedNameservers),
)
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netaddr.ParseIP(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = dnstype.Resolver{
Addr: nameserver.String(),
}
}
dnsConfig.Routes[domain] = restrictedResolvers
}
} else {
log.Warn().
Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
}
}
if viper.IsSet("dns_config.domains") {
dnsConfig.Domains = viper.GetStringSlice("dns_config.domains")
}
if viper.IsSet("dns_config.magic_dns") {
magicDNS := viper.GetBool("dns_config.magic_dns")
if len(dnsConfig.Nameservers) > 0 {
dnsConfig.Proxied = magicDNS
} else if magicDNS {
log.Warn().
Msg("Warning: dns_config.magic_dns is set, but no nameservers are configured. Ignoring magic_dns.")
}
}
var baseDomain string
if viper.IsSet("dns_config.base_domain") {
baseDomain = viper.GetString("dns_config.base_domain")
} else {
baseDomain = "headscale.net" // does not really matter when MagicDNS is not enabled
}
return dnsConfig, baseDomain
}
return nil, ""
}
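A sketch of a dns_config block exercising the keys read by GetDNSConfig above; the addresses and domain names are purely illustrative:
    dns_config:
      nameservers:
        - 1.1.1.1
      restricted_nameservers:
        internal.example.com:
          - 10.0.0.53
      domains: []
      magic_dns: true
      base_domain: example.com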
func absPath(path string) string {
// If a relative path is provided, prefix it with the directory where
// the config file was found.
if (path != "") && !strings.HasPrefix(path, string(os.PathSeparator)) {
dir, _ := filepath.Split(viper.ConfigFileUsed())
if dir != "" {
path = filepath.Join(dir, path)
}
}
return path
}
func getHeadscaleConfig() headscale.Config {
dnsConfig, baseDomain := GetDNSConfig()
derpConfig := GetDERPConfig()
configuredPrefixes := viper.GetStringSlice("ip_prefixes")
parsedPrefixes := make([]netaddr.IPPrefix, 0, len(configuredPrefixes)+1)
legacyPrefixField := viper.GetString("ip_prefix")
if len(legacyPrefixField) > 0 {
log.
Warn().
Msgf(
"%s, %s",
"use of 'ip_prefix' for configuration is deprecated",
"please see 'ip_prefixes' in the shipped example.",
)
legacyPrefix, err := netaddr.ParseIPPrefix(legacyPrefixField)
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefix: %w", err))
}
parsedPrefixes = append(parsedPrefixes, legacyPrefix)
}
for i, prefixInConfig := range configuredPrefixes {
prefix, err := netaddr.ParseIPPrefix(prefixInConfig)
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
}
parsedPrefixes = append(parsedPrefixes, prefix)
}
prefixes := make([]netaddr.IPPrefix, 0, len(parsedPrefixes))
{
// dedup
normalizedPrefixes := make(map[string]int, len(parsedPrefixes))
for i, p := range parsedPrefixes {
normalized, _ := p.Range().Prefix()
normalizedPrefixes[normalized.String()] = i
}
// convert back to list
for _, i := range normalizedPrefixes {
prefixes = append(prefixes, parsedPrefixes[i])
}
}
if len(prefixes) < 1 {
prefixes = append(prefixes, netaddr.MustParseIPPrefix("100.64.0.0/10"))
log.Warn().
Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
}
tlsClientAuthMode, _ := headscale.LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)
return headscale.Config{
ServerURL: viper.GetString("server_url"),
Addr: viper.GetString("listen_addr"),
MetricsAddr: viper.GetString("metrics_listen_addr"),
GRPCAddr: viper.GetString("grpc_listen_addr"),
GRPCAllowInsecure: viper.GetBool("grpc_allow_insecure"),
IPPrefixes: prefixes,
PrivateKeyPath: absPath(viper.GetString("private_key_path")),
BaseDomain: baseDomain,
DERP: derpConfig,
EphemeralNodeInactivityTimeout: viper.GetDuration(
"ephemeral_node_inactivity_timeout",
),
DBtype: viper.GetString("db_type"),
DBpath: absPath(viper.GetString("db_path")),
DBhost: viper.GetString("db_host"),
DBport: viper.GetInt("db_port"),
DBname: viper.GetString("db_name"),
DBuser: viper.GetString("db_user"),
DBpass: viper.GetString("db_pass"),
TLSLetsEncryptHostname: viper.GetString("tls_letsencrypt_hostname"),
TLSLetsEncryptListen: viper.GetString("tls_letsencrypt_listen"),
TLSLetsEncryptCacheDir: absPath(
viper.GetString("tls_letsencrypt_cache_dir"),
),
TLSLetsEncryptChallengeType: viper.GetString("tls_letsencrypt_challenge_type"),
TLSCertPath: absPath(viper.GetString("tls_cert_path")),
TLSKeyPath: absPath(viper.GetString("tls_key_path")),
TLSClientAuthMode: tlsClientAuthMode,
DNSConfig: dnsConfig,
ACMEEmail: viper.GetString("acme_email"),
ACMEURL: viper.GetString("acme_url"),
UnixSocket: viper.GetString("unix_socket"),
UnixSocketPermission: GetFileMode("unix_socket_permission"),
OIDC: headscale.OIDCConfig{
Issuer: viper.GetString("oidc.issuer"),
ClientID: viper.GetString("oidc.client_id"),
ClientSecret: viper.GetString("oidc.client_secret"),
StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
},
CLI: headscale.CLIConfig{
Address: viper.GetString("cli.address"),
APIKey: viper.GetString("cli.api_key"),
Timeout: viper.GetDuration("cli.timeout"),
Insecure: viper.GetBool("cli.insecure"),
},
}
}
func getHeadscaleApp() (*headscale.Headscale, error) {
// Minimum inactivity timeout is the keepalive timeout (60s) plus a few seconds
// to avoid races
minInactivityTimeout, _ := time.ParseDuration("65s")
if viper.GetDuration("ephemeral_node_inactivity_timeout") <= minInactivityTimeout {
// TODO: Find a better way to return this text
//nolint
err := fmt.Errorf(
"ephemeral_node_inactivity_timeout (%s) is set too low, must be more than %s",
viper.GetString("ephemeral_node_inactivity_timeout"),
minInactivityTimeout,
)
return nil, err
}
cfg := getHeadscaleConfig()
app, err := headscale.NewHeadscale(cfg)
if err != nil {
return nil, err
}
// We are doing this here, as in the future it could be useful to also hot-reload the policy
if cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath)
if viper.GetString("acl_policy_path") != "" {
aclPath := absPath(viper.GetString("acl_policy_path"))
err = app.LoadACLPolicy(aclPath)
if err != nil {
log.Fatal().
Str("path", aclPath).
Err(err).
Msg("Could not load the ACL policy")
}
app.ACLPolicy = pol
}
return app, nil
}
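Note that the check at the top of getHeadscaleApp requires ephemeral_node_inactivity_timeout to be larger than roughly 65 seconds (keepalive plus slack), so a config value such as the one shipped in the example is valid:
    ephemeral_node_inactivity_timeout: 30m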
func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg, err := types.GetHeadscaleConfig()
if err != nil {
log.Fatal().
Err(err).
Caller().
Msgf("Failed to load configuration")
os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
}
cfg := getHeadscaleConfig()
log.Debug().
Dur("timeout", cfg.CLI.Timeout).
@@ -79,7 +433,7 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
address := cfg.CLI.Address
// If the address is not set, we assume that we are on the server hosting hscontrol.
// If the address is not set, we assume that we are on the server hosting headscale.
if address == "" {
log.Debug().
Str("socket", cfg.UnixSocket).
@@ -87,23 +441,10 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
address = cfg.UnixSocket
// Try to give the user better feedback if we cannot write to the headscale
// socket.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
if err != nil {
if os.IsPermission(err) {
log.Fatal().
Err(err).
Str("socket", cfg.UnixSocket).
Msgf("Unable to read/write to headscale socket, do you have the correct permissions?")
}
}
socket.Close()
grpcOptions = append(
grpcOptions,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer),
grpc.WithContextDialer(headscale.GrpcSocketDialer),
)
} else {
// If we are not connecting to a local server, require an API key for authentication
@@ -139,7 +480,6 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
conn, err := grpc.DialContext(ctx, address, grpcOptions...)
if err != nil {
log.Fatal().Caller().Err(err).Msgf("Could not connect: %v", err)
os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
}
client := v1.NewHeadscaleServiceClient(conn)
@@ -154,17 +494,17 @@ func SuccessOutput(result interface{}, override string, outputFormat string) {
case "json":
jsonBytes, err = json.MarshalIndent(result, "", "\t")
if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output")
log.Fatal().Err(err)
}
case "json-line":
jsonBytes, err = json.Marshal(result)
if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output")
log.Fatal().Err(err)
}
case "yaml":
jsonBytes, err = yaml.Marshal(result)
if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output")
log.Fatal().Err(err)
}
default:
//nolint
@@ -213,12 +553,13 @@ func (tokenAuth) RequireTransportSecurity() bool {
return true
}
func contains[T string](ts []T, t T) bool {
for _, v := range ts {
if reflect.DeepEqual(v, t) {
return true
}
func GetFileMode(key string) fs.FileMode {
modeStr := viper.GetString(key)
mode, err := strconv.ParseUint(modeStr, headscale.Base8, headscale.BitSize64)
if err != nil {
return PermissionFallback
}
return false
return fs.FileMode(mode)
}
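GetFileMode parses the value as an octal string and falls back to 0o700 on any parse error, so the shipped default behaves as in the sketch below (matching the assertion in the config test further down):
    unix_socket_permission: "0770"   # parsed base-8 -> fs.FileMode(0o770)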

View File

@@ -1,13 +1,17 @@
package main
import (
"fmt"
"os"
"runtime"
"time"
"github.com/jagottsicher/termcolor"
"github.com/efekarakus/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
"github.com/tcnksm/go-latest"
)
func main() {
@@ -34,10 +38,49 @@ func main() {
zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
log.Logger = log.Output(zerolog.ConsoleWriter{
Out: os.Stderr,
Out: os.Stdout,
TimeFormat: time.RFC3339,
NoColor: !colors,
})
if err := cli.LoadConfig(""); err != nil {
log.Fatal().Caller().Err(err)
}
machineOutput := cli.HasMachineOutputFlag()
logLevel := viper.GetString("log_level")
level, err := zerolog.ParseLevel(logLevel)
if err != nil {
zerolog.SetGlobalLevel(zerolog.DebugLevel)
} else {
zerolog.SetGlobalLevel(level)
}
// If the user has requested a "machine" readable format,
// then disable logging so the output remains valid.
if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled)
}
if !viper.GetBool("disable_check_updates") && !machineOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
cli.Version != "dev" {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
}
res, err := latest.Check(githubTag, cli.Version)
if err == nil && res.Outdated {
//nolint
fmt.Printf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
cli.Version,
)
}
}
}
cli.Execute()
}

View File

@@ -2,13 +2,13 @@ package main
import (
"io/fs"
"io/ioutil"
"os"
"path/filepath"
"strings"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/spf13/viper"
"gopkg.in/check.v1"
)
@@ -27,53 +27,8 @@ func (s *Suite) SetUpSuite(c *check.C) {
func (s *Suite) TearDownSuite(c *check.C) {
}
func (*Suite) TestConfigFileLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
if err != nil {
c.Fatal(err)
}
defer os.RemoveAll(tmpDir)
path, err := os.Getwd()
if err != nil {
c.Fatal(err)
}
cfgFile := filepath.Join(tmpDir, "config.yaml")
// Symlink the example config file
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
cfgFile,
)
if err != nil {
c.Fatal(err)
}
// Load example config, it should load without validation errors
err = types.LoadConfig(cfgFile, true)
c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert(
util.GetFileMode("unix_socket_permission"),
check.Equals,
fs.FileMode(0o770),
)
c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
}
func (*Suite) TestConfigLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil {
c.Fatal(err)
}
@@ -94,30 +49,28 @@ func (*Suite) TestConfigLoading(c *check.C) {
}
// Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false)
err = cli.LoadConfig(tmpDir)
c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert(
util.GetFileMode("unix_socket_permission"),
cli.GetFileMode("unix_socket_permission"),
check.Equals,
fs.FileMode(0o770),
)
c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false)
}
func (*Suite) TestDNSConfigLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil {
c.Fatal(err)
}
@@ -138,10 +91,10 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
}
// Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false)
err = cli.LoadConfig(tmpDir)
c.Assert(err, check.IsNil)
dnsConfig, baseDomain := types.GetDNSConfig()
dnsConfig, baseDomain := cli.GetDNSConfig()
c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1")
c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1")
@@ -152,28 +105,26 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
func writeConfig(c *check.C, tmpDir string, configYaml []byte) {
// Populate a custom config file
configFile := filepath.Join(tmpDir, "config.yaml")
err := os.WriteFile(configFile, configYaml, 0o600)
err := ioutil.WriteFile(configFile, configYaml, 0o600)
if err != nil {
c.Fatalf("Couldn't write file %s", configFile)
}
}
func (*Suite) TestTLSConfigValidation(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil {
c.Fatal(err)
}
// defer os.RemoveAll(tmpDir)
configYaml := []byte(`---
tls_letsencrypt_hostname: example.com
tls_letsencrypt_challenge_type: ""
tls_cert_path: abc.pem
noise:
private_key_path: noise_private.key`)
configYaml := []byte(
"---\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"\"\ntls_cert_path: \"abc.pem\"",
)
writeConfig(c, tmpDir, configYaml)
// Check configuration validation errors (1)
err = types.LoadConfig(tmpDir, false)
err = cli.LoadConfig(tmpDir)
c.Assert(err, check.NotNil)
// check.Matches can not handle multiline strings
tmp := strings.ReplaceAll(err.Error(), "\n", "***")
@@ -194,14 +145,10 @@ noise:
)
// Check configuration validation errors (2)
configYaml = []byte(`---
noise:
private_key_path: noise_private.key
server_url: http://127.0.0.1:8080
tls_letsencrypt_hostname: example.com
tls_letsencrypt_challenge_type: TLS-ALPN-01
`)
configYaml = []byte(
"---\nserver_url: \"http://127.0.0.1:8080\"\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"TLS-ALPN-01\"",
)
writeConfig(c, tmpDir, configYaml)
err = types.LoadConfig(tmpDir, false)
err = cli.LoadConfig(tmpDir)
c.Assert(err, check.IsNil)
}

View File

@@ -14,9 +14,7 @@ server_url: http://127.0.0.1:8080
# Address to listen to / bind to on the server
#
# For production:
# listen_addr: 0.0.0.0:8080
listen_addr: 127.0.0.1:8080
listen_addr: 0.0.0.0:8080
# Address to listen to /metrics, you may want
# to keep this endpoint private to your internal
@@ -29,10 +27,7 @@ metrics_listen_addr: 127.0.0.1:9090
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
#
# For production:
# grpc_listen_addr: 0.0.0.0:50443
grpc_listen_addr: 127.0.0.1:50443
grpc_listen_addr: 0.0.0.0:50443
# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
@@ -40,31 +35,18 @@ grpc_listen_addr: 127.0.0.1:50443
# are doing.
grpc_allow_insecure: false
# The Noise section includes specific configuration for the
# TS2021 Noise protocol
noise:
# The Noise private key is used to encrypt the
# traffic between headscale and Tailscale clients when
# using the new Noise-based protocol.
private_key_path: /var/lib/headscale/noise_private.key
# Private key used to encrypt the traffic between headscale
# and Tailscale clients.
# The private key file will be
# autogenerated if it's missing
private_key_path: /var/lib/headscale/private.key
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
# It must be within IP ranges supported by the Tailscale
# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
# See below:
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# Any other range is NOT supported, and it will cause unexpected issues.
prefixes:
v6: fd7a:115c:a1e0::/48
v4: 100.64.0.0/10
# Strategy used for allocation of IPs to nodes, available options:
# - sequential (default): assigns the next free IP from the previous given IP.
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
allocation: sequential
ip_prefixes:
- fd7a:115c:a1e0::/48
- 100.64.0.0/10
# DERP is a relay system that Tailscale uses when a direct
# connection cannot be established.
@@ -87,28 +69,12 @@ derp:
region_code: "headscale"
region_name: "Headscale Embedded DERP"
# Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
# Listens in UDP at the configured address for STUN connections to help on NAT traversal.
# When the embedded DERP server is enabled stun_listen_addr MUST be defined.
#
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
stun_listen_addr: "0.0.0.0:3478"
# Private key used to encrypt the traffic between headscale DERP
# and Tailscale clients.
# The private key file will be autogenerated if it's missing.
#
private_key_path: /var/lib/headscale/derp_server_private.key
# This flag can be used so the DERP map entry for the embedded DERP server is not written automatically;
# it enables the creation of your own DERP map entry using a locally available file with the parameter DERP.paths.
# If you enable the DERP server and set this to false, you must add the DERP server to the DERP map using DERP.paths.
automatically_add_embedded_derp_region: true
# For better connection stability (especially when using an Exit-Node and DNS is not working),
# it is possible to optionally add the public IPv4 and IPv6 address to the DERP map using:
ipv4: 1.2.3.4
ipv6: 2001:db8::1
# List of externally available DERP maps encoded in JSON
urls:
- https://controlplane.tailscale.com/derpmap/default
@@ -137,28 +103,17 @@ disable_check_updates: false
# Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m
database:
type: sqlite
# SQLite config
db_type: sqlite3
db_path: /var/lib/headscale/db.sqlite
# SQLite config
sqlite:
path: /var/lib/headscale/db.sqlite
# # Postgres config
# postgres:
# # If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
# host: localhost
# port: 5432
# name: headscale
# user: foo
# pass: bar
# max_open_conns: 10
# max_idle_conns: 10
# conn_max_idle_time_secs: 3600
# # If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
# # in the 'ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
# ssl: false
# # Postgres config
# db_type: postgres
# db_host: localhost
# db_port: 5432
# db_name: headscale
# db_user: foo
# db_pass: bar
### TLS configuration
#
@@ -176,9 +131,15 @@ acme_email: ""
# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""
# Client (Tailscale/Browser) authentication mode (mTLS)
# Acceptable values:
# - disabled: client authentication disabled
# - relaxed: client certificate is required but not verified
# - enforced: client certificate is required and verified
tls_client_auth_mode: relaxed
# Path to store certificates and metadata needed by
# letsencrypt
# For production:
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types:
@@ -186,7 +147,7 @@ tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# See [docs/tls.md](docs/tls.md) for more information
tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on:
# verification endpoint, and it will be listning on:
# :http = port 80
tls_letsencrypt_listen: ":http"
@@ -194,10 +155,7 @@ tls_letsencrypt_listen: ":http"
tls_cert_path: ""
tls_key_path: ""
log:
# Output formatting for logs: text or json
format: text
level: info
log_level: info
# Path to a file containing ACL policies.
# ACLs can be defined as YAML or HUJSON.
@@ -214,25 +172,10 @@ acl_policy_path: ""
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
#
dns_config:
# Whether to prefer the Headscale-provided DNS or the local one.
override_local_dns: true
# List of DNS servers to expose to clients.
nameservers:
- 1.1.1.1
# NextDNS (see https://tailscale.com/kb/1218/nextdns/).
# "abc123" is example NextDNS ID, replace with yours.
#
# With metadata sharing:
# nameservers:
# - https://dns.nextdns.io/abc123
#
# Without metadata sharing:
# nameservers:
# - 2a07:a8c0::ab:c123
# - 2a07:a8c1::ab:c123
# Split DNS (see https://tailscale.com/kb/1054/dns/),
# list of search domains and the DNS to query for each one.
#
@@ -246,17 +189,6 @@ dns_config:
# Search domains to inject.
domains: []
# Extra DNS records
# so far only A-records are supported (on the tailscale side)
# See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
# extra_records:
# - name: "grafana.myvpn.example.com"
# type: "A"
# value: "100.64.0.3"
#
# # you can also put it in one line
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Only works if there is at least a nameserver defined.
magic_dns: true
@@ -264,12 +196,13 @@ dns_config:
# Defines the base domain to create the hostnames for MagicDNS.
# `base_domain` must be a FQDNs, without the trailing dot.
# The FQDN of the hosts will be
# `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
# `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
base_domain: example.com
# Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like:
unix_socket: /var/run/headscale/headscale.sock
# Note: for local development, you probably want to change this to:
# unix_socket: ./headscale.sock
unix_socket: /var/run/headscale.sock
unix_socket_permission: "0770"
#
# headscale supports experimental OpenID connect support,
@@ -277,62 +210,13 @@ unix_socket_permission: "0770"
# help us test it.
# OpenID Connect
# oidc:
# only_start_if_oidc_is_available: true
# issuer: "https://your-oidc.issuer.com/path"
# client_id: "your-oidc-client-id"
# client_secret: "your-oidc-client-secret"
# # Alternatively, set `client_secret_path` to read the secret from the file.
# # It resolves environment variables, making integration to systemd's
# # `LoadCredential` straightforward:
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
# # client_secret and client_secret_path are mutually exclusive.
#
# # The amount of time from when a node is authenticated with OpenID until it
# # expires and needs to reauthenticate.
# # Setting the value to "0" will mean no expiry.
# expiry: 180d
#
# # Use the expiry from the token received from OpenID when the user logged
# # in; this will typically lead to a frequent need to reauthenticate and should
# # only be enabled if you know what you are doing.
# # Note: enabling this will cause `oidc.expiry` to be ignored.
# use_expiry_from_token: false
#
# # Customize the scopes used in the OIDC flow and add custom query
# # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
#
# scope: ["openid", "profile", "email", "custom"]
# extra_params:
# domain_hint: example.com
#
# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# # authentication request will be rejected.
#
# allowed_domains:
# - example.com
# # Note: Groups from keycloak have a leading '/'
# allowed_groups:
# - /headscale
# allowed_users:
# - alice@example.com
#
# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
# # If `strip_email_domain` is set to `false` the domain part will NOT be removed, resulting in the following
# user: `first-name.last-name.example.com`
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
# If `strip_email_domain` is set to `false` the domain part will NOT be removed, resulting in the following
# namespace: `first-name.last-name.example.com`
#
# strip_email_domain: true
# Logtail configuration
# Logtail is Tailscale's logging and auditing infrastructure; it allows the control panel
# to instruct tailscale nodes to log their activity to a remote server.
logtail:
# Enable logtail for this headscale's clients.
# As there is currently no support for overriding the log server in headscale, this is
# disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
enabled: false
# Enabling this option makes devices prefer a random port for WireGuard traffic over the
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false

252
db.go Normal file
View File

@@ -0,0 +1,252 @@
package headscale
import (
"database/sql/driver"
"encoding/json"
"errors"
"fmt"
"time"
"github.com/glebarez/sqlite"
"github.com/rs/zerolog/log"
"gorm.io/driver/postgres"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"inet.af/netaddr"
"tailscale.com/tailcfg"
)
const (
dbVersion = "1"
errValueNotFound = Error("not found")
)
// KV is a key-value store in a psql table. For future use...
type KV struct {
Key string
Value string
}
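// initDB opens the configured database, removes legacy schema remnants and
// runs the GORM auto-migrations before recording the schema version.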
func (h *Headscale) initDB() error {
db, err := h.openDB()
if err != nil {
return err
}
h.db = db
if h.dbType == Postgres {
db.Exec(`create extension if not exists "uuid-ossp";`)
}
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
// If the Machine table has a column for registered,
// find all occurrences of "false" and drop them. Then
// remove the column.
if db.Migrator().HasColumn(&Machine{}, "registered") {
log.Info().
Msg(`Database has legacy "registered" column in machine, removing...`)
machines := Machines{}
if err := h.db.Not("registered").Find(&machines).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for _, machine := range machines {
log.Info().
Str("machine", machine.Name).
Str("machine_key", machine.MachineKey).
Msg("Deleting unregistered machine")
if err := h.db.Delete(&Machine{}, machine.ID).Error; err != nil {
log.Error().
Err(err).
Str("machine", machine.Name).
Str("machine_key", machine.MachineKey).
Msg("Error deleting unregistered machine")
}
}
err := db.Migrator().DropColumn(&Machine{}, "registered")
if err != nil {
log.Error().Err(err).Msg("Error dropping registered column")
}
}
err = db.AutoMigrate(&Machine{})
if err != nil {
return err
}
err = db.AutoMigrate(&KV{})
if err != nil {
return err
}
err = db.AutoMigrate(&Namespace{})
if err != nil {
return err
}
err = db.AutoMigrate(&PreAuthKey{})
if err != nil {
return err
}
_ = db.Migrator().DropTable("shared_machines")
err = db.AutoMigrate(&APIKey{})
if err != nil {
return err
}
err = h.setValue("db_version", dbVersion)
return err
}
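// openDB opens a SQLite or Postgres connection, depending on the configured database type.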
func (h *Headscale) openDB() (*gorm.DB, error) {
var db *gorm.DB
var err error
var log logger.Interface
if h.dbDebug {
log = logger.Default
} else {
log = logger.Default.LogMode(logger.Silent)
}
switch h.dbType {
case Sqlite:
db, err = gorm.Open(
sqlite.Open(h.dbString+"?_synchronous=1&_journal_mode=WAL"),
&gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
},
)
db.Exec("PRAGMA foreign_keys=ON")
// The pure Go SQLite library does not handle locking in
// the same way as the C-based one and we can't use the gorm
// connection pool as of 2022/02/23.
sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(1)
sqlDB.SetMaxOpenConns(1)
sqlDB.SetConnMaxIdleTime(time.Hour)
case Postgres:
db, err = gorm.Open(postgres.Open(h.dbString), &gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
})
}
if err != nil {
return nil, err
}
return db, nil
}
// getValue returns the value for the given key in KV.
func (h *Headscale) getValue(key string) (string, error) {
var row KV
if result := h.db.First(&row, "key = ?", key); errors.Is(
result.Error,
gorm.ErrRecordNotFound,
) {
return "", errValueNotFound
}
return row.Value, nil
}
// setValue sets value for the given key in KV.
func (h *Headscale) setValue(key string, value string) error {
keyValue := KV{
Key: key,
Value: value,
}
if _, err := h.getValue(key); err == nil {
h.db.Model(&keyValue).Where("key = ?", key).Update("value", value)
return nil
}
h.db.Create(keyValue)
return nil
}
// This is a "wrapper" type around tailscales
// Hostinfo to allow us to add database "serialization"
// methods. This allows us to use a typed values throughout
// the code and not have to marshal/unmarshal and error
// check all over the code.
type HostInfo tailcfg.Hostinfo
func (hi *HostInfo) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, hi)
case string:
return json.Unmarshal([]byte(value), hi)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value returns the JSON value, implementing the driver.Valuer interface.
func (hi HostInfo) Value() (driver.Value, error) {
bytes, err := json.Marshal(hi)
return string(bytes), err
}
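// IPPrefixes is a list of netaddr.IPPrefix stored as JSON in a single database column.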
type IPPrefixes []netaddr.IPPrefix
func (i *IPPrefixes) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value returns the JSON value, implementing the driver.Valuer interface.
func (i IPPrefixes) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
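// StringList is a list of strings stored as JSON in a single database column.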
type StringList []string
func (i *StringList) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value returns the JSON value, implementing the driver.Valuer interface.
func (i StringList) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}

View File

@@ -1,16 +1,17 @@
package derp
package headscale
import (
"context"
"encoding/json"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log"
"gopkg.in/yaml.v3"
"gopkg.in/yaml.v2"
"tailscale.com/tailcfg"
)
@@ -31,16 +32,16 @@ func loadDERPMapFromPath(path string) (*tailcfg.DERPMap, error) {
}
func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
ctx, cancel := context.WithTimeout(context.Background(), types.HTTPTimeout)
ctx, cancel := context.WithTimeout(context.Background(), HTTPReadTimeout)
defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, addr.String(), nil)
req, err := http.NewRequestWithContext(ctx, "GET", addr.String(), nil)
if err != nil {
return nil, err
}
client := http.Client{
Timeout: types.HTTPTimeout,
Timeout: HTTPReadTimeout,
}
resp, err := client.Do(req)
@@ -49,7 +50,7 @@ func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
@@ -80,7 +81,7 @@ func mergeDERPMaps(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
return &result
}
func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap {
func GetDERPMap(cfg DERPConfig) *tailcfg.DERPMap {
derpMaps := make([]*tailcfg.DERPMap, 0)
for _, path := range cfg.Paths {
@@ -132,3 +133,35 @@ func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap {
return derpMap
}
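// scheduledDERPMapUpdateWorker refreshes the DERP map on the configured interval and
// bumps the last state change for every namespace so clients pick up the new map.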
func (h *Headscale) scheduledDERPMapUpdateWorker(cancelChan <-chan struct{}) {
log.Info().
Dur("frequency", h.cfg.DERP.UpdateFrequency).
Msg("Setting up a DERPMap update worker")
ticker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
for {
select {
case <-cancelChan:
return
case <-ticker.C:
log.Info().Msg("Fetching DERPMap updates")
h.DERPMap = GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled {
h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
}
namespaces, err := h.ListNamespaces()
if err != nil {
log.Error().
Err(err).
Msg("Failed to fetch namespaces")
}
for _, namespace := range namespaces {
h.setLastStateChangeToNow(namespace.Name)
}
}
}
}

231
derp_server.go Normal file
View File

@@ -0,0 +1,231 @@
package headscale
import (
"context"
"fmt"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/rs/zerolog/log"
"tailscale.com/derp"
"tailscale.com/net/stun"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
// fastStartHeader is the header (with value "1") that signals to the HTTP
// server that the DERP HTTP client does not want the HTTP 101 response
// headers and it will begin writing & reading the DERP protocol immediately
// following its HTTP request.
const fastStartHeader = "Derp-Fast-Start"
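// DERPServer wraps the embedded Tailscale DERP server together with the DERP region it advertises.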
type DERPServer struct {
tailscaleDERP *derp.Server
region tailcfg.DERPRegion
}
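// NewDERPServer creates the embedded DERP server and computes its local DERP region.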
func (h *Headscale) NewDERPServer() (*DERPServer, error) {
server := derp.NewServer(key.NodePrivate(*h.privateKey), log.Info().Msgf)
region, err := h.generateRegionLocalDERP()
if err != nil {
return nil, err
}
return &DERPServer{server, region}, nil
}
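// generateRegionLocalDERP builds the DERP region entry for the embedded DERP server,
// deriving host, DERP port and STUN port from the configured server URL and STUN address.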
func (h *Headscale) generateRegionLocalDERP() (tailcfg.DERPRegion, error) {
serverURL, err := url.Parse(h.cfg.ServerURL)
if err != nil {
return tailcfg.DERPRegion{}, err
}
var host string
var port int
host, portStr, err := net.SplitHostPort(serverURL.Host)
if err != nil {
if serverURL.Scheme == "https" {
host = serverURL.Host
port = 443
} else {
host = serverURL.Host
port = 80
}
} else {
port, err = strconv.Atoi(portStr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
}
localDERPregion := tailcfg.DERPRegion{
RegionID: h.cfg.DERP.ServerRegionID,
RegionCode: h.cfg.DERP.ServerRegionCode,
RegionName: h.cfg.DERP.ServerRegionName,
Avoid: false,
Nodes: []*tailcfg.DERPNode{
{
Name: fmt.Sprintf("%d", h.cfg.DERP.ServerRegionID),
RegionID: h.cfg.DERP.ServerRegionID,
HostName: host,
DERPPort: port,
},
},
}
_, portSTUNStr, err := net.SplitHostPort(h.cfg.DERP.STUNAddr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
portSTUN, err := strconv.Atoi(portSTUNStr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
localDERPregion.Nodes[0].STUNPort = portSTUN
return localDERPregion, nil
}
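// DERPHandler hijacks the incoming HTTP connection, performs the DERP protocol upgrade
// (unless the client requested fast start) and hands the connection to the embedded DERP server.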
func (h *Headscale) DERPHandler(ctx *gin.Context) {
log.Trace().Caller().Msgf("/derp request from %v", ctx.ClientIP())
up := strings.ToLower(ctx.Request.Header.Get("Upgrade"))
if up != "websocket" && up != "derp" {
if up != "" {
log.Warn().Caller().Msgf("Weird websockets connection upgrade: %q", up)
}
ctx.String(http.StatusUpgradeRequired, "DERP requires connection upgrade")
return
}
fastStart := ctx.Request.Header.Get(fastStartHeader) == "1"
hijacker, ok := ctx.Writer.(http.Hijacker)
if !ok {
log.Error().Caller().Msg("DERP requires Hijacker interface from Gin")
ctx.String(http.StatusInternalServerError, "HTTP does not support general TCP support")
return
}
netConn, conn, err := hijacker.Hijack()
if err != nil {
log.Error().Caller().Err(err).Msgf("Hijack failed")
ctx.String(http.StatusInternalServerError, "HTTP does not support general TCP support")
return
}
if !fastStart {
pubKey := h.privateKey.Public()
pubKeyStr := pubKey.UntypedHexString() // nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+
"Connection: Upgrade\r\n"+
"Derp-Version: %v\r\n"+
"Derp-Public-Key: %s\r\n\r\n",
derp.ProtocolVersion,
pubKeyStr)
}
h.DERPServer.tailscaleDERP.Accept(netConn, conn, netConn.RemoteAddr().String())
}
// DERPProbeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries.
func (h *Headscale) DERPProbeHandler(ctx *gin.Context) {
switch ctx.Request.Method {
case "HEAD", "GET":
ctx.Writer.Header().Set("Access-Control-Allow-Origin", "*")
default:
ctx.String(http.StatusMethodNotAllowed, "bogus probe method")
}
}
// DERPBootstrapDNSHandler implements the /bootstrap-dns endpoint
// Described in https://github.com/tailscale/tailscale/issues/1405,
// this endpoint provides a way to help a client when it fails to start up
// because its DNS are broken.
// The initial implementation is here https://github.com/tailscale/tailscale/pull/1406
// They have a cache, but not clear if that is really necessary at Headscale, uh, scale.
// An example implementation is found here https://derp.tailscale.com/bootstrap-dns
func (h *Headscale) DERPBootstrapDNSHandler(ctx *gin.Context) {
dnsEntries := make(map[string][]net.IP)
resolvCtx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
var r net.Resolver
for _, region := range h.DERPMap.Regions {
for _, node := range region.Nodes { // we don't care if we override some nodes
addrs, err := r.LookupIP(resolvCtx, "ip", node.HostName)
if err != nil {
log.Trace().Caller().Err(err).Msgf("bootstrap DNS lookup failed %q", node.HostName)
continue
}
dnsEntries[node.HostName] = addrs
}
}
ctx.JSON(http.StatusOK, dnsEntries)
}
// ServeSTUN starts a STUN server on the configured addr.
func (h *Headscale) ServeSTUN() {
packetConn, err := net.ListenPacket("udp", h.cfg.DERP.STUNAddr)
if err != nil {
log.Fatal().Msgf("failed to open STUN listener: %v", err)
}
log.Info().Msgf("STUN server started at %s", packetConn.LocalAddr())
udpConn, ok := packetConn.(*net.UDPConn)
if !ok {
log.Fatal().Msg("STUN listener is not a UDP listener")
}
serverSTUNListener(context.Background(), udpConn)
}
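// serverSTUNListener answers STUN binding requests received on packetConn, ignoring any non-STUN traffic.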
func serverSTUNListener(ctx context.Context, packetConn *net.UDPConn) {
var buf [64 << 10]byte
var (
bytesRead int
udpAddr *net.UDPAddr
err error
)
for {
bytesRead, udpAddr, err = packetConn.ReadFromUDP(buf[:])
if err != nil {
if ctx.Err() != nil {
return
}
log.Error().Caller().Err(err).Msgf("STUN ReadFrom")
time.Sleep(time.Second)
continue
}
log.Trace().Caller().Msgf("STUN request from %v", udpAddr)
pkt := buf[:bytesRead]
if !stun.Is(pkt) {
log.Trace().Caller().Msgf("UDP packet is not STUN")
continue
}
txid, err := stun.ParseBindingRequest(pkt)
if err != nil {
log.Trace().Caller().Err(err).Msgf("STUN parse error")
continue
}
res := stun.Response(txid, udpAddr.IP, uint16(udpAddr.Port))
_, err = packetConn.WriteTo(res, udpAddr)
if err != nil {
log.Trace().Caller().Err(err).Msgf("Issue writing to UDP")
continue
}
}
}

View File

@@ -1,87 +1,23 @@
package util
package headscale
import (
"errors"
"fmt"
"net/netip"
"regexp"
"strings"
"github.com/spf13/viper"
"go4.org/netipx"
"github.com/fatih/set"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/util/dnsname"
)
const (
ByteSize = 8
ipv4AddressLength = 32
ipv6AddressLength = 128
// value related to RFC 1123 and 952.
LabelHostnameLength = 63
ByteSize = 8
)
var invalidCharsInUserRegex = regexp.MustCompile("[^a-z0-9-.]+")
var ErrInvalidUserName = errors.New("invalid user name")
func NormalizeToFQDNRulesConfigFromViper(name string) (string, error) {
strip := viper.GetBool("oidc.strip_email_domain")
return NormalizeToFQDNRules(name, strip)
}
// NormalizeToFQDNRules will replace forbidden chars in user
// it can also return an error if the user doesn't respect RFC 952 and 1123.
func NormalizeToFQDNRules(name string, stripEmailDomain bool) (string, error) {
name = strings.ToLower(name)
name = strings.ReplaceAll(name, "'", "")
atIdx := strings.Index(name, "@")
if stripEmailDomain && atIdx > 0 {
name = name[:atIdx]
} else {
name = strings.ReplaceAll(name, "@", ".")
}
name = invalidCharsInUserRegex.ReplaceAllString(name, "-")
for _, elt := range strings.Split(name, ".") {
if len(elt) > LabelHostnameLength {
return "", fmt.Errorf(
"label %v is more than 63 chars: %w",
elt,
ErrInvalidUserName,
)
}
}
return name, nil
}
func CheckForFQDNRules(name string) error {
if len(name) > LabelHostnameLength {
return fmt.Errorf(
"DNS segment must not be over 63 chars. %v doesn't comply with this rule: %w",
name,
ErrInvalidUserName,
)
}
if strings.ToLower(name) != name {
return fmt.Errorf(
"DNS segment should be lowercase. %v doesn't comply with this rule: %w",
name,
ErrInvalidUserName,
)
}
if invalidCharsInUserRegex.MatchString(name) {
return fmt.Errorf(
"DNS segment should only be composed of lowercase ASCII letters numbers, hyphen and dots. %v doesn't comply with theses rules: %w",
name,
ErrInvalidUserName,
)
}
return nil
}
const (
ipv4AddressLength = 32
ipv6AddressLength = 128
)
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
// This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
@@ -103,9 +39,35 @@ func CheckForFQDNRules(name string) error {
// From the netmask we can find out the wildcard bits (the bits that are not set in the netmask).
// This allows us to then calculate the subnets included in the subsequent class block and generate the entries.
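// For example, the prefix 100.64.0.0/10 yields the entries 64.100.in-addr.arpa. through 127.100.in-addr.arpa.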
func GenerateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
func generateMagicDNSRootDomains(ipPrefixes []netaddr.IPPrefix) []dnsname.FQDN {
fqdns := make([]dnsname.FQDN, 0, len(ipPrefixes))
for _, ipPrefix := range ipPrefixes {
var generateDNSRoot func(netaddr.IPPrefix) []dnsname.FQDN
switch ipPrefix.IP().BitLen() {
case ipv4AddressLength:
generateDNSRoot = generateIPv4DNSRootDomain
case ipv6AddressLength:
generateDNSRoot = generateIPv6DNSRootDomain
default:
panic(
fmt.Sprintf(
"unsupported IP version with address length %d",
ipPrefix.IP().BitLen(),
),
)
}
fqdns = append(fqdns, generateDNSRoot(ipPrefix)...)
}
return fqdns
}
func generateIPv4DNSRootDomain(ipPrefix netaddr.IPPrefix) []dnsname.FQDN {
// Conversion to the std lib net.IPNet, which is a bit easier to operate on
netRange := netipx.PrefixIPNet(ipPrefix)
netRange := ipPrefix.IPNet()
maskBits, _ := netRange.Mask.Size()
// lastOctet is the last IP byte covered by the mask
@@ -139,31 +101,11 @@ func GenerateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
return fqdns
}
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
// This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
// server (listening on 100.100.100.100 udp/53) should be used for.
//
// Tailscale.com includes in the list:
// - the `BaseDomain` of the user
// - the reverse DNS entry for IPv6 (0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa., see below more on IPv6)
// - the reverse DNS entries for the IPv4 subnets covered by the user's `IPPrefix`.
// In the public SaaS this is [64-127].100.in-addr.arpa.
//
// The main purpose of this function is then generating the list of IPv4 entries. For the 100.64.0.0/10, this
// is clear, and could be hardcoded. But we are allowing any range as `IPPrefix`, so we need to find out the
// subnets when we have 172.16.0.0/16 (i.e., [0-255].16.172.in-addr.arpa.), or any other subnet.
//
// How IN-ADDR.ARPA domains work is defined in RFC1035 (section 3.5). Tailscale.com seems to adhere to this,
// and do not make use of RFC2317 ("Classless IN-ADDR.ARPA delegation") - hence generating the entries for the next
// class block only.
// From the netmask we can find out the wildcard bits (the bits that are not set in the netmask).
// This allows us to then calculate the subnets included in the subsequent class block and generate the entries.
func GenerateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
func generateIPv6DNSRootDomain(ipPrefix netaddr.IPPrefix) []dnsname.FQDN {
const nibbleLen = 4
maskBits, _ := netipx.PrefixIPNet(ipPrefix).Mask.Size()
expanded := ipPrefix.Addr().StringExpanded()
maskBits, _ := ipPrefix.IPNet().Mask.Size()
expanded := ipPrefix.IP().StringExpanded()
nibbleStr := strings.Map(func(r rune) rune {
if r == ':' {
return -1
@@ -208,3 +150,44 @@ func GenerateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
return fqdns
}
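// getMapResponseDNSConfig returns the DNS configuration to send to a machine. When MagicDNS is
// enabled, it clones the base config, appends the search domain of the machine's namespace and
// adds a DNS route for every namespace seen among the machine and its peers.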
func getMapResponseDNSConfig(
dnsConfigOrig *tailcfg.DNSConfig,
baseDomain string,
machine Machine,
peers Machines,
) *tailcfg.DNSConfig {
var dnsConfig *tailcfg.DNSConfig
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
// Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
dnsConfig = dnsConfigOrig.Clone()
dnsConfig.Domains = append(
dnsConfig.Domains,
fmt.Sprintf(
"%s.%s",
machine.Namespace.Name,
baseDomain,
),
)
namespaceSet := set.New(set.ThreadSafe)
namespaceSet.Add(machine.Namespace)
for _, p := range peers {
namespaceSet.Add(p.Namespace)
}
for _, ns := range namespaceSet.List() {
namespace, ok := ns.(Namespace)
if !ok {
dnsConfig = dnsConfigOrig
continue
}
dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
dnsConfig.Routes[dnsRoute] = nil
}
} else {
dnsConfig = dnsConfigOrig
}
return dnsConfig
}

386
dns_test.go Normal file
View File

@@ -0,0 +1,386 @@
package headscale
import (
"fmt"
"gopkg.in/check.v1"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("100.64.0.0/10"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "64.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "100.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "127.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
func (s *Suite) TestMagicDNSRootDomains172(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("172.16.0.0/16"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "0.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "255.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
// Happens when netmask is a multiple of 4 bits (sounds likely).
func (s *Suite) TestMagicDNSRootDomainsIPv6Single(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("fd7a:115c:a1e0::/48"),
}
domains := generateMagicDNSRootDomains(prefixes)
c.Assert(len(domains), check.Equals, 1)
c.Assert(
domains[0].WithTrailingDot(),
check.Equals,
"0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa.",
)
}
func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("fd7a:115c:a1e0::/50"),
}
domains := generateMagicDNSRootDomains(prefixes)
yieldsRoot := func(dom string) bool {
for _, candidate := range domains {
if candidate.WithTrailingDot() == dom {
return true
}
}
return false
}
c.Assert(len(domains), check.Equals, 4)
c.Assert(yieldsRoot("0.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("1.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("2.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("3.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
namespaceShared1, err := app.CreateNamespace("shared1")
c.Assert(err, check.IsNil)
namespaceShared2, err := app.CreateNamespace("shared2")
c.Assert(err, check.IsNil)
namespaceShared3, err := app.CreateNamespace("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
PreAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Name: "test_get_shared_nodes_1",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Name)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_2",
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Name)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_3",
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Name)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_4",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.4")},
AuthKeyID: uint(PreAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: true,
}
peersOfMachineInShared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachineInShared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 3)
domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
_, ok := dnsConfig.Routes[domainRouteShared1]
c.Assert(ok, check.Equals, true)
domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared2]
c.Assert(ok, check.Equals, true)
domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared3]
c.Assert(ok, check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
namespaceShared1, err := app.CreateNamespace("shared1")
c.Assert(err, check.IsNil)
namespaceShared2, err := app.CreateNamespace("shared2")
c.Assert(err, check.IsNil)
namespaceShared3, err := app.CreateNamespace("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Name: "test_get_shared_nodes_1",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Name)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_2",
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Name)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_3",
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Name)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_4",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.4")},
AuthKeyID: uint(preAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: false,
}
peersOfMachine1Shared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachine1Shared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 0)
c.Assert(len(dnsConfig.Domains), check.Equals, 1)
}

52
docs/README.md Normal file
View File

@@ -0,0 +1,52 @@
# headscale documentation
This page contains the official and community contributed documentation for `headscale`.
If you are having trouble following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
## Official documentation
### How-to
- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)
### References
- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)
## Community documentation
Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
- [Running headscale in a container](running-headscale-container.md)
## Misc
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups you must
use namespaces (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACLs, the Namespace borders are no longer applied. All machines,
whichever Namespace they belong to, are able to communicate with other hosts as
long as the ACLs permit this exchange.
The [ACLs](acls.md) document should help you understand a fictional case of setting
up ACLs in a small company. All concepts presented in this document could be
applied outside of business-oriented usage.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.

View File

@@ -1,31 +1,16 @@
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups you must
use users (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACLs, the User borders are no longer applied. All machines,
whichever User they belong to, are able to communicate with other hosts as
long as the ACLs permit this exchange.
## ACLs use case example
# ACLs use case example
Let's build an example use case for a small business (it may be the place where
ACLs are the most useful).
We have a small company with a boss, an admin, two developers and an intern.
The boss should have access to all servers but not to the user's hosts. Admin
The boss should have access to all servers but not to the users hosts. Admin
should also have access to all hosts except that their permissions should be
limited to maintaining the hosts (for example purposes). The developers can do
anything they want on dev hosts but only watch on production hosts. Intern
anything they want on dev hosts, but only watch on production hosts. Intern
can only interact with the development servers.
There's an additional server that acts as a router, connecting the VPN users
to an internal network `10.20.0.0/16`. Developers must have access to those
internal resources.
Each user has at least one device connected to the network, and we have some
servers.
@@ -34,27 +19,28 @@ servers.
- app-server1.prod
- app-server1.dev
- billing.internal
- router.internal
![ACL implementation example](images/headscale-acl-network.png)
## Setup of the network
## ACL setup
Let's create the namespaces. Each user should have their own namespace. The users
here are represented as namespaces.
Note: Users will be created automatically when users authenticate with the
Headscale server.
```bash
headscale namespaces create boss
headscale namespaces create admin1
headscale namespaces create dev1
headscale namespaces create dev2
headscale namespaces create intern1
```
ACLs can be written either in [huJSON](https://github.com/tailscale/hujson)
or YAML. Check the [test ACLs](../tests/acls) for further information.
When registering the servers we will need to add the flag
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
We don't need to create namespaces for the servers because the servers will be
tagged. When registering the servers we will need to add the flag
`--advertised-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
registering the server should be allowed to do it. Since anyone can add tags to
a server they can register, the check of the tags is done on headscale server
and only valid tags are applied. A tag is valid if the user that is
and only valid tags are applied. A tag is valid if the namespace that is
registering it is allowed to do it.
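As a rough sketch (the server URL and tag name below are placeholders, and recent Tailscale releases spell the flag `--advertise-tags`), registering a tagged development database server could look like this:
```bash
tailscale up --login-server https://my-server.com --advertise-tags=tag:dev-databases
```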
To use ACLs in headscale, you must edit your config.yaml file. In there you will find an `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
Here are the ACLs to implement the same permissions as above:
```json
@@ -84,20 +70,12 @@ Here are the ACL's to implement the same permissions as above:
// interns cannot add servers
},
// hosts should be defined using their IP addresses and a subnet mask.
// to define a single host, use a /32 mask. You cannot use DNS entries here,
// as they're prone to be hijacked by replacing their IP addresses.
// see https://github.com/tailscale/tailscale/issues/3800 for more information.
"Hosts": {
"postgresql.internal": "10.20.0.2/32",
"webservers.internal": "10.20.10.1/29"
},
"acls": [
// boss have access to all servers
{
"action": "accept",
"src": ["group:boss"],
"dst": [
"users": ["group:boss"],
"ports": [
"tag:prod-databases:*",
"tag:prod-app-servers:*",
"tag:internal:*",
@@ -106,12 +84,11 @@ Here are the ACL's to implement the same permissions as above:
]
},
// admin have only access to administrative ports of the servers, in tcp/22
// admin have only access to administrative ports of the servers
{
"action": "accept",
"src": ["group:admin"],
"proto": "tcp",
"dst": [
"users": ["group:admin"],
"ports": [
"tag:prod-databases:22",
"tag:prod-app-servers:22",
"tag:internal:22",
@@ -120,70 +97,45 @@ Here are the ACL's to implement the same permissions as above:
]
},
// we also allow admin to ping the servers
{
"action": "accept",
"src": ["group:admin"],
"proto": "icmp",
"dst": [
"tag:prod-databases:*",
"tag:prod-app-servers:*",
"tag:internal:*",
"tag:dev-databases:*",
"tag:dev-app-servers:*"
]
},
// developers have access to database servers and application servers on all ports
// they can only view the application servers in prod and have no access to database servers in production
{
"action": "accept",
"src": ["group:dev"],
"dst": [
"users": ["group:dev"],
"ports": [
"tag:dev-databases:*",
"tag:dev-app-servers:*",
"tag:prod-app-servers:80,443"
]
},
// developers have access to the internal network through the router.
// the internal network is composed of HTTPS endpoints and Postgresql
// database servers. There's an additional rule to allow traffic to be
// forwarded to the internal subnet, 10.20.0.0/16. See this issue
// https://github.com/juanfont/headscale/issues/502
{
"action": "accept",
"src": ["group:dev"],
"dst": ["10.20.0.0/16:443,5432", "router.internal:0"]
},
// servers should be able to talk to database in tcp/5432. Database should not be able to initiate connections to
// servers should be able to talk to database. Database should not be able to initiate connections to
// application servers
{
"action": "accept",
"src": ["tag:dev-app-servers"],
"proto": "tcp",
"dst": ["tag:dev-databases:5432"]
"users": ["tag:dev-app-servers"],
"ports": ["tag:dev-databases:5432"]
},
{
"action": "accept",
"src": ["tag:prod-app-servers"],
"dst": ["tag:prod-databases:5432"]
"users": ["tag:prod-app-servers"],
"ports": ["tag:prod-databases:5432"]
},
// interns have access to dev-app-servers only in reading mode
{
"action": "accept",
"src": ["group:intern"],
"dst": ["tag:dev-app-servers:80,443"]
"users": ["group:intern"],
"ports": ["tag:dev-app-servers:80,443"]
},
// We still have to allow internal user communications since nothing guarantees that each user has
// their own users.
{ "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
{ "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
{ "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },
{ "action": "accept", "src": ["admin1"], "dst": ["admin1:*"] },
{ "action": "accept", "src": ["intern1"], "dst": ["intern1:*"] }
// We still have to allow internal namespace communications since nothing guarantees that each user has
// their own namespaces.
{ "action": "accept", "users": ["boss"], "ports": ["boss:*"] },
{ "action": "accept", "users": ["dev1"], "ports": ["dev1:*"] },
{ "action": "accept", "users": ["dev2"], "ports": ["dev2:*"] },
{ "action": "accept", "users": ["admin1"], "ports": ["admin1:*"] },
{ "action": "accept", "users": ["intern1"], "ports": ["intern1:*"] }
]
}
```

View File

@@ -1,19 +0,0 @@
# Connecting an Android client
## Goal
This documentation has the goal of showing how a user can use the official Android [Tailscale](https://tailscale.com) client with `headscale`.
## Installation
Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/).
Ensure that the installed version is at least 1.30.0, as that is the first release to support custom URLs.
## Configuring the headscale URL
After opening the app, the kebab menu icon (three dots) on the top bar on the right must be repeatedly opened and closed until the _Change server_ option appears in the menu. This is where you can enter your headscale URL.
A screen recording of this process can be seen in the `tailscale-android` PR which implemented this functionality: <https://github.com/tailscale/tailscale-android/pull/55>
After saving and restarting the app, selecting the regular _Sign in_ option (non-SSO) should open up the headscale authentication page.

View File

@@ -1,92 +0,0 @@
# Setting custom DNS records
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
This documentation has the goal of showing how a user can set custom DNS records with `headscale`'s MagicDNS.
An example use case is to serve apps on the same host via a reverse proxy like NGINX, in this case a Prometheus monitoring stack. This allows the service to be accessed nicely at "http://grafana.myvpn.example.com" instead of the hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:3000".
## Setup
### 1. Change the configuration
1. Change the `config.yaml` to contain the desired records like so:
```yaml
dns_config:
...
extra_records:
- name: "prometheus.myvpn.example.com"
type: "A"
value: "100.64.0.3"
- name: "grafana.myvpn.example.com"
type: "A"
value: "100.64.0.3"
...
```
1. Restart your headscale instance.
!!! warning
Beware of the limitations listed later on!
### 2. Verify that the records are set
You can use a DNS querying tool of your choice on one of your hosts to verify that your newly set records are actually available in MagicDNS, here we used [`dig`](https://man.archlinux.org/man/dig.1.en):
```
$ dig grafana.myvpn.example.com
; <<>> DiG 9.18.10 <<>> grafana.myvpn.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44054
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;grafana.myvpn.example.com. IN A
;; ANSWER SECTION:
grafana.myvpn.example.com. 593 IN A 100.64.0.3
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sat Dec 31 11:46:55 CET 2022
;; MSG SIZE rcvd: 66
```
### 3. Optional: Setup the reverse proxy
The motivating example here was to be able to access internal monitoring services on the same host without specifying a port:
```
server {
listen 80;
listen [::]:80;
server_name grafana.myvpn.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
## Limitations
[Not all types of records are supported](https://github.com/tailscale/tailscale/blob/6edf357b96b28ee1be659a70232c0135b2ffedfd/ipn/ipnlocal/local.go#L2989-L3007), especially no CNAME records.

View File

@@ -1,49 +0,0 @@
# Exit Nodes
## On the node
Register the node and make it advertise itself as an exit node:
```console
$ sudo tailscale up --login-server https://my-server.com --advertise-exit-node
```
If the node is already registered, it can advertise exit capabilities like this:
```console
$ sudo tailscale set --advertise-exit-node
```
To use a node as an exit node, IP forwarding must be enabled on the node. Check the official [Tailscale documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to enable IP forwarding.
## On the control server
```console
$ # list nodes
$ headscale routes list
ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | false | -
4 | phobos | ::/0 | true | false | -
$ # enable routes for phobos
$ headscale routes enable -r 3
$ headscale routes enable -r 4
$ # Check node list again. The routes are now enabled.
$ headscale routes list
ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | true | -
4 | phobos | ::/0 | true | true | -
```
## On the client
The exit node can now be used with:
```console
$ sudo tailscale set --exit-node phobos
```
Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes/?q=exit#step-3-use-the-exit-node) for how to do it on your device.

View File

@@ -1,57 +0,0 @@
---
hide:
- navigation
---
# Frequently Asked Questions
## What is the design goal of headscale?
`headscale` aims to implement a self-hosted, open source alternative to the [Tailscale](https://tailscale.com/)
control server.
`headscale`'s goal is to provide self-hosters and hobbyists with an open-source
server they can use for their projects and labs.
It implements a narrow scope, a _single_ Tailnet, suitable for personal use or a small
open-source organisation.
## How can I contribute?
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
Headscale is open to code contributions for bug fixes without discussion.
If you find mistakes in the documentation, please also submit a fix to the documentation.
## Why is 'acknowledged contribution' the chosen model?
Both maintainers have full-time jobs and families, and we want to avoid burnout. We also want to avoid frustration from contributors when their PRs are not accepted.
We are more than happy to exchange emails, or to have dedicated calls before a PR is submitted.
## When/Why is Feature X going to be implemented?
We don't know. We might be working on it. If you want to help, please send us a PR.
Please be aware that there are a number of reasons why we might not accept specific contributions:
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves.
- You are not sending unit and integration tests with it.
## Do you support Y method of deploying Headscale?
We currently support deploying `headscale` using our binaries and the DEB packages. Both can be found in the
[GitHub releases page](https://github.com/juanfont/headscale/releases).
In addition to that, there are semi-official RPM packages by the Fedora infra team https://copr.fedorainfracloud.org/coprs/jonathanspw/headscale/
For convenience, we also build Docker images with `headscale`. But **please be aware that we don't officially support deploying `headscale` using Docker**. We have a [Discord channel](https://discord.com/channels/896711691637780480/1070619770942148618) where you can ask the community for Docker-specific help.
## Why is my reverse proxy not working with Headscale?
We don't know. We don't use reverse proxies with `headscale` ourselves, so we don't have any experience with them. We have [community documentation](https://headscale.net/reverse-proxy/) on how to configure various reverse proxies, and a dedicated [Discord channel](https://discord.com/channels/896711691637780480/1070619818346164324) where you can ask the community for help.
## Can I use headscale and tailscale on the same machine?
Running headscale on a machine that is also in the tailnet can cause problems with subnet routers, traffic relay nodes, and MagicDNS. It might work, but it is not supported.

View File

@@ -1,6 +1,6 @@
# Glossary
| Term | Description |
| --------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
| Namespace | A namespace was a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User (This is now called user) |
| Term | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------- |
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
| Namespace | A namespace is a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User |

View File

@@ -1,30 +0,0 @@
# Connecting an iOS client
## Goal
This documentation has the goal of showing how a user can use the official iOS [Tailscale](https://tailscale.com) client with `headscale`.
## Installation
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
Ensure that the installed version is at least 1.38.1, as that is the first release to support alternate control servers.
## Configuring the headscale URL
!!! info "Apple devices"
An endpoint with information on how to connect your Apple devices
(currently macOS only) is available at `/apple` on your running instance.
Ensure that the tailscale app is logged out before proceeding.
Go to iOS settings, scroll down past game center and tv provider to the tailscale app and select it. The headscale URL can be entered into the _"ALTERNATE COORDINATION SERVER URL"_ box.
> **Note**
>
> If the app was previously logged into tailscale, toggle on the _Reset Keychain_ switch.
Restart the app by closing it from the iOS app switcher, open the app and select the regular _Sign in_ option (non-SSO), and it should open up to the headscale authentication page.
Enter your credentials and log in. Headscale should now be working on your iOS device.

Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@@ -1,43 +0,0 @@
---
hide:
- navigation
- toc
---
# headscale
`headscale` is an open source, self-hosted implementation of the Tailscale control server.
This page contains the documentation for the latest version of headscale. Please also check our [FAQ](/faq/).
Join our [Discord](https://discord.gg/c84AZQhmpx) server for a chat and community support.
## Design goal
Headscale aims to implement a self-hosted, open source alternative to the Tailscale
control server.
Headscale's goal is to provide self-hosters and hobbyists with an open-source
server they can use for their projects and labs.
It implements a narrower scope, a single Tailnet, suitable for personal use or a small
open-source organisation.
## Supporting headscale
If you like `headscale` and find it useful, there are sponsorship and donation
buttons available in the repo.
## Contributing
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.
Headscale is open to code contributions for bug fixes without discussion.
If you find mistakes in the documentation, please submit a fix to the documentation.
## About
`headscale` is maintained by [Kristoffer Dalby](https://kradalby.no/) and [Juan Font](https://font.eu).

Binary file not shown.

Binary file not shown.


View File

@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>


Some files were not shown because too many files have changed in this diff.