Compare commits


1 Commit

Author: Juan Font
SHA1: b36df48d78
Message: Working on migrate DERP tests to scenarios
Date: 2023-04-07 22:06:44 +00:00
286 changed files with 24306 additions and 29309 deletions


@@ -6,24 +6,19 @@ labels: ["bug"]
assignees: ""
---
-<!--
-Before posting a bug report, discuss the behaviour you are expecting with the Discord community
-to make sure that it is truly a bug.
-The issue tracker is not the place to ask for support or how to set up Headscale.
-Bug reports without the sufficient information will be closed.
-Headscale is a multinational community across the globe. Our language is English.
-All bug reports needs to be in English.
--->
-## Bug description
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
+**Bug description**
<!-- A clear and concise description of what the bug is. Describe the expected bahavior
and how it is currently different. If you are unsure if it is a bug, consider discussing
it on our Discord server first. -->
-## Environment
+**To Reproduce**
+<!-- Steps to reproduce the behavior. -->
+**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
@@ -33,33 +28,3 @@ All bug reports needs to be in English.
- The relevant config parameters you used
- Log output
-->
-- OS:
-- Headscale version:
-- Tailscale version:
-<!--
-We do not support running Headscale in a container nor behind a (reverse) proxy.
-If either of these are true for your environment, ask the community in Discord
-instead of filing a bug report.
--->
-- [ ] Headscale is behind a (reverse) proxy
-- [ ] Headscale runs in a container
-## To Reproduce
-<!-- Steps to reproduce the behavior. -->
-## Logs and attachments
-<!-- Please attach files with:
-- Client netmap dump (see below)
-- ACL configuration
-- Headscale configuration
-Dump the netmap of tailscale clients:
-`tailscale debug netmap > DESCRIPTIVE_NAME.json`
-Please provide information describing the netmap, which client, which headscale version etc.
--->


@@ -6,21 +6,12 @@ labels: ["enhancement"]
assignees: ""
---
-<!--
-We typically have a clear roadmap for what we want to improve and reserve the right
-to close feature requests that does not fit in the roadmap, or fit with the scope
-of the project, or we actually want to implement ourselves.
-Headscale is a multinational community across the globe. Our language is English.
-All bug reports needs to be in English.
--->
-## Why
-<!-- Include the reason, why you would need the feature. E.g. what problem
-does it solve? Or which workflow is currently frustrating and will be improved by
-this? -->
-## Description
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
+**Feature request**
<!-- A clear and precise description of what new or changed feature you want. -->
+<!-- Please include the reason, why you would need the feature. E.g. what problem
+does it solve? Or which workflow is currently frustrating and will be improved by
+this? -->

.github/ISSUE_TEMPLATE/other_issue.md (new file)

@@ -0,0 +1,30 @@
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---
<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->
<!-- If you have a question, please consider using our Discord for asking questions -->
**Issue description**
<!-- Please add your issue description. -->
**To Reproduce**
<!-- Steps to reproduce the behavior. -->
**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->


@@ -1,15 +1,3 @@
-<!--
-Headscale is "Open Source, acknowledged contribution", this means that any
-contribution will have to be discussed with the Maintainers before being submitted.
-This model has been chosen to reduce the risk of burnout by limiting the
-maintenance overhead of reviewing and validating third-party code.
-Headscale is open to code contributions for bug fixes without discussion.
-If you find mistakes in the documentation, please submit a fix to the documentation.
--->
<!-- Please tick if the following things apply. You… -->
- [ ] read the [CONTRIBUTING guidelines](README.md#contributing)

.github/renovate.json

@@ -6,27 +6,31 @@
"onboarding": false,
"extends": ["config:base", ":rebaseStalePrs"],
"ignorePresets": [":prHourlyLimit2"],
-"enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
+"enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
"includeForks": true,
"repositories": ["juanfont/headscale"],
"platform": "github",
"packageRules": [
{
"matchDatasources": ["go"],
"groupName": "Go modules",
"groupSlug": "gomod",
"separateMajorMinor": false
},
{
"matchDatasources": ["docker"],
"groupName": "Dockerfiles",
"groupSlug": "dockerfiles"
}
],
"regexManagers": [
{
-"fileMatch": [".github/workflows/.*.yml$"],
-"matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
+"fileMatch": [
+".github/workflows/.*.yml$"
+],
+"matchStrings": [
+"\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
+],
"datasourceTemplate": "golang-version",
"depNameTemplate": "actions/go-version"
}
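The regexManagers entry above is what keeps the `go-version:` lines in the workflow files up to date: Renovate scans files matching `.github/workflows/.*.yml$` and treats the named capture as the current version. As a rough illustration (not part of the diff), the Go snippet below applies an equivalent pattern to a hypothetical workflow excerpt; note that Go's RE2 syntax spells the named group `(?P<currentValue>...)`, whereas the Renovate config writes `(?<currentValue>...)`.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Equivalent of the renovate.json matchStrings pattern, adapted to Go's
	// named-group syntax ((?P<...>) instead of (?<...>)).
	re := regexp.MustCompile(`\s*go-version:\s*"?(?P<currentValue>.*?)"?\n`)

	// Hypothetical excerpt of a workflow file that the fileMatch glob would select.
	workflow := "      with:\n        go-version: \"1.20\"\n"

	if m := re.FindStringSubmatch(workflow); m != nil {
		// Renovate would treat this capture as the current Go version to bump.
		fmt.Println("currentValue =", m[re.SubexpIndex("currentValue")])
	}
}
```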


@@ -16,29 +16,29 @@ jobs:
build:
runs-on: ubuntu-latest
permissions: write-all
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
-uses: dorny/paths-filter@v3
+uses: tj-actions/changed-files@v34
with:
-filters: |
-files:
-- '*.nix'
-- 'go.*'
-- '**/*.go'
-- 'integration_test/'
-- 'config-example.yaml'
-- uses: DeterminateSystems/nix-installer-action@main
-if: steps.changed-files.outputs.files == 'true'
-- uses: DeterminateSystems/magic-nix-cache-action@main
-if: steps.changed-files.outputs.files == 'true'
+files: |
+*.nix
+go.*
+**/*.go
+integration_test/
+config-example.yaml
+- uses: cachix/install-nix-action@v16
+if: steps.changed-files.outputs.any_changed == 'true'
- name: Run build
id: build
-if: steps.changed-files.outputs.files == 'true'
+if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
@@ -64,8 +64,8 @@ jobs:
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
-- uses: actions/upload-artifact@v4
-if: steps.changed-files.outputs.files == 'true'
+- uses: actions/upload-artifact@v3
+if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux
path: result/bin/headscale


@@ -1,41 +0,0 @@
name: Check integration tests workflow
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- name: Generate and check integration tests
if: steps.changed-files.outputs.files == 'true'
run: |
nix develop --command bash -c "cd cmd/gh-action-integration-generator/ && go generate"
git diff --exit-code .github/workflows/test-integration.yaml
- name: Show missing tests
if: failure()
run: |
git diff .github/workflows/test-integration.yaml

.github/workflows/contributors.yml (new file)

@@ -0,0 +1,35 @@
name: Contributors
on:
push:
branches:
- main
workflow_dispatch:
jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Delete upstream contributor branch
# Allow continue on failure to account for when the
# upstream branch is deleted or does not exist.
continue-on-error: true
run: git push origin --delete update-contributors
- name: Create up-to-date contributors branch
run: git checkout -B update-contributors
- name: Push empty contributors branch
run: git push origin update-contributors
- name: Switch back to main
run: git checkout main
- uses: BobAnkh/add-contributors@v0.2.2
with:
CONTRIBUTOR: "## Contributors"
COLUMN_PER_ROW: "6"
ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
IMG_WIDTH: "100"
FONT_SIZE: "14"
PATH: "/README.md"
COMMIT_MESSAGE: "docs(README): update contributors"
AVATAR_SHAPE: "round"
BRANCH: "update-contributors"
PULL_REQUEST: "main"


@@ -1,27 +0,0 @@
name: Test documentation build
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict


@@ -1,52 +0,0 @@
name: Build documentation
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict
- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: ./site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
permissions:
pages: write
id-token: write
runs-on: ubuntu-latest
needs: build
steps:
- name: Configure Pages
uses: actions/configure-pages@v4
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4


@@ -1,5 +1,6 @@
name: GitHub Actions Version Updater
+# Controls when the action will run.
on:
schedule:
# Automatically run on every Sunday
@@ -10,13 +11,13 @@ jobs:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v2
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
- name: Run GitHub Actions Version Updater
-uses: saadmk11/github-actions-version-updater@v0.8.1
+uses: saadmk11/github-actions-version-updater@v0.7.1
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}


@@ -1,6 +1,7 @@
+---
name: Lint
-on: [pull_request]
+on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -10,66 +11,70 @@ jobs:
golangci-lint:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
-uses: dorny/paths-filter@v3
+uses: tj-actions/changed-files@v34
with:
-filters: |
-files:
-- '*.nix'
-- 'go.*'
-- '**/*.go'
-- 'integration_test/'
-- 'config-example.yaml'
-- uses: DeterminateSystems/nix-installer-action@main
-if: steps.changed-files.outputs.files == 'true'
-- uses: DeterminateSystems/magic-nix-cache-action@main
-if: steps.changed-files.outputs.files == 'true'
+files: |
+*.nix
+go.*
+**/*.go
+integration_test/
+config-example.yaml
- name: golangci-lint
-if: steps.changed-files.outputs.files == 'true'
-run: nix develop --command -- golangci-lint run --new-from-rev=${{github.event.pull_request.base.sha}} --out-format=github-actions .
+if: steps.changed-files.outputs.any_changed == 'true'
+uses: golangci/golangci-lint-action@v2
+with:
+version: v1.51.2
+# Only block PRs on new problems.
+# If this is not enabled, we will end up having PRs
+# blocked because new linters has appared and other
+# parts of the code is affected.
+only-new-issues: true
prettier-lint:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
-uses: dorny/paths-filter@v3
+uses: tj-actions/changed-files@v14.1
with:
-filters: |
-files:
-- '*.nix'
-- '**/*.md'
-- '**/*.yml'
-- '**/*.yaml'
-- '**/*.ts'
-- '**/*.js'
-- '**/*.sass'
-- '**/*.css'
-- '**/*.scss'
-- '**/*.html'
-- uses: DeterminateSystems/nix-installer-action@main
-if: steps.changed-files.outputs.files == 'true'
-- uses: DeterminateSystems/magic-nix-cache-action@main
-if: steps.changed-files.outputs.files == 'true'
+files: |
+*.nix
+**/*.md
+**/*.yml
+**/*.yaml
+**/*.ts
+**/*.js
+**/*.sass
+**/*.css
+**/*.scss
+**/*.html
- name: Prettify code
-if: steps.changed-files.outputs.files == 'true'
-run: nix develop --command -- prettier --no-error-on-unmatched-pattern --ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
+if: steps.changed-files.outputs.any_changed == 'true'
+uses: creyD/prettier_action@v4.3
+with:
+prettier_options: >-
+--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
+only_changed: false
+dry: true
proto-lint:
runs-on: ubuntu-latest
steps:
-- uses: actions/checkout@v4
-- uses: DeterminateSystems/nix-installer-action@main
-- uses: DeterminateSystems/magic-nix-cache-action@main
-- name: Buf lint
-run: nix develop --command -- buf lint proto
+- uses: actions/checkout@v2
+- uses: bufbuild/buf-setup-action@v1.7.0
+- uses: bufbuild/buf-lint-action@v1
+with:
+input: "proto"

.github/workflows/release-docker.yml (new file)

@@ -0,0 +1,138 @@
---
name: Release Docker
on:
push:
tags:
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug


@@ -12,27 +12,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v3
with:
fetch-depth: 0
-- name: Login to DockerHub
-uses: docker/login-action@v3
-with:
-username: ${{ secrets.DOCKERHUB_USERNAME }}
-password: ${{ secrets.DOCKERHUB_TOKEN }}
-- name: Login to GHCR
-uses: docker/login-action@v3
-with:
-registry: ghcr.io
-username: ${{ github.repository_owner }}
-password: ${{ secrets.GITHUB_TOKEN }}
-- uses: DeterminateSystems/nix-installer-action@main
-- uses: DeterminateSystems/magic-nix-cache-action@main
+- uses: cachix/install-nix-action@v16
- name: Run goreleaser
-run: nix develop --command -- goreleaser release --clean
+run: nix develop --command -- goreleaser release --rm-dist
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,23 +0,0 @@
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:
days-before-issue-stale: 90
days-before-issue-close: 7
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 90 days with no activity."
close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -0,0 +1,35 @@
name: Integration Test CLI
on: [pull_request]
jobs:
integration-test-cli:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 10
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run CLI integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --command -- make test_integration_cli


@@ -0,0 +1,35 @@
name: Integration Test DERP
on: [pull_request]
jobs:
integration-test-derp:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 10
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run Embedded DERP server integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --command -- make test_integration_derp


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowStarDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowStarDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUser80Dst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUser80Dst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUserDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUserDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDenyAllPort80
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDenyAllPort80$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLHostsInNetMapTable
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLHostsInNetMapTable$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthKeyLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestCreateTailscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestCreateTailscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnablingRoutes
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnablingRoutes$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestExpireNode
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestExpireNode$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHeadscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHeadscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByHostname
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByHostname$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByIP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandReusableEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandWithoutExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestResolveMagicDNS
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestResolveMagicDNS$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHIsBlockedInACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHIsBlockedInACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleUsersAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHNoSSHConfigured
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHNoSSHConfigured$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneUserAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneUserAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSUserOnlyIsolation
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSUserOnlyIsolation$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTaildrop
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTaildrop$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTailscaleNodesJoiningHeadcale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestUserCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestUserCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -1,114 +0,0 @@
name: Integration Tests
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
integration-test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
test:
- TestACLHostsInNetMapTable
- TestACLAllowUser80Dst
- TestACLDenyAllPort80
- TestACLAllowUserDst
- TestACLAllowStarDst
- TestACLNamedHostsCanReachBySubnet
- TestACLNamedHostsCanReach
- TestACLDevice1CanAccessDevice2
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndRelogin
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
- TestPreAuthKeyCommandReusableEphemeral
- TestApiKeyCommand
- TestNodeTagCommand
- TestNodeAdvertiseTagNoACLCommand
- TestNodeAdvertiseTagWithACLCommand
- TestNodeCommand
- TestNodeExpireCommand
- TestNodeRenameCommand
- TestNodeMoveCommand
- TestDERPServerScenario
- TestPingAllByIP
- TestPingAllByIPPublicDERP
- TestAuthKeyLogoutAndRelogin
- TestEphemeral
- TestPingAllByHostname
- TestTaildrop
- TestResolveMagicDNS
- TestExpireNode
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- TestEnablingRoutes
- TestHASubnetRouterFailover
- TestEnableDisableAutoApprovedRoute
- TestSubnetRouteACL
- TestHeadscale
- TestCreateTailscale
- TestTailscaleNodesJoiningHeadcale
- TestSSHOneUserToAll
- TestSSHMultipleUsersAllToAll
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
database: [postgres, sqlite]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: satackey/action-docker-layer-caching@main
if: steps.changed-files.outputs.files == 'true'
continue-on-error: true
- name: Run Integration Test
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.files == 'true'
env:
USE_POSTGRES: ${{ matrix.database == 'postgres' && '1' || '0' }}
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
--env HEADSCALE_INTEGRATION_POSTGRES=${{env.USE_POSTGRES}} \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^${{ matrix.test }}$"
- uses: actions/upload-artifact@v4
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ matrix.test }}-${{matrix.database}}-logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v4
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ matrix.test }}-${{matrix.database}}-pprof
path: "control_logs/*.pprof.tar"

View File

@@ -11,27 +11,24 @@ jobs:
runs-on: ubuntu-latest runs-on: ubuntu-latest
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v3
with: with:
fetch-depth: 2 fetch-depth: 2
- name: Get changed files - name: Get changed files
id: changed-files id: changed-files
uses: dorny/paths-filter@v3 uses: tj-actions/changed-files@v34
with: with:
filters: | files: |
files: *.nix
- '*.nix' go.*
- 'go.*' **/*.go
- '**/*.go' integration_test/
- 'integration_test/' config-example.yaml
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main - uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.files == 'true' if: steps.changed-files.outputs.any_changed == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- name: Run tests - name: Run tests
if: steps.changed-files.outputs.files == 'true' if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --check run: nix develop --check

View File

@@ -1,18 +0,0 @@
name: update-flake-lock
on:
workflow_dispatch: # allows manual triggering
schedule:
- cron: "0 0 * * 0" # runs weekly on Sunday at 00:00
jobs:
lockfile:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@main
- name: Update flake.lock
uses: DeterminateSystems/update-flake-lock@main
with:
pr-title: "Update flake.lock"

12
.gitignore vendored
View File

@@ -1,7 +1,3 @@
ignored/
tailscale/
.vscode/
# Binaries for programs and plugins # Binaries for programs and plugins
*.exe *.exe
*.exe~ *.exe~
@@ -16,7 +12,7 @@ tailscale/
*.out *.out
# Dependency directories (remove the comment below to include it) # Dependency directories (remove the comment below to include it)
vendor/ # vendor/
dist/ dist/
/headscale /headscale
@@ -39,9 +35,3 @@ result
.direnv/ .direnv/
integration_test/etc/config.dump.yaml integration_test/etc/config.dump.yaml
# MkDocs
.cache
/site
__debug_bin

View File

@@ -10,8 +10,6 @@ issues:
linters: linters:
enable-all: true enable-all: true
disable: disable:
- depguard
- exhaustivestruct - exhaustivestruct
- revive - revive
- lll - lll
@@ -32,7 +30,6 @@ linters:
- exhaustruct - exhaustruct
- nolintlint - nolintlint
- musttag # causes issues with imported libs - musttag # causes issues with imported libs
- depguard
# deprecated # deprecated
- structcheck # replaced by unused - structcheck # replaced by unused

View File

@@ -1,15 +1,14 @@
--- ---
before: before:
hooks: hooks:
- go mod tidy -compat=1.22 - go mod tidy -compat=1.20
- go mod vendor
release: release:
prerelease: auto prerelease: auto
builds: builds:
- id: headscale - id: headscale
main: ./cmd/headscale main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}" mod_timestamp: "{{ .CommitTimestamp }}"
env: env:
- CGO_ENABLED=0 - CGO_ENABLED=0
@@ -32,16 +31,19 @@ builds:
archives: archives:
- id: golang-cross - id: golang-cross
name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}' builds:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_386
- linux_amd64
- linux_arm64
- linux_arm_5
- linux_arm_6
- linux_arm_7
name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
format: binary format: binary
source:
enabled: true
name_template: "{{ .ProjectName }}_{{ .Version }}"
format: tar.gz
files:
- "vendor/"
nfpms: nfpms:
# Configure nFPM for .deb and .rpm releases # Configure nFPM for .deb and .rpm releases
# #
@@ -63,6 +65,7 @@ nfpms:
bindir: /usr/bin bindir: /usr/bin
formats: formats:
- deb - deb
- rpm
contents: contents:
- src: ./config-example.yaml - src: ./config-example.yaml
dst: /etc/headscale/config.yaml dst: /etc/headscale/config.yaml
@@ -70,7 +73,7 @@ nfpms:
file_info: file_info:
mode: 0644 mode: 0644
- src: ./docs/packaging/headscale.systemd.service - src: ./docs/packaging/headscale.systemd.service
dst: /usr/lib/systemd/system/headscale.service dst: /etc/systemd/system/headscale.service
- dst: /var/lib/headscale - dst: /var/lib/headscale
type: dir type: dir
- dst: /var/run/headscale - dst: /var/run/headscale
@@ -79,108 +82,6 @@ nfpms:
postinstall: ./docs/packaging/postinstall.sh postinstall: ./docs/packaging/postinstall.sh
postremove: ./docs/packaging/postremove.sh postremove: ./docs/packaging/postremove.sh
kos:
- id: ghcr
repository: ghcr.io/juanfont/headscale
# bare tells KO to only use the repository
# for tagging and naming the container.
bare: true
base_image: gcr.io/distroless/base-debian12
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
- id: dockerhub
build: headscale
base_image: gcr.io/distroless/base-debian12
repository: headscale/headscale
bare: true
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
- id: ghcr-debug
repository: ghcr.io/juanfont/headscale
bare: true
base_image: "debian:12"
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable-debug{{ end }}"
- "{{ .Tag }}-debug"
- '{{ trimprefix .Tag "v" }}-debug'
- "sha-{{ .ShortCommit }}-debug"
- id: dockerhub-debug
build: headscale
base_image: "debian:12"
repository: headscale/headscale
bare: true
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable-debug{{ end }}"
- "{{ .Tag }}-debug"
- '{{ trimprefix .Tag "v" }}-debug'
- "sha-{{ .ShortCommit }}-debug"
checksum: checksum:
name_template: "checksums.txt" name_template: "checksums.txt"
snapshot: snapshot:

View File

@@ -1,6 +1 @@
.github/workflows/test-integration-v2* .github/workflows/test-integration-v2*
docs/dns-records.md
docs/running-headscale-container.md
docs/running-headscale-linux-manual.md
docs/running-headscale-linux.md
docs/running-headscale-openbsd.md

View File

@@ -1,95 +1,12 @@
# CHANGELOG # CHANGELOG
## 0.23.0 (2023-XX-XX) ## 0.22.0 (2023-XX-XX)
This release is mainly a code reorganisation and refactoring, significantly improving the maintainability of the codebase. This should allow us to improve further and make it easier for the maintainers to keep on top of the project.
**Please remember to always back up your database between versions**
#### Here is a short summary of the broad topics of changes:
Code has been organised into modules, reducing use of global variables/objects, isolating concerns and “putting the right things in the logical place”.
The new [policy](https://github.com/juanfont/headscale/tree/main/hscontrol/policy) and [mapper](https://github.com/juanfont/headscale/tree/main/hscontrol/mapper) packages, containing the ACL/Policy logic and the logic for creating the data served to clients (the network “map”), have been rewritten and improved. This change has allowed us to finish SSH support and add additional tests throughout the code to ensure correctness.
The [“poller”, or streaming logic](https://github.com/juanfont/headscale/blob/main/hscontrol/poll.go) has been rewritten: instead of keeping track of the latest updates and checking at a fixed interval, it now uses Go channels, implemented in our new [notifier](https://github.com/juanfont/headscale/tree/main/hscontrol/notifier) package, which allows us to send updates to connected clients immediately. This should improve both performance and the latency before a client picks up an update.
Headscale now supports sending “delta” updates, thanks to the new mapper and poller logic, allowing us to only inform nodes about new nodes, changed nodes and removed nodes. Previously we sent the entire state of the network every time an update was due.
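A minimal, illustrative sketch of such a channel-based notifier is shown below; the `Notifier` and `StateUpdate` shapes are simplifications assumed for the example and are not the actual `hscontrol/notifier` implementation.
```go
package notifier

import "sync"

// StateUpdate is a simplified stand-in for the delta updates described
// above: only the nodes that changed or were removed are sent.
type StateUpdate struct {
	ChangedNodes []string
	RemovedNodes []string
}

// Notifier keeps one channel per connected node and pushes updates to
// them immediately instead of waiting for a polling interval.
type Notifier struct {
	mu    sync.Mutex
	nodes map[string]chan<- StateUpdate
}

func New() *Notifier {
	return &Notifier{nodes: make(map[string]chan<- StateUpdate)}
}

// AddNode registers the channel a node's streaming (long-poll) handler
// listens on.
func (n *Notifier) AddNode(machineKey string, c chan<- StateUpdate) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.nodes[machineKey] = c
}

// RemoveNode unregisters a node when its stream disconnects.
func (n *Notifier) RemoveNode(machineKey string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	delete(n.nodes, machineKey)
}

// NotifyAll fans an update out to every connected node.
func (n *Notifier) NotifyAll(update StateUpdate) {
	n.mu.Lock()
	defer n.mu.Unlock()
	for _, c := range n.nodes {
		c <- update
	}
}
```
Each connected client's streaming handler would register a channel with `AddNode`, turn whatever arrives on it into a map response, and unregister with `RemoveNode` on disconnect.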
While we have a pretty good [test harness](https://github.com/search?q=repo%3Ajuanfont%2Fheadscale+path%3A_test.go&type=code) for validating our changes, we have rewritten over [10000 lines of code](https://github.com/juanfont/headscale/compare/b01f1f1867136d9b2d7b1392776eb363b482c525...main) and bugs are expected. We need help testing this release. In addition, while we think the performance should in general be better, there might be regressions in parts of the platform, particularly where we prioritised correctness over speed.
There are also several bugfixes that have been encountered and fixed as part of implementing these changes, particularly
after improving the test harness as part of adopting [#1460](https://github.com/juanfont/headscale/pull/1460).
### BREAKING
- Code reorganisation: a lot of code has moved, so please review the following PRs accordingly [#1473](https://github.com/juanfont/headscale/pull/1473)
- Change the structure of the database configuration; see [config-example.yaml](./config-example.yaml) for the new structure. [#1700](https://github.com/juanfont/headscale/pull/1700)
- The old structure has been removed and the configuration _must_ be converted.
- Adds additional configuration for PostgreSQL for setting max open connections, max idle connections, and idle connection lifetime.
- API: Machine is now Node [#1553](https://github.com/juanfont/headscale/pull/1553)
- Remove support for older Tailscale clients [#1611](https://github.com/juanfont/headscale/pull/1611)
- The latest supported client is 1.38
- Headscale checks that _at least_ one DERP is defined at start [#1564](https://github.com/juanfont/headscale/pull/1564)
- If no DERP is configured, the server will fail to start; this can happen when it cannot load the DERPMap from a file or URL.
- Embedded DERP server requires a private key [#1611](https://github.com/juanfont/headscale/pull/1611)
- Add a filepath entry to [`derp.server.private_key_path`](https://github.com/juanfont/headscale/blob/b35993981297e18393706b2c963d6db882bba6aa/config-example.yaml#L95)
- Docker images are now built with goreleaser (ko) [#1716](https://github.com/juanfont/headscale/pull/1716) [#1763](https://github.com/juanfont/headscale/pull/1763)
- Entrypoint of the container image has changed from shell to headscale, requiring a change from `headscale serve` to `serve`
- `/var/lib/headscale` and `/var/run/headscale` are no longer created automatically, see [container docs](./docs/running-headscale-container.md)
- Prefixes are now defined per v4 and v6 range. [#1756](https://github.com/juanfont/headscale/pull/1756)
- `ip_prefixes` option is now `prefixes.v4` and `prefixes.v6`
- `prefixes.allocation` can be set to assign IPs at `sequential` or `random`. [#1869](https://github.com/juanfont/headscale/pull/1869)
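As an illustration of the new per-prefix-family options, here is a minimal sketch of loading them from YAML; the key names `v4`, `v6` and `allocation` come from the entries above, while the struct, the placeholder values and the use of `gopkg.in/yaml.v3` are assumptions for the example, not headscale's actual configuration loader.
```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Prefixes mirrors the keys named above: prefixes.v4, prefixes.v6 and
// prefixes.allocation.
type Prefixes struct {
	V4         string `yaml:"v4"`
	V6         string `yaml:"v6"`
	Allocation string `yaml:"allocation"` // "sequential" or "random"
}

func main() {
	// Placeholder values only; consult config-example.yaml for the
	// authoritative defaults.
	doc := []byte(`
v4: 100.64.0.0/10
v6: fd7a:115c:a1e0::/48
allocation: sequential
`)

	var p Prefixes
	if err := yaml.Unmarshal(doc, &p); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", p)
}
```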
### Changes ### Changes
- Use versioned migrations [#1644](https://github.com/juanfont/headscale/pull/1644) - Add `.deb` and `.rpm` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Make the OIDC callback page better [#1484](https://github.com/juanfont/headscale/pull/1484)
- SSH support [#1487](https://github.com/juanfont/headscale/pull/1487)
- State management has been improved [#1492](https://github.com/juanfont/headscale/pull/1492)
- Use error group handling to ensure tests actually pass [#1535](https://github.com/juanfont/headscale/pull/1535) based on [#1460](https://github.com/juanfont/headscale/pull/1460); a minimal sketch of this pattern follows this list
- Fix hang on SIGTERM [#1492](https://github.com/juanfont/headscale/pull/1492) taken from [#1480](https://github.com/juanfont/headscale/pull/1480)
- Send logs to stderr by default [#1524](https://github.com/juanfont/headscale/pull/1524)
- Fix [TS-2023-006](https://tailscale.com/security-bulletins/#ts-2023-006) security UPnP issue [#1563](https://github.com/juanfont/headscale/pull/1563)
- Turn off gRPC logging [#1640](https://github.com/juanfont/headscale/pull/1640) fixes [#1259](https://github.com/juanfont/headscale/issues/1259)
- Added the possibility to manually create a DERP-map entry which can be customized, instead of automatically creating it. [#1565](https://github.com/juanfont/headscale/pull/1565)
- Add support for deleting api keys [#1702](https://github.com/juanfont/headscale/pull/1702)
- Add command to backfill IP addresses for nodes missing IPs from configured prefixes. [#1869](https://github.com/juanfont/headscale/pull/1869)
- Log available update as warning [#1877](https://github.com/juanfont/headscale/pull/1877)
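The error-group item above refers to a standard Go pattern; a minimal sketch using `golang.org/x/sync/errgroup` follows. `pingFromNode` and the test body are hypothetical and only show how a single failing goroutine fails the whole test instead of being silently ignored.
```go
package integration

import (
	"fmt"
	"testing"

	"golang.org/x/sync/errgroup"
)

// pingFromNode is a hypothetical helper standing in for a per-node check.
func pingFromNode(node string) error {
	if node == "" {
		return fmt.Errorf("empty node name")
	}
	return nil
}

func TestPingAllSketch(t *testing.T) {
	nodes := []string{"node-1", "node-2", "node-3"}

	var eg errgroup.Group
	for _, node := range nodes {
		node := node // capture the loop variable
		eg.Go(func() error {
			return pingFromNode(node)
		})
	}

	// Wait returns the first non-nil error from any goroutine, so one
	// failing node fails the test.
	if err := eg.Wait(); err != nil {
		t.Fatalf("ping failed: %v", err)
	}
}
```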
## 0.22.3 (2023-05-12)
### Changes
- Added missing ca-certificates in Docker image [#1463](https://github.com/juanfont/headscale/pull/1463)
## 0.22.2 (2023-05-10)
### Changes
- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
- Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
- Replace node filter logic, ensuring nodes with access can see each other [#1381](https://github.com/juanfont/headscale/pull/1381)
- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
- Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)
## 0.22.1 (2023-04-20)
### Changes
- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
## 0.22.0 (2023-04-20)
### Changes
- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297) - Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279) - Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323) - Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
## 0.21.0 (2023-03-20) ## 0.21.0 (2023-03-20)

23
Dockerfile Normal file
View File

@@ -0,0 +1,23 @@
# Builder image
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale
# Production image
FROM gcr.io/distroless/base-debian11
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp
CMD ["headscale"]

View File

@@ -1,8 +1,5 @@
# This Dockerfile and the images produced are for testing headscale, # Builder image
# and are in no way endorsed by Headscale's maintainers as an FROM docker.io/golang:1.20-bullseye AS build
# official nor supported release or distribution.
FROM docker.io/golang:1.22-bookworm AS build
ARG VERSION=dev ARG VERSION=dev
ENV GOPATH /go ENV GOPATH /go
WORKDIR /go/src/headscale WORKDIR /go/src/headscale
@@ -12,21 +9,15 @@ RUN go mod download
COPY . . COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN test -e /go/bin/headscale RUN test -e /go/bin/headscale
# Debug image # Debug image
FROM docker.io/golang:1.22-bookworm FROM docker.io/golang:1.20.0-bullseye
COPY --from=build /go/bin/headscale /bin/headscale COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC ENV TZ UTC
RUN apt-get update \
&& apt-get install --no-install-recommends --yes less jq \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
RUN mkdir -p /var/run/headscale
# Need to reset the entrypoint or everything will run as a busybox script # Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT [] ENTRYPOINT []
EXPOSE 8080/tcp EXPOSE 8080/tcp

19
Dockerfile.tailscale Normal file
View File

@@ -0,0 +1,19 @@
FROM ubuntu:latest
ARG TAILSCALE_VERSION=*
ARG TAILSCALE_CHANNEL=stable
RUN apt-get update \
&& apt-get install -y gnupg curl ssh \
&& curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
&& curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
&& apt-get update \
&& apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
&& rm -rf /var/lib/apt/lists/*
RUN adduser --shell=/bin/bash ssh-it-user
ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt
RUN update-ca-certificates

View File

@@ -1,11 +1,7 @@
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM golang:latest FROM golang:latest
RUN apt-get update \ RUN apt-get update \
&& apt-get install -y dnsutils git iptables ssh ca-certificates \ && apt-get install -y ca-certificates dnsutils git iptables ssh \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/*
RUN useradd --shell=/bin/bash --create-home ssh-it-user RUN useradd --shell=/bin/bash --create-home ssh-it-user
@@ -14,8 +10,15 @@ RUN git clone https://github.com/tailscale/tailscale.git
WORKDIR /go/tailscale WORKDIR /go/tailscale
RUN git checkout main \ RUN git checkout main
&& sh build_dist.sh tailscale.com/cmd/tailscale \
&& sh build_dist.sh tailscale.com/cmd/tailscaled \ RUN sh build_dist.sh tailscale.com/cmd/tailscale
&& cp tailscale /usr/local/bin/ \ RUN sh build_dist.sh tailscale.com/cmd/tailscaled
&& cp tailscaled /usr/local/bin/
RUN cp tailscale /usr/local/bin/
RUN cp tailscaled /usr/local/bin/
ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt
RUN update-ca-certificates

View File

@@ -10,6 +10,8 @@ ifeq ($(filter $(GOOS), openbsd netbsd soloaris plan9), )
else else
endif endif
TAGS = -tags ts2019
# GO_SOURCES = $(wildcard *.go) # GO_SOURCES = $(wildcard *.go)
# PROTO_SOURCES = $(wildcard **/*.proto) # PROTO_SOURCES = $(wildcard **/*.proto)
GO_SOURCES = $(call rwildcard,,*.go) GO_SOURCES = $(call rwildcard,,*.go)
@@ -22,9 +24,31 @@ build:
dev: lint test build dev: lint test build
test: test:
gotestsum -- -short -coverprofile=coverage.out ./... @go test $(TAGS) -short -coverprofile=coverage.out ./...
test_integration: test_integration: test_integration_cli test_integration_derp test_integration_v2_general
test_integration_cli:
docker network rm $$(docker network ls --filter name=headscale --quiet) || true
docker network create headscale-test || true
docker run -t --rm \
--network headscale-test \
-v ~/.cache/hs-integration-go:/go \
-v $$PWD:$$PWD -w $$PWD \
-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
test_integration_derp:
docker network rm $$(docker network ls --filter name=headscale --quiet) || true
docker network create headscale-test || true
docker run -t --rm \
--network headscale-test \
-v ~/.cache/hs-integration-go:/go \
-v $$PWD:$$PWD -w $$PWD \
-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationDERP ./...
test_integration_v2_general:
docker run \ docker run \
-t --rm \ -t --rm \
-v ~/.cache/hs-integration-go:/go \ -v ~/.cache/hs-integration-go:/go \
@@ -32,7 +56,13 @@ test_integration:
-v $$PWD:$$PWD -w $$PWD/integration \ -v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \ -v /var/run/docker.sock:/var/run/docker.sock \
golang:1 \ golang:1 \
go run gotest.tools/gotestsum@latest -- -failfast ./... -timeout 120m -parallel 8 go test $(TAGS) -failfast ./... -timeout 120m -parallel 8
coverprofile_func:
go tool cover -func=coverage.out
coverprofile_html:
go tool cover -html=coverage.out
lint: lint:
golangci-lint run --fix --timeout 10m golangci-lint run --fix --timeout 10m
@@ -50,4 +80,11 @@ compress: build
generate: generate:
rm -rf gen rm -rf gen
buf generate proto go run github.com/bufbuild/buf/cmd/buf generate proto
install-protobuf-plugins:
go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc

873
README.md
View File

@@ -32,18 +32,22 @@ organisation.
## Design goal ## Design goal
Headscale aims to implement a self-hosted, open source alternative to the Tailscale `headscale` aims to implement a self-hosted, open source alternative to the Tailscale
control server. control server. `headscale` has a narrower scope and an instance of `headscale`
Headscale's goal is to provide self-hosters and hobbyists with an open-source implements a _single_ Tailnet, which is typically what a single organisation, or
server they can use for their projects and labs. home/personal setup would use.
It implements a narrow scope, a single Tailnet, suitable for personal use or a small
open-source organisation.
## Supporting Headscale `headscale` uses terms that map to Tailscale's control server; consult the
[glossary](./docs/glossary.md) for explanations.
## Support
If you like `headscale` and find it useful, there is a sponsorship and donation If you like `headscale` and find it useful, there is a sponsorship and donation
buttons available in the repo. buttons available in the repo.
If you would like to sponsor features, bugs or prioritisation, reach out to
one of the maintainers.
## Features ## Features
- Full "base" support of Tailscale's features - Full "base" support of Tailscale's features
@@ -75,10 +79,16 @@ buttons available in the repo.
## Running headscale ## Running headscale
**Please note that we do not support nor encourage the use of reverse proxies Please have a look at the documentation under [`docs/`](docs/).
and containers to run Headscale.**
Please have a look at the [`documentation`](https://headscale.net/). ## Graphical Control Panels
Headscale provides an API for complete management of your Tailnet.
These are community projects not directly affiliated with the Headscale project.
| Name | Repository Link | Description | Status |
| --------------- | ---------------------------------------------------- | ------------------------------------------------------ | ------ |
| headscale-webui | [Github](https://github.com/ifargle/headscale-webui) | A simple Headscale web UI for small-scale deployments. | Alpha |
## Talks ## Talks
@@ -87,26 +97,11 @@ Please have a look at the [`documentation`](https://headscale.net/).
## Disclaimer ## Disclaimer
This project is not associated with Tailscale Inc. 1. We have nothing to do with Tailscale, or Tailscale Inc.
2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.
However, one of the active maintainers for Headscale [is employed by Tailscale](https://tailscale.com/blog/opensource) and he is allowed to spend work hours contributing to the project. Contributions from this maintainer are reviewed by other maintainers.
The maintainers work together on setting the direction for the project. The underlying principle is to serve the community of self-hosters, enthusiasts and hobbyists - while having a sustainable project.
## Contributing ## Contributing
Headscale is "Open Source, acknowledged contribution"; this means that any
contribution will have to be discussed with the maintainers before being submitted.
This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.
Headscale is open to code contributions for bug fixes without discussion.
If you find mistakes in the documentation, please submit a fix to the documentation.
### Requirements
To contribute to headscale you would need the latest version of [Go](https://golang.org) To contribute to headscale you would need the latest version of [Go](https://golang.org)
and [Buf](https://buf.build) (Protobuf generator). and [Buf](https://buf.build) (Protobuf generator).
@@ -114,6 +109,8 @@ We recommend using [Nix](https://nixos.org/) to setup a development environment.
be done with `nix develop`, which will install the tools and give you a shell. be done with `nix develop`, which will install the tools and give you a shell.
This guarantees that you will have the same dev env as `headscale` maintainers. This guarantees that you will have the same dev env as `headscale` maintainers.
PRs and suggestions are welcome.
### Code style ### Code style
To ensure we have some consistency with a growing number of contributions, To ensure we have some consistency with a growing number of contributions,
@@ -175,8 +172,820 @@ make build
## Contributors ## Contributors
<a href="https://github.com/juanfont/headscale/graphs/contributors"> <table>
<img src="https://contrib.rocks/image?repo=juanfont/headscale" /> <tr>
</a> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kradalby>
Made with [contrib.rocks](https://contrib.rocks). <img src=https://avatars.githubusercontent.com/u/98431?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kristoffer Dalby/>
<br />
<sub style="font-size:14px"><b>Kristoffer Dalby</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/juanfont>
<img src=https://avatars.githubusercontent.com/u/181059?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Juan Font/>
<br />
<sub style="font-size:14px"><b>Juan Font</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cure>
<img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/>
<br />
<sub style="font-size:14px"><b>Ward Vandewege</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/huskyii>
<img src=https://avatars.githubusercontent.com/u/5499746?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jiang Zhu/>
<br />
<sub style="font-size:14px"><b>Jiang Zhu</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tsujamin>
<img src=https://avatars.githubusercontent.com/u/2435619?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Benjamin Roberts/>
<br />
<sub style="font-size:14px"><b>Benjamin Roberts</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/reynico>
<img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/>
<br />
<sub style="font-size:14px"><b>Nico</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/evenh>
<img src=https://avatars.githubusercontent.com/u/2701536?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Even Holthe/>
<br />
<sub style="font-size:14px"><b>Even Holthe</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/e-zk>
<img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/>
<br />
<sub style="font-size:14px"><b>e-zk</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/arch4ngel>
<img src=https://avatars.githubusercontent.com/u/11574161?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Justin Angel/>
<br />
<sub style="font-size:14px"><b>Justin Angel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ItalyPaleAle>
<img src=https://avatars.githubusercontent.com/u/43508?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Alessandro (Ale) Segala/>
<br />
<sub style="font-size:14px"><b>Alessandro (Ale) Segala</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/unreality>
<img src=https://avatars.githubusercontent.com/u/352522?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=unreality/>
<br />
<sub style="font-size:14px"><b>unreality</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
<br />
<sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
<br />
<sub style="font-size:14px"><b>Moritz Poldrack</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/GrigoriyMikhalkin>
<img src=https://avatars.githubusercontent.com/u/3637857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=GrigoriyMikhalkin/>
<br />
<sub style="font-size:14px"><b>GrigoriyMikhalkin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/christian-heusel>
<img src=https://avatars.githubusercontent.com/u/26827864?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Christian Heusel/>
<br />
<sub style="font-size:14px"><b>Christian Heusel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mike-lloyd03>
<img src=https://avatars.githubusercontent.com/u/49411532?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mike Lloyd/>
<br />
<sub style="font-size:14px"><b>Mike Lloyd</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/iSchluff>
<img src=https://avatars.githubusercontent.com/u/1429641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anton Schubert/>
<br />
<sub style="font-size:14px"><b>Anton Schubert</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Niek>
<img src=https://avatars.githubusercontent.com/u/213140?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Niek van der Maas/>
<br />
<sub style="font-size:14px"><b>Niek van der Maas</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/negbie>
<img src=https://avatars.githubusercontent.com/u/20154956?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Eugen Biegler/>
<br />
<sub style="font-size:14px"><b>Eugen Biegler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/617a7a>
<img src=https://avatars.githubusercontent.com/u/67651251?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Azz/>
<br />
<sub style="font-size:14px"><b>Azz</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/qbit>
<img src=https://avatars.githubusercontent.com/u/68368?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aaron Bieber/>
<br />
<sub style="font-size:14px"><b>Aaron Bieber</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kazauwa>
<img src=https://avatars.githubusercontent.com/u/12330159?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Igor Perepilitsyn/>
<br />
<sub style="font-size:14px"><b>Igor Perepilitsyn</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Aluxima>
<img src=https://avatars.githubusercontent.com/u/16262531?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Laurent Marchaud/>
<br />
<sub style="font-size:14px"><b>Laurent Marchaud</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fdelucchijr>
<img src=https://avatars.githubusercontent.com/u/69133647?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Fernando De Lucchi/>
<br />
<sub style="font-size:14px"><b>Fernando De Lucchi</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/OrvilleQ>
<img src=https://avatars.githubusercontent.com/u/21377465?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Orville Q. Song/>
<br />
<sub style="font-size:14px"><b>Orville Q. Song</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hdhoang>
<img src=https://avatars.githubusercontent.com/u/12537?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hdhoang/>
<br />
<sub style="font-size:14px"><b>hdhoang</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/bravechamp>
<img src=https://avatars.githubusercontent.com/u/48980452?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=bravechamp/>
<br />
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/deonthomasgy>
<img src=https://avatars.githubusercontent.com/u/150036?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Deon Thomas/>
<br />
<sub style="font-size:14px"><b>Deon Thomas</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/madjam002>
<img src=https://avatars.githubusercontent.com/u/679137?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jamie Greeff/>
<br />
<sub style="font-size:14px"><b>Jamie Greeff</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ChibangLW>
<img src=https://avatars.githubusercontent.com/u/22293464?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ChibangLW/>
<br />
<sub style="font-size:14px"><b>ChibangLW</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mevansam>
<img src=https://avatars.githubusercontent.com/u/403630?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mevan Samaratunga/>
<br />
<sub style="font-size:14px"><b>Mevan Samaratunga</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dragetd>
<img src=https://avatars.githubusercontent.com/u/3639577?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael G./>
<br />
<sub style="font-size:14px"><b>Michael G.</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ptman>
<img src=https://avatars.githubusercontent.com/u/24669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Paul Tötterman/>
<br />
<sub style="font-size:14px"><b>Paul Tötterman</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/samson4649>
<img src=https://avatars.githubusercontent.com/u/12725953?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Samuel Lock/>
<br />
<sub style="font-size:14px"><b>Samuel Lock</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kevin1sMe>
<img src=https://avatars.githubusercontent.com/u/6886076?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=kevinlin/>
<br />
<sub style="font-size:14px"><b>kevinlin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/QZAiXH>
<img src=https://avatars.githubusercontent.com/u/23068780?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Snack/>
<br />
<sub style="font-size:14px"><b>Snack</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/artemklevtsov>
<img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/>
<br />
<sub style="font-size:14px"><b>Artem Klevtsov</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cmars>
<img src=https://avatars.githubusercontent.com/u/23741?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Casey Marshall/>
<br />
<sub style="font-size:14px"><b>Casey Marshall</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dbevacqua>
<img src=https://avatars.githubusercontent.com/u/6534306?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dbevacqua/>
<br />
<sub style="font-size:14px"><b>dbevacqua</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/joshuataylor>
<img src=https://avatars.githubusercontent.com/u/225131?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Josh Taylor/>
<br />
<sub style="font-size:14px"><b>Josh Taylor</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/CNLHC>
<img src=https://avatars.githubusercontent.com/u/21005146?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=LiuHanCheng/>
<br />
<sub style="font-size:14px"><b>LiuHanCheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/motiejus>
<img src=https://avatars.githubusercontent.com/u/107720?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Motiejus Jakštys/>
<br />
<sub style="font-size:14px"><b>Motiejus Jakštys</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pvinis>
<img src=https://avatars.githubusercontent.com/u/100233?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pavlos Vinieratos/>
<br />
<sub style="font-size:14px"><b>Pavlos Vinieratos</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/SilverBut>
<img src=https://avatars.githubusercontent.com/u/6560655?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Silver Bullet/>
<br />
<sub style="font-size:14px"><b>Silver Bullet</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/snh>
<img src=https://avatars.githubusercontent.com/u/2051768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Steven Honson/>
<br />
<sub style="font-size:14px"><b>Steven Honson</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ratsclub>
<img src=https://avatars.githubusercontent.com/u/25647735?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Victor Freire/>
<br />
<sub style="font-size:14px"><b>Victor Freire</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lachy2849>
<img src=https://avatars.githubusercontent.com/u/98844035?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lachy2849/>
<br />
<sub style="font-size:14px"><b>lachy2849</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/t56k>
<img src=https://avatars.githubusercontent.com/u/12165422?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=thomas/>
<br />
<sub style="font-size:14px"><b>thomas</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aberoham>
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
<br />
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/iFargle>
<img src=https://avatars.githubusercontent.com/u/124551390?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Albert Copeland/>
<br />
<sub style="font-size:14px"><b>Albert Copeland</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/puzpuzpuz>
<img src=https://avatars.githubusercontent.com/u/37772591?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andrei Pechkurov/>
<br />
<sub style="font-size:14px"><b>Andrei Pechkurov</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/theryecatcher>
<img src=https://avatars.githubusercontent.com/u/16442416?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anoop Sundaresh/>
<br />
<sub style="font-size:14px"><b>Anoop Sundaresh</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/apognu>
<img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/>
<br />
<sub style="font-size:14px"><b>Antoine POPINEAU</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aofei>
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
<br />
<sub style="font-size:14px"><b>Aofei Sheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/arnarg>
<img src=https://avatars.githubusercontent.com/u/1291396?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arnar/>
<br />
<sub style="font-size:14px"><b>Arnar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/awoimbee>
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
<br />
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/avirut>
<img src=https://avatars.githubusercontent.com/u/27095602?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Avirut Mehta/>
<br />
<sub style="font-size:14px"><b>Avirut Mehta</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stensonb>
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
<br />
<sub style="font-size:14px"><b>Bryan Stenson</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/yangchuansheng>
<img src=https://avatars.githubusercontent.com/u/15308462?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt= Carson Yang/>
<br />
<sub style="font-size:14px"><b> Carson Yang</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kundel>
<img src=https://avatars.githubusercontent.com/u/10158899?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=kundel/>
<br />
<sub style="font-size:14px"><b>kundel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fatih-acar>
<img src=https://avatars.githubusercontent.com/u/15028881?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=fatih-acar/>
<br />
<sub style="font-size:14px"><b>fatih-acar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fkr>
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
<br />
<sub style="font-size:14px"><b>Felix Kronlage-Dammers</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/felixonmars>
<img src=https://avatars.githubusercontent.com/u/1006477?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Yan/>
<br />
<sub style="font-size:14px"><b>Felix Yan</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JJGadgets>
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
<br />
<sub style="font-size:14px"><b>JJGadgets</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hrtkpf>
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/>
<br />
<sub style="font-size:14px"><b>hrtkpf</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimt>
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
<br />
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jsiebens>
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
<br />
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/johnae>
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
<br />
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ShadowJonathan>
<img src=https://avatars.githubusercontent.com/u/22740616?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jonathan de Jong/>
<br />
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/foxtrot>
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
<br />
<sub style="font-size:14px"><b>Marc</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magf>
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
<br />
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mikejsavage>
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
<br />
<sub style="font-size:14px"><b>Michael Savage</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/piec>
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
<br />
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Donran>
<img src=https://avatars.githubusercontent.com/u/4838348?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pontus N/>
<br />
<sub style="font-size:14px"><b>Pontus N</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nnsee>
<img src=https://avatars.githubusercontent.com/u/36747857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Rasmus Moorats/>
<br />
<sub style="font-size:14px"><b>Rasmus Moorats</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/rcursaru>
<img src=https://avatars.githubusercontent.com/u/16259641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=rcursaru/>
<br />
<sub style="font-size:14px"><b>rcursaru</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/renovate-bot>
<img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mend Renovate/>
<br />
<sub style="font-size:14px"><b>Mend Renovate</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ryanfowler>
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
<br />
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/linsomniac>
<img src=https://avatars.githubusercontent.com/u/466380?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Sean Reifschneider/>
<br />
<sub style="font-size:14px"><b>Sean Reifschneider</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/shaananc>
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
<br />
<sub style="font-size:14px"><b>Shaanan Cohney</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stefanvanburen>
<img src=https://avatars.githubusercontent.com/u/622527?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan VanBuren/>
<br />
<sub style="font-size:14px"><b>Stefan VanBuren</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/sophware>
<img src=https://avatars.githubusercontent.com/u/41669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sophware/>
<br />
<sub style="font-size:14px"><b>sophware</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/m-tanner-dev0>
<img src=https://avatars.githubusercontent.com/u/97977342?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tanner/>
<br />
<sub style="font-size:14px"><b>Tanner</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Teteros>
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
<br />
<sub style="font-size:14px"><b>Teteros</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gitter-badger>
<img src=https://avatars.githubusercontent.com/u/8518239?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=The Gitter Badger/>
<br />
<sub style="font-size:14px"><b>The Gitter Badger</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tianon>
<img src=https://avatars.githubusercontent.com/u/161631?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tianon Gravi/>
<br />
<sub style="font-size:14px"><b>Tianon Gravi</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/thetillhoff>
<img src=https://avatars.githubusercontent.com/u/25052289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Till Hoffmann/>
<br />
<sub style="font-size:14px"><b>Till Hoffmann</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/woudsma>
<img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/>
<br />
<sub style="font-size:14px"><b>Tjerk Woudsma</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/y0ngb1n>
<img src=https://avatars.githubusercontent.com/u/25719408?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yang Bin/>
<br />
<sub style="font-size:14px"><b>Yang Bin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gozssky>
<img src=https://avatars.githubusercontent.com/u/17199941?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yujie Xia/>
<br />
<sub style="font-size:14px"><b>Yujie Xia</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/newellz2>
<img src=https://avatars.githubusercontent.com/u/52436542?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zachary Newell/>
<br />
<sub style="font-size:14px"><b>Zachary Newell</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zekker6>
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
<br />
<sub style="font-size:14px"><b>Zakhar Bessarab</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zhzy0077>
<img src=https://avatars.githubusercontent.com/u/8717471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zhiyuan Zheng/>
<br />
<sub style="font-size:14px"><b>Zhiyuan Zheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Bpazy>
<img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ziyuan Han/>
<br />
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/caelansar>
<img src=https://avatars.githubusercontent.com/u/31852257?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=caelansar/>
<br />
<sub style="font-size:14px"><b>caelansar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/derelm>
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
<br />
<sub style="font-size:14px"><b>derelm</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dnaq>
<img src=https://avatars.githubusercontent.com/u/1299717?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dnaq/>
<br />
<sub style="font-size:14px"><b>dnaq</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nning>
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
<br />
<sub style="font-size:14px"><b>henning mueller</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ignoramous>
<img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/>
<br />
<sub style="font-size:14px"><b>ignoramous</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimyag>
<img src=https://avatars.githubusercontent.com/u/69233189?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=jimyag/>
<br />
<sub style="font-size:14px"><b>jimyag</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magichuihui>
<img src=https://avatars.githubusercontent.com/u/10866198?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=suhelen/>
<br />
<sub style="font-size:14px"><b>suhelen</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lion24>
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/>
<br />
<sub style="font-size:14px"><b>sharkonet</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ma6174>
<img src=https://avatars.githubusercontent.com/u/1449133?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ma6174/>
<br />
<sub style="font-size:14px"><b>ma6174</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/manju-rn>
<img src=https://avatars.githubusercontent.com/u/26291847?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=manju-rn/>
<br />
<sub style="font-size:14px"><b>manju-rn</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pernila>
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
<br />
<sub style="font-size:14px"><b>pernila</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/phpmalik>
<img src=https://avatars.githubusercontent.com/u/26834645?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=phpmalik/>
<br />
<sub style="font-size:14px"><b>phpmalik</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Wakeful-Cloud>
<img src=https://avatars.githubusercontent.com/u/38930607?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Wakeful-Cloud/>
<br />
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/xpzouying>
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
<br />
<sub style="font-size:14px"><b>zy</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/atorregrosa-smd>
<img src=https://avatars.githubusercontent.com/u/78434679?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Àlex Torregrosa/>
<br />
<sub style="font-size:14px"><b>Àlex Torregrosa</b></sub>
</a>
</td>
</tr>
</table>

780
acls.go Normal file
View File

@@ -0,0 +1,780 @@
package headscale
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/netip"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/rs/zerolog/log"
"github.com/tailscale/hujson"
"go4.org/netipx"
"gopkg.in/yaml.v3"
"tailscale.com/envknob"
"tailscale.com/tailcfg"
)
const (
errEmptyPolicy = Error("empty policy")
errInvalidAction = Error("invalid action")
errInvalidGroup = Error("invalid group")
errInvalidTag = Error("invalid tag")
errInvalidPortFormat = Error("invalid port format")
errWildcardIsNeeded = Error("wildcard as port is required for the protocol")
)
const (
Base8 = 8
Base10 = 10
BitSize16 = 16
BitSize32 = 32
BitSize64 = 64
portRangeBegin = 0
portRangeEnd = 65535
expectedTokenItems = 2
)
// For some reason golang.org/x/net/internal/iana is an internal package.
const (
protocolICMP = 1 // Internet Control Message
protocolIGMP = 2 // Internet Group Management
protocolIPv4 = 4 // IPv4 encapsulation
protocolTCP = 6 // Transmission Control
protocolEGP = 8 // Exterior Gateway Protocol
protocolIGP = 9 // any private interior gateway (used by Cisco for their IGRP)
protocolUDP = 17 // User Datagram
protocolGRE = 47 // Generic Routing Encapsulation
protocolESP = 50 // Encap Security Payload
protocolAH = 51 // Authentication Header
protocolIPv6ICMP = 58 // ICMP for IPv6
protocolSCTP = 132 // Stream Control Transmission Protocol
ProtocolFC = 133 // Fibre Channel
)
var featureEnableSSH = envknob.RegisterBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")
// LoadACLPolicy loads the ACL policy from the specified path and generates the ACL rules.
func (h *Headscale) LoadACLPolicy(path string) error {
log.Debug().
Str("func", "LoadACLPolicy").
Str("path", path).
Msg("Loading ACL policy from path")
policyFile, err := os.Open(path)
if err != nil {
return err
}
defer policyFile.Close()
var policy ACLPolicy
policyBytes, err := io.ReadAll(policyFile)
if err != nil {
return err
}
switch filepath.Ext(path) {
case ".yml", ".yaml":
log.Debug().
Str("path", path).
Bytes("file", policyBytes).
Msg("Loading ACLs from YAML")
err := yaml.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
log.Trace().
Interface("policy", policy).
Msg("Loaded policy from YAML")
default:
ast, err := hujson.Parse(policyBytes)
if err != nil {
return err
}
ast.Standardize()
policyBytes = ast.Pack()
err = json.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
}
if policy.IsZero() {
return errEmptyPolicy
}
h.aclPolicy = &policy
return h.UpdateACLRules()
}
func (h *Headscale) UpdateACLRules() error {
machines, err := h.ListMachines()
if err != nil {
return err
}
if h.aclPolicy == nil {
return errEmptyPolicy
}
rules, err := generateACLRules(machines, *h.aclPolicy, h.cfg.OIDC.StripEmaildomain)
if err != nil {
return err
}
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
h.aclRules = rules
// Precompute a map of which sources can reach each destination; this
// provides a quicker lookup when we calculate the peer list for the map
// response to nodes.
aclPeerCacheMap := generateACLPeerCacheMap(rules)
h.aclPeerCacheMapRW.Lock()
h.aclPeerCacheMap = aclPeerCacheMap
h.aclPeerCacheMapRW.Unlock()
if featureEnableSSH() {
sshRules, err := h.generateSSHRules()
if err != nil {
return err
}
log.Trace().Interface("SSH", sshRules).Msg("SSH rules generated")
if h.sshPolicy == nil {
h.sshPolicy = &tailcfg.SSHPolicy{}
}
h.sshPolicy.Rules = sshRules
} else if h.aclPolicy != nil && len(h.aclPolicy.SSHs) > 0 {
log.Info().Msg("SSH ACLs has been defined, but HEADSCALE_EXPERIMENTAL_FEATURE_SSH is not enabled, this is a unstable feature, check docs before activating")
}
return nil
}
// generateACLPeerCacheMap takes a list of Tailscale filter rules and generates a map
// of which Sources ("*" and IPs) can access destinations. This is to speed up the
// process of generating MapResponses when deciding which Peers to inform nodes about.
func generateACLPeerCacheMap(rules []tailcfg.FilterRule) map[string]map[string]struct{} {
aclCachePeerMap := make(map[string]map[string]struct{})
for _, rule := range rules {
for _, srcIP := range rule.SrcIPs {
for _, ip := range expandACLPeerAddr(srcIP) {
if data, ok := aclCachePeerMap[ip]; ok {
for _, dstPort := range rule.DstPorts {
for _, dstIP := range expandACLPeerAddr(dstPort.IP) {
data[dstIP] = struct{}{}
}
}
} else {
dstPortsMap := make(map[string]struct{}, len(rule.DstPorts))
for _, dstPort := range rule.DstPorts {
for _, dstIP := range expandACLPeerAddr(dstPort.IP) {
dstPortsMap[dstIP] = struct{}{}
}
}
aclCachePeerMap[ip] = dstPortsMap
}
}
}
}
log.Trace().Interface("ACL Cache Map", aclCachePeerMap).Msg("ACL Peer Cache Map generated")
return aclCachePeerMap
}
// expandACLPeerAddr takes a "tailcfg.FilterRule" "IP" and expands it into
// something our cache logic can look up, which is "*" or single IP addresses.
// This is probably quite inefficient, but it is a result of
// "make it work, then make it fast", and a lot of the ACL stuff does not
// work, but people have tried to make it fast.
func expandACLPeerAddr(srcIP string) []string {
if ip, err := netip.ParseAddr(srcIP); err == nil {
return []string{ip.String()}
}
if cidr, err := netip.ParsePrefix(srcIP); err == nil {
addrs := []string{}
ipRange := netipx.RangeOfPrefix(cidr)
from := ipRange.From()
too := ipRange.To()
if from == too {
return []string{from.String()}
}
for from != too && from.Less(too) {
addrs = append(addrs, from.String())
from = from.Next()
}
addrs = append(addrs, too.String()) // Add the last IP address in the range
return addrs
}
// probably "*" or other string based "IP"
return []string{srcIP}
}
func generateACLRules(
machines []Machine,
aclPolicy ACLPolicy,
stripEmaildomain bool,
) ([]tailcfg.FilterRule, error) {
rules := []tailcfg.FilterRule{}
for index, acl := range aclPolicy.ACLs {
if acl.Action != "accept" {
return nil, errInvalidAction
}
srcIPs := []string{}
for innerIndex, src := range acl.Sources {
srcs, err := generateACLPolicySrc(machines, aclPolicy, src, stripEmaildomain)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
return nil, err
}
srcIPs = append(srcIPs, srcs...)
}
protocols, needsWildcard, err := parseProtocol(acl.Protocol)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d. protocol unknown %s", index, acl.Protocol)
return nil, err
}
destPorts := []tailcfg.NetPortRange{}
for innerIndex, dest := range acl.Destinations {
dests, err := generateACLPolicyDest(
machines,
aclPolicy,
dest,
needsWildcard,
stripEmaildomain,
)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Destination %d", index, innerIndex)
return nil, err
}
destPorts = append(destPorts, dests...)
}
rules = append(rules, tailcfg.FilterRule{
SrcIPs: srcIPs,
DstPorts: destPorts,
IPProto: protocols,
})
}
return rules, nil
}
func (h *Headscale) generateSSHRules() ([]*tailcfg.SSHRule, error) {
rules := []*tailcfg.SSHRule{}
if h.aclPolicy == nil {
return nil, errEmptyPolicy
}
machines, err := h.ListMachines()
if err != nil {
return nil, err
}
acceptAction := tailcfg.SSHAction{
Message: "",
Reject: false,
Accept: true,
SessionDuration: 0,
AllowAgentForwarding: false,
HoldAndDelegate: "",
AllowLocalPortForwarding: true,
}
rejectAction := tailcfg.SSHAction{
Message: "",
Reject: true,
Accept: false,
SessionDuration: 0,
AllowAgentForwarding: false,
HoldAndDelegate: "",
AllowLocalPortForwarding: false,
}
for index, sshACL := range h.aclPolicy.SSHs {
action := rejectAction
switch sshACL.Action {
case "accept":
action = acceptAction
case "check":
checkAction, err := sshCheckAction(sshACL.CheckPeriod)
if err != nil {
log.Error().
Msgf("Error parsing SSH %d, check action with unparsable duration '%s'", index, sshACL.CheckPeriod)
} else {
action = *checkAction
}
default:
log.Error().
Msgf("Error parsing SSH %d, unknown action '%s'", index, sshACL.Action)
return nil, err
}
principals := make([]*tailcfg.SSHPrincipal, 0, len(sshACL.Sources))
for innerIndex, rawSrc := range sshACL.Sources {
expandedSrcs, err := expandAlias(
machines,
*h.aclPolicy,
rawSrc,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
log.Error().
Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
return nil, err
}
for _, expandedSrc := range expandedSrcs {
principals = append(principals, &tailcfg.SSHPrincipal{
NodeIP: expandedSrc,
})
}
}
userMap := make(map[string]string, len(sshACL.Users))
for _, user := range sshACL.Users {
userMap[user] = "="
}
rules = append(rules, &tailcfg.SSHRule{
RuleExpires: nil,
Principals: principals,
SSHUsers: userMap,
Action: &action,
})
}
return rules, nil
}
func sshCheckAction(duration string) (*tailcfg.SSHAction, error) {
sessionLength, err := time.ParseDuration(duration)
if err != nil {
return nil, err
}
return &tailcfg.SSHAction{
Message: "",
Reject: false,
Accept: true,
SessionDuration: sessionLength,
AllowAgentForwarding: false,
HoldAndDelegate: "",
AllowLocalPortForwarding: true,
}, nil
}
func generateACLPolicySrc(
machines []Machine,
aclPolicy ACLPolicy,
src string,
stripEmaildomain bool,
) ([]string, error) {
return expandAlias(machines, aclPolicy, src, stripEmaildomain)
}
func generateACLPolicyDest(
machines []Machine,
aclPolicy ACLPolicy,
dest string,
needsWildcard bool,
stripEmaildomain bool,
) ([]tailcfg.NetPortRange, error) {
tokens := strings.Split(dest, ":")
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
return nil, errInvalidPortFormat
}
var alias string
// We can have here stuff like:
// git-server:*
// 192.168.1.0/24:22
// tag:montreal-webserver:80,443
// tag:api-server:443
// example-host-1:*
if len(tokens) == expectedTokenItems {
alias = tokens[0]
} else {
alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
}
expanded, err := expandAlias(
machines,
aclPolicy,
alias,
stripEmaildomain,
)
if err != nil {
return nil, err
}
ports, err := expandPorts(tokens[len(tokens)-1], needsWildcard)
if err != nil {
return nil, err
}
dests := []tailcfg.NetPortRange{}
for _, d := range expanded {
for _, p := range *ports {
pr := tailcfg.NetPortRange{
IP: d,
Ports: p,
}
dests = append(dests, pr)
}
}
return dests, nil
}
// parseProtocol reads the proto field of the ACL and generates a list of
// protocols that will be allowed, following the IANA IP protocol number
// https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
//
// If the ACL proto field is empty, it allows ICMPv4, ICMPv6, TCP, and UDP,
// as per Tailscale behaviour (see tailcfg.FilterRule).
//
// Also returns a boolean indicating if the protocol
// requires all the destinations to use wildcard as port number (only TCP,
// UDP and SCTP support specifying ports).
func parseProtocol(protocol string) ([]int, bool, error) {
switch protocol {
case "":
return nil, false, nil
case "igmp":
return []int{protocolIGMP}, true, nil
case "ipv4", "ip-in-ip":
return []int{protocolIPv4}, true, nil
case "tcp":
return []int{protocolTCP}, false, nil
case "egp":
return []int{protocolEGP}, true, nil
case "igp":
return []int{protocolIGP}, true, nil
case "udp":
return []int{protocolUDP}, false, nil
case "gre":
return []int{protocolGRE}, true, nil
case "esp":
return []int{protocolESP}, true, nil
case "ah":
return []int{protocolAH}, true, nil
case "sctp":
return []int{protocolSCTP}, false, nil
case "icmp":
return []int{protocolICMP, protocolIPv6ICMP}, true, nil
default:
protocolNumber, err := strconv.Atoi(protocol)
if err != nil {
return nil, false, err
}
needsWildcard := protocolNumber != protocolTCP &&
protocolNumber != protocolUDP &&
protocolNumber != protocolSCTP
return []int{protocolNumber}, needsWildcard, nil
}
}
// expandAlias has an input of either
// - a user
// - a group
// - a tag
// - a host
// and transforms it into a list of IP addresses.
func expandAlias(
machines []Machine,
aclPolicy ACLPolicy,
alias string,
stripEmailDomain bool,
) ([]string, error) {
ips := []string{}
if alias == "*" {
return []string{"*"}, nil
}
log.Debug().
Str("alias", alias).
Msg("Expanding")
if strings.HasPrefix(alias, "group:") {
users, err := expandGroup(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
for _, n := range users {
nodes := filterMachinesByUser(machines, n)
for _, node := range nodes {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
}
return ips, nil
}
if strings.HasPrefix(alias, "tag:") {
// check for forced tags
for _, machine := range machines {
if contains(machine.ForcedTags, alias) {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
}
// find tag owners
owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
if err != nil {
if errors.Is(err, errInvalidTag) {
if len(ips) == 0 {
return ips, fmt.Errorf(
"%w. %v isn't owned by a TagOwner and no forced tags are defined",
errInvalidTag,
alias,
)
}
return ips, nil
} else {
return ips, err
}
}
// filter out machines per tag owner
for _, user := range owners {
machines := filterMachinesByUser(machines, user)
for _, machine := range machines {
hi := machine.GetHostInfo()
if contains(hi.RequestTags, alias) {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
}
}
return ips, nil
}
// if alias is a user
nodes := filterMachinesByUser(machines, alias)
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)
for _, n := range nodes {
ips = append(ips, n.IPAddresses.ToStringSlice()...)
}
if len(ips) > 0 {
return ips, nil
}
// if alias is a host
if h, ok := aclPolicy.Hosts[alias]; ok {
return []string{h.String()}, nil
}
// if alias is an IP
ip, err := netip.ParseAddr(alias)
if err == nil {
return []string{ip.String()}, nil
}
// if alias is a CIDR
cidr, err := netip.ParsePrefix(alias)
if err == nil {
return []string{cidr.String()}, nil
}
log.Warn().Msgf("No IPs found with the alias %v", alias)
return ips, nil
}
// excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
// that are correctly tagged, since they should not be listed as belonging to the user.
// We assume in this function that we only have nodes from one user.
func excludeCorrectlyTaggedNodes(
aclPolicy ACLPolicy,
nodes []Machine,
user string,
stripEmailDomain bool,
) []Machine {
out := []Machine{}
tags := []string{}
for tag := range aclPolicy.TagOwners {
owners, _ := expandTagOwners(aclPolicy, user, stripEmailDomain)
ns := append(owners, user)
if contains(ns, user) {
tags = append(tags, tag)
}
}
// for each machine, if one of its request tags is in the tags list, don't append it.
for _, machine := range nodes {
hi := machine.GetHostInfo()
found := false
for _, t := range hi.RequestTags {
if contains(tags, t) {
found = true
break
}
}
if len(machine.ForcedTags) > 0 {
found = true
}
if !found {
out = append(out, machine)
}
}
return out
}
func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
if portsStr == "*" {
return &[]tailcfg.PortRange{
{First: portRangeBegin, Last: portRangeEnd},
}, nil
}
if needsWildcard {
return nil, errWildcardIsNeeded
}
ports := []tailcfg.PortRange{}
for _, portStr := range strings.Split(portsStr, ",") {
rang := strings.Split(portStr, "-")
switch len(rang) {
case 1:
port, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(port),
Last: uint16(port),
})
case expectedTokenItems:
start, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
last, err := strconv.ParseUint(rang[1], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(start),
Last: uint16(last),
})
default:
return nil, errInvalidPortFormat
}
}
return &ports, nil
}
func filterMachinesByUser(machines []Machine, user string) []Machine {
out := []Machine{}
for _, machine := range machines {
if machine.User.Name == user {
out = append(out, machine)
}
}
return out
}
// expandTagOwners will return a list of users. An owner can be either a user or a group;
// a group cannot be composed of groups.
func expandTagOwners(
aclPolicy ACLPolicy,
tag string,
stripEmailDomain bool,
) ([]string, error) {
var owners []string
ows, ok := aclPolicy.TagOwners[tag]
if !ok {
return []string{}, fmt.Errorf(
"%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
errInvalidTag,
tag,
)
}
for _, owner := range ows {
if strings.HasPrefix(owner, "group:") {
gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
if err != nil {
return []string{}, err
}
owners = append(owners, gs...)
} else {
owners = append(owners, owner)
}
}
return owners, nil
}
// expandGroup will return the list of users inside the group
// after some validation.
func expandGroup(
aclPolicy ACLPolicy,
group string,
stripEmailDomain bool,
) ([]string, error) {
outGroups := []string{}
aclGroups, ok := aclPolicy.Groups[group]
if !ok {
return []string{}, fmt.Errorf(
"group %v isn't registered. %w",
group,
errInvalidGroup,
)
}
for _, group := range aclGroups {
if strings.HasPrefix(group, "group:") {
return []string{}, fmt.Errorf(
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
errInvalidGroup,
)
}
grp, err := NormalizeToFQDNRules(group, stripEmailDomain)
if err != nil {
return []string{}, fmt.Errorf(
"failed to normalize group %q, err: %w",
group,
errInvalidGroup,
)
}
outGroups = append(outGroups, grp)
}
return outGroups, nil
}
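
The helpers above are easiest to see end to end in a small example. The following is a hypothetical, in-package sketch (it is not part of this compare) that exercises parseProtocol, expandPorts and expandACLPeerAddr as they are defined in acls.go; the test file and its assertions are illustrative only.

package headscale

// acls_example_test.go — illustrative sketch, not part of the diff above.
import (
	"testing"

	"tailscale.com/tailcfg"
)

func TestACLHelpersSketch(t *testing.T) {
	// "tcp" maps to IANA protocol 6 and allows explicit ports (needsWildcard == false).
	protos, needsWildcard, err := parseProtocol("tcp")
	if err != nil || needsWildcard || len(protos) != 1 || protos[0] != protocolTCP {
		t.Fatalf("unexpected result for tcp: %v %v %v", protos, needsWildcard, err)
	}

	// "icmp" expands to both ICMPv4 and ICMPv6 and requires the destination port to be "*".
	if _, needsWildcard, _ = parseProtocol("icmp"); !needsWildcard {
		t.Fatal("icmp should require a wildcard port")
	}

	// expandPorts rejects explicit ports when a wildcard is required...
	if _, err := expandPorts("443", true); err == nil {
		t.Fatal("expected errWildcardIsNeeded")
	}
	// ...and parses single ports and ranges otherwise.
	ports, err := expandPorts("80,443,8000-8080", false)
	if err != nil || len(*ports) != 3 {
		t.Fatalf("unexpected ports: %v %v", ports, err)
	}
	if (*ports)[2] != (tailcfg.PortRange{First: 8000, Last: 8080}) {
		t.Fatalf("unexpected range: %v", (*ports)[2])
	}

	// expandACLPeerAddr enumerates every address of a small prefix for the peer cache.
	if got := expandACLPeerAddr("192.168.1.0/30"); len(got) != 4 {
		t.Fatalf("expected 4 addresses, got %v", got)
	}
}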

1693
acls_test.go Normal file

File diff suppressed because it is too large.

View File

@@ -1,4 +1,4 @@
-package policy
+package headscale
import (
"encoding/json"
@@ -111,8 +111,8 @@ func (hosts *Hosts) UnmarshalYAML(data []byte) error {
}
// IsZero is perhaps a bit naive here.
-func (pol ACLPolicy) IsZero() bool {
-if len(pol.Groups) == 0 && len(pol.Hosts) == 0 && len(pol.ACLs) == 0 {
+func (policy ACLPolicy) IsZero() bool {
+if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
return true
}

View File

@@ -1,101 +1,29 @@
-package hscontrol
+package headscale
import (
"bytes"
"encoding/json"
-"errors"
-"fmt"
"html/template"
"net/http"
-"strconv"
"time"
"github.com/gorilla/mux"
"github.com/rs/zerolog/log"
-"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
const (
-// The CapabilityVersion is used by Tailscale clients to indicate
-// their codebase version. Tailscale clients can communicate over TS2021
-// from CapabilityVersion 28, but we only have good support for it
-// since https://github.com/tailscale/tailscale/pull/4323 (Noise in any HTTPS port).
-//
-// Related to this change, there is https://github.com/tailscale/tailscale/pull/5379,
-// where CapabilityVersion 39 is introduced to indicate #4323 was merged.
-//
-// See also https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go
-NoiseCapabilityVersion = 39
// TODO(juan): remove this once https://github.com/juanfont/headscale/issues/727 is fixed.
registrationHoldoff = time.Second * 5
reservedResponseHeaderSize = 4
+RegisterMethodAuthKey = "authkey"
+RegisterMethodOIDC = "oidc"
+RegisterMethodCLI = "cli"
+ErrRegisterMethodCLIDoesNotSupportExpire = Error(
+"machines registered with CLI does not support expire",
+)
)
-var ErrRegisterMethodCLIDoesNotSupportExpire = errors.New(
-"machines registered with CLI does not support expire",
-)
-var ErrNoCapabilityVersion = errors.New("no capability version set")
-func parseCabailityVersion(req *http.Request) (tailcfg.CapabilityVersion, error) {
-clientCapabilityStr := req.URL.Query().Get("v")
-if clientCapabilityStr == "" {
-return 0, ErrNoCapabilityVersion
-}
-clientCapabilityVersion, err := strconv.Atoi(clientCapabilityStr)
-if err != nil {
-return 0, fmt.Errorf("failed to parse capability version: %w", err)
-}
-return tailcfg.CapabilityVersion(clientCapabilityVersion), nil
-}
-// KeyHandler provides the Headscale pub key
-// Listens in /key.
-func (h *Headscale) KeyHandler(
-writer http.ResponseWriter,
-req *http.Request,
-) {
-// New Tailscale clients send a 'v' parameter to indicate the CurrentCapabilityVersion
-capVer, err := parseCabailityVersion(req)
-if err != nil {
-log.Error().
-Caller().
-Err(err).
-Msg("could not get capability version")
-writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
-writer.WriteHeader(http.StatusInternalServerError)
-return
-}
-log.Debug().
-Str("handler", "/key").
-Int("cap_ver", int(capVer)).
-Msg("New noise client")
-// TS2021 (Tailscale v2 protocol) requires to have a different key
-if capVer >= NoiseCapabilityVersion {
-resp := tailcfg.OverTLSPublicKeyResponse{
-PublicKey: h.noisePrivateKey.Public(),
-}
-writer.Header().Set("Content-Type", "application/json")
-writer.WriteHeader(http.StatusOK)
-err = json.NewEncoder(writer).Encode(resp)
-if err != nil {
-log.Error().
-Caller().
-Err(err).
-Msg("Failed to write response")
-}
-return
-}
-}
func (h *Headscale) HealthHandler(
writer http.ResponseWriter,
req *http.Request,
@@ -125,7 +53,7 @@ func (h *Headscale) HealthHandler(
}
}
-if err := h.db.PingDB(req.Context()); err != nil {
+if err := h.pingDB(req.Context()); err != nil {
respond(err)
return
@@ -165,16 +93,33 @@ func (h *Headscale) RegisterWebAPI(
req *http.Request,
) {
vars := mux.Vars(req)
-machineKeyStr := vars["mkey"]
+nodeKeyStr, ok := vars["nkey"]
+if !NodePublicKeyRegex.Match([]byte(nodeKeyStr)) {
+log.Warn().Str("node_key", nodeKeyStr).Msg("Invalid node key passed to registration url")
+writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
+writer.WriteHeader(http.StatusUnauthorized)
+_, err := writer.Write([]byte("Unauthorized"))
+if err != nil {
+log.Error().
+Caller().
+Err(err).
+Msg("Failed to write response")
+}
+return
+}
// We need to make sure we dont open for XSS style injections, if the parameter that
// is passed as a key is not parsable/validated as a NodePublic key, then fail to render
// the template and log an error.
-var machineKey key.MachinePublic
-err := machineKey.UnmarshalText(
-[]byte(machineKeyStr),
+var nodeKey key.NodePublic
+err := nodeKey.UnmarshalText(
+[]byte(NodePublicKeyEnsurePrefix(nodeKeyStr)),
)
-if err != nil {
+if !ok || nodeKeyStr == "" || err != nil {
log.Warn().Err(err).Msg("Failed to parse incoming nodekey")
writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
@@ -192,7 +137,7 @@ func (h *Headscale) RegisterWebAPI(
var content bytes.Buffer
if err := registerWebAPITemplate.Execute(&content, registerWebAPITemplateConfig{
-Key: machineKey.String(),
+Key: nodeKeyStr,
}); err != nil {
log.Error().
Str("func", "RegisterWebAPI").

113
api_common.go Normal file
View File

@@ -0,0 +1,113 @@
package headscale
import (
"time"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
)
func (h *Headscale) generateMapResponse(
mapRequest tailcfg.MapRequest,
machine *Machine,
) (*tailcfg.MapResponse, error) {
log.Trace().
Str("func", "generateMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
Msg("Creating Map response")
node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
if err != nil {
log.Error().
Caller().
Str("func", "generateMapResponse").
Err(err).
Msg("Cannot convert to node")
return nil, err
}
peers, err := h.getValidPeers(machine)
if err != nil {
log.Error().
Caller().
Str("func", "generateMapResponse").
Err(err).
Msg("Cannot fetch peers")
return nil, err
}
profiles := h.getMapResponseUserProfiles(*machine, peers)
nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
if err != nil {
log.Error().
Caller().
Str("func", "generateMapResponse").
Err(err).
Msg("Failed to convert peers to Tailscale nodes")
return nil, err
}
dnsConfig := getMapResponseDNSConfig(
h.cfg.DNSConfig,
h.cfg.BaseDomain,
*machine,
peers,
)
now := time.Now()
resp := tailcfg.MapResponse{
KeepAlive: false,
Node: node,
// TODO: Only send if updated
DERPMap: h.DERPMap,
// TODO: Only send if updated
Peers: nodePeers,
// TODO(kradalby): Implement:
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
// PeersChanged
// PeersRemoved
// PeersChangedPatch
// PeerSeenChange
// OnlineChange
// TODO: Only send if updated
DNSConfig: dnsConfig,
// TODO: Only send if updated
Domain: h.cfg.BaseDomain,
// Do not instruct clients to collect services, we do not
// support or do anything with them
CollectServices: "false",
// TODO: Only send if updated
PacketFilter: h.aclRules,
UserProfiles: profiles,
// TODO: Only send if updated
SSHPolicy: h.sshPolicy,
ControlTime: &now,
Debug: &tailcfg.Debug{
DisableLogTail: !h.cfg.LogTail.Enabled,
RandomizeClientPort: h.cfg.RandomizeClientPort,
},
}
log.Trace().
Str("func", "generateMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
// Interface("payload", resp).
Msgf("Generated map response: %s", tailMapResponseToString(resp))
return &resp, nil
}

157
api_key.go Normal file
View File

@@ -0,0 +1,157 @@
package headscale
import (
"fmt"
"strings"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"golang.org/x/crypto/bcrypt"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
apiPrefixLength = 7
apiKeyLength = 32
ErrAPIKeyFailedToParse = Error("Failed to parse ApiKey")
)
// APIKey describes the datamodel for API keys used to remotely authenticate with
// headscale.
type APIKey struct {
ID uint64 `gorm:"primary_key"`
Prefix string `gorm:"uniqueIndex"`
Hash []byte
CreatedAt *time.Time
Expiration *time.Time
LastSeen *time.Time
}
// CreateAPIKey creates a new ApiKey and returns it.
func (h *Headscale) CreateAPIKey(
expiration *time.Time,
) (string, *APIKey, error) {
prefix, err := GenerateRandomStringURLSafe(apiPrefixLength)
if err != nil {
return "", nil, err
}
toBeHashed, err := GenerateRandomStringURLSafe(apiKeyLength)
if err != nil {
return "", nil, err
}
// Key to return to the user; this will only be visible _once_.
keyStr := prefix + "." + toBeHashed
hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost)
if err != nil {
return "", nil, err
}
key := APIKey{
Prefix: prefix,
Hash: hash,
Expiration: expiration,
}
if err := h.db.Save(&key).Error; err != nil {
return "", nil, fmt.Errorf("failed to save API key to database: %w", err)
}
return keyStr, &key, nil
}
// ListAPIKeys returns the list of ApiKeys.
func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
keys := []APIKey{}
if err := h.db.Find(&keys).Error; err != nil {
return nil, err
}
return keys, nil
}
// GetAPIKey returns an ApiKey for a given prefix.
func (h *Headscale) GetAPIKey(prefix string) (*APIKey, error) {
key := APIKey{}
if result := h.db.First(&key, "prefix = ?", prefix); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// GetAPIKeyByID returns an ApiKey for a given ID.
func (h *Headscale) GetAPIKeyByID(id uint64) (*APIKey, error) {
key := APIKey{}
if result := h.db.Find(&APIKey{ID: id}).First(&key); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// DestroyAPIKey destroys an ApiKey. Returns an error if the ApiKey
// does not exist.
func (h *Headscale) DestroyAPIKey(key APIKey) error {
if result := h.db.Unscoped().Delete(key); result.Error != nil {
return result.Error
}
return nil
}
// ExpireAPIKey marks an ApiKey as expired.
func (h *Headscale) ExpireAPIKey(key *APIKey) error {
if err := h.db.Model(&key).Update("Expiration", time.Now()).Error; err != nil {
return err
}
return nil
}
func (h *Headscale) ValidateAPIKey(keyStr string) (bool, error) {
prefix, hash, found := strings.Cut(keyStr, ".")
if !found {
return false, ErrAPIKeyFailedToParse
}
key, err := h.GetAPIKey(prefix)
if err != nil {
return false, fmt.Errorf("failed to validate api key: %w", err)
}
if key.Expiration.Before(time.Now()) {
return false, nil
}
if err := bcrypt.CompareHashAndPassword(key.Hash, []byte(hash)); err != nil {
return false, err
}
return true, nil
}
func (key *APIKey) toProto() *v1.ApiKey {
protoKey := v1.ApiKey{
Id: key.ID,
Prefix: key.Prefix,
}
if key.Expiration != nil {
protoKey.Expiration = timestamppb.New(*key.Expiration)
}
if key.CreatedAt != nil {
protoKey.CreatedAt = timestamppb.New(*key.CreatedAt)
}
if key.LastSeen != nil {
protoKey.LastSeen = timestamppb.New(*key.LastSeen)
}
return &protoKey
}
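
Taken together, the functions in api_key.go implement a create/validate/expire lifecycle. Below is a hypothetical sketch (not part of this compare) of that flow; it assumes the caller already has an initialised *Headscale backed by a working database, and the function name is illustrative only.

package headscale

// apikey_example.go — illustrative sketch of the API key lifecycle shown above.
import (
	"fmt"
	"time"
)

func exerciseAPIKeyLifecycle(h *Headscale) error {
	expiration := time.Now().Add(24 * time.Hour)

	// CreateAPIKey returns the full "prefix.secret" string exactly once;
	// only the prefix and a bcrypt hash of the secret are stored.
	keyStr, apiKey, err := h.CreateAPIKey(&expiration)
	if err != nil {
		return err
	}

	// ValidateAPIKey splits the string on ".", looks up the prefix and compares the bcrypt hash.
	if ok, err := h.ValidateAPIKey(keyStr); err != nil || !ok {
		return fmt.Errorf("freshly created key did not validate: %v", err)
	}

	// ExpireAPIKey sets Expiration to now, after which validation returns false.
	if err := h.ExpireAPIKey(apiKey); err != nil {
		return err
	}
	if ok, _ := h.ValidateAPIKey(keyStr); ok {
		return fmt.Errorf("expired key %s should no longer validate", apiKey.Prefix)
	}

	return nil
}

Storing only the prefix and a bcrypt hash means a leaked database does not reveal usable API keys; the full secret exists only in the string returned at creation time.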

View File

@@ -1,4 +1,4 @@
-package db
+package headscale
import (
"time"
@@ -7,7 +7,7 @@ import (
)
func (*Suite) TestCreateAPIKey(c *check.C) {
-apiKeyStr, apiKey, err := db.CreateAPIKey(nil)
+apiKeyStr, apiKey, err := app.CreateAPIKey(nil)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
@@ -16,74 +16,74 @@ func (*Suite) TestCreateAPIKey(c *check.C) {
c.Assert(apiKey.Hash, check.NotNil)
c.Assert(apiKeyStr, check.Not(check.Equals), "")
-_, err = db.ListAPIKeys()
+_, err = app.ListAPIKeys()
c.Assert(err, check.IsNil)
-keys, err := db.ListAPIKeys()
+keys, err := app.ListAPIKeys()
c.Assert(err, check.IsNil)
c.Assert(len(keys), check.Equals, 1)
}
func (*Suite) TestAPIKeyDoesNotExist(c *check.C) {
-key, err := db.GetAPIKey("does-not-exist")
+key, err := app.GetAPIKey("does-not-exist")
c.Assert(err, check.NotNil)
c.Assert(key, check.IsNil)
}
func (*Suite) TestValidateAPIKeyOk(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
-apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
+apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-valid, err := db.ValidateAPIKey(apiKeyStr)
+valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
}
func (*Suite) TestValidateAPIKeyNotOk(c *check.C) {
nowMinus2 := time.Now().Add(time.Duration(-2) * time.Hour)
-apiKeyStr, apiKey, err := db.CreateAPIKey(&nowMinus2)
+apiKeyStr, apiKey, err := app.CreateAPIKey(&nowMinus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-valid, err := db.ValidateAPIKey(apiKeyStr)
+valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, false)
now := time.Now()
-apiKeyStrNow, apiKey, err := db.CreateAPIKey(&now)
+apiKeyStrNow, apiKey, err := app.CreateAPIKey(&now)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-validNow, err := db.ValidateAPIKey(apiKeyStrNow)
+validNow, err := app.ValidateAPIKey(apiKeyStrNow)
c.Assert(err, check.IsNil)
c.Assert(validNow, check.Equals, false)
-validSilly, err := db.ValidateAPIKey("nota.validkey")
+validSilly, err := app.ValidateAPIKey("nota.validkey")
c.Assert(err, check.NotNil)
c.Assert(validSilly, check.Equals, false)
-validWithErr, err := db.ValidateAPIKey("produceerrorkey")
+validWithErr, err := app.ValidateAPIKey("produceerrorkey")
c.Assert(err, check.NotNil)
c.Assert(validWithErr, check.Equals, false)
}
func (*Suite) TestExpireAPIKey(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
-apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
+apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-valid, err := db.ValidateAPIKey(apiKeyStr)
+valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
-err = db.ExpireAPIKey(apiKey)
+err = app.ExpireAPIKey(apiKey)
c.Assert(err, check.IsNil)
c.Assert(apiKey.Expiration, check.NotNil)
-notValid, err := db.ValidateAPIKey(apiKeyStr)
+notValid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(notValid, check.Equals, false)
}

View File

@@ -1,4 +1,4 @@
package hscontrol package headscale
import ( import (
"context" "context"
@@ -8,11 +8,10 @@ import (
"io" "io"
"net" "net"
"net/http" "net/http"
_ "net/http/pprof" //nolint
"os" "os"
"os/signal" "os/signal"
"path/filepath" "sort"
"runtime" "strconv"
"strings" "strings"
"sync" "sync"
"syscall" "syscall"
@@ -21,21 +20,12 @@ import (
"github.com/coreos/go-oidc/v3/oidc" "github.com/coreos/go-oidc/v3/oidc"
"github.com/gorilla/mux" "github.com/gorilla/mux"
grpcMiddleware "github.com/grpc-ecosystem/go-grpc-middleware" grpcMiddleware "github.com/grpc-ecosystem/go-grpc-middleware"
grpcRuntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime" "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/derp"
derpServer "github.com/juanfont/headscale/hscontrol/derp/server"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/hscontrol/notifier"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/patrickmn/go-cache" "github.com/patrickmn/go-cache"
zerolog "github.com/philip-bui/grpc-zerolog" zerolog "github.com/philip-bui/grpc-zerolog"
"github.com/pkg/profile"
"github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/puzpuzpuz/xsync/v2"
zl "github.com/rs/zerolog" zl "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"golang.org/x/crypto/acme" "golang.org/x/crypto/acme"
@@ -51,82 +41,115 @@ import (
"google.golang.org/grpc/reflection" "google.golang.org/grpc/reflection"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
"gorm.io/gorm" "gorm.io/gorm"
"tailscale.com/envknob"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/dnstype" "tailscale.com/types/dnstype"
"tailscale.com/types/key" "tailscale.com/types/key"
"tailscale.com/util/dnsname"
) )
var ( const (
errSTUNAddressNotSet = errors.New("STUN address not set") errSTUNAddressNotSet = Error("STUN address not set")
errUnsupportedLetsEncryptChallengeType = errors.New( errUnsupportedDatabase = Error("unsupported DB")
errUnsupportedLetsEncryptChallengeType = Error(
"unknown value for Lets Encrypt challenge type", "unknown value for Lets Encrypt challenge type",
) )
errEmptyInitialDERPMap = errors.New(
"initial DERPMap is empty, Headscale requires at least one entry",
)
) )
const ( const (
AuthPrefix = "Bearer " AuthPrefix = "Bearer "
updateInterval = 5000 Postgres = "postgres"
privateKeyFileMode = 0o600 Sqlite = "sqlite3"
headscaleDirPerm = 0o700 updateInterval = 5000
HTTPReadTimeout = 30 * time.Second
HTTPShutdownTimeout = 3 * time.Second
privateKeyFileMode = 0o600
registerCacheExpiration = time.Minute * 15 registerCacheExpiration = time.Minute * 15
registerCacheCleanup = time.Minute * 20 registerCacheCleanup = time.Minute * 20
)
// func init() { DisabledClientAuth = "disabled"
// deadlock.Opts.DeadlockTimeout = 15 * time.Second RelaxedClientAuth = "relaxed"
// deadlock.Opts.PrintAllCurrentGoroutines = true EnforcedClientAuth = "enforced"
// } )
// Headscale represents the base app of the service. // Headscale represents the base app of the service.
type Headscale struct { type Headscale struct {
cfg *types.Config cfg *Config
db *db.HSDatabase db *gorm.DB
ipAlloc *db.IPAllocator dbString string
dbType string
dbDebug bool
privateKey *key.MachinePrivate
noisePrivateKey *key.MachinePrivate noisePrivateKey *key.MachinePrivate
DERPMap *tailcfg.DERPMap DERPMap *tailcfg.DERPMap
DERPServer *derpServer.DERPServer DERPServer *DERPServer
ACLPolicy *policy.ACLPolicy aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
aclPeerCacheMapRW sync.RWMutex
aclPeerCacheMap map[string]map[string]struct{}
sshPolicy *tailcfg.SSHPolicy
mapper *mapper.Mapper lastStateChange *xsync.MapOf[string, time.Time]
nodeNotifier *notifier.Notifier
oidcProvider *oidc.Provider oidcProvider *oidc.Provider
oauth2Config *oauth2.Config oauth2Config *oauth2.Config
registrationCache *cache.Cache registrationCache *cache.Cache
pollNetMapStreamWG sync.WaitGroup ipAllocationMutex sync.Mutex
mapSessions map[types.NodeID]*mapSession shutdownChan chan struct{}
mapSessionMu sync.Mutex pollNetMapStreamWG sync.WaitGroup
} }
var ( func NewHeadscale(cfg *Config) (*Headscale, error) {
profilingEnabled = envknob.Bool("HEADSCALE_PROFILING_ENABLED") privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
tailsqlEnabled = envknob.Bool("HEADSCALE_DEBUG_TAILSQL_ENABLED") if err != nil {
tailsqlStateDir = envknob.String("HEADSCALE_DEBUG_TAILSQL_STATE_DIR") return nil, fmt.Errorf("failed to read or create private key: %w", err)
tailsqlTSKey = envknob.String("TS_AUTHKEY")
)
func NewHeadscale(cfg *types.Config) (*Headscale, error) {
var err error
if profilingEnabled {
runtime.SetBlockProfileRate(1)
} }
// TS2021 requires to have a different key from the legacy protocol.
noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath) noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to read or create Noise protocol private key: %w", err) return nil, fmt.Errorf("failed to read or create Noise protocol private key: %w", err)
} }
if privateKey.Equal(*noisePrivateKey) {
return nil, fmt.Errorf("private key and noise private key are the same: %w", err)
}
var dbString string
switch cfg.DBtype {
case Postgres:
dbString = fmt.Sprintf(
"host=%s dbname=%s user=%s",
cfg.DBhost,
cfg.DBname,
cfg.DBuser,
)
if sslEnabled, err := strconv.ParseBool(cfg.DBssl); err == nil {
if !sslEnabled {
dbString += " sslmode=disable"
}
} else {
dbString += fmt.Sprintf(" sslmode=%s", cfg.DBssl)
}
if cfg.DBport != 0 {
dbString += fmt.Sprintf(" port=%d", cfg.DBport)
}
if cfg.DBpass != "" {
dbString += fmt.Sprintf(" password=%s", cfg.DBpass)
}
case Sqlite:
dbString = cfg.DBpath
default:
return nil, errUnsupportedDatabase
}
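
For orientation, a minimal sketch of the connection strings this switch assembles, using placeholder host/database values rather than anything from a real Headscale config:

package main

import "fmt"

func main() {
	// Placeholder values; the real fields come from the Headscale config.
	host, name, user, pass := "localhost", "headscale", "hs", "secret"

	// Postgres: space-separated key=value pairs, built up field by field.
	dsn := fmt.Sprintf("host=%s dbname=%s user=%s", host, name, user)
	dsn += " sslmode=disable"                // when DBssl parses as a boolean false
	dsn += fmt.Sprintf(" port=%d", 5432)     // only appended when DBport != 0
	dsn += fmt.Sprintf(" password=%s", pass) // only appended when DBpass != ""
	fmt.Println(dsn)
	// host=localhost dbname=headscale user=hs sslmode=disable port=5432 password=secret

	// Sqlite: the connection string is simply the database file path.
	fmt.Println("/var/lib/headscale/db.sqlite")
}
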
registrationCache := cache.New( registrationCache := cache.New(
registerCacheExpiration, registerCacheExpiration,
registerCacheCleanup, registerCacheCleanup,
@@ -134,21 +157,17 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
app := Headscale{ app := Headscale{
cfg: cfg, cfg: cfg,
dbType: cfg.DBtype,
dbString: dbString,
privateKey: privateKey,
noisePrivateKey: noisePrivateKey, noisePrivateKey: noisePrivateKey,
aclRules: tailcfg.FilterAllowAll, // default allowall
registrationCache: registrationCache, registrationCache: registrationCache,
pollNetMapStreamWG: sync.WaitGroup{}, pollNetMapStreamWG: sync.WaitGroup{},
nodeNotifier: notifier.NewNotifier(), lastStateChange: xsync.NewMapOf[time.Time](),
mapSessions: make(map[types.NodeID]*mapSession),
} }
app.db, err = db.NewHeadscaleDatabase( err = app.initDB()
cfg.Database,
cfg.BaseDomain)
if err != nil {
return nil, err
}
app.ipAlloc, err = db.NewIPAllocator(app.db, cfg.PrefixV4, cfg.PrefixV6, cfg.IPAllocation)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -165,16 +184,7 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
} }
if app.cfg.DNSConfig != nil && app.cfg.DNSConfig.Proxied { // if MagicDNS if app.cfg.DNSConfig != nil && app.cfg.DNSConfig.Proxied { // if MagicDNS
// TODO(kradalby): revisit why this takes a list. magicDNSDomains := generateMagicDNSRootDomains(app.cfg.IPPrefixes)
var magicDNSDomains []dnsname.FQDN
if cfg.PrefixV4 != nil {
magicDNSDomains = append(magicDNSDomains, util.GenerateIPv4DNSRootDomain(*cfg.PrefixV4)...)
}
if cfg.PrefixV6 != nil {
magicDNSDomains = append(magicDNSDomains, util.GenerateIPv6DNSRootDomain(*cfg.PrefixV6)...)
}
// we might have routes already from Split DNS // we might have routes already from Split DNS
if app.cfg.DNSConfig.Routes == nil { if app.cfg.DNSConfig.Routes == nil {
app.cfg.DNSConfig.Routes = make(map[string][]*dnstype.Resolver) app.cfg.DNSConfig.Routes = make(map[string][]*dnstype.Resolver)
@@ -185,23 +195,7 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
} }
if cfg.DERP.ServerEnabled { if cfg.DERP.ServerEnabled {
derpServerKey, err := readOrCreatePrivateKey(cfg.DERP.ServerPrivateKeyPath) embeddedDERPServer, err := app.NewDERPServer()
if err != nil {
return nil, fmt.Errorf("failed to read or create DERP server private key: %w", err)
}
if derpServerKey.Equal(*noisePrivateKey) {
return nil, fmt.Errorf(
"DERP server private key and noise private key are the same: %w",
err,
)
}
embeddedDERPServer, err := derpServer.NewDERPServer(
cfg.ServerURL,
key.NodePrivate(*derpServerKey),
&cfg.DERP,
)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -217,96 +211,122 @@ func (h *Headscale) redirect(w http.ResponseWriter, req *http.Request) {
http.Redirect(w, req, target, http.StatusFound) http.Redirect(w, req, target, http.StatusFound)
} }
// deleteExpireEphemeralNodes deletes ephemeral node records that have not been // expireEphemeralNodes deletes ephemeral machine records that have not been
// seen for longer than h.cfg.EphemeralNodeInactivityTimeout. // seen for longer than h.cfg.EphemeralNodeInactivityTimeout.
func (h *Headscale) deleteExpireEphemeralNodes(milliSeconds int64) { func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond) ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C { for range ticker.C {
var removed []types.NodeID h.expireEphemeralNodesWorker()
var changed []types.NodeID
if err := h.db.Write(func(tx *gorm.DB) error {
removed, changed = db.DeleteExpiredEphemeralNodes(tx, h.cfg.EphemeralNodeInactivityTimeout)
return nil
}); err != nil {
log.Error().Err(err).Msg("database error while expiring ephemeral nodes")
continue
}
if removed != nil {
ctx := types.NotifyCtx(context.Background(), "expire-ephemeral", "na")
h.nodeNotifier.NotifyAll(ctx, types.StateUpdate{
Type: types.StatePeerRemoved,
Removed: removed,
})
}
if changed != nil {
ctx := types.NotifyCtx(context.Background(), "expire-ephemeral", "na")
h.nodeNotifier.NotifyAll(ctx, types.StateUpdate{
Type: types.StatePeerChanged,
ChangeNodes: changed,
})
}
} }
} }
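
These background loops all follow the same shape: a time.Ticker drives a worker at a fixed interval for the lifetime of the process. A minimal standalone sketch of the pattern, with a hypothetical doWork callback and a stop channel added only for illustration (the loops in this file run unbounded):

package main

import (
	"fmt"
	"time"
)

// runEvery calls doWork each time the ticker fires, until stop is closed.
func runEvery(intervalMs int64, stop <-chan struct{}, doWork func()) {
	ticker := time.NewTicker(time.Duration(intervalMs) * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			doWork()
		}
	}
}

func main() {
	stop := make(chan struct{})
	go runEvery(500, stop, func() { fmt.Println("tick: expire ephemeral nodes") })
	time.Sleep(2 * time.Second)
	close(stop)
}
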
// expireExpiredMachines expires nodes that have an explicit expiry set // expireExpiredMachines expires machines that have an explicit expiry set
// after that expiry time has passed. // after that expiry time has passed.
func (h *Headscale) expireExpiredMachines(intervalMs int64) { func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
interval := time.Duration(intervalMs) * time.Millisecond ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
ticker := time.NewTicker(interval)
lastCheck := time.Unix(0, 0)
var update types.StateUpdate
var changed bool
for range ticker.C { for range ticker.C {
if err := h.db.Write(func(tx *gorm.DB) error { h.expireExpiredMachinesWorker()
lastCheck, update, changed = db.ExpireExpiredNodes(tx, lastCheck) }
}
return nil func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
}); err != nil { ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
log.Error().Err(err).Msg("database error while expiring nodes") for range ticker.C {
continue err := h.handlePrimarySubnetFailover()
} if err != nil {
log.Error().Err(err).Msg("failed to handle primary subnet failover")
if changed {
log.Trace().Interface("nodes", update.ChangePatches).Msgf("expiring nodes")
ctx := types.NotifyCtx(context.Background(), "expire-expired", "na")
h.nodeNotifier.NotifyAll(ctx, update)
} }
} }
} }
// scheduledDERPMapUpdateWorker refreshes the DERPMap stored on the global object func (h *Headscale) expireEphemeralNodesWorker() {
// at a set interval. users, err := h.ListUsers()
func (h *Headscale) scheduledDERPMapUpdateWorker(cancelChan <-chan struct{}) { if err != nil {
log.Info(). log.Error().Err(err).Msg("Error listing users")
Dur("frequency", h.cfg.DERP.UpdateFrequency).
Msg("Setting up a DERPMap update worker") return
ticker := time.NewTicker(h.cfg.DERP.UpdateFrequency) }
for _, user := range users {
machines, err := h.ListMachinesByUser(user.Name)
if err != nil {
log.Error().
Err(err).
Str("user", user.Name).
Msg("Error listing machines in user")
for {
select {
case <-cancelChan:
return return
}
case <-ticker.C: expiredFound := false
log.Info().Msg("Fetching DERPMap updates") for _, machine := range machines {
h.DERPMap = derp.GetDERPMap(h.cfg.DERP) if machine.isEphemeral() && machine.LastSeen != nil &&
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion { time.Now().
region, _ := h.DERPServer.GenerateRegion() After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
h.DERPMap.Regions[region.RegionID] = &region expiredFound = true
log.Info().
Str("machine", machine.Hostname).
Msg("Ephemeral client removed from database")
err = h.db.Unscoped().Delete(machine).Error
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Msg("🤮 Cannot delete ephemeral machine from the database")
}
} }
}
ctx := types.NotifyCtx(context.Background(), "derpmap-update", "na") if expiredFound {
h.nodeNotifier.NotifyAll(ctx, types.StateUpdate{ h.setLastStateChangeToNow()
Type: types.StateDERPUpdated, }
DERPMap: h.DERPMap, }
}) }
func (h *Headscale) expireExpiredMachinesWorker() {
users, err := h.ListUsers()
if err != nil {
log.Error().Err(err).Msg("Error listing users")
return
}
for _, user := range users {
machines, err := h.ListMachinesByUser(user.Name)
if err != nil {
log.Error().
Err(err).
Str("user", user.Name).
Msg("Error listing machines in user")
return
}
expiredFound := false
for index, machine := range machines {
if machine.isExpired() &&
machine.Expiry.After(h.getLastStateChange(user)) {
expiredFound = true
err := h.ExpireMachine(&machines[index])
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Str("name", machine.GivenName).
Msg("🤮 Cannot expire machine")
} else {
log.Info().
Str("machine", machine.Hostname).
Str("name", machine.GivenName).
Msg("Machine successfully expired")
}
}
}
if expiredFound {
h.setLastStateChangeToNow()
} }
} }
} }
@@ -330,6 +350,11 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
meta, ok := metadata.FromIncomingContext(ctx) meta, ok := metadata.FromIncomingContext(ctx)
if !ok { if !ok {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg("Retrieving metadata is failed")
return ctx, status.Errorf( return ctx, status.Errorf(
codes.InvalidArgument, codes.InvalidArgument,
"Retrieving metadata is failed", "Retrieving metadata is failed",
@@ -338,6 +363,11 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
authHeader, ok := meta["authorization"] authHeader, ok := meta["authorization"]
if !ok { if !ok {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg("Authorization token is not supplied")
return ctx, status.Errorf( return ctx, status.Errorf(
codes.Unauthenticated, codes.Unauthenticated,
"Authorization token is not supplied", "Authorization token is not supplied",
@@ -347,14 +377,25 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
token := authHeader[0] token := authHeader[0]
if !strings.HasPrefix(token, AuthPrefix) { if !strings.HasPrefix(token, AuthPrefix) {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg(`missing "Bearer " prefix in "Authorization" header`)
return ctx, status.Error( return ctx, status.Error(
codes.Unauthenticated, codes.Unauthenticated,
`missing "Bearer " prefix in "Authorization" header`, `missing "Bearer " prefix in "Authorization" header`,
) )
} }
valid, err := h.db.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix)) valid, err := h.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix))
if err != nil { if err != nil {
log.Error().
Caller().
Err(err).
Str("client_address", client.Addr.String()).
Msg("failed to validate token")
return ctx, status.Error(codes.Internal, "failed to validate token") return ctx, status.Error(codes.Internal, "failed to validate token")
} }
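
The interceptor above looks for the API key in gRPC metadata under the "authorization" key with a "Bearer " prefix. A sketch of how a client would attach it (address and key are placeholders; a real deployment would use TLS credentials rather than insecure ones):

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/metadata"
)

func main() {
	conn, err := grpc.Dial("127.0.0.1:50443", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Attach the API key the same way the interceptor reads it:
	// metadata key "authorization", value "Bearer <key>".
	ctx := metadata.AppendToOutgoingContext(context.Background(), "authorization", "Bearer hskey-example")
	_ = ctx // pass this ctx to the generated client calls
}
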
@@ -398,7 +439,7 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
return return
} }
valid, err := h.db.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix)) valid, err := h.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix))
if err != nil { if err != nil {
log.Error(). log.Error().
Caller(). Caller().
@@ -450,17 +491,17 @@ func (h *Headscale) ensureUnixSocketIsAbsent() error {
return os.Remove(h.cfg.UnixSocket) return os.Remove(h.cfg.UnixSocket)
} }
func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router { func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
router := mux.NewRouter() router := mux.NewRouter()
router.Use(prometheusMiddleware)
router.HandleFunc(ts2021UpgradePath, h.NoiseUpgradeHandler).Methods(http.MethodPost) router.HandleFunc(ts2021UpgradePath, h.NoiseUpgradeHandler).Methods(http.MethodPost)
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet) router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet)
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet) router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet)
router.HandleFunc("/register/{mkey}", h.RegisterWebAPI).Methods(http.MethodGet) router.HandleFunc("/register/{nkey}", h.RegisterWebAPI).Methods(http.MethodGet)
h.addLegacyHandlers(router)
router.HandleFunc("/oidc/register/{mkey}", h.RegisterOIDC).Methods(http.MethodGet) router.HandleFunc("/oidc/register/{nkey}", h.RegisterOIDC).Methods(http.MethodGet)
router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet) router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet)
router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet) router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet)
router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig). router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig).
@@ -468,16 +509,14 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router {
router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet) router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet)
router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig). router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig).
Methods(http.MethodGet) Methods(http.MethodGet)
router.HandleFunc("/swagger", SwaggerUI).Methods(http.MethodGet)
// TODO(kristoffer): move swagger into a package router.HandleFunc("/swagger/v1/openapiv2.json", SwaggerAPIv1).
router.HandleFunc("/swagger", headscale.SwaggerUI).Methods(http.MethodGet)
router.HandleFunc("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1).
Methods(http.MethodGet) Methods(http.MethodGet)
if h.cfg.DERP.ServerEnabled { if h.cfg.DERP.ServerEnabled {
router.HandleFunc("/derp", h.DERPServer.DERPHandler) router.HandleFunc("/derp", h.DERPHandler)
router.HandleFunc("/derp/probe", derpServer.DERPProbeHandler) router.HandleFunc("/derp/probe", h.DERPProbeHandler)
router.HandleFunc("/bootstrap-dns", derpServer.DERPBootstrapDNSHandler(h.DERPMap)) router.HandleFunc("/bootstrap-dns", h.DERPBootstrapDNSHandler)
} }
apiRouter := router.PathPrefix("/api").Subrouter() apiRouter := router.PathPrefix("/api").Subrouter()
@@ -489,26 +528,12 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router {
return router return router
} }
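
As a quick sanity check of the router wiring, the /health endpoint registered above can be probed with a plain HTTP GET (the address is a placeholder; the response body depends on the handler):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:8080/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body))
}
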
// Serve launches the HTTP and gRPC servers that serve Headscale and the API. // Serve launches a GIN server with the Headscale API.
func (h *Headscale) Serve() error { func (h *Headscale) Serve() error {
if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
err := os.MkdirAll(profilePath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}
defer profile.Start(profile.ProfilePath(profilePath)).Stop()
} else {
defer profile.Start().Stop()
}
}
var err error var err error
// Fetch an initial DERP Map before we start serving // Fetch an initial DERP Map before we start serving
h.DERPMap = derp.GetDERPMap(h.cfg.DERP) h.DERPMap = GetDERPMap(h.cfg.DERP)
h.mapper = mapper.NewMapper(h.db, h.cfg, h.DERPMap, h.nodeNotifier)
if h.cfg.DERP.ServerEnabled { if h.cfg.DERP.ServerEnabled {
// When embedded DERP is enabled we always need a STUN server // When embedded DERP is enabled we always need a STUN server
@@ -516,16 +541,8 @@ func (h *Headscale) Serve() error {
return errSTUNAddressNotSet return errSTUNAddressNotSet
} }
region, err := h.DERPServer.GenerateRegion() h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
if err != nil { go h.ServeSTUN()
return fmt.Errorf("generating DERP region for embedded server: %w", err)
}
if h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
h.DERPMap.Regions[region.RegionID] = &region
}
go h.DERPServer.ServeSTUN()
} }
if h.cfg.DERP.AutoUpdate { if h.cfg.DERP.AutoUpdate {
@@ -534,15 +551,11 @@ func (h *Headscale) Serve() error {
go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel) go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel)
} }
if len(h.DERPMap.Regions) == 0 { go h.expireEphemeralNodes(updateInterval)
return errEmptyInitialDERPMap
}
// TODO(kradalby): These should have cancel channels and be cleaned
// up on shutdown.
go h.deleteExpireEphemeralNodes(updateInterval)
go h.expireExpiredMachines(updateInterval) go h.expireExpiredMachines(updateInterval)
go h.failoverSubnetRoutes(updateInterval)
if zl.GlobalLevel() == zl.TraceLevel { if zl.GlobalLevel() == zl.TraceLevel {
zerolog.RespLog = true zerolog.RespLog = true
} else { } else {
@@ -566,12 +579,6 @@ func (h *Headscale) Serve() error {
return fmt.Errorf("unable to remove old socket file: %w", err) return fmt.Errorf("unable to remove old socket file: %w", err)
} }
socketDir := filepath.Dir(h.cfg.UnixSocket)
err = util.EnsureDir(socketDir)
if err != nil {
return fmt.Errorf("setting up unix socket: %w", err)
}
socketListener, err := net.Listen("unix", h.cfg.UnixSocket) socketListener, err := net.Listen("unix", h.cfg.UnixSocket)
if err != nil { if err != nil {
return fmt.Errorf("failed to set up gRPC socket: %w", err) return fmt.Errorf("failed to set up gRPC socket: %w", err)
@@ -582,32 +589,29 @@ func (h *Headscale) Serve() error {
return fmt.Errorf("failed change permission of gRPC socket: %w", err) return fmt.Errorf("failed change permission of gRPC socket: %w", err)
} }
grpcGatewayMux := grpcRuntime.NewServeMux() grpcGatewayMux := runtime.NewServeMux()
// Make the grpc-gateway connect to grpc over socket // Make the grpc-gateway connect to grpc over socket
grpcGatewayConn, err := grpc.Dial( grpcGatewayConn, err := grpc.Dial(
h.cfg.UnixSocket, h.cfg.UnixSocket,
[]grpc.DialOption{ []grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer), grpc.WithContextDialer(GrpcSocketDialer),
}..., }...,
) )
if err != nil { if err != nil {
return fmt.Errorf("setting up gRPC gateway via socket: %w", err) return err
} }
// Connect to the gRPC server over localhost to skip // Connect to the gRPC server over localhost to skip
// the authentication. // the authentication.
err = v1.RegisterHeadscaleServiceHandler(ctx, grpcGatewayMux, grpcGatewayConn) err = v1.RegisterHeadscaleServiceHandler(ctx, grpcGatewayMux, grpcGatewayConn)
if err != nil { if err != nil {
return fmt.Errorf("registering Headscale API service to gRPC: %w", err) return err
} }
// Start the local gRPC server without TLS and without authentication // Start the local gRPC server without TLS and without authentication
grpcSocket := grpc.NewServer( grpcSocket := grpc.NewServer(zerolog.UnaryInterceptor())
// Uncomment to debug grpc communication.
// zerolog.UnaryInterceptor(),
)
v1.RegisterHeadscaleServiceServer(grpcSocket, newHeadscaleV1APIServer(h)) v1.RegisterHeadscaleServiceServer(grpcSocket, newHeadscaleV1APIServer(h))
reflection.Register(grpcSocket) reflection.Register(grpcSocket)
@@ -621,7 +625,9 @@ func (h *Headscale) Serve() error {
tlsConfig, err := h.getTLSSettings() tlsConfig, err := h.getTLSSettings()
if err != nil { if err != nil {
return fmt.Errorf("configuring TLS settings: %w", err) log.Error().Err(err).Msg("Failed to set up TLS configuration")
return err
} }
// //
@@ -645,8 +651,7 @@ func (h *Headscale) Serve() error {
grpc.UnaryInterceptor( grpc.UnaryInterceptor(
grpcMiddleware.ChainUnaryServer( grpcMiddleware.ChainUnaryServer(
h.grpcAuthenticationInterceptor, h.grpcAuthenticationInterceptor,
// Uncomment to debug grpc communication. zerolog.NewUnaryServerInterceptor(),
// zerolog.NewUnaryServerInterceptor(),
), ),
), ),
} }
@@ -680,17 +685,18 @@ func (h *Headscale) Serve() error {
// HTTP setup // HTTP setup
// //
// This is the regular router that we expose // This is the regular router that we expose
// over our main Addr // over our main Addr. It also serves the legacy Tailscale API
router := h.createRouter(grpcGatewayMux) router := h.createRouter(grpcGatewayMux)
httpServer := &http.Server{ httpServer := &http.Server{
Addr: h.cfg.Addr, Addr: h.cfg.Addr,
Handler: router, Handler: router,
ReadTimeout: types.HTTPTimeout, ReadTimeout: HTTPReadTimeout,
// Go does not handle timeouts in HTTP very well, and there is
// Long polling should not have any timeout, this is overridden // no good way to handle streaming timeouts, therefore we need to
// further down the chain // keep this at unlimited and be careful to clean up connections
WriteTimeout: types.HTTPTimeout, // https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/#aboutstreaming
WriteTimeout: 0,
} }
var httpListener net.Listener var httpListener net.Listener
@@ -709,59 +715,30 @@ func (h *Headscale) Serve() error {
log.Info(). log.Info().
Msgf("listening and serving HTTP on: %s", h.cfg.Addr) Msgf("listening and serving HTTP on: %s", h.cfg.Addr)
debugMux := http.NewServeMux() promMux := http.NewServeMux()
debugMux.Handle("/debug/pprof/", http.DefaultServeMux) promMux.Handle("/metrics", promhttp.Handler())
debugMux.HandleFunc("/debug/notifier", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte(h.nodeNotifier.String()))
})
debugMux.HandleFunc("/debug/mapresp", func(w http.ResponseWriter, r *http.Request) {
h.mapSessionMu.Lock()
defer h.mapSessionMu.Unlock()
var b strings.Builder promHTTPServer := &http.Server{
b.WriteString("mapresponders:\n")
for k, v := range h.mapSessions {
fmt.Fprintf(&b, "\t%d: %p\n", k, v)
}
w.WriteHeader(http.StatusOK)
w.Write([]byte(b.String()))
})
debugMux.Handle("/metrics", promhttp.Handler())
debugHTTPServer := &http.Server{
Addr: h.cfg.MetricsAddr, Addr: h.cfg.MetricsAddr,
Handler: debugMux, Handler: promMux,
ReadTimeout: types.HTTPTimeout, ReadTimeout: HTTPReadTimeout,
WriteTimeout: 0, WriteTimeout: 0,
} }
debugHTTPListener, err := net.Listen("tcp", h.cfg.MetricsAddr) var promHTTPListener net.Listener
promHTTPListener, err = net.Listen("tcp", h.cfg.MetricsAddr)
if err != nil { if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err) return fmt.Errorf("failed to bind to TCP address: %w", err)
} }
errorGroup.Go(func() error { return debugHTTPServer.Serve(debugHTTPListener) }) errorGroup.Go(func() error { return promHTTPServer.Serve(promHTTPListener) })
log.Info(). log.Info().
Msgf("listening and serving debug and metrics on: %s", h.cfg.MetricsAddr) Msgf("listening and serving metrics on: %s", h.cfg.MetricsAddr)
var tailsqlContext context.Context
if tailsqlEnabled {
if h.cfg.Database.Type != types.DatabaseSqlite {
log.Fatal().
Str("type", h.cfg.Database.Type).
Msgf("tailsql only support %q", types.DatabaseSqlite)
}
if tailsqlTSKey == "" {
log.Fatal().Msg("tailsql requires TS_AUTHKEY to be set")
}
tailsqlContext = context.Background()
go runTailSQLService(ctx, util.TSLogfWrapper(), tailsqlStateDir, h.cfg.Database.Sqlite.Path)
}
// Handle common process-killing signals so we can gracefully shut down: // Handle common process-killing signals so we can gracefully shut down:
h.shutdownChan = make(chan struct{})
sigc := make(chan os.Signal, 1) sigc := make(chan os.Signal, 1)
signal.Notify(sigc, signal.Notify(sigc,
syscall.SIGHUP, syscall.SIGHUP,
@@ -782,21 +759,16 @@ func (h *Headscale) Serve() error {
// TODO(kradalby): Reload config on SIGHUP // TODO(kradalby): Reload config on SIGHUP
if h.cfg.ACL.PolicyPath != "" { if h.cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(h.cfg.ACL.PolicyPath) aclPath := AbsolutePathFromConfigPath(h.cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath) err := h.LoadACLPolicy(aclPath)
if err != nil { if err != nil {
log.Error().Err(err).Msg("Failed to reload ACL policy") log.Error().Err(err).Msg("Failed to reload ACL policy")
} }
h.ACLPolicy = pol
log.Info(). log.Info().
Str("path", aclPath). Str("path", aclPath).
Msg("ACL policy successfully reloaded, notifying nodes of change") Msg("ACL policy successfully reloaded, notifying nodes of change")
ctx := types.NotifyCtx(context.Background(), "acl-sighup", "na") h.setLastStateChangeToNow()
h.nodeNotifier.NotifyAll(ctx, types.StateUpdate{
Type: types.StateFullUpdate,
})
} }
default: default:
@@ -804,14 +776,15 @@ func (h *Headscale) Serve() error {
Str("signal", sig.String()). Str("signal", sig.String()).
Msg("Received signal to stop, shutting down gracefully") Msg("Received signal to stop, shutting down gracefully")
close(h.shutdownChan)
h.pollNetMapStreamWG.Wait() h.pollNetMapStreamWG.Wait()
// Gracefully shut down servers // Gracefully shut down servers
ctx, cancel := context.WithTimeout( ctx, cancel := context.WithTimeout(
context.Background(), context.Background(),
types.HTTPShutdownTimeout, HTTPShutdownTimeout,
) )
if err := debugHTTPServer.Shutdown(ctx); err != nil { if err := promHTTPServer.Shutdown(ctx); err != nil {
log.Error().Err(err).Msg("Failed to shutdown prometheus http") log.Error().Err(err).Msg("Failed to shutdown prometheus http")
} }
if err := httpServer.Shutdown(ctx); err != nil { if err := httpServer.Shutdown(ctx); err != nil {
@@ -824,12 +797,8 @@ func (h *Headscale) Serve() error {
grpcListener.Close() grpcListener.Close()
} }
if tailsqlContext != nil {
tailsqlContext.Done()
}
// Close network listeners // Close network listeners
debugHTTPListener.Close() promHTTPListener.Close()
httpListener.Close() httpListener.Close()
grpcGatewayConn.Close() grpcGatewayConn.Close()
@@ -837,7 +806,11 @@ func (h *Headscale) Serve() error {
socketListener.Close() socketListener.Close()
// Close db connections // Close db connections
err = h.db.Close() db, err := h.db.DB()
if err != nil {
log.Error().Err(err).Msg("Failed to get db handle")
}
err = db.Close()
if err != nil { if err != nil {
log.Error().Err(err).Msg("Failed to close db") log.Error().Err(err).Msg("Failed to close db")
} }
@@ -847,8 +820,7 @@ func (h *Headscale) Serve() error {
// And we're done: // And we're done:
cancel() cancel()
os.Exit(0)
return
} }
} }
} }
@@ -880,13 +852,13 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
} }
switch h.cfg.TLS.LetsEncrypt.ChallengeType { switch h.cfg.TLS.LetsEncrypt.ChallengeType {
case types.TLSALPN01ChallengeType: case tlsALPN01ChallengeType:
// Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737) // Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737)
// The RFC requires that the validation is done on port 443; in other words, headscale // The RFC requires that the validation is done on port 443; in other words, headscale
// must be reachable on port 443. // must be reachable on port 443.
return certManager.TLSConfig(), nil return certManager.TLSConfig(), nil
case types.HTTP01ChallengeType: case http01ChallengeType:
// Configuration via autocert with HTTP-01. This requires listening on // Configuration via autocert with HTTP-01. This requires listening on
// port 80 for the certificate validation in addition to the headscale // port 80 for the certificate validation in addition to the headscale
// service, which can be configured to run on any other port. // service, which can be configured to run on any other port.
@@ -894,7 +866,7 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
server := &http.Server{ server := &http.Server{
Addr: h.cfg.TLS.LetsEncrypt.Listen, Addr: h.cfg.TLS.LetsEncrypt.Listen,
Handler: certManager.HTTPHandler(http.HandlerFunc(h.redirect)), Handler: certManager.HTTPHandler(http.HandlerFunc(h.redirect)),
ReadTimeout: types.HTTPTimeout, ReadTimeout: HTTPReadTimeout,
} }
go func() { go func() {
@@ -933,6 +905,60 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
} }
} }
func (h *Headscale) setLastStateChangeToNow() {
var err error
now := time.Now().UTC()
users, err := h.ListUsers()
if err != nil {
log.Error().
Caller().
Err(err).
Msg("failed to fetch all users, failing to update last changed state.")
}
for _, user := range users {
lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix()))
if h.lastStateChange == nil {
h.lastStateChange = xsync.NewMapOf[time.Time]()
}
h.lastStateChange.Store(user.Name, now)
}
}
func (h *Headscale) getLastStateChange(users ...User) time.Time {
times := []time.Time{}
// getLastStateChange takes a list of users as a "filter"; if no users
// are passed, then use the entire list of users and look for the last update
if len(users) > 0 {
for _, user := range users {
if lastChange, ok := h.lastStateChange.Load(user.Name); ok {
times = append(times, lastChange)
}
}
} else {
h.lastStateChange.Range(func(key string, value time.Time) bool {
times = append(times, value)
return true
})
}
sort.Slice(times, func(i, j int) bool {
return times[i].After(times[j])
})
log.Trace().Msgf("Latest times %#v", times)
if len(times) == 0 {
return time.Now().UTC()
} else {
return times[0]
}
}
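
The selection logic in getLastStateChange boils down to: collect the candidate timestamps, sort them newest-first, and take the first one, falling back to the current time when the list is empty. A small standalone illustration of that sort:

package main

import (
	"fmt"
	"sort"
	"time"
)

// latest returns the most recent of the given times, or now if there are none.
func latest(times []time.Time) time.Time {
	if len(times) == 0 {
		return time.Now().UTC()
	}
	sort.Slice(times, func(i, j int) bool { return times[i].After(times[j]) })
	return times[0]
}

func main() {
	a := time.Date(2023, 4, 1, 10, 0, 0, 0, time.UTC)
	b := time.Date(2023, 4, 7, 22, 6, 0, 0, time.UTC)
	fmt.Println(latest([]time.Time{a, b})) // 2023-04-07 22:06:00 +0000 UTC
}
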
func notFoundHandler( func notFoundHandler(
writer http.ResponseWriter, writer http.ResponseWriter,
req *http.Request, req *http.Request,
@@ -949,12 +975,6 @@ func notFoundHandler(
} }
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) { func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
dir := filepath.Dir(path)
err := util.EnsureDir(dir)
if err != nil {
return nil, fmt.Errorf("ensuring private key directory: %w", err)
}
privateKey, err := os.ReadFile(path) privateKey, err := os.ReadFile(path)
if errors.Is(err, os.ErrNotExist) { if errors.Is(err, os.ErrNotExist) {
log.Info().Str("path", path).Msg("No private key file at path, creating...") log.Info().Str("path", path).Msg("No private key file at path, creating...")
@@ -971,8 +991,7 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
err = os.WriteFile(path, machineKeyStr, privateKeyFileMode) err = os.WriteFile(path, machineKeyStr, privateKeyFileMode)
if err != nil { if err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf(
"failed to save private key to disk at path %q: %w", "failed to save private key to disk: %w",
path,
err, err,
) )
} }
@@ -983,9 +1002,16 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
} }
trimmedPrivateKey := strings.TrimSpace(string(privateKey)) trimmedPrivateKey := strings.TrimSpace(string(privateKey))
privateKeyEnsurePrefix := PrivateKeyEnsurePrefix(trimmedPrivateKey)
var machineKey key.MachinePrivate var machineKey key.MachinePrivate
if err = machineKey.UnmarshalText([]byte(trimmedPrivateKey)); err != nil { if err = machineKey.UnmarshalText([]byte(privateKeyEnsurePrefix)); err != nil {
log.Info().
Str("path", path).
Msg("This might be due to a legacy (headscale pre-0.12) private key. " +
"If the key is in WireGuard format, delete the key and restart headscale. " +
"A new key will automatically be generated. All Tailscale clients will have to be restarted")
return nil, fmt.Errorf("failed to parse private key: %w", err) return nil, fmt.Errorf("failed to parse private key: %w", err)
} }
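
For reference, a minimal sketch of the create-and-persist half of readOrCreatePrivateKey, using tailscale.com/types/key and a temporary path instead of the configured one:

package main

import (
	"fmt"
	"os"

	"tailscale.com/types/key"
)

func main() {
	// Generate a fresh machine private key, as the code above does when the
	// file does not exist yet.
	machineKey := key.NewMachine()

	keyText, err := machineKey.MarshalText()
	if err != nil {
		panic(err)
	}

	// 0o600 matches privateKeyFileMode: readable by the service user only.
	if err := os.WriteFile("/tmp/example_private.key", keyText, 0o600); err != nil {
		panic(err)
	}
	fmt.Println("wrote", len(keyText), "bytes")
}
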

View File

@@ -1,10 +1,10 @@
package hscontrol package headscale
import ( import (
"net/netip"
"os" "os"
"testing" "testing"
"github.com/juanfont/headscale/hscontrol/types"
"gopkg.in/check.v1" "gopkg.in/check.v1"
) )
@@ -18,7 +18,7 @@ type Suite struct{}
var ( var (
tmpDir string tmpDir string
app *Headscale app Headscale
) )
func (s *Suite) SetUpTest(c *check.C) { func (s *Suite) SetUpTest(c *check.C) {
@@ -34,25 +34,28 @@ func (s *Suite) ResetDB(c *check.C) {
os.RemoveAll(tmpDir) os.RemoveAll(tmpDir)
} }
var err error var err error
tmpDir, err = os.MkdirTemp("", "autoygg-client-test2") tmpDir, err = os.MkdirTemp("", "autoygg-client-test")
if err != nil { if err != nil {
c.Fatal(err) c.Fatal(err)
} }
cfg := types.Config{ cfg := Config{
NoisePrivateKeyPath: tmpDir + "/noise_private.key", IPPrefixes: []netip.Prefix{
Database: types.DatabaseConfig{ netip.MustParsePrefix("10.27.0.0/23"),
Type: "sqlite3",
Sqlite: types.SqliteConfig{
Path: tmpDir + "/headscale_test.db",
},
},
OIDC: types.OIDCConfig{
StripEmaildomain: false,
}, },
} }
app, err = NewHeadscale(&cfg) app = Headscale{
cfg: &cfg,
dbType: "sqlite3",
dbString: tmpDir + "/headscale_test.db",
}
err = app.initDB()
if err != nil { if err != nil {
c.Fatal(err) c.Fatal(err)
} }
db, err := app.openDB()
if err != nil {
c.Fatal(err)
}
app.db = db
} }

View File

@@ -6,10 +6,98 @@ import (
"bytes" "bytes"
"fmt" "fmt"
"log" "log"
"os"
"os/exec" "os/exec"
"path"
"path/filepath"
"strings" "strings"
"text/template"
) )
var (
githubWorkflowPath = "../../.github/workflows/"
jobFileNameTemplate = `test-integration-v2-%s.yaml`
jobTemplate = template.Must(
template.New("jobTemplate").
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - {{.Name}}
on: [pull_request]
concurrency:
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^{{.Name}}$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
`),
)
)
const workflowFilePerm = 0o600
func removeTests() {
glob := fmt.Sprintf(jobFileNameTemplate, "*")
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
if err != nil {
log.Fatalf("failed to find test files")
}
for _, file := range files {
err := os.Remove(file)
if err != nil {
log.Printf("failed to remove: %s", err)
}
}
}
func findTests() []string { func findTests() []string {
rgBin, err := exec.LookPath("rg") rgBin, err := exec.LookPath("rg")
if err != nil { if err != nil {
@@ -26,44 +114,50 @@ func findTests() []string {
"--no-heading", "--no-heading",
} }
cmd := exec.Command(rgBin, args...) log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
var out bytes.Buffer
cmd.Stdout = &out ripgrep := exec.Command(
err = cmd.Run() rgBin,
args...,
)
result, err := ripgrep.CombinedOutput()
if err != nil { if err != nil {
log.Fatalf("failed to run command: %s", err) log.Printf("out: %s", result)
log.Fatalf("failed to run ripgrep: %s", err)
} }
tests := strings.Split(strings.TrimSpace(out.String()), "\n") tests := strings.Split(string(result), "\n")
tests = tests[:len(tests)-1]
return tests return tests
} }
func updateYAML(tests []string) { func main() {
testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", ")) type testConfig struct {
Name string
yqCommand := fmt.Sprintf(
"yq eval '.jobs.integration-test.strategy.matrix.test = %s' ../../.github/workflows/test-integration.yaml -i",
testsForYq,
)
cmd := exec.Command("bash", "-c", yqCommand)
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
log.Fatalf("failed to run yq command: %s", err)
} }
fmt.Println("YAML file updated successfully")
}
func main() {
tests := findTests() tests := findTests()
quotedTests := make([]string, len(tests)) removeTests()
for i, test := range tests {
quotedTests[i] = fmt.Sprintf("\"%s\"", test)
}
updateYAML(quotedTests) for _, test := range tests {
log.Printf("generating workflow for %s", test)
var content bytes.Buffer
if err := jobTemplate.Execute(&content, testConfig{
Name: test,
}); err != nil {
log.Fatalf("failed to render template: %s", err)
}
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
if err != nil {
log.Fatalf("failed to write github job: %s", err)
}
}
} }
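
To make the generator's output concrete, here is the filename pattern rendered for a hypothetical test name (TestDERPScenario is illustrative, not necessarily a test that exists in the suite); each matched Test* function gets its own workflow file under .github/workflows/, regenerated after removeTests has deleted the previous set:

package main

import "fmt"

func main() {
	const jobFileNameTemplate = `test-integration-v2-%s.yaml`
	fmt.Printf(jobFileNameTemplate+"\n", "TestDERPScenario")
	// test-integration-v2-TestDERPScenario.yaml
}
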

View File

@@ -5,8 +5,8 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model" "github.com/prometheus/common/model"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
@@ -29,16 +29,11 @@ func init() {
apiKeysCmd.AddCommand(createAPIKeyCmd) apiKeysCmd.AddCommand(createAPIKeyCmd)
expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix") expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil { err := expireAPIKeyCmd.MarkFlagRequired("prefix")
if err != nil {
log.Fatal().Err(err).Msg("") log.Fatal().Err(err).Msg("")
} }
apiKeysCmd.AddCommand(expireAPIKeyCmd) apiKeysCmd.AddCommand(expireAPIKeyCmd)
deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(deleteAPIKeyCmd)
} }
var apiKeysCmd = &cobra.Command{ var apiKeysCmd = &cobra.Command{
@@ -72,7 +67,7 @@ var listAPIKeys = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.GetApiKeys(), "", output) SuccessOutput(response.ApiKeys, "", output)
return return
} }
@@ -80,15 +75,15 @@ var listAPIKeys = &cobra.Command{
tableData := pterm.TableData{ tableData := pterm.TableData{
{"ID", "Prefix", "Expiration", "Created"}, {"ID", "Prefix", "Expiration", "Created"},
} }
for _, key := range response.GetApiKeys() { for _, key := range response.ApiKeys {
expiration := "-" expiration := "-"
if key.GetExpiration() != nil { if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime()) expiration = ColourTime(key.Expiration.AsTime())
} }
tableData = append(tableData, []string{ tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10), strconv.FormatUint(key.GetId(), headscale.Base10),
key.GetPrefix(), key.GetPrefix(),
expiration, expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat), key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
@@ -160,7 +155,7 @@ If you lose a key, create a new one and revoke (expire) the old one.`,
return return
} }
SuccessOutput(response.GetApiKey(), response.GetApiKey(), output) SuccessOutput(response.ApiKey, response.ApiKey, output)
}, },
} }
@@ -204,44 +199,3 @@ var expireAPIKeyCmd = &cobra.Command{
SuccessOutput(response, "Key expired", output) SuccessOutput(response, "Key expired", output)
}, },
} }
var deleteAPIKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete an ApiKey",
Aliases: []string{"remove", "del"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
prefix, err := cmd.Flags().GetString("prefix")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.DeleteApiKeyRequest{
Prefix: prefix,
}
response, err := client.DeleteApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response, "Key deleted", output)
},
}

View File

@@ -3,11 +3,11 @@ package cli
import ( import (
"fmt" "fmt"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
"tailscale.com/types/key"
) )
const ( const (
@@ -57,7 +57,7 @@ var debugCmd = &cobra.Command{
var createNodeCmd = &cobra.Command{ var createNodeCmd = &cobra.Command{
Use: "create-node", Use: "create-node",
Short: "Create a node that can be registered with `nodes register <>` command", Short: "Create a node (machine) that can be registered with `nodes register <>` command",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
@@ -93,13 +93,11 @@ var createNodeCmd = &cobra.Command{
return return
} }
if !headscale.NodePublicKeyRegex.Match([]byte(machineKey)) {
var mkey key.MachinePublic err = errPreAuthKeyMalformed
err = mkey.UnmarshalText([]byte(machineKey))
if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Failed to parse machine key from flag: %s", err), fmt.Sprintf("Error: %s", err),
output, output,
) )
@@ -117,24 +115,24 @@ var createNodeCmd = &cobra.Command{
return return
} }
request := &v1.DebugCreateNodeRequest{ request := &v1.DebugCreateMachineRequest{
Key: machineKey, Key: machineKey,
Name: name, Name: name,
User: user, User: user,
Routes: routes, Routes: routes,
} }
response, err := client.DebugCreateNode(ctx, request) response, err := client.DebugCreateMachine(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()), fmt.Sprintf("Cannot create machine: %s", status.Convert(err).Message()),
output, output,
) )
return return
} }
SuccessOutput(response.GetNode(), "Node created", output) SuccessOutput(response.Machine, "Machine created", output)
}, },
} }
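
One side of this hunk validates the machine key by unmarshalling it into a key.MachinePublic rather than matching it against a regex. A standalone sketch of that parse-based check (the key string is a placeholder, so this example takes the error path):

package main

import (
	"fmt"

	"tailscale.com/types/key"
)

func main() {
	machineKeyStr := "mkey:0123..." // placeholder, not a real key

	var mkey key.MachinePublic
	if err := mkey.UnmarshalText([]byte(machineKeyStr)); err != nil {
		fmt.Println("invalid machine key:", err)
		return
	}
	fmt.Println("parsed machine key:", mkey.ShortString())
}
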

View File

@@ -9,8 +9,8 @@ import (
"time" "time"
survey "github.com/AlecAivazis/survey/v2" survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
@@ -97,8 +97,6 @@ func init() {
tagCmd.Flags(). tagCmd.Flags().
StringSliceP("tags", "t", []string{}, "List of tags to add to the node") StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
nodeCmd.AddCommand(tagCmd) nodeCmd.AddCommand(tagCmd)
nodeCmd.AddCommand(backfillNodeIPsCmd)
} }
var nodeCmd = &cobra.Command{ var nodeCmd = &cobra.Command{
@@ -109,7 +107,7 @@ var nodeCmd = &cobra.Command{
var registerNodeCmd = &cobra.Command{ var registerNodeCmd = &cobra.Command{
Use: "register", Use: "register",
Short: "Registers a node to your network", Short: "Registers a machine to your network",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
@@ -134,17 +132,17 @@ var registerNodeCmd = &cobra.Command{
return return
} }
request := &v1.RegisterNodeRequest{ request := &v1.RegisterMachineRequest{
Key: machineKey, Key: machineKey,
User: user, User: user,
} }
response, err := client.RegisterNode(ctx, request) response, err := client.RegisterMachine(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf( fmt.Sprintf(
"Cannot register node: %s\n", "Cannot register machine: %s\n",
status.Convert(err).Message(), status.Convert(err).Message(),
), ),
output, output,
@@ -154,8 +152,8 @@ var registerNodeCmd = &cobra.Command{
} }
SuccessOutput( SuccessOutput(
response.GetNode(), response.Machine,
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()), output) fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
}, },
} }
@@ -182,11 +180,11 @@ var listNodesCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
request := &v1.ListNodesRequest{ request := &v1.ListMachinesRequest{
User: user, User: user,
} }
response, err := client.ListNodes(ctx, request) response, err := client.ListMachines(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
@@ -198,12 +196,12 @@ var listNodesCmd = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.GetNodes(), "", output) SuccessOutput(response.Machines, "", output)
return return
} }
tableData, err := nodesToPtables(user, showTags, response.GetNodes()) tableData, err := nodesToPtables(user, showTags, response.Machines)
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -225,7 +223,7 @@ var listNodesCmd = &cobra.Command{
var expireNodeCmd = &cobra.Command{ var expireNodeCmd = &cobra.Command{
Use: "expire", Use: "expire",
Short: "Expire (log out) a node in your network", Short: "Expire (log out) a machine in your network",
Long: "Expiring a node will keep the node in the database and force it to reauthenticate.", Long: "Expiring a node will keep the node in the database and force it to reauthenticate.",
Aliases: []string{"logout", "exp", "e"}, Aliases: []string{"logout", "exp", "e"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
@@ -246,16 +244,16 @@ var expireNodeCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
request := &v1.ExpireNodeRequest{ request := &v1.ExpireMachineRequest{
NodeId: identifier, MachineId: identifier,
} }
response, err := client.ExpireNode(ctx, request) response, err := client.ExpireMachine(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf( fmt.Sprintf(
"Cannot expire node: %s\n", "Cannot expire machine: %s\n",
status.Convert(err).Message(), status.Convert(err).Message(),
), ),
output, output,
@@ -264,13 +262,13 @@ var expireNodeCmd = &cobra.Command{
return return
} }
SuccessOutput(response.GetNode(), "Node expired", output) SuccessOutput(response.Machine, "Machine expired", output)
}, },
} }
var renameNodeCmd = &cobra.Command{ var renameNodeCmd = &cobra.Command{
Use: "rename NEW_NAME", Use: "rename NEW_NAME",
Short: "Renames a node in your network", Short: "Renames a machine in your network",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
@@ -293,17 +291,17 @@ var renameNodeCmd = &cobra.Command{
if len(args) > 0 { if len(args) > 0 {
newName = args[0] newName = args[0]
} }
request := &v1.RenameNodeRequest{ request := &v1.RenameMachineRequest{
NodeId: identifier, MachineId: identifier,
NewName: newName, NewName: newName,
} }
response, err := client.RenameNode(ctx, request) response, err := client.RenameMachine(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf( fmt.Sprintf(
"Cannot rename node: %s\n", "Cannot rename machine: %s\n",
status.Convert(err).Message(), status.Convert(err).Message(),
), ),
output, output,
@@ -312,7 +310,7 @@ var renameNodeCmd = &cobra.Command{
return return
} }
SuccessOutput(response.GetNode(), "Node renamed", output) SuccessOutput(response.Machine, "Machine renamed", output)
}, },
} }
@@ -338,11 +336,11 @@ var deleteNodeCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
getRequest := &v1.GetNodeRequest{ getRequest := &v1.GetMachineRequest{
NodeId: identifier, MachineId: identifier,
} }
getResponse, err := client.GetNode(ctx, getRequest) getResponse, err := client.GetMachine(ctx, getRequest)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
@@ -356,8 +354,8 @@ var deleteNodeCmd = &cobra.Command{
return return
} }
deleteRequest := &v1.DeleteNodeRequest{ deleteRequest := &v1.DeleteMachineRequest{
NodeId: identifier, MachineId: identifier,
} }
confirm := false confirm := false
@@ -366,7 +364,7 @@ var deleteNodeCmd = &cobra.Command{
prompt := &survey.Confirm{ prompt := &survey.Confirm{
Message: fmt.Sprintf( Message: fmt.Sprintf(
"Do you want to remove the node %s?", "Do you want to remove the node %s?",
getResponse.GetNode().GetName(), getResponse.GetMachine().Name,
), ),
} }
err = survey.AskOne(prompt, &confirm) err = survey.AskOne(prompt, &confirm)
@@ -376,7 +374,7 @@ var deleteNodeCmd = &cobra.Command{
} }
if confirm || force { if confirm || force {
response, err := client.DeleteNode(ctx, deleteRequest) response, err := client.DeleteMachine(ctx, deleteRequest)
if output != "" { if output != "" {
SuccessOutput(response, "", output) SuccessOutput(response, "", output)
@@ -438,11 +436,11 @@ var moveNodeCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
getRequest := &v1.GetNodeRequest{ getRequest := &v1.GetMachineRequest{
NodeId: identifier, MachineId: identifier,
} }
_, err = client.GetNode(ctx, getRequest) _, err = client.GetMachine(ctx, getRequest)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
@@ -456,12 +454,12 @@ var moveNodeCmd = &cobra.Command{
return return
} }
moveRequest := &v1.MoveNodeRequest{ moveRequest := &v1.MoveMachineRequest{
NodeId: identifier, MachineId: identifier,
User: user, User: user,
} }
moveResponse, err := client.MoveNode(ctx, moveRequest) moveResponse, err := client.MoveMachine(ctx, moveRequest)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
@@ -475,65 +473,14 @@ var moveNodeCmd = &cobra.Command{
return return
} }
SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output) SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
},
}
var backfillNodeIPsCmd = &cobra.Command{
Use: "backfillips",
Short: "Backfill IPs missing from nodes",
Long: `
Backfill IPs can be used to add/remove IPs from nodes
based on the current configuration of Headscale.
If there are nodes that do not have IPv4 or IPv6
even if prefixes for both are configured in the config,
this command can be used to assign the missing addresses to
all nodes that lack them.
If you remove IPv4 or IPv6 prefixes from the config,
it can be run to remove the IPs that should no longer
be assigned to nodes.`,
Run: func(cmd *cobra.Command, args []string) {
var err error
output, _ := cmd.Flags().GetString("output")
confirm := false
prompt := &survey.Confirm{
Message: "Are you sure that you want to assign/remove IPs to/from nodes?",
}
err = survey.AskOne(prompt, &confirm)
if err != nil {
return
}
if confirm {
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Error backfilling IPs: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(changes, "Node IPs backfilled successfully", output)
}
}, },
} }
func nodesToPtables( func nodesToPtables(
currentUser string, currentUser string,
showTags bool, showTags bool,
nodes []*v1.Node, machines []*v1.Machine,
) (pterm.TableData, error) { ) (pterm.TableData, error) {
tableHeader := []string{ tableHeader := []string{
"ID", "ID",
@@ -546,7 +493,7 @@ func nodesToPtables(
"Ephemeral", "Ephemeral",
"Last seen", "Last seen",
"Expiration", "Expiration",
"Connected", "Online",
"Expired", "Expired",
} }
if showTags { if showTags {
@@ -558,23 +505,23 @@ func nodesToPtables(
} }
tableData := pterm.TableData{tableHeader} tableData := pterm.TableData{tableHeader}
for _, node := range nodes { for _, machine := range machines {
var ephemeral bool var ephemeral bool
if node.GetPreAuthKey() != nil && node.GetPreAuthKey().GetEphemeral() { if machine.PreAuthKey != nil && machine.PreAuthKey.Ephemeral {
ephemeral = true ephemeral = true
} }
var lastSeen time.Time var lastSeen time.Time
var lastSeenTime string var lastSeenTime string
if node.GetLastSeen() != nil { if machine.LastSeen != nil {
lastSeen = node.GetLastSeen().AsTime() lastSeen = machine.LastSeen.AsTime()
lastSeenTime = lastSeen.Format("2006-01-02 15:04:05") lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
} }
var expiry time.Time var expiry time.Time
var expiryTime string var expiryTime string
if node.GetExpiry() != nil { if machine.Expiry != nil {
expiry = node.GetExpiry().AsTime() expiry = machine.Expiry.AsTime()
expiryTime = expiry.Format("2006-01-02 15:04:05") expiryTime = expiry.Format("2006-01-02 15:04:05")
} else { } else {
expiryTime = "N/A" expiryTime = "N/A"
@@ -582,7 +529,7 @@ func nodesToPtables(
var machineKey key.MachinePublic var machineKey key.MachinePublic
err := machineKey.UnmarshalText( err := machineKey.UnmarshalText(
[]byte(node.GetMachineKey()), []byte(headscale.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
) )
if err != nil { if err != nil {
machineKey = key.MachinePublic{} machineKey = key.MachinePublic{}
@@ -590,14 +537,14 @@ func nodesToPtables(
var nodeKey key.NodePublic var nodeKey key.NodePublic
err = nodeKey.UnmarshalText( err = nodeKey.UnmarshalText(
[]byte(node.GetNodeKey()), []byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
) )
if err != nil { if err != nil {
return nil, err return nil, err
} }
var online string var online string
if node.GetOnline() { if machine.Online {
online = pterm.LightGreen("online") online = pterm.LightGreen("online")
} else { } else {
online = pterm.LightRed("offline") online = pterm.LightRed("offline")
@@ -611,36 +558,36 @@ func nodesToPtables(
} }
var forcedTags string var forcedTags string
for _, tag := range node.GetForcedTags() { for _, tag := range machine.ForcedTags {
forcedTags += "," + tag forcedTags += "," + tag
} }
forcedTags = strings.TrimLeft(forcedTags, ",") forcedTags = strings.TrimLeft(forcedTags, ",")
var invalidTags string var invalidTags string
for _, tag := range node.GetInvalidTags() { for _, tag := range machine.InvalidTags {
if !contains(node.GetForcedTags(), tag) { if !contains(machine.ForcedTags, tag) {
invalidTags += "," + pterm.LightRed(tag) invalidTags += "," + pterm.LightRed(tag)
} }
} }
invalidTags = strings.TrimLeft(invalidTags, ",") invalidTags = strings.TrimLeft(invalidTags, ",")
var validTags string var validTags string
for _, tag := range node.GetValidTags() { for _, tag := range machine.ValidTags {
if !contains(node.GetForcedTags(), tag) { if !contains(machine.ForcedTags, tag) {
validTags += "," + pterm.LightGreen(tag) validTags += "," + pterm.LightGreen(tag)
} }
} }
validTags = strings.TrimLeft(validTags, ",") validTags = strings.TrimLeft(validTags, ",")
var user string var user string
if currentUser == "" || (currentUser == node.GetUser().GetName()) { if currentUser == "" || (currentUser == machine.User.Name) {
user = pterm.LightMagenta(node.GetUser().GetName()) user = pterm.LightMagenta(machine.User.Name)
} else { } else {
// Shared into this user // Shared into this user
user = pterm.LightYellow(node.GetUser().GetName()) user = pterm.LightYellow(machine.User.Name)
} }
var IPV4Address string var IPV4Address string
var IPV6Address string var IPV6Address string
for _, addr := range node.GetIpAddresses() { for _, addr := range machine.IpAddresses {
if netip.MustParseAddr(addr).Is4() { if netip.MustParseAddr(addr).Is4() {
IPV4Address = addr IPV4Address = addr
} else { } else {
@@ -649,9 +596,9 @@ func nodesToPtables(
} }
nodeData := []string{ nodeData := []string{
strconv.FormatUint(node.GetId(), util.Base10), strconv.FormatUint(machine.Id, headscale.Base10),
node.GetName(), machine.Name,
node.GetGivenName(), machine.GetGivenName(),
machineKey.ShortString(), machineKey.ShortString(),
nodeKey.ShortString(), nodeKey.ShortString(),
user, user,
@@ -699,17 +646,17 @@ var tagCmd = &cobra.Command{
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Error retrieving list of tags to add to node, %v", err), fmt.Sprintf("Error retrieving list of tags to add to machine, %v", err),
output, output,
) )
return return
} }
// Sending tags to node // Sending tags to machine
request := &v1.SetTagsRequest{ request := &v1.SetTagsRequest{
NodeId: identifier, MachineId: identifier,
Tags: tagsToSet, Tags: tagsToSet,
} }
resp, err := client.SetTags(ctx, request) resp, err := client.SetTags(ctx, request)
if err != nil { if err != nil {
@@ -724,8 +671,8 @@ var tagCmd = &cobra.Command{
if resp != nil { if resp != nil {
SuccessOutput( SuccessOutput(
resp.GetNode(), resp.GetMachine(),
"Node updated", "Machine updated",
output, output,
) )
} }
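The tag columns above are filtered with a small contains helper that is not shown in this excerpt. A minimal sketch of such a helper, assuming a generic slice-membership check (the repository's actual implementation may differ):

// contains reports whether t is present in ts; a plain linear scan is enough
// for the short tag slices rendered above.
func contains[T comparable](ts []T, t T) bool {
	for _, v := range ts {
		if v == t {
			return true
		}
	}
	return false
}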


@@ -84,7 +84,7 @@ var listPreAuthKeys = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.GetPreAuthKeys(), "", output) SuccessOutput(response.PreAuthKeys, "", output)
return return
} }
@@ -101,15 +101,22 @@ var listPreAuthKeys = &cobra.Command{
"Tags", "Tags",
}, },
} }
for _, key := range response.GetPreAuthKeys() { for _, key := range response.PreAuthKeys {
expiration := "-" expiration := "-"
if key.GetExpiration() != nil { if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime()) expiration = ColourTime(key.Expiration.AsTime())
}
var reusable string
if key.GetEphemeral() {
reusable = "N/A"
} else {
reusable = fmt.Sprintf("%v", key.GetReusable())
} }
aclTags := "" aclTags := ""
for _, tag := range key.GetAclTags() { for _, tag := range key.AclTags {
aclTags += "," + tag aclTags += "," + tag
} }
@@ -118,7 +125,7 @@ var listPreAuthKeys = &cobra.Command{
tableData = append(tableData, []string{ tableData = append(tableData, []string{
key.GetId(), key.GetId(),
key.GetKey(), key.GetKey(),
strconv.FormatBool(key.GetReusable()), reusable,
strconv.FormatBool(key.GetEphemeral()), strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()), strconv.FormatBool(key.GetUsed()),
expiration, expiration,
@@ -207,7 +214,7 @@ var createPreAuthKeyCmd = &cobra.Command{
return return
} }
SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output) SuccessOutput(response.PreAuthKey, response.PreAuthKey.Key, output)
}, },
} }
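For context on where the pre-auth key handled above comes from, here is a hedged sketch of issuing one through the same gRPC client. The request field names (User, Reusable, Ephemeral) are assumed from this era of the v1 API and should be checked against the generated package:

ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()

// Assumed request fields; check the generated v1 package for the exact names.
resp, err := client.CreatePreAuthKey(ctx, &v1.CreatePreAuthKeyRequest{
	User:      "alice",
	Reusable:  true,
	Ephemeral: false,
})
if err != nil {
	ErrorOutput(err, fmt.Sprintf("Cannot create key: %s", status.Convert(err).Message()), output)
	return
}
SuccessOutput(resp.GetPreAuthKey(), resp.GetPreAuthKey().GetKey(), output)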


@@ -5,7 +5,7 @@ import (
"os" "os"
"runtime" "runtime"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
@@ -38,33 +38,33 @@ func initConfig() {
cfgFile = os.Getenv("HEADSCALE_CONFIG") cfgFile = os.Getenv("HEADSCALE_CONFIG")
} }
if cfgFile != "" { if cfgFile != "" {
err := types.LoadConfig(cfgFile, true) err := headscale.LoadConfig(cfgFile, true)
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile) log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile)
} }
} else { } else {
err := types.LoadConfig("", false) err := headscale.LoadConfig("", false)
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config") log.Fatal().Caller().Err(err).Msgf("Error loading config")
} }
} }
cfg, err := types.GetHeadscaleConfig() cfg, err := headscale.GetHeadscaleConfig()
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msg("Failed to get headscale configuration") log.Fatal().Caller().Err(err)
} }
machineOutput := HasMachineOutputFlag() machineOutput := HasMachineOutputFlag()
zerolog.SetGlobalLevel(cfg.Log.Level) zerolog.SetGlobalLevel(cfg.Log.Level)
// If the user has requested a "node" readable format, // If the user has requested a "machine" readable format,
// then disable login so the output remains valid. // then disable login so the output remains valid.
if machineOutput { if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled) zerolog.SetGlobalLevel(zerolog.Disabled)
} }
if cfg.Log.Format == types.JSONLogFormat { if cfg.Log.Format == headscale.JSONLogFormat {
log.Logger = log.Output(os.Stdout) log.Logger = log.Output(os.Stdout)
} }
@@ -78,7 +78,7 @@ func initConfig() {
res, err := latest.Check(githubTag, Version) res, err := latest.Check(githubTag, Version)
if err == nil && res.Outdated { if err == nil && res.Outdated {
//nolint //nolint
log.Warn().Msgf( fmt.Printf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n", "An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current, res.Current,
Version, Version,


@@ -6,8 +6,8 @@ import (
"net/netip" "net/netip"
"strconv" "strconv"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
@@ -87,20 +87,20 @@ var listRoutesCmd = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.GetRoutes(), "", output) SuccessOutput(response.Routes, "", output)
return return
} }
routes = response.GetRoutes() routes = response.Routes
} else { } else {
response, err := client.GetNodeRoutes(ctx, &v1.GetNodeRoutesRequest{ response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
NodeId: machineID, MachineId: machineID,
}) })
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Cannot get routes for node %d: %s", machineID, status.Convert(err).Message()), fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
output, output,
) )
@@ -108,12 +108,12 @@ var listRoutesCmd = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.GetRoutes(), "", output) SuccessOutput(response.Routes, "", output)
return return
} }
routes = response.GetRoutes() routes = response.Routes
} }
tableData := routesToPtables(routes) tableData := routesToPtables(routes)
@@ -267,29 +267,29 @@ var deleteRouteCmd = &cobra.Command{
// routesToPtables converts the list of routes to a nice table. // routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes []*v1.Route) pterm.TableData { func routesToPtables(routes []*v1.Route) pterm.TableData {
tableData := pterm.TableData{{"ID", "Node", "Prefix", "Advertised", "Enabled", "Primary"}} tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}
for _, route := range routes { for _, route := range routes {
var isPrimaryStr string var isPrimaryStr string
prefix, err := netip.ParsePrefix(route.GetPrefix()) prefix, err := netip.ParsePrefix(route.Prefix)
if err != nil { if err != nil {
log.Printf("Error parsing prefix %s: %s", route.GetPrefix(), err) log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
continue continue
} }
if prefix == types.ExitRouteV4 || prefix == types.ExitRouteV6 { if prefix == headscale.ExitRouteV4 || prefix == headscale.ExitRouteV6 {
isPrimaryStr = "-" isPrimaryStr = "-"
} else { } else {
isPrimaryStr = strconv.FormatBool(route.GetIsPrimary()) isPrimaryStr = strconv.FormatBool(route.IsPrimary)
} }
tableData = append(tableData, tableData = append(tableData,
[]string{ []string{
strconv.FormatUint(route.GetId(), Base10), strconv.FormatUint(route.Id, Base10),
route.GetNode().GetGivenName(), route.Machine.GivenName,
route.GetPrefix(), route.Prefix,
strconv.FormatBool(route.GetAdvertised()), strconv.FormatBool(route.Advertised),
strconv.FormatBool(route.GetEnabled()), strconv.FormatBool(route.Enabled),
isPrimaryStr, isPrimaryStr,
}) })
} }


@@ -1,10 +1,10 @@
package cli package cli
import ( import (
"errors"
"fmt" "fmt"
survey "github.com/AlecAivazis/survey/v2" survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
@@ -20,7 +20,9 @@ func init() {
userCmd.AddCommand(renameUserCmd) userCmd.AddCommand(renameUserCmd)
} }
-var errMissingParameter = errors.New("missing parameters")
+const (
+	errMissingParameter = headscale.Error("missing parameters")
+)
var userCmd = &cobra.Command{ var userCmd = &cobra.Command{
Use: "users", Use: "users",
@@ -67,7 +69,7 @@ var createUserCmd = &cobra.Command{
return return
} }
SuccessOutput(response.GetUser(), "User created", output) SuccessOutput(response.User, "User created", output)
}, },
} }
@@ -169,7 +171,7 @@ var listUsersCmd = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.GetUsers(), "", output) SuccessOutput(response.Users, "", output)
return return
} }
@@ -236,6 +238,6 @@ var renameUserCmd = &cobra.Command{
return return
} }
SuccessOutput(response.GetUser(), "User renamed", output) SuccessOutput(response.User, "User renamed", output)
}, },
} }
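The `const errMissingParameter = headscale.Error(...)` form above only works because headscale.Error is a string-based error type. A sketch of that pattern, assuming the conventional definition (the definition itself is not part of this diff):

// Error is a string that satisfies the error interface, so it can be used
// in an untyped constant such as Error("missing parameters").
type Error string

func (e Error) Error() string { return string(e) }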


@@ -8,11 +8,8 @@ import (
"os" "os"
"reflect" "reflect"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"google.golang.org/grpc" "google.golang.org/grpc"
"google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials"
@@ -25,8 +22,8 @@ const (
SocketWritePermissions = 0o666 SocketWritePermissions = 0o666
) )
func getHeadscaleApp() (*hscontrol.Headscale, error) { func getHeadscaleApp() (*headscale.Headscale, error) {
cfg, err := types.GetHeadscaleConfig() cfg, err := headscale.GetHeadscaleConfig()
if err != nil { if err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf(
"failed to load configuration while creating headscale instance: %w", "failed to load configuration while creating headscale instance: %w",
@@ -34,7 +31,7 @@ func getHeadscaleApp() (*hscontrol.Headscale, error) {
) )
} }
app, err := hscontrol.NewHeadscale(cfg) app, err := headscale.NewHeadscale(cfg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -42,23 +39,21 @@ func getHeadscaleApp() (*hscontrol.Headscale, error) {
// We are doing this here, as in the future could be cool to have it also hot-reload // We are doing this here, as in the future could be cool to have it also hot-reload
if cfg.ACL.PolicyPath != "" { if cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath) aclPath := headscale.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath) err = app.LoadACLPolicy(aclPath)
if err != nil { if err != nil {
log.Fatal(). log.Fatal().
Str("path", aclPath). Str("path", aclPath).
Err(err). Err(err).
Msg("Could not load the ACL policy") Msg("Could not load the ACL policy")
} }
app.ACLPolicy = pol
} }
return app, nil return app, nil
} }
func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) { func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg, err := types.GetHeadscaleConfig() cfg, err := headscale.GetHeadscaleConfig()
if err != nil { if err != nil {
log.Fatal(). log.Fatal().
Err(err). Err(err).
@@ -79,7 +74,7 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
address := cfg.CLI.Address address := cfg.CLI.Address
// If the address is not set, we assume that we are on the server hosting hscontrol. // If the address is not set, we assume that we are on the server hosting headscale.
if address == "" { if address == "" {
log.Debug(). log.Debug().
Str("socket", cfg.UnixSocket). Str("socket", cfg.UnixSocket).
@@ -103,7 +98,7 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
grpcOptions = append( grpcOptions = append(
grpcOptions, grpcOptions,
grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer), grpc.WithContextDialer(headscale.GrpcSocketDialer),
) )
} else { } else {
// If we are not connecting to a local server, require an API key for authentication // If we are not connecting to a local server, require an API key for authentication
@@ -154,17 +149,17 @@ func SuccessOutput(result interface{}, override string, outputFormat string) {
case "json": case "json":
jsonBytes, err = json.MarshalIndent(result, "", "\t") jsonBytes, err = json.MarshalIndent(result, "", "\t")
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output") log.Fatal().Err(err)
} }
case "json-line": case "json-line":
jsonBytes, err = json.Marshal(result) jsonBytes, err = json.Marshal(result)
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output") log.Fatal().Err(err)
} }
case "yaml": case "yaml":
jsonBytes, err = yaml.Marshal(result) jsonBytes, err = yaml.Marshal(result)
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output") log.Fatal().Err(err)
} }
default: default:
//nolint //nolint
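The commands in this file all follow the same call-and-report shape around the client returned by getHeadscaleCLIClient. A condensed sketch of that pattern (ListUsers is just an illustrative RPC; the surrounding command plumbing is assumed):

ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()

resp, err := client.ListUsers(ctx, &v1.ListUsersRequest{})
if err != nil {
	// status.Convert extracts the human-readable gRPC error message.
	ErrorOutput(err, fmt.Sprintf("Cannot list users: %s", status.Convert(err).Message()), output)
	return
}
SuccessOutput(resp.GetUsers(), "", output)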


@@ -4,7 +4,7 @@ import (
"os" "os"
"time" "time"
"github.com/jagottsicher/termcolor" "github.com/efekarakus/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli" "github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
@@ -34,7 +34,7 @@ func main() {
zerolog.TimeFieldFormat = zerolog.TimeFormatUnix zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
log.Logger = log.Output(zerolog.ConsoleWriter{ log.Logger = log.Output(zerolog.ConsoleWriter{
Out: os.Stderr, Out: os.Stdout,
TimeFormat: time.RFC3339, TimeFormat: time.RFC3339,
NoColor: !colors, NoColor: !colors,
}) })


@@ -7,8 +7,7 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/spf13/viper" "github.com/spf13/viper"
"gopkg.in/check.v1" "gopkg.in/check.v1"
) )
@@ -51,21 +50,21 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
} }
// Load example config, it should load without validation errors // Load example config, it should load without validation errors
err = types.LoadConfig(cfgFile, true) err = headscale.LoadConfig(cfgFile, true)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly // Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite") c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite") c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "") c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http") c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1") c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert( c.Assert(
util.GetFileMode("unix_socket_permission"), headscale.GetFileMode("unix_socket_permission"),
check.Equals, check.Equals,
fs.FileMode(0o770), fs.FileMode(0o770),
) )
@@ -94,21 +93,21 @@ func (*Suite) TestConfigLoading(c *check.C) {
} }
// Load example config, it should load without validation errors // Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly // Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite") c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite") c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "") c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http") c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1") c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert( c.Assert(
util.GetFileMode("unix_socket_permission"), headscale.GetFileMode("unix_socket_permission"),
check.Equals, check.Equals,
fs.FileMode(0o770), fs.FileMode(0o770),
) )
@@ -138,10 +137,10 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
} }
// Load example config, it should load without validation errors // Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
dnsConfig, baseDomain := types.GetDNSConfig() dnsConfig, baseDomain := headscale.GetDNSConfig()
c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1") c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1")
c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1") c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1")
@@ -173,7 +172,7 @@ noise:
writeConfig(c, tmpDir, configYaml) writeConfig(c, tmpDir, configYaml)
// Check configuration validation errors (1) // Check configuration validation errors (1)
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.NotNil) c.Assert(err, check.NotNil)
// check.Matches can not handle multiline strings // check.Matches can not handle multiline strings
tmp := strings.ReplaceAll(err.Error(), "\n", "***") tmp := strings.ReplaceAll(err.Error(), "\n", "***")
@@ -202,6 +201,6 @@ tls_letsencrypt_hostname: example.com
tls_letsencrypt_challenge_type: TLS-ALPN-01 tls_letsencrypt_challenge_type: TLS-ALPN-01
`) `)
writeConfig(c, tmpDir, configYaml) writeConfig(c, tmpDir, configYaml)
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
} }


@@ -40,31 +40,32 @@ grpc_listen_addr: 127.0.0.1:50443
# are doing.
grpc_allow_insecure: false

+# Private key used to encrypt the traffic between headscale
+# and Tailscale clients.
+# The private key file will be autogenerated if it's missing.
+#
+private_key_path: /var/lib/headscale/private.key
+
# The Noise section includes specific configuration for the
# TS2021 Noise protocol
noise:
  # The Noise private key is used to encrypt the
  # traffic between headscale and Tailscale clients when
-  # using the new Noise-based protocol.
+  # using the new Noise-based protocol. It must be different
+  # from the legacy private key.
  private_key_path: /var/lib/headscale/noise_private.key

# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
-# It must be within IP ranges supported by the Tailscale
-# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
-# See below:
+# While this looks like it can take arbitrary values, it
+# needs to be within IP ranges supported by the Tailscale
+# client.
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
-# Any other range is NOT supported, and it will cause unexpected issues.
-prefixes:
-  v6: fd7a:115c:a1e0::/48
-  v4: 100.64.0.0/10
-
-# Strategy used for allocation of IPs to nodes, available options:
-# - sequential (default): assigns the next free IP from the previous given IP.
-# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
-allocation: sequential
+ip_prefixes:
+  - fd7a:115c:a1e0::/48
+  - 100.64.0.0/10
# DERP is a relay system that Tailscale uses when a direct # DERP is a relay system that Tailscale uses when a direct
# connection cannot be established. # connection cannot be established.
@@ -93,22 +94,6 @@ derp:
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/ # For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
stun_listen_addr: "0.0.0.0:3478" stun_listen_addr: "0.0.0.0:3478"
# Private key used to encrypt the traffic between headscale DERP
# and Tailscale clients.
# The private key file will be autogenerated if it's missing.
#
private_key_path: /var/lib/headscale/derp_server_private.key
# This flag can be used, so the DERP map entry for the embedded DERP server is not written automatically,
# it enables the creation of your very own DERP map entry using a locally available file with the parameter DERP.paths
# If you enable the DERP server and set this to false, it is required to add the DERP server to the DERP map using DERP.paths
automatically_add_embedded_derp_region: true
# For better connection stability (especially when using an Exit-Node and DNS is not working),
# it is possible to optionally add the public IPv4 and IPv6 address to the DERP map using:
ipv4: 1.2.3.4
ipv6: 2001:db8::1
# List of externally available DERP maps encoded in JSON # List of externally available DERP maps encoded in JSON
urls: urls:
- https://controlplane.tailscale.com/derpmap/default - https://controlplane.tailscale.com/derpmap/default
@@ -137,28 +122,30 @@ disable_check_updates: false
# Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m

-database:
-  type: sqlite
-
-  # SQLite config
-  sqlite:
-    path: /var/lib/headscale/db.sqlite
-
-  # # Postgres config
-  # postgres:
-  #   # If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
-  #   host: localhost
-  #   port: 5432
-  #   name: headscale
-  #   user: foo
-  #   pass: bar
-  #   max_open_conns: 10
-  #   max_idle_conns: 10
-  #   conn_max_idle_time_secs: 3600
-  #   # If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
-  #   # in the 'ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
-  #   ssl: false
+# Period to check for node updates within the tailnet. A value too low will severely affect
+# CPU consumption of Headscale. A value too high (over 60s) will cause problems
+# for the nodes, as they won't get updates or keep alive messages frequently enough.
+# In case of doubts, do not touch the default 10s.
+node_update_check_interval: 10s
+
+# SQLite config
+db_type: sqlite3
+
+# For production:
+db_path: /var/lib/headscale/db.sqlite
+
+# # Postgres config
+# If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
+# db_type: postgres
+# db_host: localhost
+# db_port: 5432
+# db_name: headscale
+# db_user: foo
+# db_pass: bar
+# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
+# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
+# db_ssl: false

### TLS configuration
#
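The ip_prefixes comment earlier in this file says the configured prefixes must fall inside the ranges Tailscale clients support (100.64.0.0/10 for IPv4, fd7a:115c:a1e0::/48 for IPv6). A small standalone sketch of such a check, using only the standard library; this helper is illustrative and not part of the diff:

package main

import (
	"fmt"
	"net/netip"
)

// withinSupportedRange reports whether p is a sub-prefix of one of the ranges
// mentioned in the config comments above.
func withinSupportedRange(p netip.Prefix) bool {
	cgnat := netip.MustParsePrefix("100.64.0.0/10")     // Tailscale IPv4 range
	ula := netip.MustParsePrefix("fd7a:115c:a1e0::/48") // Tailscale IPv6 range
	for _, r := range []netip.Prefix{cgnat, ula} {
		if r.Contains(p.Masked().Addr()) && p.Bits() >= r.Bits() {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(withinSupportedRange(netip.MustParsePrefix("100.64.0.0/10"))) // true
	fmt.Println(withinSupportedRange(netip.MustParsePrefix("10.0.0.0/8")))    // false
}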


@@ -1,4 +1,4 @@
package types package headscale
import ( import (
"errors" "errors"
@@ -11,18 +11,22 @@ import (
"time" "time"
"github.com/coreos/go-oidc/v3/oidc" "github.com/coreos/go-oidc/v3/oidc"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model" "github.com/prometheus/common/model"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/viper" "github.com/spf13/viper"
"go4.org/netipx" "go4.org/netipx"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/dnstype" "tailscale.com/types/dnstype"
) )
const ( const (
tlsALPN01ChallengeType = "TLS-ALPN-01"
http01ChallengeType = "HTTP-01"
JSONLogFormat = "json"
TextLogFormat = "text"
defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days
maxDuration time.Duration = 1<<63 - 1 maxDuration time.Duration = 1<<63 - 1
) )
@@ -31,13 +35,6 @@ var errOidcMutuallyExclusive = errors.New(
"oidc_client_secret and oidc_client_secret_path are mutually exclusive", "oidc_client_secret and oidc_client_secret_path are mutually exclusive",
) )
type IPAllocationStrategy string
const (
IPAllocationStrategySequential IPAllocationStrategy = "sequential"
IPAllocationStrategyRandom IPAllocationStrategy = "random"
)
// Config contains the initial Headscale configuration. // Config contains the initial Headscale configuration.
type Config struct { type Config struct {
ServerURL string ServerURL string
@@ -46,18 +43,25 @@ type Config struct {
GRPCAddr string GRPCAddr string
GRPCAllowInsecure bool GRPCAllowInsecure bool
EphemeralNodeInactivityTimeout time.Duration EphemeralNodeInactivityTimeout time.Duration
PrefixV4 *netip.Prefix NodeUpdateCheckInterval time.Duration
PrefixV6 *netip.Prefix IPPrefixes []netip.Prefix
IPAllocation IPAllocationStrategy PrivateKeyPath string
NoisePrivateKeyPath string NoisePrivateKeyPath string
BaseDomain string BaseDomain string
Log LogConfig Log LogConfig
DisableUpdateCheck bool DisableUpdateCheck bool
Database DatabaseConfig
DERP DERPConfig DERP DERPConfig
DBtype string
DBpath string
DBhost string
DBport int
DBname string
DBuser string
DBpass string
DBssl string
TLS TLSConfig TLS TLSConfig
ACMEURL string ACMEURL string
@@ -76,33 +80,6 @@ type Config struct {
CLI CLIConfig CLI CLIConfig
ACL ACLConfig ACL ACLConfig
Tuning Tuning
}
type SqliteConfig struct {
Path string
}
type PostgresConfig struct {
Host string
Port int
Name string
User string
Pass string
Ssl string
MaxOpenConnections int
MaxIdleConnections int
ConnMaxIdleTimeSecs int
}
type DatabaseConfig struct {
// Type sets the database type, either "sqlite3" or "postgres"
Type string
Debug bool
Sqlite SqliteConfig
Postgres PostgresConfig
} }
type TLSConfig struct { type TLSConfig struct {
@@ -135,19 +112,15 @@ type OIDCConfig struct {
}

type DERPConfig struct {
	ServerEnabled                      bool
-	AutomaticallyAddEmbeddedDerpRegion bool
	ServerRegionID                     int
	ServerRegionCode                   string
	ServerRegionName                   string
-	ServerPrivateKeyPath               string
	STUNAddr                           string
	URLs                               []url.URL
	Paths                              []string
	AutoUpdate                         bool
	UpdateFrequency                    time.Duration
-	IPv4                               string
-	IPv6                               string
}
type LogTailConfig struct { type LogTailConfig struct {
@@ -170,11 +143,6 @@ type LogConfig struct {
Level zerolog.Level Level zerolog.Level
} }
type Tuning struct {
BatchChangeDelay time.Duration
NodeMapSessionBufferedChanSize int
}
func LoadConfig(path string, isFile bool) error { func LoadConfig(path string, isFile bool) error {
if isFile { if isFile {
viper.SetConfigFile(path) viper.SetConfigFile(path)
@@ -195,7 +163,7 @@ func LoadConfig(path string, isFile bool) error {
viper.AutomaticEnv() viper.AutomaticEnv()
viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache") viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
viper.SetDefault("tls_letsencrypt_challenge_type", HTTP01ChallengeType) viper.SetDefault("tls_letsencrypt_challenge_type", http01ChallengeType)
viper.SetDefault("log.level", "info") viper.SetDefault("log.level", "info")
viper.SetDefault("log.format", TextLogFormat) viper.SetDefault("log.format", TextLogFormat)
@@ -205,9 +173,8 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("derp.server.enabled", false) viper.SetDefault("derp.server.enabled", false)
viper.SetDefault("derp.server.stun.enabled", true) viper.SetDefault("derp.server.stun.enabled", true)
viper.SetDefault("derp.server.automatically_add_embedded_derp_region", true)
viper.SetDefault("unix_socket", "/var/run/headscale/headscale.sock") viper.SetDefault("unix_socket", "/var/run/headscale.sock")
viper.SetDefault("unix_socket_permission", "0o770") viper.SetDefault("unix_socket_permission", "0o770")
viper.SetDefault("grpc_listen_addr", ":50443") viper.SetDefault("grpc_listen_addr", ":50443")
@@ -216,10 +183,7 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("cli.timeout", "5s") viper.SetDefault("cli.timeout", "5s")
viper.SetDefault("cli.insecure", false) viper.SetDefault("cli.insecure", false)
viper.SetDefault("database.postgres.ssl", false) viper.SetDefault("db_ssl", false)
viper.SetDefault("database.postgres.max_open_conns", 10)
viper.SetDefault("database.postgres.max_idle_conns", 10)
viper.SetDefault("database.postgres.conn_max_idle_time_secs", 3600)
viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"}) viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
viper.SetDefault("oidc.strip_email_domain", true) viper.SetDefault("oidc.strip_email_domain", true)
@@ -232,10 +196,7 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("ephemeral_node_inactivity_timeout", "120s") viper.SetDefault("ephemeral_node_inactivity_timeout", "120s")
viper.SetDefault("tuning.batch_change_delay", "800ms") viper.SetDefault("node_update_check_interval", "10s")
viper.SetDefault("tuning.node_mapsession_buffered_chan_size", 30)
viper.SetDefault("prefixes.allocation", string(IPAllocationStrategySequential))
if IsCLIConfigured() { if IsCLIConfigured() {
return nil return nil
@@ -259,15 +220,15 @@ func LoadConfig(path string, isFile bool) error {
} }
if (viper.GetString("tls_letsencrypt_hostname") != "") && if (viper.GetString("tls_letsencrypt_hostname") != "") &&
(viper.GetString("tls_letsencrypt_challenge_type") == TLSALPN01ChallengeType) && (viper.GetString("tls_letsencrypt_challenge_type") == tlsALPN01ChallengeType) &&
(!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) { (!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) {
// this is only a warning because there could be something sitting in front of headscale that redirects the traffic (e.g. an iptables rule) // this is only a warning because there could be something sitting in front of headscale that redirects the traffic (e.g. an iptables rule)
log.Warn(). log.Warn().
Msg("Warning: when using tls_letsencrypt_hostname with TLS-ALPN-01 as challenge type, headscale must be reachable on port 443, i.e. listen_addr should probably end in :443") Msg("Warning: when using tls_letsencrypt_hostname with TLS-ALPN-01 as challenge type, headscale must be reachable on port 443, i.e. listen_addr should probably end in :443")
} }
if (viper.GetString("tls_letsencrypt_challenge_type") != HTTP01ChallengeType) && if (viper.GetString("tls_letsencrypt_challenge_type") != http01ChallengeType) &&
(viper.GetString("tls_letsencrypt_challenge_type") != TLSALPN01ChallengeType) { (viper.GetString("tls_letsencrypt_challenge_type") != tlsALPN01ChallengeType) {
errorText += "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are HTTP-01 and TLS-ALPN-01\n" errorText += "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are HTTP-01 and TLS-ALPN-01\n"
} }
@@ -287,8 +248,17 @@ func LoadConfig(path string, isFile bool) error {
) )
} }
maxNodeUpdateCheckInterval, _ := time.ParseDuration("60s")
if viper.GetDuration("node_update_check_interval") > maxNodeUpdateCheckInterval {
errorText += fmt.Sprintf(
"Fatal config error: node_update_check_interval (%s) is set too high, must be less than %s",
viper.GetString("node_update_check_interval"),
maxNodeUpdateCheckInterval,
)
}
if errorText != "" { if errorText != "" {
// nolint //nolint
return errors.New(strings.TrimSuffix(errorText, "\n")) return errors.New(strings.TrimSuffix(errorText, "\n"))
} else { } else {
return nil return nil
@@ -300,15 +270,15 @@ func GetTLSConfig() TLSConfig {
LetsEncrypt: LetsEncryptConfig{ LetsEncrypt: LetsEncryptConfig{
Hostname: viper.GetString("tls_letsencrypt_hostname"), Hostname: viper.GetString("tls_letsencrypt_hostname"),
Listen: viper.GetString("tls_letsencrypt_listen"), Listen: viper.GetString("tls_letsencrypt_listen"),
CacheDir: util.AbsolutePathFromConfigPath( CacheDir: AbsolutePathFromConfigPath(
viper.GetString("tls_letsencrypt_cache_dir"), viper.GetString("tls_letsencrypt_cache_dir"),
), ),
ChallengeType: viper.GetString("tls_letsencrypt_challenge_type"), ChallengeType: viper.GetString("tls_letsencrypt_challenge_type"),
}, },
CertPath: util.AbsolutePathFromConfigPath( CertPath: AbsolutePathFromConfigPath(
viper.GetString("tls_cert_path"), viper.GetString("tls_cert_path"),
), ),
KeyPath: util.AbsolutePathFromConfigPath( KeyPath: AbsolutePathFromConfigPath(
viper.GetString("tls_key_path"), viper.GetString("tls_key_path"),
), ),
} }
@@ -320,14 +290,7 @@ func GetDERPConfig() DERPConfig {
serverRegionCode := viper.GetString("derp.server.region_code") serverRegionCode := viper.GetString("derp.server.region_code")
serverRegionName := viper.GetString("derp.server.region_name") serverRegionName := viper.GetString("derp.server.region_name")
stunAddr := viper.GetString("derp.server.stun_listen_addr") stunAddr := viper.GetString("derp.server.stun_listen_addr")
privateKeyPath := util.AbsolutePathFromConfigPath(
viper.GetString("derp.server.private_key_path"),
)
ipv4 := viper.GetString("derp.server.ipv4")
ipv6 := viper.GetString("derp.server.ipv6")
automaticallyAddEmbeddedDerpRegion := viper.GetBool(
"derp.server.automatically_add_embedded_derp_region",
)
if serverEnabled && stunAddr == "" { if serverEnabled && stunAddr == "" {
log.Fatal(). log.Fatal().
Msg("derp.server.stun_listen_addr must be set if derp.server.enabled is true") Msg("derp.server.stun_listen_addr must be set if derp.server.enabled is true")
@@ -350,28 +313,19 @@ func GetDERPConfig() DERPConfig {
paths := viper.GetStringSlice("derp.paths") paths := viper.GetStringSlice("derp.paths")
if serverEnabled && !automaticallyAddEmbeddedDerpRegion && len(paths) == 0 {
log.Fatal().
Msg("Disabling derp.server.automatically_add_embedded_derp_region requires to configure the derp server in derp.paths")
}
autoUpdate := viper.GetBool("derp.auto_update_enabled") autoUpdate := viper.GetBool("derp.auto_update_enabled")
updateFrequency := viper.GetDuration("derp.update_frequency") updateFrequency := viper.GetDuration("derp.update_frequency")
return DERPConfig{ return DERPConfig{
ServerEnabled: serverEnabled, ServerEnabled: serverEnabled,
ServerRegionID: serverRegionID, ServerRegionID: serverRegionID,
ServerRegionCode: serverRegionCode, ServerRegionCode: serverRegionCode,
ServerRegionName: serverRegionName, ServerRegionName: serverRegionName,
ServerPrivateKeyPath: privateKeyPath, STUNAddr: stunAddr,
STUNAddr: stunAddr, URLs: urls,
URLs: urls, Paths: paths,
Paths: paths, AutoUpdate: autoUpdate,
AutoUpdate: autoUpdate, UpdateFrequency: updateFrequency,
UpdateFrequency: updateFrequency,
IPv4: ipv4,
IPv6: ipv6,
AutomaticallyAddEmbeddedDerpRegion: automaticallyAddEmbeddedDerpRegion,
} }
} }
@@ -419,45 +373,6 @@ func GetLogConfig() LogConfig {
} }
} }
func GetDatabaseConfig() DatabaseConfig {
debug := viper.GetBool("database.debug")
type_ := viper.GetString("database.type")
switch type_ {
case DatabaseSqlite, DatabasePostgres:
break
case "sqlite":
type_ = "sqlite3"
default:
log.Fatal().
Msgf("invalid database type %q, must be sqlite, sqlite3 or postgres", type_)
}
return DatabaseConfig{
Type: type_,
Debug: debug,
Sqlite: SqliteConfig{
Path: util.AbsolutePathFromConfigPath(
viper.GetString("database.sqlite.path"),
),
},
Postgres: PostgresConfig{
Host: viper.GetString("database.postgres.host"),
Port: viper.GetInt("database.postgres.port"),
Name: viper.GetString("database.postgres.name"),
User: viper.GetString("database.postgres.user"),
Pass: viper.GetString("database.postgres.pass"),
Ssl: viper.GetString("database.postgres.ssl"),
MaxOpenConnections: viper.GetInt("database.postgres.max_open_conns"),
MaxIdleConnections: viper.GetInt("database.postgres.max_idle_conns"),
ConnMaxIdleTimeSecs: viper.GetInt(
"database.postgres.conn_max_idle_time_secs",
),
},
}
}
func GetDNSConfig() (*tailcfg.DNSConfig, string) { func GetDNSConfig() (*tailcfg.DNSConfig, string) {
if viper.IsSet("dns_config") { if viper.IsSet("dns_config") {
dnsConfig := &tailcfg.DNSConfig{} dnsConfig := &tailcfg.DNSConfig{}
@@ -569,65 +484,12 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
baseDomain = "headscale.net" // does not really matter when MagicDNS is not enabled baseDomain = "headscale.net" // does not really matter when MagicDNS is not enabled
} }
log.Trace().Interface("dns_config", dnsConfig).Msg("DNS configuration loaded")
return dnsConfig, baseDomain return dnsConfig, baseDomain
} }
return nil, "" return nil, ""
} }
func PrefixV4() (*netip.Prefix, error) {
prefixV4Str := viper.GetString("prefixes.v4")
if prefixV4Str == "" {
return nil, nil
}
prefixV4, err := netip.ParsePrefix(prefixV4Str)
if err != nil {
return nil, fmt.Errorf("parsing IPv4 prefix from config: %w", err)
}
builder := netipx.IPSetBuilder{}
builder.AddPrefix(tsaddr.CGNATRange())
builder.AddPrefix(tsaddr.TailscaleULARange())
ipSet, _ := builder.IPSet()
if !ipSet.ContainsPrefix(prefixV4) {
log.Warn().
Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
prefixV4Str, tsaddr.CGNATRange())
}
return &prefixV4, nil
}
func PrefixV6() (*netip.Prefix, error) {
prefixV6Str := viper.GetString("prefixes.v6")
if prefixV6Str == "" {
return nil, nil
}
prefixV6, err := netip.ParsePrefix(prefixV6Str)
if err != nil {
return nil, fmt.Errorf("parsing IPv6 prefix from config: %w", err)
}
builder := netipx.IPSetBuilder{}
builder.AddPrefix(tsaddr.CGNATRange())
builder.AddPrefix(tsaddr.TailscaleULARange())
ipSet, _ := builder.IPSet()
if !ipSet.ContainsPrefix(prefixV6) {
log.Warn().
Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
prefixV6Str, tsaddr.TailscaleULARange())
}
return &prefixV6, nil
}
func GetHeadscaleConfig() (*Config, error) { func GetHeadscaleConfig() (*Config, error) {
if IsCLIConfigured() { if IsCLIConfigured() {
return &Config{ return &Config{
@@ -640,32 +502,43 @@ func GetHeadscaleConfig() (*Config, error) {
}, nil }, nil
} }
prefix4, err := PrefixV4()
if err != nil {
return nil, err
}
prefix6, err := PrefixV6()
if err != nil {
return nil, err
}
allocStr := viper.GetString("prefixes.allocation")
var alloc IPAllocationStrategy
switch allocStr {
case string(IPAllocationStrategySequential):
alloc = IPAllocationStrategySequential
case string(IPAllocationStrategyRandom):
alloc = IPAllocationStrategyRandom
default:
log.Fatal().Msgf("config error, prefixes.allocation is set to %s, which is not a valid strategy, allowed options: %s, %s", allocStr, IPAllocationStrategySequential, IPAllocationStrategyRandom)
}
dnsConfig, baseDomain := GetDNSConfig() dnsConfig, baseDomain := GetDNSConfig()
derpConfig := GetDERPConfig() derpConfig := GetDERPConfig()
logConfig := GetLogTailConfig() logConfig := GetLogTailConfig()
randomizeClientPort := viper.GetBool("randomize_client_port") randomizeClientPort := viper.GetBool("randomize_client_port")
configuredPrefixes := viper.GetStringSlice("ip_prefixes")
parsedPrefixes := make([]netip.Prefix, 0, len(configuredPrefixes)+1)
for i, prefixInConfig := range configuredPrefixes {
prefix, err := netip.ParsePrefix(prefixInConfig)
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
}
parsedPrefixes = append(parsedPrefixes, prefix)
}
prefixes := make([]netip.Prefix, 0, len(parsedPrefixes))
{
// dedup
normalizedPrefixes := make(map[string]int, len(parsedPrefixes))
for i, p := range parsedPrefixes {
normalized, _ := netipx.RangeOfPrefix(p).Prefix()
normalizedPrefixes[normalized.String()] = i
}
// convert back to list
for _, i := range normalizedPrefixes {
prefixes = append(prefixes, parsedPrefixes[i])
}
}
if len(prefixes) < 1 {
prefixes = append(prefixes, netip.MustParsePrefix("100.64.0.0/10"))
log.Warn().
Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
}
oidcClientSecret := viper.GetString("oidc.client_secret") oidcClientSecret := viper.GetString("oidc.client_secret")
oidcClientSecretPath := viper.GetString("oidc.client_secret_path") oidcClientSecretPath := viper.GetString("oidc.client_secret_path")
if oidcClientSecretPath != "" && oidcClientSecret != "" { if oidcClientSecretPath != "" && oidcClientSecret != "" {
@@ -676,7 +549,7 @@ func GetHeadscaleConfig() (*Config, error) {
if err != nil { if err != nil {
return nil, err return nil, err
} }
oidcClientSecret = strings.TrimSpace(string(secretBytes)) oidcClientSecret = string(secretBytes)
} }
return &Config{ return &Config{
@@ -687,11 +560,11 @@ func GetHeadscaleConfig() (*Config, error) {
GRPCAllowInsecure: viper.GetBool("grpc_allow_insecure"), GRPCAllowInsecure: viper.GetBool("grpc_allow_insecure"),
DisableUpdateCheck: viper.GetBool("disable_check_updates"), DisableUpdateCheck: viper.GetBool("disable_check_updates"),
PrefixV4: prefix4, IPPrefixes: prefixes,
PrefixV6: prefix6, PrivateKeyPath: AbsolutePathFromConfigPath(
IPAllocation: IPAllocationStrategy(alloc), viper.GetString("private_key_path"),
),
NoisePrivateKeyPath: util.AbsolutePathFromConfigPath( NoisePrivateKeyPath: AbsolutePathFromConfigPath(
viper.GetString("noise.private_key_path"), viper.GetString("noise.private_key_path"),
), ),
BaseDomain: baseDomain, BaseDomain: baseDomain,
@@ -702,7 +575,18 @@ func GetHeadscaleConfig() (*Config, error) {
"ephemeral_node_inactivity_timeout", "ephemeral_node_inactivity_timeout",
), ),
Database: GetDatabaseConfig(), NodeUpdateCheckInterval: viper.GetDuration(
"node_update_check_interval",
),
DBtype: viper.GetString("db_type"),
DBpath: AbsolutePathFromConfigPath(viper.GetString("db_path")),
DBhost: viper.GetString("db_host"),
DBport: viper.GetInt("db_port"),
DBname: viper.GetString("db_name"),
DBuser: viper.GetString("db_user"),
DBpass: viper.GetString("db_pass"),
DBssl: viper.GetString("db_ssl"),
TLS: GetTLSConfig(), TLS: GetTLSConfig(),
@@ -712,7 +596,7 @@ func GetHeadscaleConfig() (*Config, error) {
ACMEURL: viper.GetString("acme_url"), ACMEURL: viper.GetString("acme_url"),
UnixSocket: viper.GetString("unix_socket"), UnixSocket: viper.GetString("unix_socket"),
UnixSocketPermission: util.GetFileMode("unix_socket_permission"), UnixSocketPermission: GetFileMode("unix_socket_permission"),
OIDC: OIDCConfig{ OIDC: OIDCConfig{
OnlyStartIfOIDCIsAvailable: viper.GetBool( OnlyStartIfOIDCIsAvailable: viper.GetBool(
@@ -758,12 +642,6 @@ func GetHeadscaleConfig() (*Config, error) {
}, },
Log: GetLogConfig(), Log: GetLogConfig(),
// TODO(kradalby): Document these settings when more stable
Tuning: Tuning{
BatchChangeDelay: viper.GetDuration("tuning.batch_change_delay"),
NodeMapSessionBufferedChanSize: viper.GetInt("tuning.node_mapsession_buffered_chan_size"),
},
}, nil }, nil
} }
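To make the prefix dedup block in GetHeadscaleConfig above concrete: two configured prefixes that normalize to the same masked form collapse into a single entry. A small runnable illustration, using go4.org/netipx as the code above does:

package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
)

func main() {
	a := netip.MustParsePrefix("100.64.0.0/10")
	b := netip.MustParsePrefix("100.64.1.0/10") // host bits set; same /10 once normalized

	na, _ := netipx.RangeOfPrefix(a).Prefix()
	nb, _ := netipx.RangeOfPrefix(b).Prefix()

	fmt.Println(na, nb, na == nb) // 100.64.0.0/10 100.64.0.0/10 true — only one survives the map
}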

db.go (new file, 404 lines added)

@@ -0,0 +1,404 @@
package headscale
import (
"context"
"database/sql/driver"
"encoding/json"
"errors"
"fmt"
"net/netip"
"time"
"github.com/glebarez/sqlite"
"github.com/rs/zerolog/log"
"gorm.io/driver/postgres"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"tailscale.com/tailcfg"
)
const (
dbVersion = "1"
errValueNotFound = Error("not found")
ErrCannotParsePrefix = Error("cannot parse prefix")
)
// KV is a key-value store in a psql table. For future use...
type KV struct {
Key string
Value string
}
func (h *Headscale) initDB() error {
db, err := h.openDB()
if err != nil {
return err
}
h.db = db
if h.dbType == Postgres {
db.Exec(`create extension if not exists "uuid-ossp";`)
}
_ = db.Migrator().RenameTable("namespaces", "users")
err = db.AutoMigrate(&User{})
if err != nil {
return err
}
_ = db.Migrator().RenameColumn(&Machine{}, "namespace_id", "user_id")
_ = db.Migrator().RenameColumn(&PreAuthKey{}, "namespace_id", "user_id")
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
// GivenName is used as the primary source of DNS names, make sure
// the field is populated and normalized if it was not when the
// machine was registered.
_ = db.Migrator().RenameColumn(&Machine{}, "nickname", "given_name")
// If the Machine table has a column for registered,
// find all occurrences of "false" and drop them. Then
// remove the column.
if db.Migrator().HasColumn(&Machine{}, "registered") {
log.Info().
Msg(`Database has legacy "registered" column in machine, removing...`)
machines := Machines{}
if err := h.db.Not("registered").Find(&machines).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for _, machine := range machines {
log.Info().
Str("machine", machine.Hostname).
Str("machine_key", machine.MachineKey).
Msg("Deleting unregistered machine")
if err := h.db.Delete(&Machine{}, machine.ID).Error; err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Str("machine_key", machine.MachineKey).
Msg("Error deleting unregistered machine")
}
}
err := db.Migrator().DropColumn(&Machine{}, "registered")
if err != nil {
log.Error().Err(err).Msg("Error dropping registered column")
}
}
err = db.AutoMigrate(&Route{})
if err != nil {
return err
}
if db.Migrator().HasColumn(&Machine{}, "enabled_routes") {
log.Info().Msgf("Database has legacy enabled_routes column in machine, migrating...")
type MachineAux struct {
ID uint64
EnabledRoutes IPPrefixes
}
machinesAux := []MachineAux{}
err := db.Table("machines").Select("id, enabled_routes").Scan(&machinesAux).Error
if err != nil {
log.Fatal().Err(err).Msg("Error accessing db")
}
for _, machine := range machinesAux {
for _, prefix := range machine.EnabledRoutes {
if err != nil {
log.Error().
Err(err).
Str("enabled_route", prefix.String()).
Msg("Error parsing enabled_route")
continue
}
err = db.Preload("Machine").
Where("machine_id = ? AND prefix = ?", machine.ID, IPPrefix(prefix)).
First(&Route{}).
Error
if err == nil {
log.Info().
Str("enabled_route", prefix.String()).
Msg("Route already migrated to new table, skipping")
continue
}
route := Route{
MachineID: machine.ID,
Advertised: true,
Enabled: true,
Prefix: IPPrefix(prefix),
}
if err := h.db.Create(&route).Error; err != nil {
log.Error().Err(err).Msg("Error creating route")
} else {
log.Info().
Uint64("machine_id", route.MachineID).
Str("prefix", prefix.String()).
Msg("Route migrated")
}
}
}
err = db.Migrator().DropColumn(&Machine{}, "enabled_routes")
if err != nil {
log.Error().Err(err).Msg("Error dropping enabled_routes column")
}
}
err = db.AutoMigrate(&Machine{})
if err != nil {
return err
}
if db.Migrator().HasColumn(&Machine{}, "given_name") {
machines := Machines{}
if err := h.db.Find(&machines).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for item, machine := range machines {
if machine.GivenName == "" {
normalizedHostname, err := NormalizeToFQDNRules(
machine.Hostname,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
log.Error().
Caller().
Str("hostname", machine.Hostname).
Err(err).
Msg("Failed to normalize machine hostname in DB migration")
}
err = h.RenameMachine(&machines[item], normalizedHostname)
if err != nil {
log.Error().
Caller().
Str("hostname", machine.Hostname).
Err(err).
Msg("Failed to save normalized machine name in DB migration")
}
}
}
}
err = db.AutoMigrate(&KV{})
if err != nil {
return err
}
err = db.AutoMigrate(&PreAuthKey{})
if err != nil {
return err
}
err = db.AutoMigrate(&PreAuthKeyACLTag{})
if err != nil {
return err
}
_ = db.Migrator().DropTable("shared_machines")
err = db.AutoMigrate(&APIKey{})
if err != nil {
return err
}
err = h.setValue("db_version", dbVersion)
return err
}
func (h *Headscale) openDB() (*gorm.DB, error) {
var db *gorm.DB
var err error
var log logger.Interface
if h.dbDebug {
log = logger.Default
} else {
log = logger.Default.LogMode(logger.Silent)
}
switch h.dbType {
case Sqlite:
db, err = gorm.Open(
sqlite.Open(h.dbString+"?_synchronous=1&_journal_mode=WAL"),
&gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
},
)
db.Exec("PRAGMA foreign_keys=ON")
// The pure Go SQLite library does not handle locking in
// the same way as the C-based one, and we can't use the gorm
// connection pool as of 2022/02/23.
sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(1)
sqlDB.SetMaxOpenConns(1)
sqlDB.SetConnMaxIdleTime(time.Hour)
case Postgres:
db, err = gorm.Open(postgres.Open(h.dbString), &gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
})
}
if err != nil {
return nil, err
}
return db, nil
}
// getValue returns the value for the given key in KV.
func (h *Headscale) getValue(key string) (string, error) {
var row KV
if result := h.db.First(&row, "key = ?", key); errors.Is(
result.Error,
gorm.ErrRecordNotFound,
) {
return "", errValueNotFound
}
return row.Value, nil
}
// setValue sets value for the given key in KV.
func (h *Headscale) setValue(key string, value string) error {
keyValue := KV{
Key: key,
Value: value,
}
if _, err := h.getValue(key); err == nil {
h.db.Model(&keyValue).Where("key = ?", key).Update("value", value)
return nil
}
if err := h.db.Create(keyValue).Error; err != nil {
return fmt.Errorf("failed to create key value pair in the database: %w", err)
}
return nil
}
func (h *Headscale) pingDB(ctx context.Context) error {
ctx, cancel := context.WithTimeout(ctx, time.Second)
defer cancel()
db, err := h.db.DB()
if err != nil {
return err
}
return db.PingContext(ctx)
}
// This is a "wrapper" type around tailscales
// Hostinfo to allow us to add database "serialization"
// methods. This allows us to use a typed values throughout
// the code and not have to marshal/unmarshal and error
// check all over the code.
type HostInfo tailcfg.Hostinfo
func (hi *HostInfo) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, hi)
case string:
return json.Unmarshal([]byte(value), hi)
default:
return fmt.Errorf("%w: unexpected data type %T", ErrMachineAddressesInvalid, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (hi HostInfo) Value() (driver.Value, error) {
bytes, err := json.Marshal(hi)
return string(bytes), err
}
type IPPrefix netip.Prefix
func (i *IPPrefix) Scan(destination interface{}) error {
switch value := destination.(type) {
case string:
prefix, err := netip.ParsePrefix(value)
if err != nil {
return err
}
*i = IPPrefix(prefix)
return nil
default:
return fmt.Errorf("%w: unexpected data type %T", ErrCannotParsePrefix, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (i IPPrefix) Value() (driver.Value, error) {
prefixStr := netip.Prefix(i).String()
return prefixStr, nil
}
type IPPrefixes []netip.Prefix
func (i *IPPrefixes) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", ErrMachineAddressesInvalid, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (i IPPrefixes) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
type StringList []string
func (i *StringList) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", ErrMachineAddressesInvalid, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (i StringList) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
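The Scan/Value pairs above let gorm persist these composite types as JSON text. A quick standalone illustration of what the IPPrefixes Valuer ends up handing to the database driver (the type is redeclared here so the snippet runs on its own):

package main

import (
	"encoding/json"
	"fmt"
	"net/netip"
)

// Same shape as the IPPrefixes type in db.go above.
type IPPrefixes []netip.Prefix

func main() {
	prefs := IPPrefixes{
		netip.MustParsePrefix("100.64.0.0/10"),
		netip.MustParsePrefix("fd7a:115c:a1e0::/48"),
	}
	b, _ := json.Marshal(prefs) // what Value() stores in a text column
	fmt.Println(string(b))      // ["100.64.0.0/10","fd7a:115c:a1e0::/48"]
}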


@@ -1,4 +1,4 @@
package derp package headscale
import ( import (
"context" "context"
@@ -7,8 +7,8 @@ import (
"net/http" "net/http"
"net/url" "net/url"
"os" "os"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v3"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
@@ -31,7 +31,7 @@ func loadDERPMapFromPath(path string) (*tailcfg.DERPMap, error) {
} }
func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) { func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
ctx, cancel := context.WithTimeout(context.Background(), types.HTTPTimeout) ctx, cancel := context.WithTimeout(context.Background(), HTTPReadTimeout)
defer cancel() defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, addr.String(), nil) req, err := http.NewRequestWithContext(ctx, http.MethodGet, addr.String(), nil)
@@ -40,7 +40,7 @@ func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
} }
client := http.Client{ client := http.Client{
Timeout: types.HTTPTimeout, Timeout: HTTPReadTimeout,
} }
resp, err := client.Do(req) resp, err := client.Do(req)
@@ -80,7 +80,7 @@ func mergeDERPMaps(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
return &result return &result
} }
func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap { func GetDERPMap(cfg DERPConfig) *tailcfg.DERPMap {
derpMaps := make([]*tailcfg.DERPMap, 0) derpMaps := make([]*tailcfg.DERPMap, 0)
for _, path := range cfg.Paths { for _, path := range cfg.Paths {
@@ -132,3 +132,26 @@ func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap {
return derpMap return derpMap
} }
func (h *Headscale) scheduledDERPMapUpdateWorker(cancelChan <-chan struct{}) {
log.Info().
Dur("frequency", h.cfg.DERP.UpdateFrequency).
Msg("Setting up a DERPMap update worker")
ticker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
for {
select {
case <-cancelChan:
return
case <-ticker.C:
log.Info().Msg("Fetching DERPMap updates")
h.DERPMap = GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled {
h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
}
h.setLastStateChangeToNow()
}
}
}
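The worker is driven purely by the ticker and the cancel channel. A sketch of how it might be started from a `*Headscale` method such as `Serve()` when DERP auto-updates are enabled (hypothetical wiring, not the exact code used elsewhere in the repository):

```go
// Hypothetical wiring: the caller owns the cancel channel and closes it on
// shutdown, which makes the worker's select return.
derpMapCancelChannel := make(chan struct{})
defer close(derpMapCancelChannel)
go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel)
```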


@@ -1,4 +1,4 @@
package server package headscale
import ( import (
"context" "context"
@@ -12,8 +12,6 @@ import (
"strings" "strings"
"time" "time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"tailscale.com/derp" "tailscale.com/derp"
"tailscale.com/net/stun" "tailscale.com/net/stun"
@@ -28,30 +26,23 @@ import (
const fastStartHeader = "Derp-Fast-Start" const fastStartHeader = "Derp-Fast-Start"
type DERPServer struct { type DERPServer struct {
serverURL string
key key.NodePrivate
cfg *types.DERPConfig
tailscaleDERP *derp.Server tailscaleDERP *derp.Server
region tailcfg.DERPRegion
} }
func NewDERPServer( func (h *Headscale) NewDERPServer() (*DERPServer, error) {
serverURL string,
derpKey key.NodePrivate,
cfg *types.DERPConfig,
) (*DERPServer, error) {
log.Trace().Caller().Msg("Creating new embedded DERP server") log.Trace().Caller().Msg("Creating new embedded DERP server")
server := derp.NewServer(derpKey, util.TSLogfWrapper()) // nolint // zerolinter complains server := derp.NewServer(key.NodePrivate(*h.privateKey), log.Info().Msgf)
region, err := h.generateRegionLocalDERP()
if err != nil {
return nil, err
}
return &DERPServer{ return &DERPServer{server, region}, nil
serverURL: serverURL,
key: derpKey,
cfg: cfg,
tailscaleDERP: server,
}, nil
} }
func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) { func (h *Headscale) generateRegionLocalDERP() (tailcfg.DERPRegion, error) {
serverURL, err := url.Parse(d.serverURL) serverURL, err := url.Parse(h.cfg.ServerURL)
if err != nil { if err != nil {
return tailcfg.DERPRegion{}, err return tailcfg.DERPRegion{}, err
} }
@@ -74,23 +65,21 @@ func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
} }
localDERPregion := tailcfg.DERPRegion{ localDERPregion := tailcfg.DERPRegion{
RegionID: d.cfg.ServerRegionID, RegionID: h.cfg.DERP.ServerRegionID,
RegionCode: d.cfg.ServerRegionCode, RegionCode: h.cfg.DERP.ServerRegionCode,
RegionName: d.cfg.ServerRegionName, RegionName: h.cfg.DERP.ServerRegionName,
Avoid: false, Avoid: false,
Nodes: []*tailcfg.DERPNode{ Nodes: []*tailcfg.DERPNode{
{ {
Name: fmt.Sprintf("%d", d.cfg.ServerRegionID), Name: fmt.Sprintf("%d", h.cfg.DERP.ServerRegionID),
RegionID: d.cfg.ServerRegionID, RegionID: h.cfg.DERP.ServerRegionID,
HostName: host, HostName: host,
DERPPort: port, DERPPort: port,
IPv4: d.cfg.IPv4,
IPv6: d.cfg.IPv6,
}, },
}, },
} }
_, portSTUNStr, err := net.SplitHostPort(d.cfg.STUNAddr) _, portSTUNStr, err := net.SplitHostPort(h.cfg.DERP.STUNAddr)
if err != nil { if err != nil {
return tailcfg.DERPRegion{}, err return tailcfg.DERPRegion{}, err
} }
@@ -101,12 +90,11 @@ func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
localDERPregion.Nodes[0].STUNPort = portSTUN localDERPregion.Nodes[0].STUNPort = portSTUN
log.Info().Caller().Msgf("DERP region: %+v", localDERPregion) log.Info().Caller().Msgf("DERP region: %+v", localDERPregion)
log.Info().Caller().Msgf("DERP Nodes[0]: %+v", localDERPregion.Nodes[0])
return localDERPregion, nil return localDERPregion, nil
} }
func (d *DERPServer) DERPHandler( func (h *Headscale) DERPHandler(
writer http.ResponseWriter, writer http.ResponseWriter,
req *http.Request, req *http.Request,
) { ) {
@@ -168,7 +156,7 @@ func (d *DERPServer) DERPHandler(
log.Trace().Caller().Msgf("Hijacked connection from %v", req.RemoteAddr) log.Trace().Caller().Msgf("Hijacked connection from %v", req.RemoteAddr)
if !fastStart { if !fastStart {
pubKey := d.key.Public() pubKey := h.privateKey.Public()
pubKeyStr, _ := pubKey.MarshalText() //nolint pubKeyStr, _ := pubKey.MarshalText() //nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+ fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+ "Upgrade: DERP\r\n"+
@@ -179,12 +167,12 @@ func (d *DERPServer) DERPHandler(
string(pubKeyStr)) string(pubKeyStr))
} }
d.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String()) h.DERPServer.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())
} }
// DERPProbeHandler is the endpoint that js/wasm clients hit to measure // DERPProbeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries. // DERP latency, since they can't do UDP STUN queries.
func DERPProbeHandler( func (h *Headscale) DERPProbeHandler(
writer http.ResponseWriter, writer http.ResponseWriter,
req *http.Request, req *http.Request,
) { ) {
@@ -211,48 +199,43 @@ func DERPProbeHandler(
// The initial implementation is here https://github.com/tailscale/tailscale/pull/1406 // The initial implementation is here https://github.com/tailscale/tailscale/pull/1406
// They have a cache, but not clear if that is really necessary at Headscale, uh, scale. // They have a cache, but not clear if that is really necessary at Headscale, uh, scale.
// An example implementation is found here https://derp.tailscale.com/bootstrap-dns // An example implementation is found here https://derp.tailscale.com/bootstrap-dns
// Coordination server is included automatically, since local DERP is using the same DNS Name in d.serverURL. func (h *Headscale) DERPBootstrapDNSHandler(
func DERPBootstrapDNSHandler( writer http.ResponseWriter,
derpMap *tailcfg.DERPMap, req *http.Request,
) func(http.ResponseWriter, *http.Request) { ) {
return func( dnsEntries := make(map[string][]net.IP)
writer http.ResponseWriter,
req *http.Request,
) {
dnsEntries := make(map[string][]net.IP)
resolvCtx, cancel := context.WithTimeout(req.Context(), time.Minute) resolvCtx, cancel := context.WithTimeout(req.Context(), time.Minute)
defer cancel() defer cancel()
var resolver net.Resolver var resolver net.Resolver
for _, region := range derpMap.Regions { for _, region := range h.DERPMap.Regions {
for _, node := range region.Nodes { // we don't care if we override some nodes for _, node := range region.Nodes { // we don't care if we override some nodes
addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName) addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName)
if err != nil { if err != nil {
log.Trace(). log.Trace().
Caller(). Caller().
Err(err). Err(err).
Msgf("bootstrap DNS lookup failed %q", node.HostName) Msgf("bootstrap DNS lookup failed %q", node.HostName)
continue continue
}
dnsEntries[node.HostName] = addrs
} }
dnsEntries[node.HostName] = addrs
} }
writer.Header().Set("Content-Type", "application/json") }
writer.WriteHeader(http.StatusOK) writer.Header().Set("Content-Type", "application/json")
err := json.NewEncoder(writer).Encode(dnsEntries) writer.WriteHeader(http.StatusOK)
if err != nil { err := json.NewEncoder(writer).Encode(dnsEntries)
log.Error(). if err != nil {
Caller(). log.Error().
Err(err). Caller().
Msg("Failed to write response") Err(err).
} Msg("Failed to write response")
} }
} }
// ServeSTUN starts a STUN server on the configured addr. // ServeSTUN starts a STUN server on the configured addr.
func (d *DERPServer) ServeSTUN() { func (h *Headscale) ServeSTUN() {
packetConn, err := net.ListenPacket("udp", d.cfg.STUNAddr) packetConn, err := net.ListenPacket("udp", h.cfg.DERP.STUNAddr)
if err != nil { if err != nil {
log.Fatal().Msgf("failed to open STUN listener: %v", err) log.Fatal().Msgf("failed to open STUN listener: %v", err)
} }


@@ -1,87 +1,30 @@
package util package headscale
import ( import (
"errors"
"fmt" "fmt"
"net/netip" "net/netip"
"regexp" "net/url"
"strings" "strings"
"github.com/spf13/viper" mapset "github.com/deckarep/golang-set/v2"
"go4.org/netipx" "go4.org/netipx"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
"tailscale.com/util/dnsname" "tailscale.com/util/dnsname"
) )
const ( const (
ByteSize = 8 ByteSize = 8
ipv4AddressLength = 32
ipv6AddressLength = 128
// value related to RFC 1123 and 952.
LabelHostnameLength = 63
) )
var invalidCharsInUserRegex = regexp.MustCompile("[^a-z0-9-.]+") const (
ipv4AddressLength = 32
ipv6AddressLength = 128
)
var ErrInvalidUserName = errors.New("invalid user name") const (
nextDNSDoHPrefix = "https://dns.nextdns.io"
func NormalizeToFQDNRulesConfigFromViper(name string) (string, error) { )
strip := viper.GetBool("oidc.strip_email_domain")
return NormalizeToFQDNRules(name, strip)
}
// NormalizeToFQDNRules will replace forbidden chars in user
// it can also return an error if the user doesn't respect RFC 952 and 1123.
func NormalizeToFQDNRules(name string, stripEmailDomain bool) (string, error) {
name = strings.ToLower(name)
name = strings.ReplaceAll(name, "'", "")
atIdx := strings.Index(name, "@")
if stripEmailDomain && atIdx > 0 {
name = name[:atIdx]
} else {
name = strings.ReplaceAll(name, "@", ".")
}
name = invalidCharsInUserRegex.ReplaceAllString(name, "-")
for _, elt := range strings.Split(name, ".") {
if len(elt) > LabelHostnameLength {
return "", fmt.Errorf(
"label %v is more than 63 chars: %w",
elt,
ErrInvalidUserName,
)
}
}
return name, nil
}
func CheckForFQDNRules(name string) error {
if len(name) > LabelHostnameLength {
return fmt.Errorf(
"DNS segment must not be over 63 chars. %v doesn't comply with this rule: %w",
name,
ErrInvalidUserName,
)
}
if strings.ToLower(name) != name {
return fmt.Errorf(
"DNS segment should be lowercase. %v doesn't comply with this rule: %w",
name,
ErrInvalidUserName,
)
}
if invalidCharsInUserRegex.MatchString(name) {
return fmt.Errorf(
"DNS segment should only be composed of lowercase ASCII letters numbers, hyphen and dots. %v doesn't comply with theses rules: %w",
name,
ErrInvalidUserName,
)
}
return nil
}
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`. // generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
// This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS // This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
@@ -103,7 +46,33 @@ func CheckForFQDNRules(name string) error {
// From the netmask we can find out the wildcard bits (the bits that are not set in the netmask). // From the netmask we can find out the wildcard bits (the bits that are not set in the netmask).
// This allows us to then calculate the subnets included in the subsequent class block and generate the entries. // This allows us to then calculate the subnets included in the subsequent class block and generate the entries.
func GenerateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN { func generateMagicDNSRootDomains(ipPrefixes []netip.Prefix) []dnsname.FQDN {
fqdns := make([]dnsname.FQDN, 0, len(ipPrefixes))
for _, ipPrefix := range ipPrefixes {
var generateDNSRoot func(netip.Prefix) []dnsname.FQDN
switch ipPrefix.Addr().BitLen() {
case ipv4AddressLength:
generateDNSRoot = generateIPv4DNSRootDomain
case ipv6AddressLength:
generateDNSRoot = generateIPv6DNSRootDomain
default:
panic(
fmt.Sprintf(
"unsupported IP version with address length %d",
ipPrefix.Addr().BitLen(),
),
)
}
fqdns = append(fqdns, generateDNSRoot(ipPrefix)...)
}
return fqdns
}
func generateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
// Conversion to the std lib net.IPnet, a bit easier to operate // Conversion to the std lib net.IPnet, a bit easier to operate
netRange := netipx.PrefixIPNet(ipPrefix) netRange := netipx.PrefixIPNet(ipPrefix)
maskBits, _ := netRange.Mask.Size() maskBits, _ := netRange.Mask.Size()
@@ -139,27 +108,7 @@ func GenerateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
return fqdns return fqdns
} }
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`. func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
// This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
// server (listening in 100.100.100.100 udp/53) should be used for.
//
// Tailscale.com includes in the list:
// - the `BaseDomain` of the user
// - the reverse DNS entry for IPv6 (0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa., see below more on IPv6)
// - the reverse DNS entries for the IPv4 subnets covered by the user's `IPPrefix`.
// In the public SaaS this is [64-127].100.in-addr.arpa.
//
// The main purpose of this function is then generating the list of IPv4 entries. For the 100.64.0.0/10, this
// is clear, and could be hardcoded. But we are allowing any range as `IPPrefix`, so we need to find out the
// subnets when we have 172.16.0.0/16 (i.e., [0-255].16.172.in-addr.arpa.), or any other subnet.
//
// How IN-ADDR.ARPA domains work is defined in RFC1035 (section 3.5). Tailscale.com seems to adhere to this,
// and do not make use of RFC2317 ("Classless IN-ADDR.ARPA delegation") - hence generating the entries for the next
// class block only.
// From the netmask we can find out the wildcard bits (the bits that are not set in the netmask).
// This allows us to then calculate the subnets included in the subsequent class block and generate the entries.
func GenerateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
const nibbleLen = 4 const nibbleLen = 4
maskBits, _ := netipx.PrefixIPNet(ipPrefix).Mask.Size() maskBits, _ := netipx.PrefixIPNet(ipPrefix).Mask.Size()
@@ -208,3 +157,63 @@ func GenerateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
return fqdns return fqdns
} }
// If any nextdns DoH resolvers are present in the list of resolvers it will
// take metadata from the machine metadata and instruct tailscale to add it
// to the requests. This makes it possible to identify from which device the
// requests come in the NextDNS dashboard.
//
// This will produce a resolver like:
// `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
func addNextDNSMetadata(resolvers []*dnstype.Resolver, machine Machine) {
for _, resolver := range resolvers {
if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
attrs := url.Values{
"device_name": []string{machine.Hostname},
"device_model": []string{machine.HostInfo.OS},
}
if len(machine.IPAddresses) > 0 {
attrs.Add("device_ip", machine.IPAddresses[0].String())
}
resolver.Addr = fmt.Sprintf("%s?%s", resolver.Addr, attrs.Encode())
}
}
}
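A minimal sketch of the effect (hypothetical NextDNS profile ID and device values; it assumes the headscale package context, where `dnstype`, `tailcfg` and `netip` are already imported):

```go
// exampleNextDNSMetadata is an illustration only; the profile ID and device
// details are made up.
func exampleNextDNSMetadata() string {
	resolvers := []*dnstype.Resolver{
		{Addr: "https://dns.nextdns.io/abc123"},
	}
	machine := Machine{
		Hostname:    "node-name",
		HostInfo:    HostInfo(tailcfg.Hostinfo{OS: "linux"}),
		IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
	}

	addNextDNSMetadata(resolvers, machine)

	// The resolver address now carries the device metadata, e.g.:
	// https://dns.nextdns.io/abc123?device_ip=100.64.0.1&device_model=linux&device_name=node-name
	return resolvers[0].Addr
}
```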
func getMapResponseDNSConfig(
dnsConfigOrig *tailcfg.DNSConfig,
baseDomain string,
machine Machine,
peers Machines,
) *tailcfg.DNSConfig {
var dnsConfig *tailcfg.DNSConfig = dnsConfigOrig.Clone()
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
// Only inject the Search Domain of the current user - shared nodes should use their full FQDN
dnsConfig.Domains = append(
dnsConfig.Domains,
fmt.Sprintf(
"%s.%s",
machine.User.Name,
baseDomain,
),
)
userSet := mapset.NewSet[User]()
userSet.Add(machine.User)
for _, p := range peers {
userSet.Add(p.User)
}
for _, user := range userSet.ToSlice() {
dnsRoute := fmt.Sprintf("%v.%v", user.Name, baseDomain)
dnsConfig.Routes[dnsRoute] = nil
}
} else {
dnsConfig = dnsConfigOrig
}
addNextDNSMetadata(dnsConfig.Resolvers, machine)
return dnsConfig
}

394
dns_test.go Normal file

@@ -0,0 +1,394 @@
package headscale
import (
"fmt"
"net/netip"
"gopkg.in/check.v1"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
prefixes := []netip.Prefix{
netip.MustParsePrefix("100.64.0.0/10"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "64.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "100.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "127.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
func (s *Suite) TestMagicDNSRootDomains172(c *check.C) {
prefixes := []netip.Prefix{
netip.MustParsePrefix("172.16.0.0/16"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "0.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "255.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
// Happens when netmask is a multiple of 4 bits (sounds likely).
func (s *Suite) TestMagicDNSRootDomainsIPv6Single(c *check.C) {
prefixes := []netip.Prefix{
netip.MustParsePrefix("fd7a:115c:a1e0::/48"),
}
domains := generateMagicDNSRootDomains(prefixes)
c.Assert(len(domains), check.Equals, 1)
c.Assert(
domains[0].WithTrailingDot(),
check.Equals,
"0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa.",
)
}
func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
prefixes := []netip.Prefix{
netip.MustParsePrefix("fd7a:115c:a1e0::/50"),
}
domains := generateMagicDNSRootDomains(prefixes)
yieldsRoot := func(dom string) bool {
for _, candidate := range domains {
if candidate.WithTrailingDot() == dom {
return true
}
}
return false
}
c.Assert(len(domains), check.Equals, 4)
c.Assert(yieldsRoot("0.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("1.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("2.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("3.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
userShared1, err := app.CreateUser("shared1")
c.Assert(err, check.IsNil)
userShared2, err := app.CreateUser("shared2")
c.Assert(err, check.IsNil)
userShared3, err := app.CreateUser("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
userShared1.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
userShared2.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
userShared3.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
PreAuthKey2InShared1, err := app.CreatePreAuthKey(
userShared1.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Hostname: "test_get_shared_nodes_1",
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_2",
UserID: userShared2.ID,
User: *userShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_3",
UserID: userShared3.ID,
User: *userShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_4",
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
AuthKeyID: uint(PreAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]*dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: true,
}
peersOfMachineInShared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachineInShared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 3)
domainRouteShared1 := fmt.Sprintf("%s.%s", userShared1.Name, baseDomain)
_, ok := dnsConfig.Routes[domainRouteShared1]
c.Assert(ok, check.Equals, true)
domainRouteShared2 := fmt.Sprintf("%s.%s", userShared2.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared2]
c.Assert(ok, check.Equals, true)
domainRouteShared3 := fmt.Sprintf("%s.%s", userShared3.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared3]
c.Assert(ok, check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
userShared1, err := app.CreateUser("shared1")
c.Assert(err, check.IsNil)
userShared2, err := app.CreateUser("shared2")
c.Assert(err, check.IsNil)
userShared3, err := app.CreateUser("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
userShared1.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
userShared2.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
userShared3.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
preAuthKey2InShared1, err := app.CreatePreAuthKey(
userShared1.Name,
false,
false,
nil,
nil,
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Hostname: "test_get_shared_nodes_1",
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_2",
UserID: userShared2.ID,
User: *userShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_3",
UserID: userShared3.ID,
User: *userShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_4",
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
AuthKeyID: uint(preAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]*dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: false,
}
peersOfMachine1Shared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachine1Shared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 0)
c.Assert(len(dnsConfig.Domains), check.Equals, 1)
}

56
docs/README.md Normal file

@@ -0,0 +1,56 @@
# headscale documentation
This page contains the official and community-contributed documentation for `headscale`.
If you are having trouble following the documentation or are getting unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
## Official documentation
### How-to
- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)
- [Configuring OIDC](oidc.md)
### References
- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)
## Community documentation
Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
- [Running headscale in a container](running-headscale-container.md)
- [Running headscale on OpenBSD](running-headscale-openbsd.md)
- [Running headscale behind a reverse proxy](reverse-proxy.md)
- [Set Custom DNS records](dns-records.md)
## Misc
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to Tailscale.com logins when defining groups, you
refer to headscale users (the equivalent of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/ and `./tests/acls/` in this repo for working examples.
When ACLs are used, user boundaries no longer apply: any machine, regardless of
its user, can communicate with any other host as long as the ACLs permit the exchange.
The [ACLs](acls.md) document walks through a fictional case of setting up ACLs in a
small company. All of the concepts presented there can also be applied outside of
business-oriented usage.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.


@@ -1,15 +1,4 @@
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment. # ACLs use case example
For instance, instead of referring to users when defining groups you must
use users (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACL's the User borders are no longer applied. All machines
whichever the User have the ability to communicate with other hosts as
long as the ACL's permits this exchange.
## ACLs use case example
Let's build an example use case for a small business (It may be the place where Let's build an example use case for a small business (It may be the place where
ACL's are the most useful). ACL's are the most useful).


@@ -1,12 +1,5 @@
# Setting custom DNS records # Setting custom DNS records
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal ## Goal
This documentation has the goal of showing how a user can set custom DNS records with `headscale`s magic dns. This documentation has the goal of showing how a user can set custom DNS records with `headscale`s magic dns.
@@ -18,25 +11,23 @@ An example use case is to serve apps on the same host via a reverse proxy like N
1. Change the `config.yaml` to contain the desired records like so: 1. Change the `config.yaml` to contain the desired records like so:
```yaml ```yaml
dns_config: dns_config:
... ...
extra_records: extra_records:
- name: "prometheus.myvpn.example.com" - name: "prometheus.myvpn.example.com"
type: "A" type: "A"
value: "100.64.0.3" value: "100.64.0.3"
- name: "grafana.myvpn.example.com" - name: "grafana.myvpn.example.com"
type: "A" type: "A"
value: "100.64.0.3" value: "100.64.0.3"
... ...
``` ```
1. Restart your headscale instance. 2. Restart your headscale instance.
!!! warning Beware of the limitations listed later on!
Beware of the limitations listed later on!
### 2. Verify that the records are set ### 2. Verify that the records are set

Some files were not shown because too many files have changed in this diff.