Compare commits


2 Commits

Author             SHA1        Message                                                      Date
Juan Font Alonso   5539ef1f8f  Reduce the number of containers                              2022-08-04 21:36:38 +02:00
Juan Font Alonso   100f7190f3  Temporary fix integration tests with dedicated Dockerfile    2022-07-31 10:44:09 +02:00
290 changed files with 17572 additions and 32404 deletions

.github/FUNDING.yml

@@ -1,3 +1,2 @@
-# These are supported funding model platforms
+ko_fi: kradalby
 github: [kradalby]
-ko_fi: headscale


@@ -6,24 +6,19 @@ labels: ["bug"]
 assignees: ""
 ---
-<!--
-Before posting a bug report, discuss the behaviour you are expecting with the Discord community
-to make sure that it is truly a bug.
-The issue tracker is not the place to ask for support or how to set up Headscale.
-Bug reports without the sufficient information will be closed.
-Headscale is a multinational community across the globe. Our language is English.
-All bug reports needs to be in English.
--->
-## Bug description
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
+**Bug description**
 <!-- A clear and concise description of what the bug is. Describe the expected bahavior
 and how it is currently different. If you are unsure if it is a bug, consider discussing
 it on our Discord server first. -->
-## Environment
+**To Reproduce**
+<!-- Steps to reproduce the behavior. -->
+**Context info**
 <!-- Please add relevant information about your system. For example:
 - Version of headscale used
@@ -33,20 +28,3 @@ All bug reports needs to be in English.
 - The relevant config parameters you used
 - Log output
 -->
-- OS:
-- Headscale version:
-- Tailscale version:
-<!--
-We do not support running Headscale in a container nor behind a (reverse) proxy.
-If either of these are true for your environment, ask the community in Discord
-instead of filing a bug report.
--->
-- [ ] Headscale is behind a (reverse) proxy
-- [ ] Headscale runs in a container
-## To Reproduce
-<!-- Steps to reproduce the behavior. -->


@@ -6,21 +6,12 @@ labels: ["enhancement"]
 assignees: ""
 ---
-<!--
-We typically have a clear roadmap for what we want to improve and reserve the right
-to close feature requests that does not fit in the roadmap, or fit with the scope
-of the project, or we actually want to implement ourselves.
-Headscale is a multinational community across the globe. Our language is English.
-All bug reports needs to be in English.
--->
-## Why
-<!-- Include the reason, why you would need the feature. E.g. what problem
-does it solve? Or which workflow is currently frustrating and will be improved by
-this? -->
-## Description
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
+**Feature request**
 <!-- A clear and precise description of what new or changed feature you want. -->
+<!-- Please include the reason, why you would need the feature. E.g. what problem
+does it solve? Or which workflow is currently frustrating and will be improved by
+this? -->

.github/ISSUE_TEMPLATE/other_issue.md

@@ -0,0 +1,30 @@
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---
<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->
<!-- If you have a question, please consider using our Discord for asking questions -->
**Issue description**
<!-- Please add your issue description. -->
**To Reproduce**
<!-- Steps to reproduce the behavior. -->
**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->


@@ -1,15 +1,3 @@
-<!--
-Headscale is "Open Source, acknowledged contribution", this means that any
-contribution will have to be discussed with the Maintainers before being submitted.
-This model has been chosen to reduce the risk of burnout by limiting the
-maintenance overhead of reviewing and validating third-party code.
-Headscale is open to code contributions for bug fixes without discussion.
-If you find mistakes in the documentation, please submit a fix to the documentation.
--->
 <!-- Please tick if the following things apply. You… -->
 - [ ] read the [CONTRIBUTING guidelines](README.md#contributing)

.github/renovate.json

@@ -6,27 +6,31 @@
   "onboarding": false,
   "extends": ["config:base", ":rebaseStalePrs"],
   "ignorePresets": [":prHourlyLimit2"],
-  "enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
+  "enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
   "includeForks": true,
   "repositories": ["juanfont/headscale"],
   "platform": "github",
   "packageRules": [
     {
       "matchDatasources": ["go"],
       "groupName": "Go modules",
       "groupSlug": "gomod",
       "separateMajorMinor": false
     },
     {
       "matchDatasources": ["docker"],
       "groupName": "Dockerfiles",
       "groupSlug": "dockerfiles"
     }
   ],
   "regexManagers": [
     {
-      "fileMatch": [".github/workflows/.*.yml$"],
-      "matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
+      "fileMatch": [
+        ".github/workflows/.*.yml$"
+      ],
+      "matchStrings": [
+        "\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
+      ],
       "datasourceTemplate": "golang-version",
       "depNameTemplate": "actions/go-version"
     }

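The regexManagers entry above is what lets Renovate track the Go version pinned in the workflow files: each file matched by fileMatch is scanned with the matchStrings pattern, and the named capture group currentValue becomes the version that is compared against the golang-version datasource. As a quick illustration only (not part of this diff), here is a minimal Go sketch of what that pattern captures; the named-group syntax is adapted to Go's regexp package, and the workflow snippet is made up for the demo:

```go
// Minimal sketch: what the regexManagers pattern above is meant to capture.
// Go's regexp package needs the (?P<name>...) spelling for named groups, so the
// pattern is adapted accordingly; the sample workflow line is hypothetical.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Renovate's matchStrings pattern, rewritten with Go-style named-group syntax.
	re := regexp.MustCompile(`\s*go-version:\s*"?(?P<currentValue>.*?)"?\n`)

	// Hypothetical excerpt from a workflow under .github/workflows/.
	workflow := "      with:\n        go-version: \"1.18.0\"\n"

	m := re.FindStringSubmatch(workflow)
	if m == nil {
		fmt.Println("no go-version found")
		return
	}
	idx := re.SubexpIndex("currentValue")
	fmt.Println("currentValue:", m[idx]) // currentValue: 1.18.0
}
```

Renovate evaluates the pattern as written in the JSON above; the (?P<...>...) spelling is only needed for Go's regexp package.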

@@ -8,23 +8,18 @@ on:
     branches:
       - main
-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
 jobs:
   build:
     runs-on: ubuntu-latest
-    permissions: write-all
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: tj-actions/changed-files@v34
+        uses: tj-actions/changed-files@v14.1
         with:
           files: |
             *.nix
@@ -33,40 +28,14 @@ jobs:
             integration_test/
             config-example.yaml
-      - uses: DeterminateSystems/nix-installer-action@main
-        if: steps.changed-files.outputs.any_changed == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: cachix/install-nix-action@v16
         if: steps.changed-files.outputs.any_changed == 'true'
       - name: Run build
-        id: build
         if: steps.changed-files.outputs.any_changed == 'true'
-        run: |
-          nix build |& tee build-result
-          BUILD_STATUS="${PIPESTATUS[0]}"
-          OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
-          NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
-          echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
-          echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
-          exit $BUILD_STATUS
-      - name: Nix gosum diverging
-        uses: actions/github-script@v6
-        if: failure() && steps.build.outcome == 'failure'
-        with:
-          github-token: ${{secrets.GITHUB_TOKEN}}
-          script: |
-            github.rest.pulls.createReviewComment({
-              pull_number: context.issue.number,
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-              body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
-            })
-      - uses: actions/upload-artifact@v3
+        run: nix build
+      - uses: actions/upload-artifact@v2
         if: steps.changed-files.outputs.any_changed == 'true'
         with:
           name: headscale-linux

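The removed "Run build" step above tees the nix build output to a file and then greps the "specified:" and "got:" lines to recover the expected and actual vendor hashes, which the follow-up github-script step pastes into a PR comment. A rough Go sketch of that extraction, for illustration only: the real workflow does it with grep/awk/sed, and the sample error text below is an assumption about what a nix hash-mismatch message looks like, not output captured from CI.

```go
// Illustrative sketch of the hash extraction done by the removed "Run build"
// step: find the "specified:" and "got:" lines in nix build output and keep
// the value after the colon (mirrors grep | awk -F':' | sed 's/ //g').
package main

import (
	"fmt"
	"strings"
)

// extractHash returns the trimmed value following "<key>:" on the first matching line.
func extractHash(buildOutput, key string) string {
	for _, line := range strings.Split(buildOutput, "\n") {
		if !strings.Contains(line, key+":") {
			continue
		}
		_, value, _ := strings.Cut(line, ":")
		return strings.TrimSpace(value)
	}
	return ""
}

func main() {
	// Made-up example of a nix hash-mismatch message; the real output may differ.
	out := `error: hash mismatch in fixed-output derivation:
         specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
            got:    sha256-BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=`

	oldHash := extractHash(out, "specified") // would become OLD_HASH in the workflow
	newHash := extractHash(out, "got")       // would become NEW_HASH in the workflow
	fmt.Println(oldHash, newHash)
}
```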

@@ -9,7 +9,7 @@ jobs:
   add-contributors:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
       - name: Delete upstream contributor branch
         # Allow continue on failure to account for when the
        # upstream branch is deleted or does not exist.


@@ -1,45 +0,0 @@
name: Build documentation
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
- name: Build docs
run: mkdocs build --strict
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: ./site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1


@@ -1,23 +0,0 @@
name: GitHub Actions Version Updater
# Controls when the action will run.
on:
schedule:
# Automatically run on every Sunday
- cron: "0 0 * * 0"
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
- name: Run GitHub Actions Version Updater
uses: saadmk11/github-actions-version-updater@v0.7.1
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}


@@ -1,23 +1,19 @@
 ---
-name: Lint
+name: CI
 on: [push, pull_request]
-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
 jobs:
   golangci-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: tj-actions/changed-files@v34
+        uses: tj-actions/changed-files@v14.1
         with:
           files: |
             *.nix
@@ -30,7 +26,7 @@ jobs:
         if: steps.changed-files.outputs.any_changed == 'true'
         uses: golangci/golangci-lint-action@v2
         with:
-          version: v1.51.2
+          version: v1.46.1
           # Only block PRs on new problems.
           # If this is not enabled, we will end up having PRs
@@ -63,7 +59,7 @@ jobs:
       - name: Prettify code
         if: steps.changed-files.outputs.any_changed == 'true'
-        uses: creyD/prettier_action@v4.3
+        uses: creyD/prettier_action@v4.0
         with:
           prettier_options: >-
             --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
@@ -74,7 +70,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
-      - uses: bufbuild/buf-setup-action@v1.7.0
+      - uses: bufbuild/buf-setup-action@v0.7.0
       - uses: bufbuild/buf-lint-action@v1
         with:
           input: "proto"


@@ -1,138 +0,0 @@
---
name: Release Docker
on:
push:
tags:
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug


@@ -1,5 +1,5 @@
 ---
-name: Release
+name: release
 on:
   push:
@@ -9,17 +9,221 @@ on:
 jobs:
   goreleaser:
+    runs-on: ubuntu-18.04 # due to CGO we need to user an older version
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - name: Set up Go
+        uses: actions/setup-go@v2
+        with:
+          go-version: 1.18.0
+      - name: Install dependencies
+        run: |
+          sudo apt update
+          sudo apt install -y gcc-aarch64-linux-gnu
+      - name: Run GoReleaser
+        uses: goreleaser/goreleaser-action@v2
+        with:
+          distribution: goreleaser
+          version: latest
+          args: release --rm-dist
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+  docker-release:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v2
         with:
           fetch-depth: 0
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set up QEMU for multiple platforms
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: arm64,amd64
+      - name: Cache Docker layers
+        uses: actions/cache@v2
+        with:
+          path: /tmp/.buildx-cache
+          key: ${{ runner.os }}-buildx-${{ github.sha }}
+          restore-keys: |
+            ${{ runner.os }}-buildx-
+      - name: Docker meta
+        id: meta
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
+            ghcr.io/${{ github.repository_owner }}/headscale
+          tags: |
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=raw,value=latest
+            type=sha
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Login to GHCR
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push
+        id: docker_build
+        uses: docker/build-push-action@v2
+        with:
+          push: true
+          context: .
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=local,src=/tmp/.buildx-cache
+          cache-to: type=local,dest=/tmp/.buildx-cache-new
+          build-args: |
+            VERSION=${{ steps.meta.outputs.version }}
+      - name: Prepare cache for next build
+        run: |
+          rm -rf /tmp/.buildx-cache
+          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
-      - uses: DeterminateSystems/nix-installer-action@main
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+  docker-debug-release:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set up QEMU for multiple platforms
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: arm64,amd64
+      - name: Cache Docker layers
+        uses: actions/cache@v2
+        with:
+          path: /tmp/.buildx-cache-debug
+          key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
+          restore-keys: |
+            ${{ runner.os }}-buildx-debug-
+      - name: Docker meta
+        id: meta-debug
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
+            ghcr.io/${{ github.repository_owner }}/headscale
+          flavor: |
+            latest=false
+          tags: |
+            type=semver,pattern={{version}}-debug
+            type=semver,pattern={{major}}.{{minor}}-debug
+            type=semver,pattern={{major}}-debug
+            type=raw,value=latest-debug
+            type=sha,suffix=-debug
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Login to GHCR
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push
+        id: docker_build
+        uses: docker/build-push-action@v2
+        with:
+          push: true
+          context: .
+          file: Dockerfile.debug
+          tags: ${{ steps.meta-debug.outputs.tags }}
+          labels: ${{ steps.meta-debug.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=local,src=/tmp/.buildx-cache-debug
+          cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
+          build-args: |
+            VERSION=${{ steps.meta-debug.outputs.version }}
+      - name: Prepare cache for next build
+        run: |
+          rm -rf /tmp/.buildx-cache-debug
+          mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug
-      - name: Run goreleaser
-        run: nix develop --command -- goreleaser release --clean
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+  docker-alpine-release:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set up QEMU for multiple platforms
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: arm64,amd64
+      - name: Cache Docker layers
+        uses: actions/cache@v2
+        with:
+          path: /tmp/.buildx-cache-alpine
+          key: ${{ runner.os }}-buildx-alpine-${{ github.sha }}
+          restore-keys: |
+            ${{ runner.os }}-buildx-alpine-
+      - name: Docker meta
+        id: meta-alpine
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
+            ghcr.io/${{ github.repository_owner }}/headscale
+          flavor: |
+            latest=false
+          tags: |
+            type=semver,pattern={{version}}-alpine
+            type=semver,pattern={{major}}.{{minor}}-alpine
+            type=semver,pattern={{major}}-alpine
+            type=raw,value=latest-alpine
+            type=sha,suffix=-alpine
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Login to GHCR
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push
+        id: docker_build
+        uses: docker/build-push-action@v2
+        with:
+          push: true
+          context: .
+          file: Dockerfile.alpine
+          tags: ${{ steps.meta-alpine.outputs.tags }}
+          labels: ${{ steps.meta-alpine.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=local,src=/tmp/.buildx-cache-alpine
+          cache-to: type=local,dest=/tmp/.buildx-cache-alpine-new
+          build-args: |
+            VERSION=${{ steps.meta-alpine.outputs.version }}
+      - name: Prepare cache for next build
+        run: |
+          rm -rf /tmp/.buildx-cache-alpine
+          mv /tmp/.buildx-cache-alpine-new /tmp/.buildx-cache-alpine

.github/workflows/renovatebot.yml

@@ -0,0 +1,27 @@
---
name: Renovate
on:
schedule:
- cron: "* * 5,20 * *" # Every 5th and 20th of the month
workflow_dispatch:
jobs:
renovate:
runs-on: ubuntu-latest
steps:
- name: Get token
id: get_token
uses: machine-learning-apps/actions-app-token@master
with:
APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}
- name: Checkout
uses: actions/checkout@v2.0.0
- name: Self-hosted Renovate
uses: renovatebot/github-action@v31.81.3
with:
configurationFile: .github/renovate.json
token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
# env:
# LOG_LEVEL: "debug"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowStarDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowStarDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUser80Dst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUser80Dst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUserDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUserDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDenyAllPort80
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDenyAllPort80$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDevice1CanAccessDevice2
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDevice1CanAccessDevice2$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLHostsInNetMapTable
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLHostsInNetMapTable$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReach
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReach$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReachBySubnet$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestApiKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestApiKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthKeyLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestCreateTailscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestCreateTailscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestDERPServerScenario
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestDERPServerScenario$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnablingRoutes
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnablingRoutes$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestExpireNode
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestExpireNode$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHeadscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHeadscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeExpireCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeExpireCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeMoveCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeMoveCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeRenameCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeRenameCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeTagCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeTagCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByHostname
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByHostname$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByIP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandReusableEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandWithoutExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestResolveMagicDNS
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestResolveMagicDNS$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHIsBlockedInACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHIsBlockedInACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleUsersAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHNoSSHConfigured
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHNoSSHConfigured$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneUserAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneUserAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSUserOnlyIsolation
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSUserOnlyIsolation$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTaildrop
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTaildrop$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTailscaleNodesJoiningHeadcale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -1,65 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestUserCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestUserCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

.github/workflows/test-integration.yml (new file)

@@ -0,0 +1,35 @@
name: CI
on: [pull_request]
jobs:
integration-test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run Integration tests
if: steps.changed-files.outputs.any_changed == 'true'
uses: nick-fields/retry@v2
with:
timeout_minutes: 240
max_attempts: 5
retry_on: error
command: nix develop --command -- make test_integration


@@ -1,23 +1,19 @@
- name: Tests
+ name: CI
  on: [push, pull_request]
- concurrency:
-   group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-   cancel-in-progress: true
  jobs:
    test:
      runs-on: ubuntu-latest
      steps:
-       - uses: actions/checkout@v3
+       - uses: actions/checkout@v2
          with:
            fetch-depth: 2
        - name: Get changed files
          id: changed-files
-         uses: tj-actions/changed-files@v34
+         uses: tj-actions/changed-files@v14.1
          with:
            files: |
              *.nix
@@ -26,9 +22,7 @@ jobs:
              integration_test/
              config-example.yaml
-       - uses: DeterminateSystems/nix-installer-action@main
-         if: steps.changed-files.outputs.any_changed == 'true'
-       - uses: DeterminateSystems/magic-nix-cache-action@main
+       - uses: cachix/install-nix-action@v16
          if: steps.changed-files.outputs.any_changed == 'true'
        - name: Run tests

.gitignore

@@ -1,5 +1,3 @@
- ignored/
  # Binaries for programs and plugins
  *.exe
  *.exe~
@@ -14,9 +12,8 @@ ignored/
  *.out
  # Dependency directories (remove the comment below to include it)
- vendor/
+ # vendor/
- dist/
  /headscale
  config.json
  config.yaml
@@ -29,17 +26,10 @@ derp.yaml
  # Exclude Jetbrains Editors
  .idea
  test_output/
- control_logs/
  # Nix build output
  result
  .direnv/
  integration_test/etc/config.dump.yaml
- # MkDocs
- .cache
- /site
- __debug_bin


@@ -1,8 +1,6 @@
  ---
  run:
    timeout: 10m
-   build-tags:
-     - ts2019
  issues:
    skip-dirs:
@@ -28,15 +26,6 @@ linters:
      - ireturn
      - execinquery
      - exhaustruct
-     - nolintlint
-     - musttag # causes issues with imported libs
-     # deprecated
-     - structcheck # replaced by unused
-     - ifshort # deprecated by the owner
-     - varcheck # replaced by unused
-     - nosnakecase # replaced by revive
-     - deadcode # replaced by unused
      # We should strive to enable these:
      - wrapcheck
@@ -44,9 +33,6 @@ linters:
      - makezero
      - maintidx
-     # Limits the methods of an interface to 10. We have more in integration tests
-     - interfacebloat
      # We might want to enable this, but it might be a lot of work
      - cyclop
      - nestif


@@ -1,85 +1,74 @@
  ---
  before:
    hooks:
-     - go mod tidy -compat=1.20
+     - go mod tidy -compat=1.18
-     - go mod vendor
  release:
    prerelease: auto
  builds:
-   - id: headscale
+   - id: darwin-amd64
      main: ./cmd/headscale/headscale.go
      mod_timestamp: "{{ .CommitTimestamp }}"
      env:
        - CGO_ENABLED=0
-     targets:
+     goos:
-       - darwin_amd64
+       - darwin
-       - darwin_arm64
+     goarch:
-       - freebsd_amd64
+       - amd64
-       - linux_386
-       - linux_amd64
-       - linux_arm64
-       - linux_arm_5
-       - linux_arm_6
-       - linux_arm_7
      flags:
        - -mod=readonly
      ldflags:
        - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
-     tags:
-       - ts2019
+   - id: darwin-arm64
+     main: ./cmd/headscale/headscale.go
+     mod_timestamp: "{{ .CommitTimestamp }}"
+     env:
+       - CGO_ENABLED=0
+     goos:
+       - darwin
+     goarch:
+       - arm64
+     flags:
+       - -mod=readonly
+     ldflags:
+       - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+   - id: linux-amd64
+     mod_timestamp: "{{ .CommitTimestamp }}"
+     env:
+       - CGO_ENABLED=0
+     goos:
+       - linux
+     goarch:
+       - amd64
+     main: ./cmd/headscale/headscale.go
+     ldflags:
+       - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+   - id: linux-arm64
+     mod_timestamp: "{{ .CommitTimestamp }}"
+     env:
+       - CGO_ENABLED=0
+     goos:
+       - linux
+     goarch:
+       - arm64
+     main: ./cmd/headscale/headscale.go
+     ldflags:
+       - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
  archives:
    - id: golang-cross
-     name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
+     builds:
+       - darwin-amd64
+       - darwin-arm64
+       - linux-amd64
+       - linux-arm64
+     name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
      format: binary
- source:
-   enabled: true
-   name_template: "{{ .ProjectName }}_{{ .Version }}"
-   format: tar.gz
-   files:
-     - "vendor/"
- nfpms:
-   # Configure nFPM for .deb and .rpm releases
-   #
-   # See https://nfpm.goreleaser.com/configuration/
-   # and https://goreleaser.com/customization/nfpm/
-   #
-   # Useful tools for debugging .debs:
-   # List file contents: dpkg -c dist/headscale...deb
-   # Package metadata: dpkg --info dist/headscale....deb
-   #
-   - builds:
-       - headscale
-     package_name: headscale
-     priority: optional
-     vendor: headscale
-     maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
-     homepage: https://github.com/juanfont/headscale
-     license: BSD
-     bindir: /usr/bin
-     formats:
-       - deb
-       # - rpm
-     contents:
-       - src: ./config-example.yaml
-         dst: /etc/headscale/config.yaml
-         type: config|noreplace
-         file_info:
-           mode: 0644
-       - src: ./docs/packaging/headscale.systemd.service
-         dst: /usr/lib/systemd/system/headscale.service
-       - dst: /var/lib/headscale
-         type: dir
-       - dst: /var/run/headscale
-         type: dir
-     scripts:
-       postinstall: ./docs/packaging/postinstall.sh
-       postremove: ./docs/packaging/postremove.sh
  checksum:
    name_template: "checksums.txt"
  snapshot:
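
The nFPM comments above suggest `dpkg` for sanity-checking the generated `.deb` artifacts; a small sketch (the exact file name under `dist/` depends on version and architecture, so a glob stands in for it):

```shell
# Inspect a locally built package; the glob is a placeholder for the versioned filename.
dpkg -c dist/headscale_*_linux_amd64.deb      # list the files the package installs
dpkg --info dist/headscale_*_linux_amd64.deb  # show the package metadata
```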


@@ -1 +0,0 @@
.github/workflows/test-integration-v2*


@@ -1,171 +1,6 @@
  # CHANGELOG
- ## 0.23.0 (2023-XX-XX)
+ ## 0.17.0 (2022-xx-xx)
### BREAKING
- Code reorganisation, a lot of code has moved, please review the following PRs accordingly [#1444](https://github.com/juanfont/headscale/pull/1444)
### Changes
- Make the OIDC callback page better [#1484](https://github.com/juanfont/headscale/pull/1484)
- SSH is no longer experimental [#1487](https://github.com/juanfont/headscale/pull/1487)
## 0.22.3 (2023-05-12)
### Changes
- Added missing ca-certificates in Docker image [#1463](https://github.com/juanfont/headscale/pull/1463)
## 0.22.2 (2023-05-10)
### Changes
- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
- Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
- Replace node filter logic, ensuring nodes with access can see eachother [#1381](https://github.com/juanfont/headscale/pull/1381)
- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
- Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)
## 0.22.1 (2023-04-20)
### Changes
- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
## 0.22.0 (2023-04-20)
### Changes
- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
## 0.21.0 (2023-03-20)
### Changes
- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
## 0.20.0 (2023-02-03)
### Changes
- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
- defaults to 180 days like Tailscale SaaS
- adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
## 0.19.0 (2023-01-29)
### BREAKING
- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
- **BACKUP your database before upgrading**
- Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
## 0.18.0 (2023-01-14)
### Changes
- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
## 0.17.1 (2022-12-05)
### Changes
- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
## 0.17.0 (2022-11-26)
### BREAKING
- `noise.private_key_path` has been added and is required for the new noise protocol.
- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)
### Important Changes
- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
- Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847)
- Please note that this support should be considered _partially_ implemented
- SSH ACLs status:
- Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
- Rejecting connections **are not supported**, meaning that if you enable SSH, then assume that _all_ `ssh` connections **will be allowed**.
- If you decide to try this feature, please carefully manage permissions by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your clients.
- We are currently improving our testing of the SSH ACLs, help us get an overview by testing and giving feedback.
- This feature should be considered dangerous and it is disabled by default. Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
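
A minimal sketch of opting in, assuming the server is normally started with `headscale serve`:

```shell
# Opt into the experimental SSH policy support; it stays off unless this variable is set.
export HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1
headscale serve
```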
### Changes
- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)
- Give a warning when running Headscale with reverse proxy improperly configured for WebSockets [#788](https://github.com/juanfont/headscale/pull/788)
- Fix subnet routers with Primary Routes [#811](https://github.com/juanfont/headscale/pull/811)
- Added support for JSON logs [#653](https://github.com/juanfont/headscale/issues/653)
- Sanitise the node key passed to registration url [#823](https://github.com/juanfont/headscale/pull/823)
- Add support for generating pre-auth keys with tags [#767](https://github.com/juanfont/headscale/pull/767)
- Add support for evaluating `autoApprovers` ACL entries when a machine is registered [#763](https://github.com/juanfont/headscale/pull/763)
- Add config flag to allow Headscale to start if OIDC provider is down [#829](https://github.com/juanfont/headscale/pull/829)
- Fix prefix length comparison bug in AutoApprovers route evaluation [#862](https://github.com/juanfont/headscale/pull/862)
- Random node DNS suffix only applied if names collide in namespace. [#766](https://github.com/juanfont/headscale/issues/766)
- Remove `ip_prefix` configuration option and warning [#899](https://github.com/juanfont/headscale/pull/899)
- Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
- Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
- Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971)
- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927)
## 0.16.4 (2022-08-21)
### Changes
- Add ability to connect to PostgreSQL over TLS/SSL [#745](https://github.com/juanfont/headscale/pull/745)
- Fix CLI registration of expired machines [#754](https://github.com/juanfont/headscale/pull/754)
## 0.16.3 (2022-08-17)
### Changes
- Fix issue with OIDC authentication [#747](https://github.com/juanfont/headscale/pull/747)
## 0.16.2 (2022-08-14)
### Changes
- Fixed bugs in the client registration process after migration to NodeKey [#735](https://github.com/juanfont/headscale/pull/735)
## 0.16.1 (2022-08-12)
### Changes
- Updated dependencies (including the library that lacked armhf support) [#722](https://github.com/juanfont/headscale/pull/722)
- Fix missing group expansion in function `excludeCorretlyTaggedNodes` [#563](https://github.com/juanfont/headscale/issues/563)
- Improve registration protocol implementation and switch to NodeKey as main identifier [#725](https://github.com/juanfont/headscale/pull/725)
- Add ability to connect to PostgreSQL via unix socket [#734](https://github.com/juanfont/headscale/pull/734)
  ## 0.16.0 (2022-07-25)
@@ -275,7 +110,7 @@ This is a part of aligning `headscale`'s behaviour with Tailscale's upstream beh
  - OpenID Connect users will be mapped per namespaces
    - Each user will get its own namespace, created if it does not exist
  - `oidc.domain_map` option has been removed
- - `strip_email_domain` option has been added (see [config-example.yaml](./config-example.yaml))
+ - `strip_email_domain` option has been added (see [config_example.yaml](./config_example.yaml))
  ### Changes


@@ -1,134 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation
in our community a harassment-free experience for everyone, regardless
of age, body size, visible or invisible disability, ethnicity, sex
characteristics, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance,
race, religion, or sexual identity and orientation.
We pledge to act and interact in ways that contribute to an open,
welcoming, diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for
our community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our
mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or
advances of any kind
- Trolling, insulting or derogatory comments, and personal or
political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email
address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in
a professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our
standards of acceptable behavior and will take appropriate and fair
corrective action in response to any behavior that they deem
inappropriate, threatening, offensive, or harmful.
Community leaders have the right and responsibility to remove, edit,
or reject comments, commits, code, wiki edits, issues, and other
contributions that are not aligned to this Code of Conduct, and will
communicate reasons for moderation decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also
applies when an individual is officially representing the community in
public spaces. Examples of representing our community include using an
official e-mail address, posting via an official social media account,
or acting as an appointed representative at an online or offline
event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
at our Discord channel. All complaints
will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and
security of the reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in
determining the consequences for any action they deem in violation of
this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior
deemed unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders,
providing clarity around the nature of the violation and an
explanation of why the behavior was inappropriate. A public apology
may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued
behavior. No interaction with the people involved, including
unsolicited interaction with those enforcing the Code of Conduct, for
a specified period of time. This includes avoiding interactions in
community spaces as well as external channels like social
media. Violating these terms may lead to a temporary or permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards,
including sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or
public communication with the community for a specified period of
time. No public or private interaction with the people involved,
including unsolicited interaction with those enforcing the Code of
Conduct, is allowed during this period. Violating these terms may lead
to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of
community standards, including sustained inappropriate behavior,
harassment of an individual, or aggression toward or disparagement of
classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction
within the community.
## Attribution
This Code of Conduct is adapted from the [Contributor
Covenant][homepage], version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of
conduct enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the
FAQ at https://www.contributor-covenant.org/faq. Translations are
available at https://www.contributor-covenant.org/translations.


@@ -1,5 +1,5 @@
  # Builder image
- FROM docker.io/golang:1.20-bullseye AS build
+ FROM --platform=$BUILDPLATFORM docker.io/golang:1.18.0-bullseye AS build
  ARG VERSION=dev
  ENV GOPATH /go
  WORKDIR /go/src/headscale
@@ -8,23 +8,15 @@ COPY go.mod go.sum /go/src/headscale/
  RUN go mod download
  COPY . .
+ ARG TARGETOS TARGETARCH
- RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
+ RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /go/bin/headscale -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
- RUN strip /go/bin/headscale
  RUN test -e /go/bin/headscale
  # Production image
- FROM docker.io/debian:bullseye-slim
+ FROM gcr.io/distroless/base-debian11
- RUN apt-get update \
-   && apt-get install -y ca-certificates \
-   && rm -rf /var/lib/apt/lists/* \
-   && apt-get clean
  COPY --from=build /go/bin/headscale /bin/headscale
  ENV TZ UTC
- RUN mkdir -p /var/run/headscale
  EXPOSE 8080/tcp
  CMD ["headscale"]

Dockerfile.alpine (new file)

@@ -0,0 +1,24 @@
# Builder image
FROM --platform=$BUILDPLATFORM docker.io/golang:1.18.0-alpine AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN apk add gcc musl-dev
RUN go mod download
COPY . .
ARG TARGETOS TARGETARCH
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /go/bin/headscale -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN test -e /go/bin/headscale
# Production image
FROM docker.io/alpine:latest
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp
CMD ["headscale"]


@@ -1,5 +1,5 @@
  # Builder image
- FROM docker.io/golang:1.20-bullseye AS build
+ FROM --platform=$BUILDPLATFORM docker.io/golang:1.18.0-bullseye AS build
  ARG VERSION=dev
  ENV GOPATH /go
  WORKDIR /go/src/headscale
@@ -9,17 +9,16 @@ RUN go mod download
  COPY . .
- RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
+ ARG TARGETOS TARGETARCH
+ RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
  RUN test -e /go/bin/headscale
  # Debug image
- FROM docker.io/golang:1.20.0-bullseye
+ FROM gcr.io/distroless/base-debian11:debug
  COPY --from=build /go/bin/headscale /bin/headscale
  ENV TZ UTC
- RUN mkdir -p /var/run/headscale
  # Need to reset the entrypoint or everything will run as a busybox script
  ENTRYPOINT []
  EXPOSE 8080/tcp


@@ -1,16 +1,17 @@
- FROM ubuntu:22.04
+ FROM ubuntu:latest
  ARG TAILSCALE_VERSION=*
  ARG TAILSCALE_CHANNEL=stable
  RUN apt-get update \
-   && apt-get install -y gnupg curl ssh dnsutils ca-certificates \
+   && apt-get install -y gnupg curl \
-   && adduser --shell=/bin/bash ssh-it-user
+   && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
- # Tailscale is deliberately split into a second stage so we can cache utils as a separate layer.
- RUN curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
    && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
    && apt-get update \
-   && apt-get install -y tailscale=${TAILSCALE_VERSION} \
+   && apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
-   && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
+ ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
+ RUN chmod 644 /usr/local/share/ca-certificates/server.crt
+ RUN update-ca-certificates


@@ -1,17 +1,21 @@
  FROM golang:latest
  RUN apt-get update \
-   && apt-get install -y dnsutils git iptables ssh ca-certificates \
+   && apt-get install -y ca-certificates dnsutils git iptables \
    && rm -rf /var/lib/apt/lists/*
- RUN useradd --shell=/bin/bash --create-home ssh-it-user
  RUN git clone https://github.com/tailscale/tailscale.git
- WORKDIR /go/tailscale
+ WORKDIR tailscale
- RUN git checkout main \
+ RUN sh build_dist.sh tailscale.com/cmd/tailscale
-   && sh build_dist.sh tailscale.com/cmd/tailscale \
+ RUN sh build_dist.sh tailscale.com/cmd/tailscaled
-   && sh build_dist.sh tailscale.com/cmd/tailscaled \
-   && cp tailscale /usr/local/bin/ \
+ RUN cp tailscale /usr/local/bin/
-   && cp tailscaled /usr/local/bin/
+ RUN cp tailscaled /usr/local/bin/
+ ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
+ RUN chmod 644 /usr/local/share/ca-certificates/server.crt
+ RUN update-ca-certificates


@@ -0,0 +1,21 @@
# Builder image
FROM docker.io/golang:1.18.0-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /go/bin/headscale -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN test -e /go/bin/headscale
# Production image
FROM gcr.io/distroless/base-debian11
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp
CMD ["headscale"]


@@ -10,8 +10,6 @@ ifeq ($(filter $(GOOS), openbsd netbsd soloaris plan9), )
else else
endif endif
TAGS = -tags ts2019
# GO_SOURCES = $(wildcard *.go) # GO_SOURCES = $(wildcard *.go)
# PROTO_SOURCES = $(wildcard **/*.proto) # PROTO_SOURCES = $(wildcard **/*.proto)
GO_SOURCES = $(call rwildcard,,*.go) GO_SOURCES = $(call rwildcard,,*.go)
@@ -19,22 +17,27 @@ PROTO_SOURCES = $(call rwildcard,,*.proto)
build: build:
nix build GOOS=$(GOOS) CGO_ENABLED=0 go build -trimpath $(pieflags) -mod=readonly -ldflags "-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$(version)" cmd/headscale/headscale.go
dev: lint test build dev: lint test build
test: test:
gotestsum -- $(TAGS) -short -coverprofile=coverage.out ./... @go test -coverprofile=coverage.out ./...
test_integration: test_integration:
docker run \ go test -failfast -tags integration -timeout 30m -count=1 ./...
-t --rm \
-v ~/.cache/hs-integration-go:/go \ test_integration_cli:
--name headscale-test-suite \ go test -tags integration -v integration_cli_test.go integration_common_test.go
-v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \ test_integration_derp:
golang:1 \ go test -tags integration -v integration_embedded_derp_test.go integration_common_test.go
go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8
coverprofile_func:
go tool cover -func=coverage.out
coverprofile_html:
go tool cover -html=coverage.out
lint: lint:
golangci-lint run --fix --timeout 10m golangci-lint run --fix --timeout 10m
@@ -52,4 +55,11 @@ compress: build
generate: generate:
rm -rf gen rm -rf gen
buf generate proto go run github.com/bufbuild/buf/cmd/buf generate proto
install-protobuf-plugins:
go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc
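A short sketch of how the targets above are typically strung together locally. Tool availability (gotestsum, golangci-lint, the buf plugins, and a Docker daemon for the integration suite) is assumed to come from the development environment described in the README; the commands only invoke targets defined in this Makefile.
# Regenerate gRPC/Protobuf code after changing the proto definitions.
make install-protobuf-plugins
make generate
# Unit tests write coverage.out, which the coverprofile targets read.
make test
make coverprofile_func
# Full integration suite; runs gotestsum inside a golang:1 container,
# so a working Docker daemon is required.
make test_integration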

596
README.md
View File

@@ -1,4 +1,4 @@
![headscale logo](./docs/logo/headscale3_header_stacked_left.png) # headscale
![ci](https://github.com/juanfont/headscale/actions/workflows/test.yml/badge.svg) ![ci](https://github.com/juanfont/headscale/actions/workflows/test.yml/badge.svg)
@@ -32,18 +32,22 @@ organisation.
## Design goal ## Design goal
Headscale aims to implement a self-hosted, open source alternative to the Tailscale `headscale` aims to implement a self-hosted, open source alternative to the Tailscale
control server. control server. `headscale` has a narrower scope and an instance of `headscale`
Headscale's goal is to provide self-hosters and hobbyists with an open-source implements a _single_ Tailnet, which is typically what a single organisation, or
server they can use for their projects and labs. home/personal setup would use.
It implements a narrow scope, a single Tailnet, suitable for personal use or a small
open-source organisation.
## Supporting Headscale `headscale` uses terms that map to Tailscale's control server; consult the
[glossary](./docs/glossary.md) for explanations.
## Support
If you like `headscale` and find it useful, there are sponsorship and donation If you like `headscale` and find it useful, there are sponsorship and donation
buttons available in the repo. buttons available in the repo.
If you would like to sponsor features, bugs or prioritisation, reach out to
one of the maintainers.
## Features ## Features
- Full "base" support of Tailscale's features - Full "base" support of Tailscale's features
@@ -63,47 +67,27 @@ buttons available in the repo.
## Client OS support ## Client OS support
| OS | Supports headscale | | OS | Supports headscale |
| ------- | --------------------------------------------------------- | | ------- | ----------------------------------------------------------------------------------------------------------------- |
| Linux | Yes | | Linux | Yes |
| OpenBSD | Yes | | OpenBSD | Yes |
| FreeBSD | Yes | | FreeBSD | Yes |
| macOS | Yes (see `/apple` on your headscale for more information) | | macOS | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes [docs](./docs/windows-client.md) | | Windows | Yes [docs](./docs/windows-client.md) |
| Android | Yes [docs](./docs/android-client.md) | | Android | [You need to compile the client yourself](https://github.com/juanfont/headscale/issues/58#issuecomment-885255270) |
| iOS | Yes [docs](./docs/iOS-client.md) | | iOS | Not yet |
## Running headscale ## Running headscale
**Please note that we do not support nor encourage the use of reverse proxies Please have a look at the documentation under [`docs/`](docs/).
and containers to run Headscale.**
Please have a look at the [`documentation`](https://headscale.net/).
## Talks
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
- presented by Juan Font Alonso and Kristoffer Dalby
## Disclaimer ## Disclaimer
1. This project is not associated with Tailscale Inc. 1. We have nothing to do with Tailscale, or Tailscale Inc.
2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel. 2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.
## Contributing ## Contributing
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.
This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.
Headscale is open to code contributions for bug fixes without discussion.
If you find mistakes in the documentation, please submit a fix to the documentation.
### Requirements
To contribute to headscale you would need the latest version of [Go](https://golang.org) To contribute to headscale you would need the latest version of [Go](https://golang.org)
and [Buf](https://buf.build) (Protobuf generator). and [Buf](https://buf.build) (Protobuf generator).
@@ -111,6 +95,8 @@ We recommend using [Nix](https://nixos.org/) to set up a development environment.
be done with `nix develop`, which will install the tools and give you a shell. be done with `nix develop`, which will install the tools and give you a shell.
This guarantees that you will have the same dev env as `headscale` maintainers. This guarantees that you will have the same dev env as `headscale` maintainers.
PRs and suggestions are welcome.
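Concretely, the flow hinted at above might look like this (a sketch; it assumes Nix with flakes enabled, since `nix develop` is a flakes subcommand):
# Enter the maintainers' pinned tool environment.
nix develop
# Inside the shell, `dev` chains the lint, test and build targets.
make dev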
### Code style ### Code style
To ensure we have some consistency with a growing number of contributions, To ensure we have some consistency with a growing number of contributions,
@@ -188,6 +174,13 @@ make build
<sub style="font-size:14px"><b>Juan Font</b></sub> <sub style="font-size:14px"><b>Juan Font</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cure> <a href=https://github.com/cure>
<img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/> <img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/>
@@ -202,13 +195,6 @@ make build
<sub style="font-size:14px"><b>Jiang Zhu</b></sub> <sub style="font-size:14px"><b>Jiang Zhu</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tsujamin>
<img src=https://avatars.githubusercontent.com/u/2435619?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Benjamin Roberts/>
<br />
<sub style="font-size:14px"><b>Benjamin Roberts</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/reynico> <a href=https://github.com/reynico>
<img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/> <img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/>
@@ -218,13 +204,6 @@ make build
</td> </td>
</tr> </tr>
<tr> <tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/evenh>
<img src=https://avatars.githubusercontent.com/u/2701536?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Even Holthe/>
<br />
<sub style="font-size:14px"><b>Even Holthe</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/e-zk> <a href=https://github.com/e-zk>
<img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/> <img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/>
@@ -233,7 +212,7 @@ make build
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ImpostorKeanu> <a href=https://github.com/arch4ngel>
<img src=https://avatars.githubusercontent.com/u/11574161?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Justin Angel/> <img src=https://avatars.githubusercontent.com/u/11574161?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Justin Angel/>
<br /> <br />
<sub style="font-size:14px"><b>Justin Angel</b></sub> <sub style="font-size:14px"><b>Justin Angel</b></sub>
@@ -246,13 +225,6 @@ make build
<sub style="font-size:14px"><b>Alessandro (Ale) Segala</b></sub> <sub style="font-size:14px"><b>Alessandro (Ale) Segala</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
<br />
<sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/unreality> <a href=https://github.com/unreality>
<img src=https://avatars.githubusercontent.com/u/352522?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=unreality/> <img src=https://avatars.githubusercontent.com/u/352522?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=unreality/>
@@ -260,8 +232,6 @@ make build
<sub style="font-size:14px"><b>unreality</b></sub> <sub style="font-size:14px"><b>unreality</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr> <a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/> <img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
@@ -270,49 +240,14 @@ make build
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Orhideous> <a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/2265184?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andriy Kushnir/> <img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
<br /> <br />
<sub style="font-size:14px"><b>Andriy Kushnir</b></sub> <sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/GrigoriyMikhalkin>
<img src=https://avatars.githubusercontent.com/u/3637857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=GrigoriyMikhalkin/>
<br />
<sub style="font-size:14px"><b>GrigoriyMikhalkin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/christian-heusel>
<img src=https://avatars.githubusercontent.com/u/26827864?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Christian Heusel/>
<br />
<sub style="font-size:14px"><b>Christian Heusel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mike-lloyd03>
<img src=https://avatars.githubusercontent.com/u/49411532?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mike Lloyd/>
<br />
<sub style="font-size:14px"><b>Mike Lloyd</b></sub>
</a> </a>
</td> </td>
</tr> </tr>
<tr> <tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/iSchluff>
<img src=https://avatars.githubusercontent.com/u/1429641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anton Schubert/>
<br />
<sub style="font-size:14px"><b>Anton Schubert</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Niek> <a href=https://github.com/Niek>
<img src=https://avatars.githubusercontent.com/u/213140?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Niek van der Maas/> <img src=https://avatars.githubusercontent.com/u/213140?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Niek van der Maas/>
@@ -327,13 +262,6 @@ make build
<sub style="font-size:14px"><b>Eugen Biegler</b></sub> <sub style="font-size:14px"><b>Eugen Biegler</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/617a7a>
<img src=https://avatars.githubusercontent.com/u/67651251?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Azz/>
<br />
<sub style="font-size:14px"><b>Azz</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/qbit> <a href=https://github.com/qbit>
<img src=https://avatars.githubusercontent.com/u/68368?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aaron Bieber/> <img src=https://avatars.githubusercontent.com/u/68368?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aaron Bieber/>
@@ -342,26 +270,10 @@ make build
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kazauwa> <a href=https://github.com/iSchluff>
<img src=https://avatars.githubusercontent.com/u/12330159?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Igor Perepilitsyn/> <img src=https://avatars.githubusercontent.com/u/1429641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anton Schubert/>
<br /> <br />
<sub style="font-size:14px"><b>Igor Perepilitsyn</b></sub> <sub style="font-size:14px"><b>Anton Schubert</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Aluxima>
<img src=https://avatars.githubusercontent.com/u/16262531?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Laurent Marchaud/>
<br />
<sub style="font-size:14px"><b>Laurent Marchaud</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -372,17 +284,19 @@ make build
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/OrvilleQ> <a href=https://github.com/GrigoriyMikhalkin>
<img src=https://avatars.githubusercontent.com/u/21377465?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Orville Q. Song/> <img src=https://avatars.githubusercontent.com/u/3637857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=GrigoriyMikhalkin/>
<br /> <br />
<sub style="font-size:14px"><b>Orville Q. Song</b></sub> <sub style="font-size:14px"><b>GrigoriyMikhalkin</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hdhoang> <a href=https://github.com/hdhoang>
<img src=https://avatars.githubusercontent.com/u/12537?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hdhoang/> <img src=https://avatars.githubusercontent.com/u/12537?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Hoàng Đức Hiếu/>
<br /> <br />
<sub style="font-size:14px"><b>hdhoang</b></sub> <sub style="font-size:14px"><b>Hoàng Đức Hiếu</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -392,8 +306,6 @@ make build
<sub style="font-size:14px"><b>bravechamp</b></sub> <sub style="font-size:14px"><b>bravechamp</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/deonthomasgy> <a href=https://github.com/deonthomasgy>
<img src=https://avatars.githubusercontent.com/u/150036?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Deon Thomas/> <img src=https://avatars.githubusercontent.com/u/150036?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Deon Thomas/>
@@ -401,20 +313,6 @@ make build
<sub style="font-size:14px"><b>Deon Thomas</b></sub> <sub style="font-size:14px"><b>Deon Thomas</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/madjam002>
<img src=https://avatars.githubusercontent.com/u/679137?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jamie Greeff/>
<br />
<sub style="font-size:14px"><b>Jamie Greeff</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jonathanspw>
<img src=https://avatars.githubusercontent.com/u/8390543?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jonathan Wright/>
<br />
<sub style="font-size:14px"><b>Jonathan Wright</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ChibangLW> <a href=https://github.com/ChibangLW>
<img src=https://avatars.githubusercontent.com/u/22293464?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ChibangLW/> <img src=https://avatars.githubusercontent.com/u/22293464?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ChibangLW/>
@@ -422,13 +320,6 @@ make build
<sub style="font-size:14px"><b>ChibangLW</b></sub> <sub style="font-size:14px"><b>ChibangLW</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majabojarska>
<img src=https://avatars.githubusercontent.com/u/33836570?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maja Bojarska/>
<br />
<sub style="font-size:14px"><b>Maja Bojarska</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mevansam> <a href=https://github.com/mevansam>
<img src=https://avatars.githubusercontent.com/u/403630?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mevan Samaratunga/> <img src=https://avatars.githubusercontent.com/u/403630?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mevan Samaratunga/>
@@ -436,8 +327,6 @@ make build
<sub style="font-size:14px"><b>Mevan Samaratunga</b></sub> <sub style="font-size:14px"><b>Mevan Samaratunga</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dragetd> <a href=https://github.com/dragetd>
<img src=https://avatars.githubusercontent.com/u/3639577?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael G./> <img src=https://avatars.githubusercontent.com/u/3639577?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael G./>
@@ -445,6 +334,8 @@ make build
<sub style="font-size:14px"><b>Michael G.</b></sub> <sub style="font-size:14px"><b>Michael G.</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ptman> <a href=https://github.com/ptman>
<img src=https://avatars.githubusercontent.com/u/24669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Paul Tötterman/> <img src=https://avatars.githubusercontent.com/u/24669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Paul Tötterman/>
@@ -460,28 +351,12 @@ make build
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/loprima-l> <a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/69201633?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=loprima-l/> <img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br /> <br />
<sub style="font-size:14px"><b>loprima-l</b></sub> <sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kevin1sMe>
<img src=https://avatars.githubusercontent.com/u/6886076?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=kevinlin/>
<br />
<sub style="font-size:14px"><b>kevinlin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/QZAiXH>
<img src=https://avatars.githubusercontent.com/u/23068780?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Snack/>
<br />
<sub style="font-size:14px"><b>Snack</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/artemklevtsov> <a href=https://github.com/artemklevtsov>
<img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/> <img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/>
@@ -496,36 +371,6 @@ make build
<sub style="font-size:14px"><b>Casey Marshall</b></sub> <sub style="font-size:14px"><b>Casey Marshall</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dbevacqua>
<img src=https://avatars.githubusercontent.com/u/6534306?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dbevacqua/>
<br />
<sub style="font-size:14px"><b>dbevacqua</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/joshuataylor>
<img src=https://avatars.githubusercontent.com/u/225131?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Josh Taylor/>
<br />
<sub style="font-size:14px"><b>Josh Taylor</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/CNLHC>
<img src=https://avatars.githubusercontent.com/u/21005146?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=LiuHanCheng/>
<br />
<sub style="font-size:14px"><b>LiuHanCheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/motiejus>
<img src=https://avatars.githubusercontent.com/u/107720?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Motiejus Jakštys/>
<br />
<sub style="font-size:14px"><b>Motiejus Jakštys</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pvinis> <a href=https://github.com/pvinis>
<img src=https://avatars.githubusercontent.com/u/100233?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pavlos Vinieratos/> <img src=https://avatars.githubusercontent.com/u/100233?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pavlos Vinieratos/>
@@ -533,6 +378,8 @@ make build
<sub style="font-size:14px"><b>Pavlos Vinieratos</b></sub> <sub style="font-size:14px"><b>Pavlos Vinieratos</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/SilverBut> <a href=https://github.com/SilverBut>
<img src=https://avatars.githubusercontent.com/u/6560655?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Silver Bullet/> <img src=https://avatars.githubusercontent.com/u/6560655?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Silver Bullet/>
@@ -541,17 +388,10 @@ make build
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/snh> <a href=https://github.com/lachy2849>
<img src=https://avatars.githubusercontent.com/u/2051768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Steven Honson/> <img src=https://avatars.githubusercontent.com/u/98844035?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lachy2849/>
<br /> <br />
<sub style="font-size:14px"><b>Steven Honson</b></sub> <sub style="font-size:14px"><b>lachy2849</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ratsclub>
<img src=https://avatars.githubusercontent.com/u/25647735?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Victor Freire/>
<br />
<sub style="font-size:14px"><b>Victor Freire</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -561,15 +401,6 @@ make build
<sub style="font-size:14px"><b>thomas</b></sub> <sub style="font-size:14px"><b>thomas</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/linsomniac>
<img src=https://avatars.githubusercontent.com/u/466380?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Sean Reifschneider/>
<br />
<sub style="font-size:14px"><b>Sean Reifschneider</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aberoham> <a href=https://github.com/aberoham>
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/> <img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
@@ -577,27 +408,6 @@ make build
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub> <sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/iFargle>
<img src=https://avatars.githubusercontent.com/u/124551390?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Albert Copeland/>
<br />
<sub style="font-size:14px"><b>Albert Copeland</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/puzpuzpuz>
<img src=https://avatars.githubusercontent.com/u/37772591?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andrei Pechkurov/>
<br />
<sub style="font-size:14px"><b>Andrei Pechkurov</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/theryecatcher>
<img src=https://avatars.githubusercontent.com/u/16442416?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anoop Sundaresh/>
<br />
<sub style="font-size:14px"><b>Anoop Sundaresh</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/apognu> <a href=https://github.com/apognu>
<img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/> <img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/>
@@ -605,15 +415,6 @@ make build
<sub style="font-size:14px"><b>Antoine POPINEAU</b></sub> <sub style="font-size:14px"><b>Antoine POPINEAU</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tony1661>
<img src=https://avatars.githubusercontent.com/u/5287266?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antonio Fernandez/>
<br />
<sub style="font-size:14px"><b>Antonio Fernandez</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aofei> <a href=https://github.com/aofei>
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/> <img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
@@ -621,13 +422,8 @@ make build
<sub style="font-size:14px"><b>Aofei Sheng</b></sub> <sub style="font-size:14px"><b>Aofei Sheng</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> </tr>
<a href=https://github.com/arnarg> <tr>
<img src=https://avatars.githubusercontent.com/u/1291396?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arnar/>
<br />
<sub style="font-size:14px"><b>Arnar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/awoimbee> <a href=https://github.com/awoimbee>
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/> <img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
@@ -635,13 +431,6 @@ make build
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub> <sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/avirut>
<img src=https://avatars.githubusercontent.com/u/27095602?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Avirut Mehta/>
<br />
<sub style="font-size:14px"><b>Avirut Mehta</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stensonb> <a href=https://github.com/stensonb>
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/> <img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
@@ -656,20 +445,11 @@ make build
<sub style="font-size:14px"><b> Carson Yang</b></sub> <sub style="font-size:14px"><b> Carson Yang</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kundel> <a href=https://github.com/kundel>
<img src=https://avatars.githubusercontent.com/u/10158899?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Darrell Kundel/> <img src=https://avatars.githubusercontent.com/u/10158899?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=kundel/>
<br /> <br />
<sub style="font-size:14px"><b>Darrell Kundel</b></sub> <sub style="font-size:14px"><b>kundel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fatih-acar>
<img src=https://avatars.githubusercontent.com/u/15028881?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=fatih-acar/>
<br />
<sub style="font-size:14px"><b>fatih-acar</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -686,13 +466,8 @@ make build
<sub style="font-size:14px"><b>Felix Yan</b></sub> <sub style="font-size:14px"><b>Felix Yan</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> </tr>
<a href=https://github.com/gabe565> <tr>
<img src=https://avatars.githubusercontent.com/u/7717888?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Gabe Cook/>
<br />
<sub style="font-size:14px"><b>Gabe Cook</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JJGadgets> <a href=https://github.com/JJGadgets>
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/> <img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
@@ -700,13 +475,11 @@ make build
<sub style="font-size:14px"><b>JJGadgets</b></sub> <sub style="font-size:14px"><b>JJGadgets</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hrtkpf> <a href=https://github.com/madjam002>
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/> <img src=https://avatars.githubusercontent.com/u/679137?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jamie Greeff/>
<br /> <br />
<sub style="font-size:14px"><b>hrtkpf</b></sub> <sub style="font-size:14px"><b>Jamie Greeff</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -716,80 +489,6 @@ make build
<sub style="font-size:14px"><b>Jim Tittsler</b></sub> <sub style="font-size:14px"><b>Jim Tittsler</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jsiebens>
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
<br />
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/johnae>
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
<br />
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ShadowJonathan>
<img src=https://avatars.githubusercontent.com/u/22740616?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jonathan de Jong/>
<br />
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JulienFloris>
<img src=https://avatars.githubusercontent.com/u/20380255?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Julien Zweverink/>
<br />
<sub style="font-size:14px"><b>Julien Zweverink</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/win-t>
<img src=https://avatars.githubusercontent.com/u/1589120?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kurnia D Win/>
<br />
<sub style="font-size:14px"><b>Kurnia D Win</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/foxtrot>
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
<br />
<sub style="font-size:14px"><b>Marc</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magf>
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
<br />
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mhameed>
<img src=https://avatars.githubusercontent.com/u/447017?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mesar Hameed/>
<br />
<sub style="font-size:14px"><b>Mesar Hameed</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mikejsavage>
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
<br />
<sub style="font-size:14px"><b>Michael Savage</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pkrivanec>
<img src=https://avatars.githubusercontent.com/u/25530641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Philipp Krivanec/>
<br />
<sub style="font-size:14px"><b>Philipp Krivanec</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/piec> <a href=https://github.com/piec>
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/> <img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
@@ -797,20 +496,6 @@ make build
<sub style="font-size:14px"><b>Pierre Carru</b></sub> <sub style="font-size:14px"><b>Pierre Carru</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Donran>
<img src=https://avatars.githubusercontent.com/u/4838348?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pontus N/>
<br />
<sub style="font-size:14px"><b>Pontus N</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nnsee>
<img src=https://avatars.githubusercontent.com/u/36747857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Rasmus Moorats/>
<br />
<sub style="font-size:14px"><b>Rasmus Moorats</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/rcursaru> <a href=https://github.com/rcursaru>
<img src=https://avatars.githubusercontent.com/u/16259641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=rcursaru/> <img src=https://avatars.githubusercontent.com/u/16259641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=rcursaru/>
@@ -820,25 +505,18 @@ make build
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/renovate-bot> <a href=https://github.com/renovate-bot>
<img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mend Renovate/> <img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=WhiteSource Renovate/>
<br /> <br />
<sub style="font-size:14px"><b>Mend Renovate</b></sub> <sub style="font-size:14px"><b>WhiteSource Renovate</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ryanfowler>
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
<br />
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a> </a>
</td> </td>
</tr> </tr>
<tr> <tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/muzy> <a href=https://github.com/ryanfowler>
<img src=https://avatars.githubusercontent.com/u/321723?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Sebastian Muszytowski/> <img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
<br /> <br />
<sub style="font-size:14px"><b>Sebastian Muszytowski</b></sub> <sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -848,27 +526,6 @@ make build
<sub style="font-size:14px"><b>Shaanan Cohney</b></sub> <sub style="font-size:14px"><b>Shaanan Cohney</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/6ixfalls>
<img src=https://avatars.githubusercontent.com/u/23470032?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Six/>
<br />
<sub style="font-size:14px"><b>Six</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stefanvanburen>
<img src=https://avatars.githubusercontent.com/u/622527?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan VanBuren/>
<br />
<sub style="font-size:14px"><b>Stefan VanBuren</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/sophware>
<img src=https://avatars.githubusercontent.com/u/41669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sophware/>
<br />
<sub style="font-size:14px"><b>sophware</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/m-tanner-dev0> <a href=https://github.com/m-tanner-dev0>
<img src=https://avatars.githubusercontent.com/u/97977342?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tanner/> <img src=https://avatars.githubusercontent.com/u/97977342?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tanner/>
@@ -876,8 +533,6 @@ make build
<sub style="font-size:14px"><b>Tanner</b></sub> <sub style="font-size:14px"><b>Tanner</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Teteros> <a href=https://github.com/Teteros>
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/> <img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
@@ -899,13 +554,8 @@ make build
<sub style="font-size:14px"><b>Tianon Gravi</b></sub> <sub style="font-size:14px"><b>Tianon Gravi</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> </tr>
<a href=https://github.com/thetillhoff> <tr>
<img src=https://avatars.githubusercontent.com/u/25052289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Till Hoffmann/>
<br />
<sub style="font-size:14px"><b>Till Hoffmann</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/woudsma> <a href=https://github.com/woudsma>
<img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/> <img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/>
@@ -920,22 +570,6 @@ make build
<sub style="font-size:14px"><b>Yang Bin</b></sub> <sub style="font-size:14px"><b>Yang Bin</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gozssky>
<img src=https://avatars.githubusercontent.com/u/17199941?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yujie Xia/>
<br />
<sub style="font-size:14px"><b>Yujie Xia</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/newellz2>
<img src=https://avatars.githubusercontent.com/u/52436542?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zachary Newell/>
<br />
<sub style="font-size:14px"><b>Zachary Newell</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zekker6> <a href=https://github.com/zekker6>
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/> <img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
@@ -943,13 +577,6 @@ make build
<sub style="font-size:14px"><b>Zakhar Bessarab</b></sub> <sub style="font-size:14px"><b>Zakhar Bessarab</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zhzy0077>
<img src=https://avatars.githubusercontent.com/u/8717471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zhiyuan Zheng/>
<br />
<sub style="font-size:14px"><b>Zhiyuan Zheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Bpazy> <a href=https://github.com/Bpazy>
<img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ziyuan Han/> <img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ziyuan Han/>
@@ -957,15 +584,6 @@ make build
<sub style="font-size:14px"><b>Ziyuan Han</b></sub> <sub style="font-size:14px"><b>Ziyuan Han</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/caelansar>
<img src=https://avatars.githubusercontent.com/u/31852257?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=caelansar/>
<br />
<sub style="font-size:14px"><b>caelansar</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/derelm> <a href=https://github.com/derelm>
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/> <img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
@@ -973,13 +591,6 @@ make build
<sub style="font-size:14px"><b>derelm</b></sub> <sub style="font-size:14px"><b>derelm</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dnaq>
<img src=https://avatars.githubusercontent.com/u/1299717?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dnaq/>
<br />
<sub style="font-size:14px"><b>dnaq</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nning> <a href=https://github.com/nning>
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/> <img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
@@ -987,6 +598,8 @@ make build
<sub style="font-size:14px"><b>henning mueller</b></sub> <sub style="font-size:14px"><b>henning mueller</b></sub>
</a> </a>
</td> </td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ignoramous> <a href=https://github.com/ignoramous>
<img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/> <img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/>
@@ -994,66 +607,20 @@ make build
<sub style="font-size:14px"><b>ignoramous</b></sub> <sub style="font-size:14px"><b>ignoramous</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimyag>
<img src=https://avatars.githubusercontent.com/u/69233189?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=jimyag/>
<br />
<sub style="font-size:14px"><b>jimyag</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magichuihui>
<img src=https://avatars.githubusercontent.com/u/10866198?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=suhelen/>
<br />
<sub style="font-size:14px"><b>suhelen</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lion24> <a href=https://github.com/lion24>
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/> <img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lion24/>
<br /> <br />
<sub style="font-size:14px"><b>sharkonet</b></sub> <sub style="font-size:14px"><b>lion24</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ma6174>
<img src=https://avatars.githubusercontent.com/u/1449133?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ma6174/>
<br />
<sub style="font-size:14px"><b>ma6174</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/manju-rn>
<img src=https://avatars.githubusercontent.com/u/26291847?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=manju-rn/>
<br />
<sub style="font-size:14px"><b>manju-rn</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nicholas-yap>
<img src=https://avatars.githubusercontent.com/u/38109533?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=nicholas-yap/>
<br />
<sub style="font-size:14px"><b>nicholas-yap</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pernila> <a href=https://github.com/pernila>
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tommi Pernila/> <img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
<br /> <br />
<sub style="font-size:14px"><b>Tommi Pernila</b></sub> <sub style="font-size:14px"><b>pernila</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/phpmalik>
<img src=https://avatars.githubusercontent.com/u/26834645?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=phpmalik/>
<br />
<sub style="font-size:14px"><b>phpmalik</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0"> <td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Wakeful-Cloud> <a href=https://github.com/Wakeful-Cloud>
<img src=https://avatars.githubusercontent.com/u/38930607?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Wakeful-Cloud/> <img src=https://avatars.githubusercontent.com/u/38930607?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Wakeful-Cloud/>
@@ -1068,12 +635,5 @@ make build
<sub style="font-size:14px"><b>zy</b></sub> <sub style="font-size:14px"><b>zy</b></sub>
</a> </a>
</td> </td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/atorregrosa-smd>
<img src=https://avatars.githubusercontent.com/u/78434679?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Àlex Torregrosa/>
<br />
<sub style="font-size:14px"><b>Àlex Torregrosa</b></sub>
</a>
</td>
</tr> </tr>
</table> </table>

563
acls.go Normal file
View File

@@ -0,0 +1,563 @@
package headscale
import (
"encoding/json"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/rs/zerolog/log"
"github.com/tailscale/hujson"
"gopkg.in/yaml.v3"
"inet.af/netaddr"
"tailscale.com/tailcfg"
)
const (
errEmptyPolicy = Error("empty policy")
errInvalidAction = Error("invalid action")
errInvalidGroup = Error("invalid group")
errInvalidTag = Error("invalid tag")
errInvalidPortFormat = Error("invalid port format")
errWildcardIsNeeded = Error("wildcard as port is required for the protocol")
)
const (
Base8 = 8
Base10 = 10
BitSize16 = 16
BitSize32 = 32
BitSize64 = 64
portRangeBegin = 0
portRangeEnd = 65535
expectedTokenItems = 2
)
// For some reason golang.org/x/net/internal/iana is an internal package.
const (
protocolICMP = 1 // Internet Control Message
protocolIGMP = 2 // Internet Group Management
protocolIPv4 = 4 // IPv4 encapsulation
protocolTCP = 6 // Transmission Control
protocolEGP = 8 // Exterior Gateway Protocol
protocolIGP = 9 // any private interior gateway (used by Cisco for their IGRP)
protocolUDP = 17 // User Datagram
protocolGRE = 47 // Generic Routing Encapsulation
protocolESP = 50 // Encap Security Payload
protocolAH = 51 // Authentication Header
protocolIPv6ICMP = 58 // ICMP for IPv6
protocolSCTP = 132 // Stream Control Transmission Protocol
ProtocolFC = 133 // Fibre Channel
)
// LoadACLPolicy loads the ACL policy from the specified path and generates the ACL rules.
func (h *Headscale) LoadACLPolicy(path string) error {
log.Debug().
Str("func", "LoadACLPolicy").
Str("path", path).
Msg("Loading ACL policy from path")
policyFile, err := os.Open(path)
if err != nil {
return err
}
defer policyFile.Close()
var policy ACLPolicy
policyBytes, err := io.ReadAll(policyFile)
if err != nil {
return err
}
switch filepath.Ext(path) {
case ".yml", ".yaml":
log.Debug().
Str("path", path).
Bytes("file", policyBytes).
Msg("Loading ACLs from YAML")
err := yaml.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
log.Trace().
Interface("policy", policy).
Msg("Loaded policy from YAML")
default:
ast, err := hujson.Parse(policyBytes)
if err != nil {
return err
}
ast.Standardize()
policyBytes = ast.Pack()
err = json.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
}
if policy.IsZero() {
return errEmptyPolicy
}
h.aclPolicy = &policy
return h.UpdateACLRules()
}
func (h *Headscale) UpdateACLRules() error {
rules, err := h.generateACLRules()
if err != nil {
return err
}
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
h.aclRules = rules
return nil
}
func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
rules := []tailcfg.FilterRule{}
if h.aclPolicy == nil {
return nil, errEmptyPolicy
}
machines, err := h.ListMachines()
if err != nil {
return nil, err
}
for index, acl := range h.aclPolicy.ACLs {
if acl.Action != "accept" {
return nil, errInvalidAction
}
srcIPs := []string{}
for innerIndex, src := range acl.Sources {
srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, src)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
return nil, err
}
srcIPs = append(srcIPs, srcs...)
}
protocols, needsWildcard, err := parseProtocol(acl.Protocol)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d. protocol unknown %s", index, acl.Protocol)
return nil, err
}
destPorts := []tailcfg.NetPortRange{}
for innerIndex, dest := range acl.Destinations {
dests, err := h.generateACLPolicyDest(machines, *h.aclPolicy, dest, needsWildcard)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Destination %d", index, innerIndex)
return nil, err
}
destPorts = append(destPorts, dests...)
}
rules = append(rules, tailcfg.FilterRule{
SrcIPs: srcIPs,
DstPorts: destPorts,
IPProto: protocols,
})
}
return rules, nil
}
func (h *Headscale) generateACLPolicySrcIP(
machines []Machine,
aclPolicy ACLPolicy,
src string,
) ([]string, error) {
return expandAlias(machines, aclPolicy, src, h.cfg.OIDC.StripEmaildomain)
}
func (h *Headscale) generateACLPolicyDest(
machines []Machine,
aclPolicy ACLPolicy,
dest string,
needsWildcard bool,
) ([]tailcfg.NetPortRange, error) {
tokens := strings.Split(dest, ":")
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
return nil, errInvalidPortFormat
}
var alias string
// The destination field can contain entries like:
// git-server:*
// 192.168.1.0/24:22
// tag:montreal-webserver:80,443
// tag:api-server:443
// example-host-1:*
if len(tokens) == expectedTokenItems {
alias = tokens[0]
} else {
alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
}
expanded, err := expandAlias(
machines,
aclPolicy,
alias,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
return nil, err
}
ports, err := expandPorts(tokens[len(tokens)-1], needsWildcard)
if err != nil {
return nil, err
}
dests := []tailcfg.NetPortRange{}
for _, d := range expanded {
for _, p := range *ports {
pr := tailcfg.NetPortRange{
IP: d,
Ports: p,
}
dests = append(dests, pr)
}
}
return dests, nil
}
// parseProtocol reads the proto field of the ACL and generates a list of
// protocols that will be allowed, following the IANA IP protocol number
// https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
//
// If the ACL proto field is empty, it allows ICMPv4, ICMPv6, TCP, and UDP,
// as per Tailscale behaviour (see tailcfg.FilterRule).
//
// Also returns a boolean indicating if the protocol
// requires all the destinations to use wildcard as port number (only TCP,
// UDP and SCTP support specifying ports).
func parseProtocol(protocol string) ([]int, bool, error) {
switch protocol {
case "":
return []int{protocolICMP, protocolIPv6ICMP, protocolTCP, protocolUDP}, false, nil
case "igmp":
return []int{protocolIGMP}, true, nil
case "ipv4", "ip-in-ip":
return []int{protocolIPv4}, true, nil
case "tcp":
return []int{protocolTCP}, false, nil
case "egp":
return []int{protocolEGP}, true, nil
case "igp":
return []int{protocolIGP}, true, nil
case "udp":
return []int{protocolUDP}, false, nil
case "gre":
return []int{protocolGRE}, true, nil
case "esp":
return []int{protocolESP}, true, nil
case "ah":
return []int{protocolAH}, true, nil
case "sctp":
return []int{protocolSCTP}, false, nil
case "icmp":
return []int{protocolICMP, protocolIPv6ICMP}, true, nil
default:
protocolNumber, err := strconv.Atoi(protocol)
if err != nil {
return nil, false, err
}
needsWildcard := protocolNumber != protocolTCP && protocolNumber != protocolUDP && protocolNumber != protocolSCTP
return []int{protocolNumber}, needsWildcard, nil
}
}
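// Illustrative examples only (not part of the original file): how a few proto
// values map through the switch above.
//
//	parseProtocol("")     => []int{1, 58, 6, 17}, needsWildcard=false  (ICMP, ICMPv6, TCP, UDP)
//	parseProtocol("tcp")  => []int{6},            needsWildcard=false
//	parseProtocol("icmp") => []int{1, 58},        needsWildcard=true   (ports must be "*")
//	parseProtocol("47")   => []int{47},           needsWildcard=true   (numeric, not TCP/UDP/SCTP)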
// expandAlias takes as input either:
// - a namespace
// - a group
// - a tag
// and transforms it into a list of IP addresses.
func expandAlias(
machines []Machine,
aclPolicy ACLPolicy,
alias string,
stripEmailDomain bool,
) ([]string, error) {
ips := []string{}
if alias == "*" {
return []string{"*"}, nil
}
log.Debug().
Str("alias", alias).
Msg("Expanding")
if strings.HasPrefix(alias, "group:") {
namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
for _, n := range namespaces {
nodes := filterMachinesByNamespace(machines, n)
for _, node := range nodes {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
}
return ips, nil
}
if strings.HasPrefix(alias, "tag:") {
// check for forced tags
for _, machine := range machines {
if contains(machine.ForcedTags, alias) {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
}
// find tag owners
owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
if err != nil {
if errors.Is(err, errInvalidTag) {
if len(ips) == 0 {
return ips, fmt.Errorf(
"%w. %v isn't owned by a TagOwner and no forced tags are defined",
errInvalidTag,
alias,
)
}
return ips, nil
} else {
return ips, err
}
}
// filter out machines per tag owner
for _, namespace := range owners {
machines := filterMachinesByNamespace(machines, namespace)
for _, machine := range machines {
hi := machine.GetHostInfo()
if contains(hi.RequestTags, alias) {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
}
}
return ips, nil
}
// if alias is a namespace
nodes := filterMachinesByNamespace(machines, alias)
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias)
for _, n := range nodes {
ips = append(ips, n.IPAddresses.ToStringSlice()...)
}
if len(ips) > 0 {
return ips, nil
}
// if alias is a host
if h, ok := aclPolicy.Hosts[alias]; ok {
return []string{h.String()}, nil
}
// if alias is an IP
ip, err := netaddr.ParseIP(alias)
if err == nil {
return []string{ip.String()}, nil
}
// if alias is a CIDR
cidr, err := netaddr.ParseIPPrefix(alias)
if err == nil {
return []string{cidr.String()}, nil
}
log.Warn().Msgf("No IPs found with the alias %v", alias)
return ips, nil
}
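// Illustrative examples only (the names are hypothetical): the alias forms
// expandAlias accepts and what they expand to.
//
//	"*"               => ["*"]
//	"group:admins"    => the IPs of all machines in the group's namespaces
//	"tag:webserver"   => the IPs of machines with that tag (forced, or requested and owned)
//	"alice"           => the IPs of alice's machines that are not correctly tagged
//	"internal"        => the prefix defined under hosts["internal"], if any
//	"100.64.0.1"      => ["100.64.0.1"]
//	"192.168.1.0/24"  => ["192.168.1.0/24"]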
// excludeCorrectlyTaggedNodes removes from the list of input nodes the ones
// that are correctly tagged, since they should not be listed as being in the namespace.
// This function assumes that the input only contains nodes from a single namespace.
func excludeCorrectlyTaggedNodes(
aclPolicy ACLPolicy,
nodes []Machine,
namespace string,
) []Machine {
out := []Machine{}
tags := []string{}
for tag, ns := range aclPolicy.TagOwners {
if contains(ns, namespace) {
tags = append(tags, tag)
}
}
// for each machine if tag is in tags list, don't append it.
for _, machine := range nodes {
hi := machine.GetHostInfo()
found := false
for _, t := range hi.RequestTags {
if contains(tags, t) {
found = true
break
}
}
if len(machine.ForcedTags) > 0 {
found = true
}
if !found {
out = append(out, machine)
}
}
return out
}
func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
if portsStr == "*" {
return &[]tailcfg.PortRange{
{First: portRangeBegin, Last: portRangeEnd},
}, nil
}
if needsWildcard {
return nil, errWildcardIsNeeded
}
ports := []tailcfg.PortRange{}
for _, portStr := range strings.Split(portsStr, ",") {
rang := strings.Split(portStr, "-")
switch len(rang) {
case 1:
port, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(port),
Last: uint16(port),
})
case expectedTokenItems:
start, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
last, err := strconv.ParseUint(rang[1], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(start),
Last: uint16(last),
})
default:
return nil, errInvalidPortFormat
}
}
return &ports, nil
}
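// Illustrative examples only: how expandPorts reads the port part of a
// destination, given the rules above.
//
//	expandPorts("*", false)         => [{First: 0, Last: 65535}]
//	expandPorts("22,80-443", false) => [{22, 22}, {80, 443}]
//	expandPorts("22", true)         => errWildcardIsNeeded (this protocol only accepts "*")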
func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
out := []Machine{}
for _, machine := range machines {
if machine.Namespace.Name == namespace {
out = append(out, machine)
}
}
return out
}
// expandTagOwners will return a list of namespaces. An owner can be either a namespace or a group;
// a group cannot be composed of groups.
func expandTagOwners(
aclPolicy ACLPolicy,
tag string,
stripEmailDomain bool,
) ([]string, error) {
var owners []string
ows, ok := aclPolicy.TagOwners[tag]
if !ok {
return []string{}, fmt.Errorf(
"%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
errInvalidTag,
tag,
)
}
for _, owner := range ows {
if strings.HasPrefix(owner, "group:") {
gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
if err != nil {
return []string{}, err
}
owners = append(owners, gs...)
} else {
owners = append(owners, owner)
}
}
return owners, nil
}
// expandGroup will return the list of namespaces inside the group
// after some validation.
func expandGroup(
aclPolicy ACLPolicy,
group string,
stripEmailDomain bool,
) ([]string, error) {
outGroups := []string{}
aclGroups, ok := aclPolicy.Groups[group]
if !ok {
return []string{}, fmt.Errorf(
"group %v isn't registered. %w",
group,
errInvalidGroup,
)
}
for _, group := range aclGroups {
if strings.HasPrefix(group, "group:") {
return []string{}, fmt.Errorf(
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
errInvalidGroup,
)
}
grp, err := NormalizeToFQDNRules(group, stripEmailDomain)
if err != nil {
return []string{}, fmt.Errorf(
"failed to normalize group %q, err: %w",
group,
errInvalidGroup,
)
}
outGroups = append(outGroups, grp)
}
return outGroups, nil
}
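For orientation, a minimal HuJSON policy of the shape LoadACLPolicy parses is sketched below; the group, tag and host names are invented for illustration and are not taken from the repository.
// Hypothetical example policy; only the structure matters.
const examplePolicy = `{
	"groups":    { "group:admins": ["alice", "bob"] },
	"hosts":     { "internal": "100.64.0.0/10" },
	"tagOwners": { "tag:webserver": ["group:admins"] },
	"acls": [
		// admins may reach the web ports on tagged webservers
		{ "action": "accept", "proto": "tcp", "src": ["group:admins"], "dst": ["tag:webserver:80,443"] },
		// everything inside the internal range may talk to anything
		{ "action": "accept", "src": ["internal"], "dst": ["*:*"] },
	],
}`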

1382
acls_test.go Normal file

File diff suppressed because it is too large

102
acls_types.go Normal file
View File

@@ -0,0 +1,102 @@
package headscale
import (
"encoding/json"
"strings"
"github.com/tailscale/hujson"
"gopkg.in/yaml.v3"
"inet.af/netaddr"
)
// ACLPolicy represents a Tailscale ACL Policy.
type ACLPolicy struct {
Groups Groups `json:"groups" yaml:"groups"`
Hosts Hosts `json:"hosts" yaml:"hosts"`
TagOwners TagOwners `json:"tagOwners" yaml:"tagOwners"`
ACLs []ACL `json:"acls" yaml:"acls"`
Tests []ACLTest `json:"tests" yaml:"tests"`
}
// ACL is a basic rule for the ACL Policy.
type ACL struct {
Action string `json:"action" yaml:"action"`
Protocol string `json:"proto" yaml:"proto"`
Sources []string `json:"src" yaml:"src"`
Destinations []string `json:"dst" yaml:"dst"`
}
// Groups references a series of aliases in the ACL rules.
type Groups map[string][]string
// Hosts are aliases for IP addresses or subnets.
type Hosts map[string]netaddr.IPPrefix
// TagOwners specify what users (namespaces) are allowed to use certain tags.
type TagOwners map[string][]string
// ACLTest is not implemented, but should be used to check whether a certain rule is allowed.
type ACLTest struct {
Source string `json:"src" yaml:"src"`
Accept []string `json:"accept" yaml:"accept"`
Deny []string `json:"deny,omitempty" yaml:"deny,omitempty"`
}
// UnmarshalJSON parses the Hosts map directly into netaddr objects.
func (hosts *Hosts) UnmarshalJSON(data []byte) error {
newHosts := Hosts{}
hostIPPrefixMap := make(map[string]string)
ast, err := hujson.Parse(data)
if err != nil {
return err
}
ast.Standardize()
data = ast.Pack()
err = json.Unmarshal(data, &hostIPPrefixMap)
if err != nil {
return err
}
for host, prefixStr := range hostIPPrefixMap {
if !strings.Contains(prefixStr, "/") {
prefixStr += "/32"
}
prefix, err := netaddr.ParseIPPrefix(prefixStr)
if err != nil {
return err
}
newHosts[host] = prefix
}
*hosts = newHosts
return nil
}
// UnmarshalYAML parses the Hosts map directly into netaddr objects.
func (hosts *Hosts) UnmarshalYAML(data []byte) error {
newHosts := Hosts{}
hostIPPrefixMap := make(map[string]string)
err := yaml.Unmarshal(data, &hostIPPrefixMap)
if err != nil {
return err
}
for host, prefixStr := range hostIPPrefixMap {
prefix, err := netaddr.ParseIPPrefix(prefixStr)
if err != nil {
return err
}
newHosts[host] = prefix
}
*hosts = newHosts
return nil
}
// IsZero is perhaps a bit naive here.
func (policy ACLPolicy) IsZero() bool {
if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
return true
}
return false
}
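As a small sketch of the JSON path above (assuming it sits next to these types in package headscale): bare IPs are widened to /32 prefixes, while explicit CIDRs are kept as-is.
func exampleHostsUnmarshal() {
	var hosts Hosts
	raw := []byte(`{"example-host-1": "100.64.0.1", "internal": "100.64.0.0/10"}`)
	if err := json.Unmarshal(raw, &hosts); err != nil {
		panic(err)
	}
	// hosts["example-host-1"] == 100.64.0.1/32
	// hosts["internal"]       == 100.64.0.0/10
}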

865
api.go Normal file
View File

@@ -0,0 +1,865 @@
package headscale
import (
"bytes"
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"html/template"
"io"
"net/http"
"strings"
"time"
"github.com/gorilla/mux"
"github.com/klauspost/compress/zstd"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
const (
reservedResponseHeaderSize = 4
RegisterMethodAuthKey = "authkey"
RegisterMethodOIDC = "oidc"
RegisterMethodCLI = "cli"
ErrRegisterMethodCLIDoesNotSupportExpire = Error(
"machines registered with CLI does not support expire",
)
)
func (h *Headscale) HealthHandler(
writer http.ResponseWriter,
req *http.Request,
) {
respond := func(err error) {
writer.Header().Set("Content-Type", "application/health+json; charset=utf-8")
res := struct {
Status string `json:"status"`
}{
Status: "pass",
}
if err != nil {
writer.WriteHeader(http.StatusInternalServerError)
log.Error().Caller().Err(err).Msg("health check failed")
res.Status = "fail"
}
buf, err := json.Marshal(res)
if err != nil {
log.Error().Caller().Err(err).Msg("marshal failed")
}
_, err = writer.Write(buf)
if err != nil {
log.Error().Caller().Err(err).Msg("write failed")
}
}
if err := h.pingDB(); err != nil {
respond(err)
return
}
respond(nil)
}
// KeyHandler provides the Headscale pub key
// Listens in /key.
func (h *Headscale) KeyHandler(
writer http.ResponseWriter,
req *http.Request,
) {
writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err := writer.Write([]byte(MachinePublicKeyStripPrefix(h.privateKey.Public())))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
type registerWebAPITemplateConfig struct {
Key string
}
var registerWebAPITemplate = template.Must(
template.New("registerweb").Parse(`
<html>
<head>
<title>Registration - Headscale</title>
</head>
<body>
<h1>headscale</h1>
<h2>Machine registration</h2>
<p>
Run the command below in the headscale server to add this machine to your network:
</p>
<pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
</body>
</html>
`))
// RegisterWebAPI shows a simple message in the browser to point to the CLI
// Listens in /register.
func (h *Headscale) RegisterWebAPI(
writer http.ResponseWriter,
req *http.Request,
) {
machineKeyStr := req.URL.Query().Get("key")
if machineKeyStr == "" {
writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
writer.WriteHeader(http.StatusBadRequest)
_, err := writer.Write([]byte("Wrong params"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
return
}
var content bytes.Buffer
if err := registerWebAPITemplate.Execute(&content, registerWebAPITemplateConfig{
Key: machineKeyStr,
}); err != nil {
log.Error().
Str("func", "RegisterWebAPI").
Err(err).
Msg("Could not render register web API template")
writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
writer.WriteHeader(http.StatusInternalServerError)
_, err = writer.Write([]byte("Could not render register web API template"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
return
}
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err := writer.Write(content.Bytes())
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
// RegistrationHandler handles the actual registration process of a machine
// Endpoint /machine/:mkey.
func (h *Headscale) RegistrationHandler(
writer http.ResponseWriter,
req *http.Request,
) {
vars := mux.Vars(req)
machineKeyStr, ok := vars["mkey"]
if !ok || machineKeyStr == "" {
log.Error().
Str("handler", "RegistrationHandler").
Msg("No machine ID in request")
http.Error(writer, "No machine ID in request", http.StatusBadRequest)
return
}
body, _ := io.ReadAll(req.Body)
var machineKey key.MachinePublic
err := machineKey.UnmarshalText([]byte(MachinePublicKeyEnsurePrefix(machineKeyStr)))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot parse machine key")
machineRegistrations.WithLabelValues("unknown", "web", "error", "unknown").Inc()
http.Error(writer, "Cannot parse machine key", http.StatusBadRequest)
return
}
registerRequest := tailcfg.RegisterRequest{}
err = decode(body, &registerRequest, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot decode message")
machineRegistrations.WithLabelValues("unknown", "web", "error", "unknown").Inc()
http.Error(writer, "Cannot decode message", http.StatusBadRequest)
return
}
now := time.Now().UTC()
machine, err := h.GetMachineByMachineKey(machineKey)
if errors.Is(err, gorm.ErrRecordNotFound) {
log.Info().Str("machine", registerRequest.Hostinfo.Hostname).Msg("New machine")
machineKeyStr := MachinePublicKeyStripPrefix(machineKey)
// If the machine has AuthKey set, handle registration via PreAuthKeys
if registerRequest.Auth.AuthKey != "" {
h.handleAuthKey(writer, req, machineKey, registerRequest)
return
}
givenName, err := h.GenerateGivenName(registerRequest.Hostinfo.Hostname)
if err != nil {
log.Error().
Caller().
Str("func", "RegistrationHandler").
Str("hostinfo.name", registerRequest.Hostinfo.Hostname).
Err(err)
return
}
// The machine did not have a key to authenticate, which means
// that we rely on a method that calls back somehow (OpenID or CLI).
// We create the machine and then keep it around until a callback
// happens
newMachine := Machine{
MachineKey: machineKeyStr,
Hostname: registerRequest.Hostinfo.Hostname,
GivenName: givenName,
NodeKey: NodePublicKeyStripPrefix(registerRequest.NodeKey),
LastSeen: &now,
Expiry: &time.Time{},
}
if !registerRequest.Expiry.IsZero() {
log.Trace().
Caller().
Str("machine", registerRequest.Hostinfo.Hostname).
Time("expiry", registerRequest.Expiry).
Msg("Non-zero expiry time requested")
newMachine.Expiry = &registerRequest.Expiry
}
h.registrationCache.Set(
machineKeyStr,
newMachine,
registerCacheExpiration,
)
h.handleMachineRegistrationNew(writer, req, machineKey, registerRequest)
return
}
// The machine is already registered, so we need to pass through reauth or key update.
if machine != nil {
// If the NodeKey stored in headscale is the same as the key presented in a registration
// request, then we have a node that is either:
// - Trying to log out (sending a expiry in the past)
// - A valid, registered machine, looking for the node map
// - Expired machine wanting to reauthenticate
if machine.NodeKey == NodePublicKeyStripPrefix(registerRequest.NodeKey) {
// The client sends an Expiry in the past if the client is requesting to expire the key (aka logout)
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L648
if !registerRequest.Expiry.IsZero() && registerRequest.Expiry.UTC().Before(now) {
h.handleMachineLogOut(writer, req, machineKey, *machine)
return
}
// If the machine is not expired and is registered, we have already accepted it;
// let it proceed with a valid registration.
if !machine.isExpired() {
h.handleMachineValidRegistration(writer, req, machineKey, *machine)
return
}
}
// The NodeKey we have matches OldNodeKey, which means this is a refresh after a key expiration
if machine.NodeKey == NodePublicKeyStripPrefix(registerRequest.OldNodeKey) &&
!machine.isExpired() {
h.handleMachineRefreshKey(writer, req, machineKey, registerRequest, *machine)
return
}
// The machine has expired
h.handleMachineExpired(writer, req, machineKey, registerRequest, *machine)
return
}
}
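// Summary of the branches above, added for readability (not in the original):
//
//	unknown machine, auth key present                  -> handleAuthKey
//	unknown machine, no auth key                       -> cache entry + handleMachineRegistrationNew (auth URL)
//	known machine, NodeKey matches:
//	    expiry set in the past                         -> handleMachineLogOut
//	    not expired                                    -> handleMachineValidRegistration
//	known machine, OldNodeKey matches and not expired  -> handleMachineRefreshKey
//	otherwise                                          -> handleMachineExpired (re-authentication)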
func (h *Headscale) getMapResponse(
machineKey key.MachinePublic,
mapRequest tailcfg.MapRequest,
machine *Machine,
) ([]byte, error) {
log.Trace().
Str("func", "getMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
Msg("Creating Map response")
node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Cannot convert to node")
return nil, err
}
peers, err := h.getValidPeers(machine)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Cannot fetch peers")
return nil, err
}
profiles := getMapResponseUserProfiles(*machine, peers)
nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Failed to convert peers to Tailscale nodes")
return nil, err
}
dnsConfig := getMapResponseDNSConfig(
h.cfg.DNSConfig,
h.cfg.BaseDomain,
*machine,
peers,
)
resp := tailcfg.MapResponse{
KeepAlive: false,
Node: node,
Peers: nodePeers,
DNSConfig: dnsConfig,
Domain: h.cfg.BaseDomain,
PacketFilter: h.aclRules,
DERPMap: h.DERPMap,
UserProfiles: profiles,
Debug: &tailcfg.Debug{
DisableLogTail: !h.cfg.LogTail.Enabled,
RandomizeClientPort: h.cfg.RandomizeClientPort,
},
}
log.Trace().
Str("func", "getMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
// Interface("payload", resp).
Msgf("Generated map response: %s", tailMapResponseToString(resp))
var respBody []byte
if mapRequest.Compress == "zstd" {
src, err := json.Marshal(resp)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Failed to marshal response for the client")
return nil, err
}
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody = h.privateKey.SealTo(machineKey, srcCompressed)
} else {
respBody, err = encode(resp, &machineKey, h.privateKey)
if err != nil {
return nil, err
}
}
// prefix the response with its payload size in the first 4 bytes (little endian)
data := make([]byte, reservedResponseHeaderSize)
binary.LittleEndian.PutUint32(data, uint32(len(respBody)))
data = append(data, respBody...)
return data, nil
}
func (h *Headscale) getMapKeepAliveResponse(
machineKey key.MachinePublic,
mapRequest tailcfg.MapRequest,
) ([]byte, error) {
mapResponse := tailcfg.MapResponse{
KeepAlive: true,
}
var respBody []byte
var err error
if mapRequest.Compress == "zstd" {
src, err := json.Marshal(mapResponse)
if err != nil {
log.Error().
Caller().
Str("func", "getMapKeepAliveResponse").
Err(err).
Msg("Failed to marshal keepalive response for the client")
return nil, err
}
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody = h.privateKey.SealTo(machineKey, srcCompressed)
} else {
respBody, err = encode(mapResponse, &machineKey, h.privateKey)
if err != nil {
return nil, err
}
}
data := make([]byte, reservedResponseHeaderSize)
binary.LittleEndian.PutUint32(data, uint32(len(respBody)))
data = append(data, respBody...)
return data, nil
}
func (h *Headscale) handleMachineLogOut(
writer http.ResponseWriter,
req *http.Request,
machineKey key.MachinePublic,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
log.Info().
Str("machine", machine.Hostname).
Msg("Client requested logout")
err := h.ExpireMachine(&machine)
if err != nil {
log.Error().
Caller().
Str("func", "handleMachineLogOut").
Err(err).
Msg("Failed to expire machine")
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
resp.AuthURL = ""
resp.MachineAuthorized = false
resp.User = *machine.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
func (h *Headscale) handleMachineValidRegistration(
writer http.ResponseWriter,
req *http.Request,
machineKey key.MachinePublic,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
// The machine registration is valid, respond with redirect to /map
log.Debug().
Str("machine", machine.Hostname).
Msg("Client is registered and we have the current NodeKey. All clear to /map")
resp.AuthURL = ""
resp.MachineAuthorized = true
resp.User = *machine.Namespace.toUser()
resp.Login = *machine.Namespace.toLogin()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("update", "web", "error", machine.Namespace.Name).
Inc()
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
machineRegistrations.WithLabelValues("update", "web", "success", machine.Namespace.Name).
Inc()
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
func (h *Headscale) handleMachineExpired(
writer http.ResponseWriter,
req *http.Request,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
// The client has registered before, but has expired
log.Debug().
Str("machine", machine.Hostname).
Msg("Machine registration has expired. Sending a authurl to register")
if registerRequest.Auth.AuthKey != "" {
h.handleAuthKey(writer, req, machineKey, registerRequest)
return
}
if h.cfg.OIDC.Issuer != "" {
resp.AuthURL = fmt.Sprintf("%s/oidc/register/%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), machineKey.String())
} else {
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), machineKey.String())
}
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("reauth", "web", "error", machine.Namespace.Name).
Inc()
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
machineRegistrations.WithLabelValues("reauth", "web", "success", machine.Namespace.Name).
Inc()
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
func (h *Headscale) handleMachineRefreshKey(
writer http.ResponseWriter,
req *http.Request,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
log.Debug().
Str("machine", machine.Hostname).
Msg("We have the OldNodeKey in the database. This is a key refresh")
machine.NodeKey = NodePublicKeyStripPrefix(registerRequest.NodeKey)
if err := h.db.Save(&machine).Error; err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to update machine key in the database")
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
resp.AuthURL = ""
resp.User = *machine.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
func (h *Headscale) handleMachineRegistrationNew(
writer http.ResponseWriter,
req *http.Request,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
) {
resp := tailcfg.RegisterResponse{}
// The machine registration is new, redirect the client to the registration URL
log.Debug().
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("The node is sending us a new NodeKey, sending auth url")
if h.cfg.OIDC.Issuer != "" {
resp.AuthURL = fmt.Sprintf(
"%s/oidc/register/%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"),
machineKey.String(),
)
} else {
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), MachinePublicKeyStripPrefix(machineKey))
}
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
}
// TODO: check if any locks are needed around IP allocation.
func (h *Headscale) handleAuthKey(
writer http.ResponseWriter,
req *http.Request,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
) {
machineKeyStr := MachinePublicKeyStripPrefix(machineKey)
log.Debug().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msgf("Processing auth key for %s", registerRequest.Hostinfo.Hostname)
resp := tailcfg.RegisterResponse{}
pak, err := h.checkKeyValidity(registerRequest.Auth.AuthKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Failed authentication via AuthKey")
resp.MachineAuthorized = false
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Cannot encode message")
http.Error(writer, "Internal server error", http.StatusInternalServerError)
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
return
}
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusUnauthorized)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("Failed authentication via AuthKey")
if pak != nil {
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
} else {
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", "unknown").Inc()
}
return
}
log.Debug().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("Authentication key was valid, proceeding to acquire IP addresses")
nodeKey := NodePublicKeyStripPrefix(registerRequest.NodeKey)
// retrieve machine information if it exists
// The error is not important, because if it does not
// exist, then this is a new machine and we will move
// on to registration.
machine, _ := h.GetMachineByMachineKey(machineKey)
if machine != nil {
log.Trace().
Caller().
Str("machine", machine.Hostname).
Msg("machine already registered, refreshing with new auth key")
machine.NodeKey = nodeKey
machine.AuthKeyID = uint(pak.ID)
err := h.RefreshMachine(machine, registerRequest.Expiry)
if err != nil {
log.Error().
Caller().
Str("machine", machine.Hostname).
Err(err).
Msg("Failed to refresh machine")
return
}
} else {
now := time.Now().UTC()
givenName, err := h.GenerateGivenName(registerRequest.Hostinfo.Hostname)
if err != nil {
log.Error().
Caller().
Str("func", "RegistrationHandler").
Str("hostinfo.name", registerRequest.Hostinfo.Hostname).
Err(err)
return
}
machineToRegister := Machine{
Hostname: registerRequest.Hostinfo.Hostname,
GivenName: givenName,
NamespaceID: pak.Namespace.ID,
MachineKey: machineKeyStr,
RegisterMethod: RegisterMethodAuthKey,
Expiry: &registerRequest.Expiry,
NodeKey: nodeKey,
LastSeen: &now,
AuthKeyID: uint(pak.ID),
}
machine, err = h.RegisterMachine(
machineToRegister,
)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("could not register machine")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
}
err = h.UsePreAuthKey(pak)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to use pre-auth key")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
resp.MachineAuthorized = true
resp.User = *pak.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
http.Error(writer, "Internal server error", http.StatusInternalServerError)
return
}
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "success", pak.Namespace.Name).
Inc()
writer.Header().Set("Content-Type", "application/json; charset=utf-8")
writer.WriteHeader(http.StatusOK)
_, err = writer.Write(respBody)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
log.Info().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Str("ips", strings.Join(machine.IPAddresses.ToStringSlice(), ", ")).
Msg("Successfully authenticated via AuthKey")
}
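Both map responses above are framed the same way: a 4-byte little-endian length prefix, followed by the sealed (and, when requested, zstd-compressed) payload. A minimal client-side sketch of reading one frame; readMapResponseFrame is a hypothetical helper, not part of the original code.
func readMapResponseFrame(r io.Reader) ([]byte, error) {
	// The first reservedResponseHeaderSize (4) bytes carry the payload length.
	var header [reservedResponseHeaderSize]byte
	if _, err := io.ReadFull(r, header[:]); err != nil {
		return nil, err
	}
	size := binary.LittleEndian.Uint32(header[:])
	payload := make([]byte, size)
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	// The payload is still sealed to the machine key and, if "zstd" was
	// requested, compressed before sealing.
	return payload, nil
}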

157
api_key.go Normal file
View File

@@ -0,0 +1,157 @@
package headscale
import (
"fmt"
"strings"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"golang.org/x/crypto/bcrypt"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
apiPrefixLength = 7
apiKeyLength = 32
errAPIKeyFailedToParse = Error("Failed to parse ApiKey")
)
// APIKey describes the datamodel for API keys used to remotely authenticate with
// headscale.
type APIKey struct {
ID uint64 `gorm:"primary_key"`
Prefix string `gorm:"uniqueIndex"`
Hash []byte
CreatedAt *time.Time
Expiration *time.Time
LastSeen *time.Time
}
// CreateAPIKey creates a new ApiKey and returns it together with the full key string.
func (h *Headscale) CreateAPIKey(
expiration *time.Time,
) (string, *APIKey, error) {
prefix, err := GenerateRandomStringURLSafe(apiPrefixLength)
if err != nil {
return "", nil, err
}
toBeHashed, err := GenerateRandomStringURLSafe(apiKeyLength)
if err != nil {
return "", nil, err
}
// Key to return to user, this will only be visible _once_
keyStr := prefix + "." + toBeHashed
hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost)
if err != nil {
return "", nil, err
}
key := APIKey{
Prefix: prefix,
Hash: hash,
Expiration: expiration,
}
if err := h.db.Save(&key).Error; err != nil {
return "", nil, fmt.Errorf("failed to save API key to database: %w", err)
}
return keyStr, &key, nil
}
// ListAPIKeys returns the list of all ApiKeys.
func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
keys := []APIKey{}
if err := h.db.Find(&keys).Error; err != nil {
return nil, err
}
return keys, nil
}
// GetAPIKey returns an ApiKey for a given key prefix.
func (h *Headscale) GetAPIKey(prefix string) (*APIKey, error) {
key := APIKey{}
if result := h.db.First(&key, "prefix = ?", prefix); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// GetAPIKeyByID returns an ApiKey for a given id.
func (h *Headscale) GetAPIKeyByID(id uint64) (*APIKey, error) {
key := APIKey{}
if result := h.db.Find(&APIKey{ID: id}).First(&key); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// DestroyAPIKey destroys an ApiKey. Returns an error if the ApiKey
// does not exist.
func (h *Headscale) DestroyAPIKey(key APIKey) error {
if result := h.db.Unscoped().Delete(key); result.Error != nil {
return result.Error
}
return nil
}
// ExpireAPIKey marks an ApiKey as expired.
func (h *Headscale) ExpireAPIKey(key *APIKey) error {
if err := h.db.Model(&key).Update("Expiration", time.Now()).Error; err != nil {
return err
}
return nil
}
func (h *Headscale) ValidateAPIKey(keyStr string) (bool, error) {
prefix, hash, found := strings.Cut(keyStr, ".")
if !found {
return false, errAPIKeyFailedToParse
}
key, err := h.GetAPIKey(prefix)
if err != nil {
return false, fmt.Errorf("failed to validate api key: %w", err)
}
if key.Expiration.Before(time.Now()) {
return false, nil
}
if err := bcrypt.CompareHashAndPassword(key.Hash, []byte(hash)); err != nil {
return false, err
}
return true, nil
}
func (key *APIKey) toProto() *v1.ApiKey {
protoKey := v1.ApiKey{
Id: key.ID,
Prefix: key.Prefix,
}
if key.Expiration != nil {
protoKey.Expiration = timestamppb.New(*key.Expiration)
}
if key.CreatedAt != nil {
protoKey.CreatedAt = timestamppb.New(*key.CreatedAt)
}
if key.LastSeen != nil {
protoKey.LastSeen = timestamppb.New(*key.LastSeen)
}
return &protoKey
}
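A short sketch of the intended life cycle (assuming an initialized *Headscale named h; the helper below is illustrative only, not part of the original file).
func exampleAPIKeyFlow(h *Headscale) error {
	// CreateAPIKey returns the full "<prefix>.<secret>" string exactly once;
	// only the prefix and a bcrypt hash of the secret are stored.
	expiry := time.Now().Add(90 * 24 * time.Hour)
	keyStr, _, err := h.CreateAPIKey(&expiry)
	if err != nil {
		return err
	}
	// An incoming "Bearer <keyStr>" value is later checked like this.
	valid, err := h.ValidateAPIKey(keyStr)
	if err != nil {
		return err
	}
	if !valid {
		return fmt.Errorf("api key expired")
	}
	return nil
}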

api_key_test.go
View File

@@ -1,4 +1,4 @@
-package db
+package headscale
import (
"time"
@@ -7,7 +7,7 @@ import (
)
func (*Suite) TestCreateAPIKey(c *check.C) {
-apiKeyStr, apiKey, err := db.CreateAPIKey(nil)
+apiKeyStr, apiKey, err := app.CreateAPIKey(nil)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
@@ -16,82 +16,74 @@ func (*Suite) TestCreateAPIKey(c *check.C) {
c.Assert(apiKey.Hash, check.NotNil)
c.Assert(apiKeyStr, check.Not(check.Equals), "")
-_, err = db.ListAPIKeys()
+_, err = app.ListAPIKeys()
c.Assert(err, check.IsNil)
-keys, err := db.ListAPIKeys()
+keys, err := app.ListAPIKeys()
c.Assert(err, check.IsNil)
c.Assert(len(keys), check.Equals, 1)
-c.Assert(channelUpdates, check.Equals, int32(0))
}
func (*Suite) TestAPIKeyDoesNotExist(c *check.C) {
-key, err := db.GetAPIKey("does-not-exist")
+key, err := app.GetAPIKey("does-not-exist")
c.Assert(err, check.NotNil)
c.Assert(key, check.IsNil)
}
func (*Suite) TestValidateAPIKeyOk(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
-apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
+apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-valid, err := db.ValidateAPIKey(apiKeyStr)
+valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
-c.Assert(channelUpdates, check.Equals, int32(0))
}
func (*Suite) TestValidateAPIKeyNotOk(c *check.C) {
nowMinus2 := time.Now().Add(time.Duration(-2) * time.Hour)
-apiKeyStr, apiKey, err := db.CreateAPIKey(&nowMinus2)
+apiKeyStr, apiKey, err := app.CreateAPIKey(&nowMinus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-valid, err := db.ValidateAPIKey(apiKeyStr)
+valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, false)
now := time.Now()
-apiKeyStrNow, apiKey, err := db.CreateAPIKey(&now)
+apiKeyStrNow, apiKey, err := app.CreateAPIKey(&now)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-validNow, err := db.ValidateAPIKey(apiKeyStrNow)
+validNow, err := app.ValidateAPIKey(apiKeyStrNow)
c.Assert(err, check.IsNil)
c.Assert(validNow, check.Equals, false)
-validSilly, err := db.ValidateAPIKey("nota.validkey")
+validSilly, err := app.ValidateAPIKey("nota.validkey")
c.Assert(err, check.NotNil)
c.Assert(validSilly, check.Equals, false)
-validWithErr, err := db.ValidateAPIKey("produceerrorkey")
+validWithErr, err := app.ValidateAPIKey("produceerrorkey")
c.Assert(err, check.NotNil)
c.Assert(validWithErr, check.Equals, false)
-c.Assert(channelUpdates, check.Equals, int32(0))
}
func (*Suite) TestExpireAPIKey(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
-apiKeyStr, apiKey, err := db.CreateAPIKey(&nowPlus2)
+apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
-valid, err := db.ValidateAPIKey(apiKeyStr)
+valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
-err = db.ExpireAPIKey(apiKey)
+err = app.ExpireAPIKey(apiKey)
c.Assert(err, check.IsNil)
c.Assert(apiKey.Expiration, check.NotNil)
-notValid, err := db.ValidateAPIKey(apiKeyStr)
+notValid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(notValid, check.Equals, false)
-c.Assert(channelUpdates, check.Equals, int32(0))
}

app.go
View File

@@ -1,4 +1,4 @@
package hscontrol package headscale
import ( import (
"context" "context"
@@ -11,7 +11,6 @@ import (
"os" "os"
"os/signal" "os/signal"
"sort" "sort"
"strconv"
"strings" "strings"
"sync" "sync"
"syscall" "syscall"
@@ -19,20 +18,13 @@ import (
"github.com/coreos/go-oidc/v3/oidc" "github.com/coreos/go-oidc/v3/oidc"
"github.com/gorilla/mux" "github.com/gorilla/mux"
grpcMiddleware "github.com/grpc-ecosystem/go-grpc-middleware" grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime" "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/derp"
derpServer "github.com/juanfont/headscale/hscontrol/derp/server"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/patrickmn/go-cache" "github.com/patrickmn/go-cache"
zerolog "github.com/philip-bui/grpc-zerolog" zerolog "github.com/philip-bui/grpc-zerolog"
"github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/puzpuzpuz/xsync/v2" "github.com/puzpuzpuz/xsync"
zl "github.com/rs/zerolog" zl "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"golang.org/x/crypto/acme" "golang.org/x/crypto/acme"
@@ -47,99 +39,104 @@ import (
"google.golang.org/grpc/peer" "google.golang.org/grpc/peer"
"google.golang.org/grpc/reflection" "google.golang.org/grpc/reflection"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
"gorm.io/gorm"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/dnstype" "tailscale.com/types/dnstype"
"tailscale.com/types/key" "tailscale.com/types/key"
) )
var ( const (
errSTUNAddressNotSet = errors.New("STUN address not set") errSTUNAddressNotSet = Error("STUN address not set")
errUnsupportedDatabase = errors.New("unsupported DB") errUnsupportedDatabase = Error("unsupported DB")
errUnsupportedLetsEncryptChallengeType = errors.New( errUnsupportedLetsEncryptChallengeType = Error(
"unknown value for Lets Encrypt challenge type", "unknown value for Lets Encrypt challenge type",
) )
) )
const ( const (
AuthPrefix = "Bearer " AuthPrefix = "Bearer "
updateInterval = 5000 Postgres = "postgres"
privateKeyFileMode = 0o600 Sqlite = "sqlite3"
updateInterval = 5000
HTTPReadTimeout = 30 * time.Second
HTTPShutdownTimeout = 3 * time.Second
privateKeyFileMode = 0o600
registerCacheExpiration = time.Minute * 15 registerCacheExpiration = time.Minute * 15
registerCacheCleanup = time.Minute * 20 registerCacheCleanup = time.Minute * 20
DisabledClientAuth = "disabled"
RelaxedClientAuth = "relaxed"
EnforcedClientAuth = "enforced"
) )
// Headscale represents the base app of the service. // Headscale represents the base app of the service.
type Headscale struct { type Headscale struct {
cfg *types.Config cfg *Config
db *db.HSDatabase db *gorm.DB
dbString string dbString string
dbType string dbType string
dbDebug bool dbDebug bool
privateKey2019 *key.MachinePrivate privateKey *key.MachinePrivate
noisePrivateKey *key.MachinePrivate
DERPMap *tailcfg.DERPMap DERPMap *tailcfg.DERPMap
DERPServer *derpServer.DERPServer DERPServer *DERPServer
ACLPolicy *policy.ACLPolicy aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
lastStateChange *xsync.MapOf[string, time.Time] lastStateChange *xsync.MapOf[time.Time]
oidcProvider *oidc.Provider oidcProvider *oidc.Provider
oauth2Config *oauth2.Config oauth2Config *oauth2.Config
registrationCache *cache.Cache registrationCache *cache.Cache
ipAllocationMutex sync.Mutex
shutdownChan chan struct{} shutdownChan chan struct{}
pollNetMapStreamWG sync.WaitGroup pollNetMapStreamWG sync.WaitGroup
stateUpdateChan chan struct{}
cancelStateUpdateChan chan struct{}
} }
func NewHeadscale(cfg *types.Config) (*Headscale, error) { // Look up the TLS constant relative to user-supplied TLS client
privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath) // authentication mode. If an unknown mode is supplied, the default
// value, tls.RequireAnyClientCert, is returned. The returned boolean
// indicates if the supplied mode was valid.
func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
switch mode {
case DisabledClientAuth:
// Client cert is _not_ required.
return tls.NoClientCert, true
case RelaxedClientAuth:
// Client cert required, but _not verified_.
return tls.RequireAnyClientCert, true
case EnforcedClientAuth:
// Client cert is _required and verified_.
return tls.RequireAndVerifyClientCert, true
default:
// Return the default when an unknown value is supplied.
return tls.RequireAnyClientCert, false
}
}
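LookupTLSClientAuthMode above only maps configuration strings onto crypto/tls constants. As a minimal sketch of how a caller can exercise it (the program layout and the "bogus" value are illustrative and not taken from this change; the three accepted strings mirror the DisabledClientAuth, RelaxedClientAuth and EnforcedClientAuth constants):

package main

import (
	"fmt"

	"github.com/juanfont/headscale"
)

func main() {
	// "bogus" demonstrates the fallback to tls.RequireAnyClientCert.
	for _, mode := range []string{"disabled", "relaxed", "enforced", "bogus"} {
		clientAuth, valid := headscale.LookupTLSClientAuthMode(mode)
		fmt.Printf("mode=%q valid=%t clientAuth=%v\n", mode, valid, clientAuth)
	}
}

The returned tls.ClientAuthType is what getTLSSettings later assigns to tls.Config.ClientAuth via h.cfg.TLS.ClientAuthMode.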
func NewHeadscale(cfg *Config) (*Headscale, error) {
privKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to read or create private key: %w", err) return nil, fmt.Errorf("failed to read or create private key: %w", err)
} }
// TS2021 requires to have a different key from the legacy protocol.
noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
if err != nil {
return nil, fmt.Errorf("failed to read or create Noise protocol private key: %w", err)
}
if privateKey.Equal(*noisePrivateKey) {
return nil, fmt.Errorf("private key and noise private key are the same: %w", err)
}
var dbString string var dbString string
switch cfg.DBtype { switch cfg.DBtype {
case db.Postgres: case Postgres:
dbString = fmt.Sprintf( dbString = fmt.Sprintf(
"host=%s dbname=%s user=%s", "host=%s port=%d dbname=%s user=%s password=%s sslmode=disable",
cfg.DBhost, cfg.DBhost,
cfg.DBport,
cfg.DBname, cfg.DBname,
cfg.DBuser, cfg.DBuser,
cfg.DBpass,
) )
case Sqlite:
if sslEnabled, err := strconv.ParseBool(cfg.DBssl); err == nil {
if !sslEnabled {
dbString += " sslmode=disable"
}
} else {
dbString += fmt.Sprintf(" sslmode=%s", cfg.DBssl)
}
if cfg.DBport != 0 {
dbString += fmt.Sprintf(" port=%d", cfg.DBport)
}
if cfg.DBpass != "" {
dbString += fmt.Sprintf(" password=%s", cfg.DBpass)
}
case db.Sqlite:
dbString = cfg.DBpath dbString = cfg.DBpath
default: default:
return nil, errUnsupportedDatabase return nil, errUnsupportedDatabase
@@ -154,44 +151,26 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
cfg: cfg, cfg: cfg,
dbType: cfg.DBtype, dbType: cfg.DBtype,
dbString: dbString, dbString: dbString,
privateKey2019: privateKey, privateKey: privKey,
noisePrivateKey: noisePrivateKey, aclRules: tailcfg.FilterAllowAll, // default allowall
registrationCache: registrationCache, registrationCache: registrationCache,
pollNetMapStreamWG: sync.WaitGroup{}, pollNetMapStreamWG: sync.WaitGroup{},
lastStateChange: xsync.NewMapOf[time.Time](),
stateUpdateChan: make(chan struct{}),
cancelStateUpdateChan: make(chan struct{}),
} }
go app.watchStateChannel() err = app.initDB()
database, err := db.NewHeadscaleDatabase(
cfg.DBtype,
dbString,
app.dbDebug,
app.stateUpdateChan,
cfg.IPPrefixes,
cfg.BaseDomain)
if err != nil { if err != nil {
return nil, err return nil, err
} }
app.db = database
if cfg.OIDC.Issuer != "" { if cfg.OIDC.Issuer != "" {
err = app.initOIDC() err = app.initOIDC()
if err != nil { if err != nil {
if cfg.OIDC.OnlyStartIfOIDCIsAvailable { return nil, err
return nil, err
} else {
log.Warn().Err(err).Msg("failed to set up OIDC provider, falling back to CLI based authentication")
}
} }
} }
if app.cfg.DNSConfig != nil && app.cfg.DNSConfig.Proxied { // if MagicDNS if app.cfg.DNSConfig != nil && app.cfg.DNSConfig.Proxied { // if MagicDNS
magicDNSDomains := util.GenerateMagicDNSRootDomains(app.cfg.IPPrefixes) magicDNSDomains := generateMagicDNSRootDomains(app.cfg.IPPrefixes)
// we might have routes already from Split DNS // we might have routes already from Split DNS
if app.cfg.DNSConfig.Routes == nil { if app.cfg.DNSConfig.Routes == nil {
app.cfg.DNSConfig.Routes = make(map[string][]*dnstype.Resolver) app.cfg.DNSConfig.Routes = make(map[string][]*dnstype.Resolver)
@@ -202,8 +181,7 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
} }
if cfg.DERP.ServerEnabled { if cfg.DERP.ServerEnabled {
// TODO(kradalby): replace this key with a dedicated DERP key. embeddedDERPServer, err := app.NewDERPServer()
embeddedDERPServer, err := derpServer.NewDERPServer(cfg.ServerURL, key.NodePrivate(*privateKey), &cfg.DERP)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -224,51 +202,52 @@ func (h *Headscale) redirect(w http.ResponseWriter, req *http.Request) {
func (h *Headscale) expireEphemeralNodes(milliSeconds int64) { func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond) ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C { for range ticker.C {
h.db.ExpireEphemeralMachines(h.cfg.EphemeralNodeInactivityTimeout) h.expireEphemeralNodesWorker()
} }
} }
// expireExpiredMachines expires machines that have an explicit expiry set func (h *Headscale) expireEphemeralNodesWorker() {
// after that expiry time has passed. namespaces, err := h.ListNamespaces()
func (h *Headscale) expireExpiredMachines(milliSeconds int64) { if err != nil {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond) log.Error().Err(err).Msg("Error listing namespaces")
for range ticker.C {
h.db.ExpireExpiredMachines(h.getLastStateChange()) return
} }
}
// scheduledDERPMapUpdateWorker refreshes the DERPMap stored on the global object for _, namespace := range namespaces {
// at a set interval. machines, err := h.ListMachinesInNamespace(namespace.Name)
func (h *Headscale) scheduledDERPMapUpdateWorker(cancelChan <-chan struct{}) {
log.Info().
Dur("frequency", h.cfg.DERP.UpdateFrequency).
Msg("Setting up a DERPMap update worker")
ticker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
for {
select {
case <-cancelChan:
return
case <-ticker.C:
log.Info().Msg("Fetching DERPMap updates")
h.DERPMap = derp.GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled {
region, _ := h.DERPServer.GenerateRegion()
h.DERPMap.Regions[region.RegionID] = &region
}
h.setLastStateChangeToNow()
}
}
}
func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
err := h.db.HandlePrimarySubnetFailover()
if err != nil { if err != nil {
log.Error().Err(err).Msg("failed to handle primary subnet failover") log.Error().
Err(err).
Str("namespace", namespace.Name).
Msg("Error listing machines in namespace")
return
}
expiredFound := false
for _, machine := range machines {
if machine.AuthKey != nil && machine.LastSeen != nil &&
machine.AuthKey.Ephemeral &&
time.Now().
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
expiredFound = true
log.Info().
Str("machine", machine.Hostname).
Msg("Ephemeral client removed from database")
err = h.db.Unscoped().Delete(machine).Error
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Msg("🤮 Cannot delete ephemeral machine from the database")
}
}
}
if expiredFound {
h.setLastStateChangeToNow(namespace.Name)
} }
} }
} }
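The deletion condition in the loop above is easy to lose among the logging. Reduced to a predicate, it is equivalent to the following sketch (a helper that does not exist in the codebase; the Machine fields mirror the ones used in the worker, and it assumes it lives inside the headscale package with "time" imported):

// Illustrative only: true when a machine was registered with an ephemeral
// pre-auth key and has not been seen within the inactivity timeout.
func isExpiredEphemeral(machine Machine, inactivity time.Duration, now time.Time) bool {
	return machine.AuthKey != nil &&
		machine.LastSeen != nil &&
		machine.AuthKey.Ephemeral &&
		now.After(machine.LastSeen.Add(inactivity))
}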
@@ -330,7 +309,7 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
) )
} }
valid, err := h.db.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix)) valid, err := h.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix))
if err != nil { if err != nil {
log.Error(). log.Error().
Caller(). Caller().
@@ -381,7 +360,7 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
return return
} }
valid, err := h.db.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix)) valid, err := h.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix))
if err != nil { if err != nil {
log.Error(). log.Error().
Caller(). Caller().
@@ -436,38 +415,31 @@ func (h *Headscale) ensureUnixSocketIsAbsent() error {
func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router { func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
router := mux.NewRouter() router := mux.NewRouter()
router.HandleFunc(ts2021UpgradePath, h.NoiseUpgradeHandler).Methods(http.MethodPost)
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet) router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet)
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet) router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet)
router.HandleFunc("/register/{nkey}", h.RegisterWebAPI).Methods(http.MethodGet) router.HandleFunc("/register", h.RegisterWebAPI).Methods(http.MethodGet)
h.addLegacyHandlers(router) router.HandleFunc("/machine/{mkey}/map", h.PollNetMapHandler).Methods(http.MethodPost)
router.HandleFunc("/machine/{mkey}", h.RegistrationHandler).Methods(http.MethodPost)
router.HandleFunc("/oidc/register/{nkey}", h.RegisterOIDC).Methods(http.MethodGet) router.HandleFunc("/oidc/register/{mkey}", h.RegisterOIDC).Methods(http.MethodGet)
router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet) router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet)
router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet) router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet)
router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig). router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig).Methods(http.MethodGet)
Methods(http.MethodGet)
router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet) router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet)
router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig). router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig).Methods(http.MethodGet)
Methods(http.MethodGet) router.HandleFunc("/swagger", SwaggerUI).Methods(http.MethodGet)
router.HandleFunc("/swagger/v1/openapiv2.json", SwaggerAPIv1).Methods(http.MethodGet)
// TODO(kristoffer): move swagger into a package
router.HandleFunc("/swagger", headscale.SwaggerUI).Methods(http.MethodGet)
router.HandleFunc("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1).
Methods(http.MethodGet)
if h.cfg.DERP.ServerEnabled { if h.cfg.DERP.ServerEnabled {
router.HandleFunc("/derp", h.DERPServer.DERPHandler) router.HandleFunc("/derp", h.DERPHandler)
router.HandleFunc("/derp/probe", derpServer.DERPProbeHandler) router.HandleFunc("/derp/probe", h.DERPProbeHandler)
router.HandleFunc("/bootstrap-dns", derpServer.DERPBootstrapDNSHandler(h.DERPMap)) router.HandleFunc("/bootstrap-dns", h.DERPBootstrapDNSHandler)
} }
apiRouter := router.PathPrefix("/api").Subrouter() apiRouter := router.PathPrefix("/api").Subrouter()
apiRouter.Use(h.httpAuthenticationMiddleware) apiRouter.Use(h.httpAuthenticationMiddleware)
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP) apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
router.PathPrefix("/").HandlerFunc(notFoundHandler) router.PathPrefix("/").HandlerFunc(stdoutHandler)
return router return router
} }
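Everything under /api is wrapped by httpAuthenticationMiddleware, so a plain HTTP client needs the same Bearer header the gRPC interceptor expects. A hedged sketch follows; the /api/v1/machine path and the listen address are assumptions about the gRPC-gateway mapping and the local configuration, not part of this diff:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8080/api/v1/machine", nil)
	if err != nil {
		panic(err)
	}
	// AuthPrefix ("Bearer ") is stripped before ValidateAPIKey is called.
	req.Header.Set("Authorization", "Bearer <api key>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}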
@@ -477,7 +449,7 @@ func (h *Headscale) Serve() error {
var err error var err error
// Fetch an initial DERP Map before we start serving // Fetch an initial DERP Map before we start serving
h.DERPMap = derp.GetDERPMap(h.cfg.DERP) h.DERPMap = GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled { if h.cfg.DERP.ServerEnabled {
// When embedded DERP is enabled we always need a STUN server // When embedded DERP is enabled we always need a STUN server
@@ -485,14 +457,8 @@ func (h *Headscale) Serve() error {
return errSTUNAddressNotSet return errSTUNAddressNotSet
} }
region, err := h.DERPServer.GenerateRegion() h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
if err != nil { go h.ServeSTUN()
return err
}
h.DERPMap.Regions[region.RegionID] = &region
go h.DERPServer.ServeSTUN()
} }
if h.cfg.DERP.AutoUpdate { if h.cfg.DERP.AutoUpdate {
@@ -501,12 +467,7 @@ func (h *Headscale) Serve() error {
go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel) go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel)
} }
// TODO(kradalby): These should have cancel channels and be cleaned
// up on shutdown.
go h.expireEphemeralNodes(updateInterval) go h.expireEphemeralNodes(updateInterval)
go h.expireExpiredMachines(updateInterval)
go h.failoverSubnetRoutes(updateInterval)
if zl.GlobalLevel() == zl.TraceLevel { if zl.GlobalLevel() == zl.TraceLevel {
zerolog.RespLog = true zerolog.RespLog = true
@@ -548,7 +509,7 @@ func (h *Headscale) Serve() error {
h.cfg.UnixSocket, h.cfg.UnixSocket,
[]grpc.DialOption{ []grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer), grpc.WithContextDialer(GrpcSocketDialer),
}..., }...,
) )
if err != nil { if err != nil {
@@ -601,7 +562,7 @@ func (h *Headscale) Serve() error {
grpcOptions := []grpc.ServerOption{ grpcOptions := []grpc.ServerOption{
grpc.UnaryInterceptor( grpc.UnaryInterceptor(
grpcMiddleware.ChainUnaryServer( grpc_middleware.ChainUnaryServer(
h.grpcAuthenticationInterceptor, h.grpcAuthenticationInterceptor,
zerolog.NewUnaryServerInterceptor(), zerolog.NewUnaryServerInterceptor(),
), ),
@@ -636,14 +597,13 @@ func (h *Headscale) Serve() error {
// //
// HTTP setup // HTTP setup
// //
// This is the regular router that we expose
// over our main Addr. It also serves the legacy Tailscale API
router := h.createRouter(grpcGatewayMux) router := h.createRouter(grpcGatewayMux)
httpServer := &http.Server{ httpServer := &http.Server{
Addr: h.cfg.Addr, Addr: h.cfg.Addr,
Handler: router, Handler: router,
ReadTimeout: types.HTTPReadTimeout, ReadTimeout: HTTPReadTimeout,
// Go does not handle timeouts in HTTP very well, and there is // Go does not handle timeouts in HTTP very well, and there is
// no good way to handle streaming timeouts, therefore we need to // no good way to handle streaming timeouts, therefore we need to
// keep this at unlimited and be careful to clean up connections // keep this at unlimited and be careful to clean up connections
@@ -673,7 +633,7 @@ func (h *Headscale) Serve() error {
promHTTPServer := &http.Server{ promHTTPServer := &http.Server{
Addr: h.cfg.MetricsAddr, Addr: h.cfg.MetricsAddr,
Handler: promMux, Handler: promMux,
ReadTimeout: types.HTTPReadTimeout, ReadTimeout: HTTPReadTimeout,
WriteTimeout: 0, WriteTimeout: 0,
} }
@@ -711,13 +671,11 @@ func (h *Headscale) Serve() error {
// TODO(kradalby): Reload config on SIGHUP // TODO(kradalby): Reload config on SIGHUP
if h.cfg.ACL.PolicyPath != "" { if h.cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(h.cfg.ACL.PolicyPath) aclPath := AbsolutePathFromConfigPath(h.cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath) err := h.LoadACLPolicy(aclPath)
if err != nil { if err != nil {
log.Error().Err(err).Msg("Failed to reload ACL policy") log.Error().Err(err).Msg("Failed to reload ACL policy")
} }
h.ACLPolicy = pol
log.Info(). log.Info().
Str("path", aclPath). Str("path", aclPath).
Msg("ACL policy successfully reloaded, notifying nodes of change") Msg("ACL policy successfully reloaded, notifying nodes of change")
@@ -731,14 +689,10 @@ func (h *Headscale) Serve() error {
Msg("Received signal to stop, shutting down gracefully") Msg("Received signal to stop, shutting down gracefully")
close(h.shutdownChan) close(h.shutdownChan)
h.pollNetMapStreamWG.Wait() h.pollNetMapStreamWG.Wait()
// Gracefully shut down servers // Gracefully shut down servers
ctx, cancel := context.WithTimeout( ctx, cancel := context.WithTimeout(context.Background(), HTTPShutdownTimeout)
context.Background(),
types.HTTPShutdownTimeout,
)
if err := promHTTPServer.Shutdown(ctx); err != nil { if err := promHTTPServer.Shutdown(ctx); err != nil {
log.Error().Err(err).Msg("Failed to shutdown prometheus http") log.Error().Err(err).Msg("Failed to shutdown prometheus http")
} }
@@ -760,12 +714,12 @@ func (h *Headscale) Serve() error {
// Stop listening (and unlink the socket if unix type): // Stop listening (and unlink the socket if unix type):
socketListener.Close() socketListener.Close()
<-h.cancelStateUpdateChan
close(h.stateUpdateChan)
close(h.cancelStateUpdateChan)
// Close db connections // Close db connections
err = h.db.Close() db, err := h.db.DB()
if err != nil {
log.Error().Err(err).Msg("Failed to get db handle")
}
err = db.Close()
if err != nil { if err != nil {
log.Error().Err(err).Msg("Failed to close db") log.Error().Err(err).Msg("Failed to close db")
} }
@@ -775,6 +729,7 @@ func (h *Headscale) Serve() error {
// And we're done: // And we're done:
cancel() cancel()
os.Exit(0)
} }
} }
} }
@@ -806,28 +761,20 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
} }
switch h.cfg.TLS.LetsEncrypt.ChallengeType { switch h.cfg.TLS.LetsEncrypt.ChallengeType {
case types.TLSALPN01ChallengeType: case tlsALPN01ChallengeType:
// Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737) // Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737)
// The RFC requires that the validation is done on port 443; in other words, headscale // The RFC requires that the validation is done on port 443; in other words, headscale
// must be reachable on port 443. // must be reachable on port 443.
return certManager.TLSConfig(), nil return certManager.TLSConfig(), nil
case types.HTTP01ChallengeType: case http01ChallengeType:
// Configuration via autocert with HTTP-01. This requires listening on // Configuration via autocert with HTTP-01. This requires listening on
// port 80 for the certificate validation in addition to the headscale // port 80 for the certificate validation in addition to the headscale
// service, which can be configured to run on any other port. // service, which can be configured to run on any other port.
server := &http.Server{
Addr: h.cfg.TLS.LetsEncrypt.Listen,
Handler: certManager.HTTPHandler(http.HandlerFunc(h.redirect)),
ReadTimeout: types.HTTPReadTimeout,
}
go func() { go func() {
err := server.ListenAndServe()
log.Fatal(). log.Fatal().
Caller(). Caller().
Err(err). Err(http.ListenAndServe(h.cfg.TLS.LetsEncrypt.Listen, certManager.HTTPHandler(http.HandlerFunc(h.redirect)))).
Msg("failed to set up a HTTP server") Msg("failed to set up a HTTP server")
}() }()
@@ -847,7 +794,12 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://") log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
} }
log.Info().Msg(fmt.Sprintf(
"Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
h.cfg.TLS.ClientAuthMode))
tlsConfig := &tls.Config{ tlsConfig := &tls.Config{
ClientAuth: h.cfg.TLS.ClientAuthMode,
NextProtos: []string{"http/1.1"}, NextProtos: []string{"http/1.1"},
Certificates: make([]tls.Certificate, 1), Certificates: make([]tls.Certificate, 1),
MinVersion: tls.VersionTLS12, MinVersion: tls.VersionTLS12,
@@ -859,49 +811,35 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
} }
} }
// TODO(kradalby): baby steps, make this more robust. func (h *Headscale) setLastStateChangeToNow(namespaces ...string) {
func (h *Headscale) watchStateChannel() {
for {
select {
case <-h.stateUpdateChan:
h.setLastStateChangeToNow()
case <-h.cancelStateUpdateChan:
return
}
}
}
func (h *Headscale) setLastStateChangeToNow() {
var err error var err error
now := time.Now().UTC() now := time.Now().UTC()
users, err := h.db.ListUsers() if len(namespaces) == 0 {
if err != nil { namespaces, err = h.ListNamespacesStr()
log.Error(). if err != nil {
Caller(). log.Error().Caller().Err(err).Msg("failed to fetch all namespaces, failing to update last changed state.")
Err(err). }
Msg("failed to fetch all users, failing to update last changed state.")
} }
for _, user := range users { for _, namespace := range namespaces {
lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix())) lastStateUpdate.WithLabelValues(namespace, "headscale").Set(float64(now.Unix()))
if h.lastStateChange == nil { if h.lastStateChange == nil {
h.lastStateChange = xsync.NewMapOf[time.Time]() h.lastStateChange = xsync.NewMapOf[time.Time]()
} }
h.lastStateChange.Store(user.Name, now) h.lastStateChange.Store(namespace, now)
} }
} }
func (h *Headscale) getLastStateChange(users ...types.User) time.Time { func (h *Headscale) getLastStateChange(namespaces ...string) time.Time {
times := []time.Time{} times := []time.Time{}
// getLastStateChange takes a list of users as a "filter"; if no users // getLastStateChange takes a list of namespaces as a "filter"; if no namespaces
// are passed, then use the entire list of users and look for the last update // are passed, then use the entire list of namespaces and look for the last update
if len(users) > 0 { if len(namespaces) > 0 {
for _, user := range users { for _, namespace := range namespaces {
if lastChange, ok := h.lastStateChange.Load(user.Name); ok { if lastChange, ok := h.lastStateChange.Load(namespace); ok {
times = append(times, lastChange) times = append(times, lastChange)
} }
} }
@@ -926,7 +864,7 @@ func (h *Headscale) getLastStateChange(users ...types.User) time.Time {
} }
} }
func notFoundHandler( func stdoutHandler(
writer http.ResponseWriter, writer http.ResponseWriter,
req *http.Request, req *http.Request,
) { ) {
@@ -938,7 +876,6 @@ func notFoundHandler(
Interface("url", req.URL). Interface("url", req.URL).
Bytes("body", body). Bytes("body", body).
Msg("Request did not match") Msg("Request did not match")
writer.WriteHeader(http.StatusNotFound)
} }
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) { func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
@@ -969,7 +906,7 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
} }
trimmedPrivateKey := strings.TrimSpace(string(privateKey)) trimmedPrivateKey := strings.TrimSpace(string(privateKey))
privateKeyEnsurePrefix := util.PrivateKeyEnsurePrefix(trimmedPrivateKey) privateKeyEnsurePrefix := PrivateKeyEnsurePrefix(trimmedPrivateKey)
var machineKey key.MachinePrivate var machineKey key.MachinePrivate
if err = machineKey.UnmarshalText([]byte(privateKeyEnsurePrefix)); err != nil { if err = machineKey.UnmarshalText([]byte(privateKeyEnsurePrefix)); err != nil {

79
app_test.go Normal file
View File

@@ -0,0 +1,79 @@
package headscale
import (
"io/ioutil"
"os"
"testing"
"gopkg.in/check.v1"
"inet.af/netaddr"
)
func Test(t *testing.T) {
check.TestingT(t)
}
var _ = check.Suite(&Suite{})
type Suite struct{}
var (
tmpDir string
app Headscale
)
func (s *Suite) SetUpTest(c *check.C) {
s.ResetDB(c)
}
func (s *Suite) TearDownTest(c *check.C) {
os.RemoveAll(tmpDir)
}
func (s *Suite) ResetDB(c *check.C) {
if len(tmpDir) != 0 {
os.RemoveAll(tmpDir)
}
var err error
tmpDir, err = ioutil.TempDir("", "autoygg-client-test")
if err != nil {
c.Fatal(err)
}
cfg := Config{
IPPrefixes: []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("10.27.0.0/23"),
},
}
app = Headscale{
cfg: &cfg,
dbType: "sqlite3",
dbString: tmpDir + "/headscale_test.db",
}
err = app.initDB()
if err != nil {
c.Fatal(err)
}
db, err := app.openDB()
if err != nil {
c.Fatal(err)
}
app.db = db
}
// Ensure an error is returned when an invalid auth mode
// is supplied.
func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
_, isValid := LookupTLSClientAuthMode("invalid")
c.Assert(isValid, check.Equals, false)
}
// Ensure that all client auth modes return a nil error.
func (s *Suite) TestAuthModes(c *check.C) {
modes := []string{"disabled", "relaxed", "enforced"}
for _, v := range modes {
_, isValid := LookupTLSClientAuthMode(v)
c.Assert(isValid, check.Equals, true)
}
}

View File

@@ -1,47 +0,0 @@
package main
import (
"log"
"github.com/juanfont/headscale/integration"
"github.com/juanfont/headscale/integration/tsic"
"github.com/ory/dockertest/v3"
)
func main() {
log.Printf("creating docker pool")
pool, err := dockertest.NewPool("")
if err != nil {
log.Fatalf("could not connect to docker: %s", err)
}
log.Printf("creating docker network")
network, err := pool.CreateNetwork("docker-integration-net")
if err != nil {
log.Fatalf("failed to create or get network: %s", err)
}
for _, version := range integration.TailscaleVersions {
log.Printf("creating container image for Tailscale (%s)", version)
tsClient, err := tsic.New(
pool,
version,
network,
)
if err != nil {
log.Fatalf("failed to create tailscale node: %s", err)
}
err = tsClient.Shutdown()
if err != nil {
log.Fatalf("failed to shut down container: %s", err)
}
}
network.Close()
err = pool.RemoveNetwork(network)
if err != nil {
log.Fatalf("failed to remove network: %s", err)
}
}

View File

@@ -1,171 +0,0 @@
package main
//go:generate go run ./main.go
import (
"bytes"
"fmt"
"log"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"text/template"
)
var (
githubWorkflowPath = "../../.github/workflows/"
jobFileNameTemplate = `test-integration-v2-%s.yaml`
jobTemplate = template.Must(
template.New("jobTemplate").
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - {{.Name}}
on: [pull_request]
concurrency:
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^{{.Name}}$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"
`),
)
)
const workflowFilePerm = 0o600
func removeTests() {
glob := fmt.Sprintf(jobFileNameTemplate, "*")
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
if err != nil {
log.Fatalf("failed to find test files")
}
for _, file := range files {
err := os.Remove(file)
if err != nil {
log.Printf("failed to remove: %s", err)
}
}
}
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
ripgrep := exec.Command(
rgBin,
args...,
)
result, err := ripgrep.CombinedOutput()
if err != nil {
log.Printf("out: %s", result)
log.Fatalf("failed to run ripgrep: %s", err)
}
tests := strings.Split(string(result), "\n")
tests = tests[:len(tests)-1]
return tests
}
func main() {
type testConfig struct {
Name string
}
tests := findTests()
removeTests()
for _, test := range tests {
log.Printf("generating workflow for %s", test)
var content bytes.Buffer
if err := jobTemplate.Execute(&content, testConfig{
Name: test,
}); err != nil {
log.Fatalf("failed to render template: %s", err)
}
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
if err != nil {
log.Fatalf("failed to write github job: %s", err)
}
}
}
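For orientation, the constants and template above combine into one workflow file per test. With a hypothetical test named TestExample, the generator would write the following path (sketch, evaluated inside this package's main; the real names come from findTests via ripgrep):

workflow := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, "TestExample"))
fmt.Println(workflow) // ../../.github/workflows/test-integration-v2-TestExample.yaml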

View File

@@ -5,8 +5,8 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model" "github.com/prometheus/common/model"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
@@ -83,7 +83,7 @@ var listAPIKeys = &cobra.Command{
} }
tableData = append(tableData, []string{ tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10), strconv.FormatUint(key.GetId(), headscale.Base10),
key.GetPrefix(), key.GetPrefix(),
expiration, expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat), key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
@@ -134,9 +134,7 @@ If you lose a key, create a new one and revoke (expire) the old one.`,
expiration := time.Now().UTC().Add(time.Duration(duration)) expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace(). log.Trace().Dur("expiration", time.Duration(duration)).Msg("expiration has been set")
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration) request.Expiration = timestamppb.New(expiration)

View File

@@ -1,22 +0,0 @@
package cli
import (
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(configTestCmd)
}
var configTestCmd = &cobra.Command{
Use: "configtest",
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
Run: func(cmd *cobra.Command, args []string) {
_, err := getHeadscaleApp()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
},
}

View File

@@ -4,14 +4,14 @@ import (
"fmt" "fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
) )
const ( const (
errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix") keyLength = 64
errPreAuthKeyTooShort = Error("key too short, must be 64 hexadecimal characters")
) )
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors // Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
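The Error type referenced in that comment is the constant-errors pattern from the linked article: a string-based error type whose values can be declared as constants, so sentinel errors cannot be reassigned and still compare cleanly with errors.Is. A minimal self-contained sketch of the pattern (generic layout, not the project's own file; the sentinel message matches the one used above):

package main

import (
	"errors"
	"fmt"
)

// Error is a sentinel error type that can be declared as a constant.
type Error string

func (e Error) Error() string { return string(e) }

const errPreAuthKeyTooShort = Error("key too short, must be 64 hexadecimal characters")

func main() {
	err := fmt.Errorf("validating machine key: %w", errPreAuthKeyTooShort)
	fmt.Println(errors.Is(err, errPreAuthKeyTooShort)) // true
}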
@@ -27,14 +27,8 @@ func init() {
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("") log.Fatal().Err(err).Msg("")
} }
createNodeCmd.Flags().StringP("user", "u", "", "User") createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err = createNodeCmd.MarkFlagRequired("namespace")
createNodeCmd.Flags().StringP("namespace", "n", "", "User")
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
createNodeNamespaceFlag.Hidden = true
err = createNodeCmd.MarkFlagRequired("user")
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("") log.Fatal().Err(err).Msg("")
} }
@@ -61,9 +55,9 @@ var createNodeCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return return
} }
@@ -93,8 +87,8 @@ var createNodeCmd = &cobra.Command{
return return
} }
if !util.NodePublicKeyRegex.Match([]byte(machineKey)) { if len(machineKey) != keyLength {
err = errPreAuthKeyMalformed err = errPreAuthKeyTooShort
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Error: %s", err), fmt.Sprintf("Error: %s", err),
@@ -116,10 +110,10 @@ var createNodeCmd = &cobra.Command{
} }
request := &v1.DebugCreateMachineRequest{ request := &v1.DebugCreateMachineRequest{
Key: machineKey, Key: machineKey,
Name: name, Name: name,
User: user, Namespace: namespace,
Routes: routes, Routes: routes,
} }
response, err := client.DebugCreateMachine(ctx, request) response, err := client.DebugCreateMachine(ctx, request)

View File

@@ -1,115 +0,0 @@
package cli
import (
"fmt"
"net"
"os"
"strconv"
"time"
"github.com/oauth2-proxy/mockoidc"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
refreshTTL = 60 * time.Minute
)
var accessTTL = 2 * time.Minute
func init() {
rootCmd.AddCommand(mockOidcCmd)
}
var mockOidcCmd = &cobra.Command{
Use: "mockoidc",
Short: "Runs a mock OIDC server for testing",
Long: "This internal command runs a OpenID Connect for testing purposes",
Run: func(cmd *cobra.Command, args []string) {
err := mockOIDC()
if err != nil {
log.Error().Err(err).Msgf("Error running mock OIDC server")
os.Exit(1)
}
},
}
func mockOIDC() error {
clientID := os.Getenv("MOCKOIDC_CLIENT_ID")
if clientID == "" {
return errMockOidcClientIDNotDefined
}
clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
if clientSecret == "" {
return errMockOidcClientSecretNotDefined
}
addrStr := os.Getenv("MOCKOIDC_ADDR")
if addrStr == "" {
return errMockOidcPortNotDefined
}
portStr := os.Getenv("MOCKOIDC_PORT")
if portStr == "" {
return errMockOidcPortNotDefined
}
accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
if accessTTLOverride != "" {
newTTL, err := time.ParseDuration(accessTTLOverride)
if err != nil {
return err
}
accessTTL = newTTL
}
log.Info().Msgf("Access token TTL: %s", accessTTL)
port, err := strconv.Atoi(portStr)
if err != nil {
return err
}
mock, err := getMockOIDC(clientID, clientSecret)
if err != nil {
return err
}
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", addrStr, port))
if err != nil {
return err
}
err = mock.Start(listener, nil)
if err != nil {
return err
}
log.Info().Msgf("Mock OIDC server listening on %s", listener.Addr().String())
log.Info().Msgf("Issuer: %s", mock.Issuer())
c := make(chan struct{})
<-c
return nil
}
func getMockOIDC(clientID string, clientSecret string) (*mockoidc.MockOIDC, error) {
keypair, err := mockoidc.NewKeypair(nil)
if err != nil {
return nil, err
}
mock := mockoidc.MockOIDC{
ClientID: clientID,
ClientSecret: clientSecret,
AccessTTL: accessTTL,
RefreshTTL: refreshTTL,
CodeChallengeMethodsSupported: []string{"plain", "S256"},
Keypair: keypair,
SessionStore: mockoidc.NewSessionStore(),
UserQueue: &mockoidc.UserQueue{},
ErrorQueue: &mockoidc.ErrorQueue{},
}
return &mock, nil
}
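mockOIDC is configured purely through environment variables. As a sketch, an integration harness could prepare it like this before invoking the subcommand (the concrete values are placeholders, not defaults from the code):

// Placeholders only; MOCKOIDC_ACCESS_TTL is optional and overrides the 2m default.
os.Setenv("MOCKOIDC_CLIENT_ID", "headscale")
os.Setenv("MOCKOIDC_CLIENT_SECRET", "notsosecret")
os.Setenv("MOCKOIDC_ADDR", "127.0.0.1")
os.Setenv("MOCKOIDC_PORT", "10000")
os.Setenv("MOCKOIDC_ACCESS_TTL", "10m")
// Then run the subcommand: `headscale mockoidc` (it blocks until killed).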

View File

@@ -1,10 +1,10 @@
package cli package cli
import ( import (
"errors"
"fmt" "fmt"
survey "github.com/AlecAivazis/survey/v2" survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
@@ -13,24 +13,26 @@ import (
) )
func init() { func init() {
rootCmd.AddCommand(userCmd) rootCmd.AddCommand(namespaceCmd)
userCmd.AddCommand(createUserCmd) namespaceCmd.AddCommand(createNamespaceCmd)
userCmd.AddCommand(listUsersCmd) namespaceCmd.AddCommand(listNamespacesCmd)
userCmd.AddCommand(destroyUserCmd) namespaceCmd.AddCommand(destroyNamespaceCmd)
userCmd.AddCommand(renameUserCmd) namespaceCmd.AddCommand(renameNamespaceCmd)
} }
var errMissingParameter = errors.New("missing parameters") const (
errMissingParameter = headscale.Error("missing parameters")
)
var userCmd = &cobra.Command{ var namespaceCmd = &cobra.Command{
Use: "users", Use: "namespaces",
Short: "Manage the users of Headscale", Short: "Manage the namespaces of Headscale",
Aliases: []string{"user", "namespace", "namespaces", "ns"}, Aliases: []string{"namespace", "ns", "user", "users"},
} }
var createUserCmd = &cobra.Command{ var createNamespaceCmd = &cobra.Command{
Use: "create NAME", Use: "create NAME",
Short: "Creates a new user", Short: "Creates a new namespace",
Aliases: []string{"c", "new"}, Aliases: []string{"c", "new"},
Args: func(cmd *cobra.Command, args []string) error { Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 { if len(args) < 1 {
@@ -42,7 +44,7 @@ var createUserCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
userName := args[0] namespaceName := args[0]
ctx, client, conn, cancel := getHeadscaleCLIClient() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
@@ -50,15 +52,15 @@ var createUserCmd = &cobra.Command{
log.Trace().Interface("client", client).Msg("Obtained gRPC client") log.Trace().Interface("client", client).Msg("Obtained gRPC client")
request := &v1.CreateUserRequest{Name: userName} request := &v1.CreateNamespaceRequest{Name: namespaceName}
log.Trace().Interface("request", request).Msg("Sending CreateUser request") log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
response, err := client.CreateUser(ctx, request) response, err := client.CreateNamespace(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf( fmt.Sprintf(
"Cannot create user: %s", "Cannot create namespace: %s",
status.Convert(err).Message(), status.Convert(err).Message(),
), ),
output, output,
@@ -67,13 +69,13 @@ var createUserCmd = &cobra.Command{
return return
} }
SuccessOutput(response.User, "User created", output) SuccessOutput(response.Namespace, "Namespace created", output)
}, },
} }
var destroyUserCmd = &cobra.Command{ var destroyNamespaceCmd = &cobra.Command{
Use: "destroy NAME", Use: "destroy NAME",
Short: "Destroys a user", Short: "Destroys a namespace",
Aliases: []string{"delete"}, Aliases: []string{"delete"},
Args: func(cmd *cobra.Command, args []string) error { Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 { if len(args) < 1 {
@@ -85,17 +87,17 @@ var destroyUserCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
userName := args[0] namespaceName := args[0]
request := &v1.GetUserRequest{ request := &v1.GetNamespaceRequest{
Name: userName, Name: namespaceName,
} }
ctx, client, conn, cancel := getHeadscaleCLIClient() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
_, err := client.GetUser(ctx, request) _, err := client.GetNamespace(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
@@ -111,8 +113,8 @@ var destroyUserCmd = &cobra.Command{
if !force { if !force {
prompt := &survey.Confirm{ prompt := &survey.Confirm{
Message: fmt.Sprintf( Message: fmt.Sprintf(
"Do you want to remove the user '%s' and any associated preauthkeys?", "Do you want to remove the namespace '%s' and any associated preauthkeys?",
userName, namespaceName,
), ),
} }
err := survey.AskOne(prompt, &confirm) err := survey.AskOne(prompt, &confirm)
@@ -122,14 +124,14 @@ var destroyUserCmd = &cobra.Command{
} }
if confirm || force { if confirm || force {
request := &v1.DeleteUserRequest{Name: userName} request := &v1.DeleteNamespaceRequest{Name: namespaceName}
response, err := client.DeleteUser(ctx, request) response, err := client.DeleteNamespace(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf( fmt.Sprintf(
"Cannot destroy user: %s", "Cannot destroy namespace: %s",
status.Convert(err).Message(), status.Convert(err).Message(),
), ),
output, output,
@@ -137,16 +139,16 @@ var destroyUserCmd = &cobra.Command{
return return
} }
SuccessOutput(response, "User destroyed", output) SuccessOutput(response, "Namespace destroyed", output)
} else { } else {
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output) SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
} }
}, },
} }
var listUsersCmd = &cobra.Command{ var listNamespacesCmd = &cobra.Command{
Use: "list", Use: "list",
Short: "List all the users", Short: "List all the namespaces",
Aliases: []string{"ls", "show"}, Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
@@ -155,13 +157,13 @@ var listUsersCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
request := &v1.ListUsersRequest{} request := &v1.ListNamespacesRequest{}
response, err := client.ListUsers(ctx, request) response, err := client.ListNamespaces(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()), fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
output, output,
) )
@@ -169,19 +171,19 @@ var listUsersCmd = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response.Users, "", output) SuccessOutput(response.Namespaces, "", output)
return return
} }
tableData := pterm.TableData{{"ID", "Name", "Created"}} tableData := pterm.TableData{{"ID", "Name", "Created"}}
for _, user := range response.GetUsers() { for _, namespace := range response.GetNamespaces() {
tableData = append( tableData = append(
tableData, tableData,
[]string{ []string{
user.GetId(), namespace.GetId(),
user.GetName(), namespace.GetName(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"), namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
}, },
) )
} }
@@ -198,9 +200,9 @@ var listUsersCmd = &cobra.Command{
}, },
} }
var renameUserCmd = &cobra.Command{ var renameNamespaceCmd = &cobra.Command{
Use: "rename OLD_NAME NEW_NAME", Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a user", Short: "Renames a namespace",
Aliases: []string{"mv"}, Aliases: []string{"mv"},
Args: func(cmd *cobra.Command, args []string) error { Args: func(cmd *cobra.Command, args []string) error {
expectedArguments := 2 expectedArguments := 2
@@ -217,17 +219,17 @@ var renameUserCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
request := &v1.RenameUserRequest{ request := &v1.RenameNamespaceRequest{
OldName: args[0], OldName: args[0],
NewName: args[1], NewName: args[1],
} }
response, err := client.RenameUser(ctx, request) response, err := client.RenameNamespace(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf( fmt.Sprintf(
"Cannot rename user: %s", "Cannot rename namespace: %s",
status.Convert(err).Message(), status.Convert(err).Message(),
), ),
output, output,
@@ -236,6 +238,6 @@ var renameUserCmd = &cobra.Command{
return return
} }
SuccessOutput(response.User, "User renamed", output) SuccessOutput(response.Namespace, "Namespace renamed", output)
}, },
} }

View File

@@ -3,40 +3,28 @@ package cli
import ( import (
"fmt" "fmt"
"log" "log"
"net/netip"
"strconv" "strconv"
"strings" "strings"
"time" "time"
survey "github.com/AlecAivazis/survey/v2" survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
"inet.af/netaddr"
"tailscale.com/types/key" "tailscale.com/types/key"
) )
func init() { func init() {
rootCmd.AddCommand(nodeCmd) rootCmd.AddCommand(nodeCmd)
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user") listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags") listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
listNodesNamespaceFlag.Hidden = true
nodeCmd.AddCommand(listNodesCmd) nodeCmd.AddCommand(listNodesCmd)
registerNodeCmd.Flags().StringP("user", "u", "", "User") registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err := registerNodeCmd.MarkFlagRequired("namespace")
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
registerNodeNamespaceFlag.Hidden = true
err := registerNodeCmd.MarkFlagRequired("user")
if err != nil { if err != nil {
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
@@ -75,14 +63,9 @@ func init() {
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
moveNodeCmd.Flags().StringP("user", "u", "", "New user") moveNodeCmd.Flags().StringP("namespace", "n", "", "New namespace")
moveNodeCmd.Flags().StringP("namespace", "n", "", "User") err = moveNodeCmd.MarkFlagRequired("namespace")
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
moveNodeNamespaceFlag.Hidden = true
err = moveNodeCmd.MarkFlagRequired("user")
if err != nil { if err != nil {
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
@@ -110,9 +93,9 @@ var registerNodeCmd = &cobra.Command{
Short: "Registers a machine to your network", Short: "Registers a machine to your network",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return return
} }
@@ -125,7 +108,7 @@ var registerNodeCmd = &cobra.Command{
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Error getting node key from flag: %s", err), fmt.Sprintf("Error getting machine key from flag: %s", err),
output, output,
) )
@@ -133,8 +116,8 @@ var registerNodeCmd = &cobra.Command{
} }
request := &v1.RegisterMachineRequest{ request := &v1.RegisterMachineRequest{
Key: machineKey, Key: machineKey,
User: user, Namespace: namespace,
} }
response, err := client.RegisterMachine(ctx, request) response, err := client.RegisterMachine(ctx, request)
@@ -151,9 +134,7 @@ var registerNodeCmd = &cobra.Command{
return return
} }
SuccessOutput( SuccessOutput(response.Machine, "Machine register", output)
response.Machine,
fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
}, },
} }
@@ -163,9 +144,9 @@ var listNodesCmd = &cobra.Command{
Aliases: []string{"ls", "show"}, Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return return
} }
@@ -181,7 +162,7 @@ var listNodesCmd = &cobra.Command{
defer conn.Close() defer conn.Close()
request := &v1.ListMachinesRequest{ request := &v1.ListMachinesRequest{
User: user, Namespace: namespace,
} }
response, err := client.ListMachines(ctx, request) response, err := client.ListMachines(ctx, request)
@@ -201,7 +182,7 @@ var listNodesCmd = &cobra.Command{
return return
} }
tableData, err := nodesToPtables(user, showTags, response.Machines) tableData, err := nodesToPtables(namespace, showTags, response.Machines)
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -405,7 +386,7 @@ var deleteNodeCmd = &cobra.Command{
var moveNodeCmd = &cobra.Command{ var moveNodeCmd = &cobra.Command{
Use: "move", Use: "move",
Short: "Move node to another user", Short: "Move node to another namespace",
Aliases: []string{"mv"}, Aliases: []string{"mv"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
@@ -421,11 +402,11 @@ var moveNodeCmd = &cobra.Command{
return return
} }
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Error getting user: %s", err), fmt.Sprintf("Error getting namespace: %s", err),
output, output,
) )
@@ -456,7 +437,7 @@ var moveNodeCmd = &cobra.Command{
moveRequest := &v1.MoveMachineRequest{ moveRequest := &v1.MoveMachineRequest{
MachineId: identifier, MachineId: identifier,
User: user, Namespace: namespace,
} }
moveResponse, err := client.MoveMachine(ctx, moveRequest) moveResponse, err := client.MoveMachine(ctx, moveRequest)
@@ -473,12 +454,12 @@ var moveNodeCmd = &cobra.Command{
return return
} }
SuccessOutput(moveResponse.Machine, "Node moved to another user", output) SuccessOutput(moveResponse.Machine, "Node moved to another namespace", output)
}, },
} }
func nodesToPtables( func nodesToPtables(
currentUser string, currentNamespace string,
showTags bool, showTags bool,
machines []*v1.Machine, machines []*v1.Machine,
) (pterm.TableData, error) { ) (pterm.TableData, error) {
@@ -486,13 +467,11 @@ func nodesToPtables(
"ID", "ID",
"Hostname", "Hostname",
"Name", "Name",
"MachineKey",
"NodeKey", "NodeKey",
"User", "Namespace",
"IP addresses", "IP addresses",
"Ephemeral", "Ephemeral",
"Last seen", "Last seen",
"Expiration",
"Online", "Online",
"Expired", "Expired",
} }
@@ -519,32 +498,22 @@ func nodesToPtables(
} }
var expiry time.Time var expiry time.Time
var expiryTime string
if machine.Expiry != nil { if machine.Expiry != nil {
expiry = machine.Expiry.AsTime() expiry = machine.Expiry.AsTime()
expiryTime = expiry.Format("2006-01-02 15:04:05")
} else {
expiryTime = "N/A"
}
var machineKey key.MachinePublic
err := machineKey.UnmarshalText(
[]byte(util.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
)
if err != nil {
machineKey = key.MachinePublic{}
} }
var nodeKey key.NodePublic var nodeKey key.NodePublic
err = nodeKey.UnmarshalText( err := nodeKey.UnmarshalText(
[]byte(util.NodePublicKeyEnsurePrefix(machine.NodeKey)), []byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
) )
if err != nil { if err != nil {
return nil, err return nil, err
} }
var online string var online string
if machine.Online { if lastSeen.After(
time.Now().Add(-5 * time.Minute),
) { // TODO: Find a better way to reliably show if online
online = pterm.LightGreen("online") online = pterm.LightGreen("online")
} else { } else {
online = pterm.LightRed("offline") online = pterm.LightRed("offline")
@@ -577,18 +546,18 @@ func nodesToPtables(
} }
validTags = strings.TrimLeft(validTags, ",") validTags = strings.TrimLeft(validTags, ",")
var user string var namespace string
if currentUser == "" || (currentUser == machine.User.Name) { if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
user = pterm.LightMagenta(machine.User.Name) namespace = pterm.LightMagenta(machine.Namespace.Name)
} else { } else {
// Shared into this user // Shared into this namespace
user = pterm.LightYellow(machine.User.Name) namespace = pterm.LightYellow(machine.Namespace.Name)
} }
var IPV4Address string var IPV4Address string
var IPV6Address string var IPV6Address string
for _, addr := range machine.IpAddresses { for _, addr := range machine.IpAddresses {
if netip.MustParseAddr(addr).Is4() { if netaddr.MustParseIP(addr).Is4() {
IPV4Address = addr IPV4Address = addr
} else { } else {
IPV6Address = addr IPV6Address = addr
@@ -596,16 +565,14 @@ func nodesToPtables(
} }
nodeData := []string{ nodeData := []string{
strconv.FormatUint(machine.Id, util.Base10), strconv.FormatUint(machine.Id, headscale.Base10),
machine.Name, machine.Name,
machine.GetGivenName(), machine.GetGivenName(),
machineKey.ShortString(),
nodeKey.ShortString(), nodeKey.ShortString(),
user, namespace,
strings.Join([]string{IPV4Address, IPV6Address}, ", "), strings.Join([]string{IPV4Address, IPV6Address}, ", "),
strconv.FormatBool(ephemeral), strconv.FormatBool(ephemeral),
lastSeenTime, lastSeenTime,
expiryTime,
online, online,
expired, expired,
} }

View File

@@ -3,7 +3,6 @@ package cli
import ( import (
"fmt" "fmt"
"strconv" "strconv"
"strings"
"time" "time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
@@ -20,14 +19,8 @@ const (
func init() { func init() {
rootCmd.AddCommand(preauthkeysCmd) rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User") preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
pakNamespaceFlag.Hidden = true
err := preauthkeysCmd.MarkPersistentFlagRequired("user")
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("") log.Fatal().Err(err).Msg("")
} }
@@ -40,8 +33,6 @@ func init() {
Bool("ephemeral", false, "Preauthkey for ephemeral nodes") Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
createPreAuthKeyCmd.Flags(). createPreAuthKeyCmd.Flags().
StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)") StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
createPreAuthKeyCmd.Flags().
StringSlice("tags", []string{}, "Tags to automatically assign to node")
} }
var preauthkeysCmd = &cobra.Command{ var preauthkeysCmd = &cobra.Command{
@@ -52,14 +43,14 @@ var preauthkeysCmd = &cobra.Command{
var listPreAuthKeys = &cobra.Command{ var listPreAuthKeys = &cobra.Command{
Use: "list", Use: "list",
Short: "List the preauthkeys for this user", Short: "List the preauthkeys for this namespace",
Aliases: []string{"ls", "show"}, Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return return
} }
@@ -69,7 +60,7 @@ var listPreAuthKeys = &cobra.Command{
defer conn.Close() defer conn.Close()
request := &v1.ListPreAuthKeysRequest{ request := &v1.ListPreAuthKeysRequest{
User: user, Namespace: namespace,
} }
response, err := client.ListPreAuthKeys(ctx, request) response, err := client.ListPreAuthKeys(ctx, request)
@@ -90,16 +81,7 @@ var listPreAuthKeys = &cobra.Command{
} }
tableData := pterm.TableData{ tableData := pterm.TableData{
{ {"ID", "Key", "Reusable", "Ephemeral", "Used", "Expiration", "Created"},
"ID",
"Key",
"Reusable",
"Ephemeral",
"Used",
"Expiration",
"Created",
"Tags",
},
} }
for _, key := range response.PreAuthKeys { for _, key := range response.PreAuthKeys {
expiration := "-" expiration := "-"
@@ -114,14 +96,6 @@ var listPreAuthKeys = &cobra.Command{
reusable = fmt.Sprintf("%v", key.GetReusable()) reusable = fmt.Sprintf("%v", key.GetReusable())
} }
aclTags := ""
for _, tag := range key.AclTags {
aclTags += "," + tag
}
aclTags = strings.TrimLeft(aclTags, ",")
tableData = append(tableData, []string{ tableData = append(tableData, []string{
key.GetId(), key.GetId(),
key.GetKey(), key.GetKey(),
@@ -130,7 +104,6 @@ var listPreAuthKeys = &cobra.Command{
strconv.FormatBool(key.GetUsed()), strconv.FormatBool(key.GetUsed()),
expiration, expiration,
key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"), key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
aclTags,
}) })
} }
@@ -149,33 +122,31 @@ var listPreAuthKeys = &cobra.Command{
var createPreAuthKeyCmd = &cobra.Command{ var createPreAuthKeyCmd = &cobra.Command{
Use: "create", Use: "create",
Short: "Creates a new preauthkey in the specified user", Short: "Creates a new preauthkey in the specified namespace",
Aliases: []string{"c", "new"}, Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return return
} }
reusable, _ := cmd.Flags().GetBool("reusable") reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral") ephemeral, _ := cmd.Flags().GetBool("ephemeral")
tags, _ := cmd.Flags().GetStringSlice("tags")
log.Trace(). log.Trace().
Bool("reusable", reusable). Bool("reusable", reusable).
Bool("ephemeral", ephemeral). Bool("ephemeral", ephemeral).
Str("user", user). Str("namespace", namespace).
Msg("Preparing to create preauthkey") Msg("Preparing to create preauthkey")
request := &v1.CreatePreAuthKeyRequest{ request := &v1.CreatePreAuthKeyRequest{
User: user, Namespace: namespace,
Reusable: reusable, Reusable: reusable,
Ephemeral: ephemeral, Ephemeral: ephemeral,
AclTags: tags,
} }
durationStr, _ := cmd.Flags().GetString("expiration") durationStr, _ := cmd.Flags().GetString("expiration")
@@ -193,9 +164,7 @@ var createPreAuthKeyCmd = &cobra.Command{
expiration := time.Now().UTC().Add(time.Duration(duration)) expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace(). log.Trace().Dur("expiration", time.Duration(duration)).Msg("expiration has been set")
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration) request.Expiration = timestamppb.New(expiration)
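The --expiration flag takes human-readable durations such as 30m or 24h, and the time.Duration(duration) conversion above points at a typed duration value. A sketch assuming the Prometheus common model.ParseDuration parser (an assumption based on the github.com/prometheus/common/model import that appears in the config changes further down), which also accepts day-based values such as 180d:

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

func main() {
	// Parse a human-readable expiration ("30m", "24h", "180d", ...) and
	// derive the absolute expiry time, mirroring the flow in the hunk above.
	duration, err := model.ParseDuration("24h")
	if err != nil {
		panic(err)
	}
	expiration := time.Now().UTC().Add(time.Duration(duration))
	fmt.Println(expiration.Format("2006-01-02 15:04:05"))
}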
@@ -231,9 +200,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
}, },
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user") namespace, err := cmd.Flags().GetString("namespace")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return return
} }
@@ -243,8 +212,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
defer conn.Close() defer conn.Close()
request := &v1.ExpirePreAuthKeyRequest{ request := &v1.ExpirePreAuthKeyRequest{
User: user, Namespace: namespace,
Key: args[0], Key: args[0],
} }
response, err := client.ExpirePreAuthKey(ctx, request) response, err := client.ExpirePreAuthKey(ctx, request)


@@ -5,25 +5,16 @@ import (
"os" "os"
"runtime" "runtime"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/tcnksm/go-latest" "github.com/tcnksm/go-latest"
) )
const (
deprecateNamespaceMessage = "use --user"
)
var cfgFile string = "" var cfgFile string = ""
func init() { func init() {
if len(os.Args) > 1 &&
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
return
}
cobra.OnInitialize(initConfig) cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags(). rootCmd.PersistentFlags().
StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)") StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
@@ -34,29 +25,26 @@ func init() {
} }
func initConfig() { func initConfig() {
if cfgFile == "" {
cfgFile = os.Getenv("HEADSCALE_CONFIG")
}
if cfgFile != "" { if cfgFile != "" {
err := types.LoadConfig(cfgFile, true) err := headscale.LoadConfig(cfgFile, true)
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile) log.Fatal().Caller().Err(err)
} }
} else { } else {
err := types.LoadConfig("", false) err := headscale.LoadConfig("", false)
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config") log.Fatal().Caller().Err(err)
} }
} }
cfg, err := types.GetHeadscaleConfig() cfg, err := headscale.GetHeadscaleConfig()
if err != nil { if err != nil {
log.Fatal().Caller().Err(err) log.Fatal().Caller().Err(err)
} }
machineOutput := HasMachineOutputFlag() machineOutput := HasMachineOutputFlag()
zerolog.SetGlobalLevel(cfg.Log.Level) zerolog.SetGlobalLevel(cfg.LogLevel)
// If the user has requested a "machine" readable format, // If the user has requested a "machine" readable format,
// then disable login so the output remains valid. // then disable login so the output remains valid.
@@ -64,10 +52,6 @@ func initConfig() {
zerolog.SetGlobalLevel(zerolog.Disabled) zerolog.SetGlobalLevel(zerolog.Disabled)
} }
if cfg.Log.Format == types.JSONLogFormat {
log.Logger = log.Output(os.Stdout)
}
if !cfg.DisableUpdateCheck && !machineOutput { if !cfg.DisableUpdateCheck && !machineOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") && if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
Version != "dev" { Version != "dev" {
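The log.format branch above switches the logger to zerolog's native output; zerolog emits JSON by default, and the text format comes from the ConsoleWriter configured in main.go (shown further down). A minimal sketch of the two output modes:

package main

import (
	"os"
	"time"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func main() {
	// Text format: the console writer renders events for humans.
	log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stderr, TimeFormat: time.RFC3339})
	log.Info().Msg("text format")

	// JSON format: plain output, zerolog's native structured encoding.
	log.Logger = log.Output(os.Stdout)
	log.Info().Msg("json format")
}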


@@ -3,45 +3,37 @@ package cli
import ( import (
"fmt" "fmt"
"log" "log"
"net/netip"
"strconv" "strconv"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" "google.golang.org/grpc/status"
) )
const (
Base10 = 10
)
func init() { func init() {
rootCmd.AddCommand(routesCmd) rootCmd.AddCommand(routesCmd)
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)") listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err := listRoutesCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(listRoutesCmd) routesCmd.AddCommand(listRoutesCmd)
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)") enableRouteCmd.Flags().
err := enableRouteCmd.MarkFlagRequired("route") StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
enableRouteCmd.Flags().BoolP("all", "a", false, "All routes from host")
err = enableRouteCmd.MarkFlagRequired("identifier")
if err != nil { if err != nil {
log.Fatalf(err.Error()) log.Fatalf(err.Error())
} }
routesCmd.AddCommand(enableRouteCmd) routesCmd.AddCommand(enableRouteCmd)
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)") nodeCmd.AddCommand(routesCmd)
err = disableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(disableRouteCmd)
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = deleteRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(deleteRouteCmd)
} }
var routesCmd = &cobra.Command{ var routesCmd = &cobra.Command{
@@ -52,7 +44,7 @@ var routesCmd = &cobra.Command{
var listRoutesCmd = &cobra.Command{ var listRoutesCmd = &cobra.Command{
Use: "list", Use: "list",
Short: "List all routes", Short: "List routes advertised and enabled by a given node",
Aliases: []string{"ls", "show"}, Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
@@ -72,51 +64,28 @@ var listRoutesCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
var routes []*v1.Route request := &v1.GetMachineRouteRequest{
MachineId: machineID,
if machineID == 0 {
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
routes = response.Routes
} else {
response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
MachineId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
routes = response.Routes
} }
tableData := routesToPtables(routes) response, err := client.GetMachineRoute(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
tableData := routesToPtables(response.Routes)
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -138,12 +107,16 @@ var listRoutesCmd = &cobra.Command{
var enableRouteCmd = &cobra.Command{ var enableRouteCmd = &cobra.Command{
Use: "enable", Use: "enable",
Short: "Set a route as enabled", Short: "Set the enabled routes for a given node",
Long: `This command will make as enabled a given route.`, Long: `This command will take a list of routes that will _replace_
the current set of routes on a given node.
If you would like to disable a route, simply run the command again, but
omit the route you do not want to enable.
`,
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route") machineID, err := cmd.Flags().GetUint64("identifier")
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
@@ -158,13 +131,52 @@ var enableRouteCmd = &cobra.Command{
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{ var routes []string
RouteId: routeID,
}) isAll, _ := cmd.Flags().GetBool("all")
if isAll {
response, err := client.GetMachineRoute(ctx, &v1.GetMachineRouteRequest{
MachineId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot get machine routes: %s\n",
status.Convert(err).Message(),
),
output,
)
return
}
routes = response.GetRoutes().GetAdvertisedRoutes()
} else {
routes, err = cmd.Flags().GetStringSlice("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
return
}
}
request := &v1.EnableMachineRoutesRequest{
MachineId: machineID,
Routes: routes,
}
response, err := client.EnableMachineRoutes(ctx, request)
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()), fmt.Sprintf(
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
output, output,
) )
@@ -172,127 +184,50 @@ var enableRouteCmd = &cobra.Command{
} }
if output != "" { if output != "" {
SuccessOutput(response, "", output) SuccessOutput(response.Routes, "", output)
return return
} }
},
}
var disableRouteCmd = &cobra.Command{ tableData := routesToPtables(response.Routes)
Use: "disable", if err != nil {
Short: "Set as disabled a given route", ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
Long: `This command will make as disabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route") return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil { if err != nil {
ErrorOutput( ErrorOutput(
err, err,
fmt.Sprintf("Error getting machine id from flag: %s", err), fmt.Sprintf("Failed to render pterm table: %s", err),
output, output,
) )
return return
} }
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
var deleteRouteCmd = &cobra.Command{
Use: "delete",
Short: "Delete a given route",
Long: `This command will delete a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
}, },
} }
// routesToPtables converts the list of routes to a nice table. // routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes []*v1.Route) pterm.TableData { func routesToPtables(routes *v1.Routes) pterm.TableData {
tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}} tableData := pterm.TableData{{"Route", "Enabled"}}
for _, route := range routes { for _, route := range routes.GetAdvertisedRoutes() {
var isPrimaryStr string enabled := isStringInSlice(routes.EnabledRoutes, route)
prefix, err := netip.ParsePrefix(route.Prefix)
if err != nil {
log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
continue tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
}
if prefix == types.ExitRouteV4 || prefix == types.ExitRouteV6 {
isPrimaryStr = "-"
} else {
isPrimaryStr = strconv.FormatBool(route.IsPrimary)
}
tableData = append(tableData,
[]string{
strconv.FormatUint(route.Id, Base10),
route.Machine.GivenName,
route.Prefix,
strconv.FormatBool(route.Advertised),
strconv.FormatBool(route.Enabled),
isPrimaryStr,
})
} }
return tableData return tableData
} }
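The Primary column above is rendered as "-" for exit routes, i.e. the all-traffic prefixes; a sketch of that check, assuming types.ExitRouteV4 and types.ExitRouteV6 are 0.0.0.0/0 and ::/0 respectively:

package main

import (
	"fmt"
	"net/netip"
)

// Assumed values of types.ExitRouteV4 / types.ExitRouteV6.
var (
	exitRouteV4 = netip.MustParsePrefix("0.0.0.0/0")
	exitRouteV6 = netip.MustParsePrefix("::/0")
)

// isExitRoute reports whether a route prefix is an exit route, for which the
// "primary" flag is not meaningful in the table above.
func isExitRoute(p netip.Prefix) bool {
	return p == exitRouteV4 || p == exitRouteV6
}

func main() {
	fmt.Println(isExitRoute(netip.MustParsePrefix("0.0.0.0/0")))  // true
	fmt.Println(isExitRoute(netip.MustParsePrefix("10.0.0.0/8"))) // false
}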
func isStringInSlice(strs []string, s string) bool {
for _, s2 := range strs {
if s == s2 {
return true
}
}
return false
}
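isStringInSlice above is the classic linear membership scan; on Go versions that ship the standard slices package (1.21+), the same check can be written as:

package main

import (
	"fmt"
	"slices"
)

// Same behaviour as isStringInSlice above, using the standard library.
func isStringInSlice(strs []string, s string) bool {
	return slices.Contains(strs, s)
}

func main() {
	fmt.Println(isStringInSlice([]string{"10.0.0.0/8"}, "10.0.0.0/8")) // true
}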


@@ -8,33 +8,26 @@ import (
"os" "os"
"reflect" "reflect"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"google.golang.org/grpc" "google.golang.org/grpc"
"google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure" "google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v2"
) )
const ( const (
HeadscaleDateTimeFormat = "2006-01-02 15:04:05" HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
SocketWritePermissions = 0o666
) )
func getHeadscaleApp() (*hscontrol.Headscale, error) { func getHeadscaleApp() (*headscale.Headscale, error) {
cfg, err := types.GetHeadscaleConfig() cfg, err := headscale.GetHeadscaleConfig()
if err != nil { if err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf("failed to load configuration while creating headscale instance: %w", err)
"failed to load configuration while creating headscale instance: %w",
err,
)
} }
app, err := hscontrol.NewHeadscale(cfg) app, err := headscale.NewHeadscale(cfg)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -42,23 +35,21 @@ func getHeadscaleApp() (*hscontrol.Headscale, error) {
// We are doing this here, as in the future could be cool to have it also hot-reload // We are doing this here, as in the future could be cool to have it also hot-reload
if cfg.ACL.PolicyPath != "" { if cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath) aclPath := headscale.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath) err = app.LoadACLPolicy(aclPath)
if err != nil { if err != nil {
log.Fatal(). log.Fatal().
Str("path", aclPath). Str("path", aclPath).
Err(err). Err(err).
Msg("Could not load the ACL policy") Msg("Could not load the ACL policy")
} }
app.ACLPolicy = pol
} }
return app, nil return app, nil
} }
func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) { func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg, err := types.GetHeadscaleConfig() cfg, err := headscale.GetHeadscaleConfig()
if err != nil { if err != nil {
log.Fatal(). log.Fatal().
Err(err). Err(err).
@@ -79,7 +70,7 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
address := cfg.CLI.Address address := cfg.CLI.Address
// If the address is not set, we assume that we are on the server hosting hscontrol. // If the address is not set, we assume that we are on the server hosting headscale.
if address == "" { if address == "" {
log.Debug(). log.Debug().
Str("socket", cfg.UnixSocket). Str("socket", cfg.UnixSocket).
@@ -87,23 +78,10 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
address = cfg.UnixSocket address = cfg.UnixSocket
// Try to give the user better feedback if we cannot write to the headscale
// socket.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
if err != nil {
if os.IsPermission(err) {
log.Fatal().
Err(err).
Str("socket", cfg.UnixSocket).
Msgf("Unable to read/write to headscale socket, do you have the correct permissions?")
}
}
socket.Close()
grpcOptions = append( grpcOptions = append(
grpcOptions, grpcOptions,
grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(util.GrpcSocketDialer), grpc.WithContextDialer(headscale.GrpcSocketDialer),
) )
} else { } else {
// If we are not connecting to a local server, require an API key for authentication // If we are not connecting to a local server, require an API key for authentication


@@ -6,25 +6,11 @@ import (
"github.com/efekarakus/termcolor" "github.com/efekarakus/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli" "github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/pkg/profile"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
) )
func main() { func main() {
if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
err := os.MkdirAll(profilePath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}
defer profile.Start(profile.ProfilePath(profilePath)).Stop()
} else {
defer profile.Start().Stop()
}
}
var colors bool var colors bool
switch l := termcolor.SupportLevel(os.Stderr); l { switch l := termcolor.SupportLevel(os.Stderr); l {
case termcolor.Level16M: case termcolor.Level16M:
@@ -48,7 +34,7 @@ func main() {
zerolog.TimeFieldFormat = zerolog.TimeFormatUnix zerolog.TimeFieldFormat = zerolog.TimeFormatUnix
log.Logger = log.Output(zerolog.ConsoleWriter{ log.Logger = log.Output(zerolog.ConsoleWriter{
Out: os.Stderr, Out: os.Stdout,
TimeFormat: time.RFC3339, TimeFormat: time.RFC3339,
NoColor: !colors, NoColor: !colors,
}) })
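The profiling hook shown above is driven by the HEADSCALE_PROFILING_ENABLED / HEADSCALE_PROFILING_PATH environment variables and uses github.com/pkg/profile; a minimal sketch of that library's pattern:

package main

import "github.com/pkg/profile"

func main() {
	// Starts CPU profiling (the library default) and writes the profile to the
	// given directory when the deferred Stop runs at exit. The path here is a
	// hypothetical example, not a headscale default.
	defer profile.Start(profile.ProfilePath("/tmp/headscale-pprof")).Stop()

	// ... rest of the program ...
}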


@@ -2,13 +2,13 @@ package main
import ( import (
"io/fs" "io/fs"
"io/ioutil"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
"testing" "testing"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/spf13/viper" "github.com/spf13/viper"
"gopkg.in/check.v1" "gopkg.in/check.v1"
) )
@@ -28,7 +28,7 @@ func (s *Suite) TearDownSuite(c *check.C) {
} }
func (*Suite) TestConfigFileLoading(c *check.C) { func (*Suite) TestConfigFileLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale") tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil { if err != nil {
c.Fatal(err) c.Fatal(err)
} }
@@ -51,12 +51,12 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
} }
// Load example config, it should load without validation errors // Load example config, it should load without validation errors
err = types.LoadConfig(cfgFile, true) err = headscale.LoadConfig(cfgFile, true)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly // Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3") c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite") c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -65,7 +65,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1") c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert( c.Assert(
util.GetFileMode("unix_socket_permission"), headscale.GetFileMode("unix_socket_permission"),
check.Equals, check.Equals,
fs.FileMode(0o770), fs.FileMode(0o770),
) )
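GetFileMode reads unix_socket_permission from viper; the conversion the test asserts on, octal string "0770" to fs.FileMode(0o770), looks roughly like this (a sketch, not necessarily the project's implementation):

package main

import (
	"fmt"
	"io/fs"
	"strconv"
)

// parseSocketMode converts an octal permission string such as "0770" into an
// fs.FileMode, the conversion asserted in the test above.
func parseSocketMode(s string) (fs.FileMode, error) {
	mode, err := strconv.ParseUint(s, 8, 32)
	if err != nil {
		return 0, err
	}

	return fs.FileMode(mode), nil
}

func main() {
	mode, _ := parseSocketMode("0770")
	fmt.Println(mode) // -rwxrwx---
}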
@@ -73,7 +73,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
} }
func (*Suite) TestConfigLoading(c *check.C) { func (*Suite) TestConfigLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale") tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil { if err != nil {
c.Fatal(err) c.Fatal(err)
} }
@@ -94,12 +94,12 @@ func (*Suite) TestConfigLoading(c *check.C) {
} }
// Load example config, it should load without validation errors // Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly // Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3") c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite") c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -108,7 +108,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1") c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert( c.Assert(
util.GetFileMode("unix_socket_permission"), headscale.GetFileMode("unix_socket_permission"),
check.Equals, check.Equals,
fs.FileMode(0o770), fs.FileMode(0o770),
) )
@@ -117,7 +117,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
} }
func (*Suite) TestDNSConfigLoading(c *check.C) { func (*Suite) TestDNSConfigLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale") tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil { if err != nil {
c.Fatal(err) c.Fatal(err)
} }
@@ -138,10 +138,10 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
} }
// Load example config, it should load without validation errors // Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
dnsConfig, baseDomain := types.GetDNSConfig() dnsConfig, baseDomain := headscale.GetDNSConfig()
c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1") c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1")
c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1") c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1")
@@ -152,28 +152,26 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
func writeConfig(c *check.C, tmpDir string, configYaml []byte) { func writeConfig(c *check.C, tmpDir string, configYaml []byte) {
// Populate a custom config file // Populate a custom config file
configFile := filepath.Join(tmpDir, "config.yaml") configFile := filepath.Join(tmpDir, "config.yaml")
err := os.WriteFile(configFile, configYaml, 0o600) err := ioutil.WriteFile(configFile, configYaml, 0o600)
if err != nil { if err != nil {
c.Fatalf("Couldn't write file %s", configFile) c.Fatalf("Couldn't write file %s", configFile)
} }
} }
func (*Suite) TestTLSConfigValidation(c *check.C) { func (*Suite) TestTLSConfigValidation(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale") tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil { if err != nil {
c.Fatal(err) c.Fatal(err)
} }
// defer os.RemoveAll(tmpDir) // defer os.RemoveAll(tmpDir)
configYaml := []byte(`---
tls_letsencrypt_hostname: example.com configYaml := []byte(
tls_letsencrypt_challenge_type: "" "---\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"\"\ntls_cert_path: \"abc.pem\"",
tls_cert_path: abc.pem )
noise:
private_key_path: noise_private.key`)
writeConfig(c, tmpDir, configYaml) writeConfig(c, tmpDir, configYaml)
// Check configuration validation errors (1) // Check configuration validation errors (1)
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.NotNil) c.Assert(err, check.NotNil)
// check.Matches can not handle multiline strings // check.Matches can not handle multiline strings
tmp := strings.ReplaceAll(err.Error(), "\n", "***") tmp := strings.ReplaceAll(err.Error(), "\n", "***")
@@ -194,14 +192,10 @@ noise:
) )
// Check configuration validation errors (2) // Check configuration validation errors (2)
configYaml = []byte(`--- configYaml = []byte(
noise: "---\nserver_url: \"http://127.0.0.1:8080\"\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"TLS-ALPN-01\"",
private_key_path: noise_private.key )
server_url: http://127.0.0.1:8080
tls_letsencrypt_hostname: example.com
tls_letsencrypt_challenge_type: TLS-ALPN-01
`)
writeConfig(c, tmpDir, configYaml) writeConfig(c, tmpDir, configYaml)
err = types.LoadConfig(tmpDir, false) err = headscale.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil) c.Assert(err, check.IsNil)
} }


@@ -14,9 +14,7 @@ server_url: http://127.0.0.1:8080
# Address to listen to / bind to on the server # Address to listen to / bind to on the server
# #
# For production: listen_addr: 0.0.0.0:8080
# listen_addr: 0.0.0.0:8080
listen_addr: 127.0.0.1:8080
# Address to listen to /metrics, you may want # Address to listen to /metrics, you may want
# to keep this endpoint private to your internal # to keep this endpoint private to your internal
@@ -29,10 +27,7 @@ metrics_listen_addr: 127.0.0.1:9090
# remotely with the CLI # remotely with the CLI
# Note: Remote access _only_ works if you have # Note: Remote access _only_ works if you have
# valid certificates. # valid certificates.
# grpc_listen_addr: 0.0.0.0:50443
# For production:
# grpc_listen_addr: 0.0.0.0:50443
grpc_listen_addr: 127.0.0.1:50443
# Allow the gRPC admin interface to run in INSECURE # Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will # mode. This is not recommended as the traffic will
@@ -40,30 +35,15 @@ grpc_listen_addr: 127.0.0.1:50443
# are doing. # are doing.
grpc_allow_insecure: false grpc_allow_insecure: false
# Private key used to encrypt the traffic between headscale # Private key used encrypt the traffic between headscale
# and Tailscale clients. # and Tailscale clients.
# The private key file will be autogenerated if it's missing. # The private key file which will be
# # autogenerated if it's missing
private_key_path: /var/lib/headscale/private.key private_key_path: /var/lib/headscale/private.key
# The Noise section includes specific configuration for the
# TS2021 Noise protocol
noise:
# The Noise private key is used to encrypt the
# traffic between headscale and Tailscale clients when
# using the new Noise-based protocol. It must be different
# from the legacy private key.
private_key_path: /var/lib/headscale/noise_private.key
# List of IP prefixes to allocate tailaddresses from. # List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address, # Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash. # and the associated prefix length, delimited by a slash.
# It must be within IP ranges supported by the Tailscale
# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
# See below:
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# Any other range is NOT supported, and it will cause unexpected issues.
ip_prefixes: ip_prefixes:
- fd7a:115c:a1e0::/48 - fd7a:115c:a1e0::/48
- 100.64.0.0/10 - 100.64.0.0/10
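The constraint described above (IPv4 prefixes must fall inside 100.64.0.0/10, IPv6 prefixes inside fd7a:115c:a1e0::/48) can be checked with the same netipx primitives the Go config loader uses further down; a small sketch:

package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
)

// containsPrefix reports whether candidate lies entirely inside allowed,
// mirroring the IPSet-based range check in the config loader below.
func containsPrefix(allowed, candidate netip.Prefix) bool {
	var builder netipx.IPSetBuilder
	builder.AddPrefix(allowed)

	ipSet, err := builder.IPSet()
	if err != nil {
		return false
	}

	return ipSet.ContainsPrefix(candidate)
}

func main() {
	cgnat := netip.MustParsePrefix("100.64.0.0/10")
	fmt.Println(containsPrefix(cgnat, netip.MustParsePrefix("100.64.0.0/10"))) // true: supported
	fmt.Println(containsPrefix(cgnat, netip.MustParsePrefix("10.0.0.0/8")))    // false: unsupported
}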
@@ -89,7 +69,7 @@ derp:
region_code: "headscale" region_code: "headscale"
region_name: "Headscale Embedded DERP" region_name: "Headscale Embedded DERP"
# Listens over UDP at the configured address for STUN connections - to help with NAT traversal. # Listens in UDP at the configured address for STUN connections to help on NAT traversal.
# When the embedded DERP server is enabled stun_listen_addr MUST be defined. # When the embedded DERP server is enabled stun_listen_addr MUST be defined.
# #
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/ # For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
@@ -123,20 +103,17 @@ disable_check_updates: false
# Time before an inactive ephemeral node is deleted? # Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m ephemeral_node_inactivity_timeout: 30m
# Period to check for node updates within the tailnet. A value too low will severely affect # Period to check for node updates in the tailnet. A value too low will severily affect
# CPU consumption of Headscale. A value too high (over 60s) will cause problems # CPU consumption of Headscale. A value too high (over 60s) will cause problems
# for the nodes, as they won't get updates or keep alive messages frequently enough. # to the nodes, as they won't get updates or keep alive messages in time.
# In case of doubts, do not touch the default 10s. # In case of doubts, do not touch the default 10s.
node_update_check_interval: 10s node_update_check_interval: 10s
# SQLite config # SQLite config
db_type: sqlite3 db_type: sqlite3
# For production:
db_path: /var/lib/headscale/db.sqlite db_path: /var/lib/headscale/db.sqlite
# # Postgres config # # Postgres config
# If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
# db_type: postgres # db_type: postgres
# db_host: localhost # db_host: localhost
# db_port: 5432 # db_port: 5432
@@ -144,10 +121,6 @@ db_path: /var/lib/headscale/db.sqlite
# db_user: foo # db_user: foo
# db_pass: bar # db_pass: bar
# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
# db_ssl: false
### TLS configuration ### TLS configuration
# #
## Let's encrypt / ACME ## Let's encrypt / ACME
@@ -164,9 +137,15 @@ acme_email: ""
# Domain name to request a TLS certificate for: # Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: "" tls_letsencrypt_hostname: ""
# Client (Tailscale/Browser) authentication mode (mTLS)
# Acceptable values:
# - disabled: client authentication disabled
# - relaxed: client certificate is required but not verified
# - enforced: client certificate is required and verified
tls_client_auth_mode: relaxed
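The three modes described above map naturally onto crypto/tls client-auth policies; a sketch of what the Go-side lookup presumably does (the actual LookupTLSClientAuthMode implementation is not part of this hunk):

package main

import (
	"crypto/tls"
	"fmt"
)

// lookupTLSClientAuthMode is a sketch of the mapping implied by the comments
// above; the second return value reports whether the mode string is valid.
func lookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
	switch mode {
	case "disabled":
		return tls.NoClientCert, true
	case "relaxed":
		// certificate required but not verified
		return tls.RequireAnyClientCert, true
	case "enforced":
		// certificate required and verified
		return tls.RequireAndVerifyClientCert, true
	default:
		return tls.NoClientCert, false
	}
}

func main() {
	_, ok := lookupTLSClientAuthMode("relaxed")
	fmt.Println(ok) // true
}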
# Path to store certificates and metadata needed by # Path to store certificates and metadata needed by
# letsencrypt # letsencrypt
# For production:
tls_letsencrypt_cache_dir: /var/lib/headscale/cache tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types: # Type of ACME challenge to use, currently supported types:
@@ -174,7 +153,7 @@ tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# See [docs/tls.md](docs/tls.md) for more information # See [docs/tls.md](docs/tls.md) for more information
tls_letsencrypt_challenge_type: HTTP-01 tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a # When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on: # verification endpoint, and it will be listning on:
# :http = port 80 # :http = port 80
tls_letsencrypt_listen: ":http" tls_letsencrypt_listen: ":http"
@@ -182,10 +161,7 @@ tls_letsencrypt_listen: ":http"
tls_cert_path: "" tls_cert_path: ""
tls_key_path: "" tls_key_path: ""
log: log_level: info
# Output formatting for logs: text or json
format: text
level: info
# Path to a file containg ACL policies. # Path to a file containg ACL policies.
# ACLs can be defined as YAML or HUJSON. # ACLs can be defined as YAML or HUJSON.
@@ -202,25 +178,10 @@ acl_policy_path: ""
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/ # - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
# #
dns_config: dns_config:
# Whether to prefer using Headscale provided DNS or use local.
override_local_dns: true
# List of DNS servers to expose to clients. # List of DNS servers to expose to clients.
nameservers: nameservers:
- 1.1.1.1 - 1.1.1.1
# NextDNS (see https://tailscale.com/kb/1218/nextdns/).
# "abc123" is example NextDNS ID, replace with yours.
#
# With metadata sharing:
# nameservers:
# - https://dns.nextdns.io/abc123
#
# Without metadata sharing:
# nameservers:
# - 2a07:a8c0::ab:c123
# - 2a07:a8c1::ab:c123
# Split DNS (see https://tailscale.com/kb/1054/dns/), # Split DNS (see https://tailscale.com/kb/1054/dns/),
# list of search domains and the DNS to query for each one. # list of search domains and the DNS to query for each one.
# #
@@ -234,17 +195,6 @@ dns_config:
# Search domains to inject. # Search domains to inject.
domains: [] domains: []
# Extra DNS records
# so far only A-records are supported (on the tailscale side)
# See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
# extra_records:
# - name: "grafana.myvpn.example.com"
# type: "A"
# value: "100.64.0.3"
#
# # you can also put it in one line
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/). # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Only works if there is at least a nameserver defined. # Only works if there is at least a nameserver defined.
magic_dns: true magic_dns: true
@@ -252,12 +202,13 @@ dns_config:
# Defines the base domain to create the hostnames for MagicDNS. # Defines the base domain to create the hostnames for MagicDNS.
# `base_domain` must be a FQDNs, without the trailing dot. # `base_domain` must be a FQDNs, without the trailing dot.
# The FQDN of the hosts will be # The FQDN of the hosts will be
# `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_). # `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
base_domain: example.com base_domain: example.com
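A tiny sketch of the FQDN construction described in the comment above (hostname.user.base_domain on the one side, hostname.namespace.base_domain on the other):

package main

import "fmt"

// magicDNSName builds the MagicDNS FQDN for a host as described above;
// "owner" stands for the user (or, on the older side, namespace) name.
func magicDNSName(hostname, owner, baseDomain string) string {
	return fmt.Sprintf("%s.%s.%s", hostname, owner, baseDomain)
}

func main() {
	fmt.Println(magicDNSName("myhost", "myuser", "example.com")) // myhost.myuser.example.com
}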
# Unix socket used for the CLI to connect without authentication # Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like: # Note: for local development, you probably want to change this to:
unix_socket: /var/run/headscale/headscale.sock # unix_socket: ./headscale.sock
unix_socket: /var/run/headscale.sock
unix_socket_permission: "0770" unix_socket_permission: "0770"
# #
# headscale supports experimental OpenID connect support, # headscale supports experimental OpenID connect support,
@@ -265,49 +216,29 @@ unix_socket_permission: "0770"
# help us test it. # help us test it.
# OpenID Connect # OpenID Connect
# oidc: # oidc:
# only_start_if_oidc_is_available: true
# issuer: "https://your-oidc.issuer.com/path" # issuer: "https://your-oidc.issuer.com/path"
# client_id: "your-oidc-client-id" # client_id: "your-oidc-client-id"
# client_secret: "your-oidc-client-secret" # client_secret: "your-oidc-client-secret"
# # Alternatively, set `client_secret_path` to read the secret from the file.
# # It resolves environment variables, making integration to systemd's
# # `LoadCredential` straightforward:
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
# # client_secret and client_secret_path are mutually exclusive.
# #
# # The amount of time from a node is authenticated with OpenID until it # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# # expires and needs to reauthenticate. # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
# # Setting the value to "0" will mean no expiry.
# expiry: 180d
#
# # Use the expiry from the token received from OpenID when the user logged
# # in, this will typically lead to frequent need to reauthenticate and should
# # only been enabled if you know what you are doing.
# # Note: enabling this will cause `oidc.expiry` to be ignored.
# use_expiry_from_token: false
#
# # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
# #
# scope: ["openid", "profile", "email", "custom"] # scope: ["openid", "profile", "email", "custom"]
# extra_params: # extra_params:
# domain_hint: example.com # domain_hint: example.com
# #
# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# # authentication request will be rejected. # authentication request will be rejected.
# #
# allowed_domains: # allowed_domains:
# - example.com # - example.com
# # Note: Groups from keycloak have a leading '/'
# allowed_groups:
# - /headscale
# allowed_users: # allowed_users:
# - alice@example.com # - alice@example.com
# #
# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed. # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name` # This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
# # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
# user: `first-name.last-name.example.com` # namespace: `first-name.last-name.example.com`
# #
# strip_email_domain: true # strip_email_domain: true
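The client_secret_path option commented above is resolved with environment expansion, so a systemd LoadCredential path such as ${CREDENTIALS_DIRECTORY}/oidc_client_secret works; the Go side further down reads it roughly like this (a sketch of the same pattern):

package main

import (
	"fmt"
	"os"
)

// readOIDCClientSecret loads the client secret from a file path, expanding
// environment variables first (e.g. ${CREDENTIALS_DIRECTORY}).
func readOIDCClientSecret(path string) (string, error) {
	secretBytes, err := os.ReadFile(os.ExpandEnv(path))
	if err != nil {
		return "", err
	}

	return string(secretBytes), nil
}

func main() {
	secret, err := readOIDCClientSecret("${CREDENTIALS_DIRECTORY}/oidc_client_secret")
	if err != nil {
		fmt.Println("could not read secret:", err)
		return
	}
	fmt.Println(len(secret), "bytes read")
}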


@@ -1,34 +1,26 @@
package types package headscale
import ( import (
"crypto/tls"
"errors" "errors"
"fmt" "fmt"
"io/fs" "io/fs"
"net/netip"
"net/url" "net/url"
"os"
"strings" "strings"
"time" "time"
"github.com/coreos/go-oidc/v3/oidc" "github.com/coreos/go-oidc/v3/oidc"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/viper" "github.com/spf13/viper"
"go4.org/netipx" "inet.af/netaddr"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/dnstype" "tailscale.com/types/dnstype"
) )
const ( const (
defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days tlsALPN01ChallengeType = "TLS-ALPN-01"
maxDuration time.Duration = 1<<63 - 1 http01ChallengeType = "HTTP-01"
)
var errOidcMutuallyExclusive = errors.New(
"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
) )
// Config contains the initial Headscale configuration. // Config contains the initial Headscale configuration.
@@ -40,11 +32,10 @@ type Config struct {
GRPCAllowInsecure bool GRPCAllowInsecure bool
EphemeralNodeInactivityTimeout time.Duration EphemeralNodeInactivityTimeout time.Duration
NodeUpdateCheckInterval time.Duration NodeUpdateCheckInterval time.Duration
IPPrefixes []netip.Prefix IPPrefixes []netaddr.IPPrefix
PrivateKeyPath string PrivateKeyPath string
NoisePrivateKeyPath string
BaseDomain string BaseDomain string
Log LogConfig LogLevel zerolog.Level
DisableUpdateCheck bool DisableUpdateCheck bool
DERP DERPConfig DERP DERPConfig
@@ -56,7 +47,6 @@ type Config struct {
DBname string DBname string
DBuser string DBuser string
DBpass string DBpass string
DBssl string
TLS TLSConfig TLS TLSConfig
@@ -79,8 +69,9 @@ type Config struct {
} }
type TLSConfig struct { type TLSConfig struct {
CertPath string CertPath string
KeyPath string KeyPath string
ClientAuthMode tls.ClientAuthType
LetsEncrypt LetsEncryptConfig LetsEncrypt LetsEncryptConfig
} }
@@ -93,18 +84,14 @@ type LetsEncryptConfig struct {
} }
type OIDCConfig struct { type OIDCConfig struct {
OnlyStartIfOIDCIsAvailable bool Issuer string
Issuer string ClientID string
ClientID string ClientSecret string
ClientSecret string Scope []string
Scope []string ExtraParams map[string]string
ExtraParams map[string]string AllowedDomains []string
AllowedDomains []string AllowedUsers []string
AllowedUsers []string StripEmaildomain bool
AllowedGroups []string
StripEmaildomain bool
Expiry time.Duration
UseExpiryFromToken bool
} }
type DERPConfig struct { type DERPConfig struct {
@@ -134,11 +121,6 @@ type ACLConfig struct {
PolicyPath string PolicyPath string
} }
type LogConfig struct {
Format string
Level zerolog.Level
}
func LoadConfig(path string, isFile bool) error { func LoadConfig(path string, isFile bool) error {
if isFile { if isFile {
viper.SetConfigFile(path) viper.SetConfigFile(path)
@@ -159,18 +141,17 @@ func LoadConfig(path string, isFile bool) error {
viper.AutomaticEnv() viper.AutomaticEnv()
viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache") viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
viper.SetDefault("tls_letsencrypt_challenge_type", HTTP01ChallengeType) viper.SetDefault("tls_letsencrypt_challenge_type", http01ChallengeType)
viper.SetDefault("tls_client_auth_mode", "relaxed")
viper.SetDefault("log.level", "info") viper.SetDefault("log_level", "info")
viper.SetDefault("log.format", TextLogFormat)
viper.SetDefault("dns_config", nil) viper.SetDefault("dns_config", nil)
viper.SetDefault("dns_config.override_local_dns", true)
viper.SetDefault("derp.server.enabled", false) viper.SetDefault("derp.server.enabled", false)
viper.SetDefault("derp.server.stun.enabled", true) viper.SetDefault("derp.server.stun.enabled", true)
viper.SetDefault("unix_socket", "/var/run/headscale/headscale.sock") viper.SetDefault("unix_socket", "/var/run/headscale.sock")
viper.SetDefault("unix_socket_permission", "0o770") viper.SetDefault("unix_socket_permission", "0o770")
viper.SetDefault("grpc_listen_addr", ":50443") viper.SetDefault("grpc_listen_addr", ":50443")
@@ -179,13 +160,8 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("cli.timeout", "5s") viper.SetDefault("cli.timeout", "5s")
viper.SetDefault("cli.insecure", false) viper.SetDefault("cli.insecure", false)
viper.SetDefault("db_ssl", false)
viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"}) viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
viper.SetDefault("oidc.strip_email_domain", true) viper.SetDefault("oidc.strip_email_domain", true)
viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
viper.SetDefault("oidc.expiry", "180d")
viper.SetDefault("oidc.use_expiry_from_token", false)
viper.SetDefault("logtail.enabled", false) viper.SetDefault("logtail.enabled", false)
viper.SetDefault("randomize_client_port", false) viper.SetDefault("randomize_client_port", false)
@@ -194,10 +170,6 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("node_update_check_interval", "10s") viper.SetDefault("node_update_check_interval", "10s")
if IsCLIConfigured() {
return nil
}
if err := viper.ReadInConfig(); err != nil { if err := viper.ReadInConfig(); err != nil {
log.Warn().Err(err).Msg("Failed to read configuration from disk") log.Warn().Err(err).Msg("Failed to read configuration from disk")
@@ -211,20 +183,16 @@ func LoadConfig(path string, isFile bool) error {
errorText += "Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both\n" errorText += "Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both\n"
} }
if !viper.IsSet("noise") || viper.GetString("noise.private_key_path") == "" {
errorText += "Fatal config error: headscale now requires a new `noise.private_key_path` field in the config file for the Tailscale v2 protocol\n"
}
if (viper.GetString("tls_letsencrypt_hostname") != "") && if (viper.GetString("tls_letsencrypt_hostname") != "") &&
(viper.GetString("tls_letsencrypt_challenge_type") == TLSALPN01ChallengeType) && (viper.GetString("tls_letsencrypt_challenge_type") == tlsALPN01ChallengeType) &&
(!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) { (!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) {
// this is only a warning because there could be something sitting in front of headscale that redirects the traffic (e.g. an iptables rule) // this is only a warning because there could be something sitting in front of headscale that redirects the traffic (e.g. an iptables rule)
log.Warn(). log.Warn().
Msg("Warning: when using tls_letsencrypt_hostname with TLS-ALPN-01 as challenge type, headscale must be reachable on port 443, i.e. listen_addr should probably end in :443") Msg("Warning: when using tls_letsencrypt_hostname with TLS-ALPN-01 as challenge type, headscale must be reachable on port 443, i.e. listen_addr should probably end in :443")
} }
if (viper.GetString("tls_letsencrypt_challenge_type") != HTTP01ChallengeType) && if (viper.GetString("tls_letsencrypt_challenge_type") != http01ChallengeType) &&
(viper.GetString("tls_letsencrypt_challenge_type") != TLSALPN01ChallengeType) { (viper.GetString("tls_letsencrypt_challenge_type") != tlsALPN01ChallengeType) {
errorText += "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are HTTP-01 and TLS-ALPN-01\n" errorText += "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are HTTP-01 and TLS-ALPN-01\n"
} }
@@ -233,6 +201,19 @@ func LoadConfig(path string, isFile bool) error {
errorText += "Fatal config error: server_url must start with https:// or http://\n" errorText += "Fatal config error: server_url must start with https:// or http://\n"
} }
_, authModeValid := LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)
if !authModeValid {
errorText += fmt.Sprintf(
"Invalid tls_client_auth_mode supplied: %s. Accepted values: %s, %s, %s.",
viper.GetString("tls_client_auth_mode"),
DisabledClientAuth,
RelaxedClientAuth,
EnforcedClientAuth)
}
// Minimum inactivity time out is keepalive timeout (60s) plus a few seconds // Minimum inactivity time out is keepalive timeout (60s) plus a few seconds
// to avoid races // to avoid races
minInactivityTimeout, _ := time.ParseDuration("65s") minInactivityTimeout, _ := time.ParseDuration("65s")
@@ -262,21 +243,26 @@ func LoadConfig(path string, isFile bool) error {
} }
func GetTLSConfig() TLSConfig { func GetTLSConfig() TLSConfig {
tlsClientAuthMode, _ := LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)
return TLSConfig{ return TLSConfig{
LetsEncrypt: LetsEncryptConfig{ LetsEncrypt: LetsEncryptConfig{
Hostname: viper.GetString("tls_letsencrypt_hostname"), Hostname: viper.GetString("tls_letsencrypt_hostname"),
Listen: viper.GetString("tls_letsencrypt_listen"), Listen: viper.GetString("tls_letsencrypt_listen"),
CacheDir: util.AbsolutePathFromConfigPath( CacheDir: AbsolutePathFromConfigPath(
viper.GetString("tls_letsencrypt_cache_dir"), viper.GetString("tls_letsencrypt_cache_dir"),
), ),
ChallengeType: viper.GetString("tls_letsencrypt_challenge_type"), ChallengeType: viper.GetString("tls_letsencrypt_challenge_type"),
}, },
CertPath: util.AbsolutePathFromConfigPath( CertPath: AbsolutePathFromConfigPath(
viper.GetString("tls_cert_path"), viper.GetString("tls_cert_path"),
), ),
KeyPath: util.AbsolutePathFromConfigPath( KeyPath: AbsolutePathFromConfigPath(
viper.GetString("tls_key_path"), viper.GetString("tls_key_path"),
), ),
ClientAuthMode: tlsClientAuthMode,
} }
} }
@@ -341,59 +327,18 @@ func GetACLConfig() ACLConfig {
} }
} }
func GetLogConfig() LogConfig {
logLevelStr := viper.GetString("log.level")
logLevel, err := zerolog.ParseLevel(logLevelStr)
if err != nil {
logLevel = zerolog.DebugLevel
}
logFormatOpt := viper.GetString("log.format")
var logFormat string
switch logFormatOpt {
case "json":
logFormat = JSONLogFormat
case "text":
logFormat = TextLogFormat
case "":
logFormat = TextLogFormat
default:
log.Error().
Str("func", "GetLogConfig").
Msgf("Could not parse log format: %s. Valid choices are 'json' or 'text'", logFormatOpt)
}
return LogConfig{
Format: logFormat,
Level: logLevel,
}
}
func GetDNSConfig() (*tailcfg.DNSConfig, string) { func GetDNSConfig() (*tailcfg.DNSConfig, string) {
if viper.IsSet("dns_config") { if viper.IsSet("dns_config") {
dnsConfig := &tailcfg.DNSConfig{} dnsConfig := &tailcfg.DNSConfig{}
overrideLocalDNS := viper.GetBool("dns_config.override_local_dns")
if viper.IsSet("dns_config.nameservers") { if viper.IsSet("dns_config.nameservers") {
nameserversStr := viper.GetStringSlice("dns_config.nameservers") nameserversStr := viper.GetStringSlice("dns_config.nameservers")
nameservers := []netip.Addr{} nameservers := make([]netaddr.IP, len(nameserversStr))
resolvers := []*dnstype.Resolver{} resolvers := make([]*dnstype.Resolver, len(nameserversStr))
for _, nameserverStr := range nameserversStr { for index, nameserverStr := range nameserversStr {
// Search for explicit DNS-over-HTTPS resolvers nameserver, err := netaddr.ParseIP(nameserverStr)
if strings.HasPrefix(nameserverStr, "https://") {
resolvers = append(resolvers, &dnstype.Resolver{
Addr: nameserverStr,
})
// This nameserver can not be parsed as an IP address
continue
}
// Parse nameserver as a regular IP
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil { if err != nil {
log.Error(). log.Error().
Str("func", "getDNSConfig"). Str("func", "getDNSConfig").
@@ -401,76 +346,59 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
Msgf("Could not parse nameserver IP: %s", nameserverStr) Msgf("Could not parse nameserver IP: %s", nameserverStr)
} }
nameservers = append(nameservers, nameserver) nameservers[index] = nameserver
resolvers = append(resolvers, &dnstype.Resolver{ resolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(), Addr: nameserver.String(),
}) }
} }
dnsConfig.Nameservers = nameservers dnsConfig.Nameservers = nameservers
dnsConfig.Resolvers = resolvers
if overrideLocalDNS {
dnsConfig.Resolvers = resolvers
} else {
dnsConfig.FallbackResolvers = resolvers
}
} }
if viper.IsSet("dns_config.restricted_nameservers") { if viper.IsSet("dns_config.restricted_nameservers") {
dnsConfig.Routes = make(map[string][]*dnstype.Resolver) if len(dnsConfig.Nameservers) > 0 {
domains := []string{} dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
restrictedDNS := viper.GetStringMapStringSlice( restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers", "dns_config.restricted_nameservers",
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
) )
for index, nameserverStr := range restrictedNameservers { for domain, restrictedNameservers := range restrictedDNS {
nameserver, err := netip.ParseAddr(nameserverStr) restrictedResolvers := make(
if err != nil { []*dnstype.Resolver,
log.Error(). len(restrictedNameservers),
Str("func", "getDNSConfig"). )
Err(err). for index, nameserverStr := range restrictedNameservers {
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr) nameserver, err := netaddr.ParseIP(nameserverStr)
} if err != nil {
restrictedResolvers[index] = &dnstype.Resolver{ log.Error().
Addr: nameserver.String(), Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
} }
dnsConfig.Routes[domain] = restrictedResolvers
} }
dnsConfig.Routes[domain] = restrictedResolvers } else {
domains = append(domains, domain) log.Warn().
Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
} }
dnsConfig.Domains = domains
} }
if viper.IsSet("dns_config.domains") { if viper.IsSet("dns_config.domains") {
domains := viper.GetStringSlice("dns_config.domains") dnsConfig.Domains = viper.GetStringSlice("dns_config.domains")
if len(dnsConfig.Resolvers) > 0 {
dnsConfig.Domains = domains
} else if domains != nil {
log.Warn().
Msg("Warning: dns_config.domains is set, but no nameservers are configured. Ignoring domains.")
}
}
if viper.IsSet("dns_config.extra_records") {
var extraRecords []tailcfg.DNSRecord
err := viper.UnmarshalKey("dns_config.extra_records", &extraRecords)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse dns_config.extra_records")
}
dnsConfig.ExtraRecords = extraRecords
} }
if viper.IsSet("dns_config.magic_dns") { if viper.IsSet("dns_config.magic_dns") {
dnsConfig.Proxied = viper.GetBool("dns_config.magic_dns") magicDNS := viper.GetBool("dns_config.magic_dns")
if len(dnsConfig.Nameservers) > 0 {
dnsConfig.Proxied = magicDNS
} else if magicDNS {
log.Warn().
Msg("Warning: dns_config.magic_dns is set, but no nameservers are configured. Ignoring magic_dns.")
}
} }
var baseDomain string var baseDomain string
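The nameserver handling above special-cases "https://" entries: they are passed straight through as DNS-over-HTTPS resolvers instead of being parsed as IP addresses. A condensed sketch of that branch:

package main

import (
	"fmt"
	"net/netip"
	"strings"

	"tailscale.com/types/dnstype"
)

// toResolver mirrors the branch above: DoH URLs become resolvers as-is,
// everything else must parse as a plain IP address.
func toResolver(nameserver string) (*dnstype.Resolver, error) {
	if strings.HasPrefix(nameserver, "https://") {
		return &dnstype.Resolver{Addr: nameserver}, nil
	}

	ip, err := netip.ParseAddr(nameserver)
	if err != nil {
		return nil, err
	}

	return &dnstype.Resolver{Addr: ip.String()}, nil
}

func main() {
	r, _ := toResolver("https://dns.nextdns.io/abc123")
	fmt.Println(r.Addr)
}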
@@ -487,62 +415,50 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
} }
func GetHeadscaleConfig() (*Config, error) { func GetHeadscaleConfig() (*Config, error) {
if IsCLIConfigured() {
return &Config{
CLI: CLIConfig{
Address: viper.GetString("cli.address"),
APIKey: viper.GetString("cli.api_key"),
Timeout: viper.GetDuration("cli.timeout"),
Insecure: viper.GetBool("cli.insecure"),
},
}, nil
}
dnsConfig, baseDomain := GetDNSConfig() dnsConfig, baseDomain := GetDNSConfig()
derpConfig := GetDERPConfig() derpConfig := GetDERPConfig()
logConfig := GetLogTailConfig() logConfig := GetLogTailConfig()
randomizeClientPort := viper.GetBool("randomize_client_port") randomizeClientPort := viper.GetBool("randomize_client_port")
configuredPrefixes := viper.GetStringSlice("ip_prefixes") configuredPrefixes := viper.GetStringSlice("ip_prefixes")
parsedPrefixes := make([]netip.Prefix, 0, len(configuredPrefixes)+1) parsedPrefixes := make([]netaddr.IPPrefix, 0, len(configuredPrefixes)+1)
logLevelStr := viper.GetString("log_level")
logLevel, err := zerolog.ParseLevel(logLevelStr)
if err != nil {
logLevel = zerolog.DebugLevel
}
legacyPrefixField := viper.GetString("ip_prefix")
if len(legacyPrefixField) > 0 {
log.
Warn().
Msgf(
"%s, %s",
"use of 'ip_prefix' for configuration is deprecated",
"please see 'ip_prefixes' in the shipped example.",
)
legacyPrefix, err := netaddr.ParseIPPrefix(legacyPrefixField)
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefix: %w", err))
}
parsedPrefixes = append(parsedPrefixes, legacyPrefix)
}
for i, prefixInConfig := range configuredPrefixes { for i, prefixInConfig := range configuredPrefixes {
prefix, err := netip.ParsePrefix(prefixInConfig) prefix, err := netaddr.ParseIPPrefix(prefixInConfig)
if err != nil { if err != nil {
panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err)) panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
} }
if prefix.Addr().Is4() {
builder := netipx.IPSetBuilder{}
builder.AddPrefix(tsaddr.CGNATRange())
ipSet, _ := builder.IPSet()
if !ipSet.ContainsPrefix(prefix) {
log.Warn().
Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
prefixInConfig, tsaddr.CGNATRange())
}
}
if prefix.Addr().Is6() {
builder := netipx.IPSetBuilder{}
builder.AddPrefix(tsaddr.TailscaleULARange())
ipSet, _ := builder.IPSet()
if !ipSet.ContainsPrefix(prefix) {
log.Warn().
Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
prefixInConfig, tsaddr.TailscaleULARange())
}
}
parsedPrefixes = append(parsedPrefixes, prefix) parsedPrefixes = append(parsedPrefixes, prefix)
} }
prefixes := make([]netip.Prefix, 0, len(parsedPrefixes)) prefixes := make([]netaddr.IPPrefix, 0, len(parsedPrefixes))
{ {
// dedup // dedup
normalizedPrefixes := make(map[string]int, len(parsedPrefixes)) normalizedPrefixes := make(map[string]int, len(parsedPrefixes))
for i, p := range parsedPrefixes { for i, p := range parsedPrefixes {
normalized, _ := netipx.RangeOfPrefix(p).Prefix() normalized, _ := p.Range().Prefix()
normalizedPrefixes[normalized.String()] = i normalizedPrefixes[normalized.String()] = i
} }
@@ -553,24 +469,11 @@ func GetHeadscaleConfig() (*Config, error) {
} }
if len(prefixes) < 1 { if len(prefixes) < 1 {
prefixes = append(prefixes, netip.MustParsePrefix("100.64.0.0/10")) prefixes = append(prefixes, netaddr.MustParseIPPrefix("100.64.0.0/10"))
log.Warn(). log.Warn().
Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes) Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
} }
oidcClientSecret := viper.GetString("oidc.client_secret")
oidcClientSecretPath := viper.GetString("oidc.client_secret_path")
if oidcClientSecretPath != "" && oidcClientSecret != "" {
return nil, errOidcMutuallyExclusive
}
if oidcClientSecretPath != "" {
secretBytes, err := os.ReadFile(os.ExpandEnv(oidcClientSecretPath))
if err != nil {
return nil, err
}
oidcClientSecret = string(secretBytes)
}
return &Config{ return &Config{
ServerURL: viper.GetString("server_url"), ServerURL: viper.GetString("server_url"),
Addr: viper.GetString("listen_addr"), Addr: viper.GetString("listen_addr"),
@@ -578,14 +481,12 @@ func GetHeadscaleConfig() (*Config, error) {
GRPCAddr: viper.GetString("grpc_listen_addr"), GRPCAddr: viper.GetString("grpc_listen_addr"),
GRPCAllowInsecure: viper.GetBool("grpc_allow_insecure"), GRPCAllowInsecure: viper.GetBool("grpc_allow_insecure"),
DisableUpdateCheck: viper.GetBool("disable_check_updates"), DisableUpdateCheck: viper.GetBool("disable_check_updates"),
LogLevel: logLevel,
IPPrefixes: prefixes, IPPrefixes: prefixes,
PrivateKeyPath: util.AbsolutePathFromConfigPath( PrivateKeyPath: AbsolutePathFromConfigPath(
viper.GetString("private_key_path"), viper.GetString("private_key_path"),
), ),
NoisePrivateKeyPath: util.AbsolutePathFromConfigPath(
viper.GetString("noise.private_key_path"),
),
BaseDomain: baseDomain, BaseDomain: baseDomain,
DERP: derpConfig, DERP: derpConfig,
@@ -599,13 +500,12 @@ func GetHeadscaleConfig() (*Config, error) {
), ),
DBtype: viper.GetString("db_type"), DBtype: viper.GetString("db_type"),
DBpath: util.AbsolutePathFromConfigPath(viper.GetString("db_path")), DBpath: AbsolutePathFromConfigPath(viper.GetString("db_path")),
DBhost: viper.GetString("db_host"), DBhost: viper.GetString("db_host"),
DBport: viper.GetInt("db_port"), DBport: viper.GetInt("db_port"),
DBname: viper.GetString("db_name"), DBname: viper.GetString("db_name"),
DBuser: viper.GetString("db_user"), DBuser: viper.GetString("db_user"),
DBpass: viper.GetString("db_pass"), DBpass: viper.GetString("db_pass"),
DBssl: viper.GetString("db_ssl"),
TLS: GetTLSConfig(), TLS: GetTLSConfig(),
@@ -615,44 +515,22 @@ func GetHeadscaleConfig() (*Config, error) {
ACMEURL: viper.GetString("acme_url"), ACMEURL: viper.GetString("acme_url"),
UnixSocket: viper.GetString("unix_socket"), UnixSocket: viper.GetString("unix_socket"),
UnixSocketPermission: util.GetFileMode("unix_socket_permission"), UnixSocketPermission: GetFileMode("unix_socket_permission"),
OIDC: OIDCConfig{ OIDC: OIDCConfig{
OnlyStartIfOIDCIsAvailable: viper.GetBool(
"oidc.only_start_if_oidc_is_available",
),
Issuer: viper.GetString("oidc.issuer"), Issuer: viper.GetString("oidc.issuer"),
ClientID: viper.GetString("oidc.client_id"), ClientID: viper.GetString("oidc.client_id"),
ClientSecret: oidcClientSecret, ClientSecret: viper.GetString("oidc.client_secret"),
Scope: viper.GetStringSlice("oidc.scope"), Scope: viper.GetStringSlice("oidc.scope"),
ExtraParams: viper.GetStringMapString("oidc.extra_params"), ExtraParams: viper.GetStringMapString("oidc.extra_params"),
AllowedDomains: viper.GetStringSlice("oidc.allowed_domains"), AllowedDomains: viper.GetStringSlice("oidc.allowed_domains"),
AllowedUsers: viper.GetStringSlice("oidc.allowed_users"), AllowedUsers: viper.GetStringSlice("oidc.allowed_users"),
AllowedGroups: viper.GetStringSlice("oidc.allowed_groups"),
StripEmaildomain: viper.GetBool("oidc.strip_email_domain"), StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
Expiry: func() time.Duration {
// if set to 0, we assume no expiry
if value := viper.GetString("oidc.expiry"); value == "0" {
return maxDuration
} else {
expiry, err := model.ParseDuration(value)
if err != nil {
log.Warn().Msg("failed to parse oidc.expiry, defaulting back to 180 days")
return defaultOIDCExpiryTime
}
return time.Duration(expiry)
}
}(),
UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
}, },
LogTail: logConfig, LogTail: logConfig,
RandomizeClientPort: randomizeClientPort, RandomizeClientPort: randomizeClientPort,
ACL: GetACLConfig(),
CLI: CLIConfig{ CLI: CLIConfig{
Address: viper.GetString("cli.address"), Address: viper.GetString("cli.address"),
APIKey: viper.GetString("cli.api_key"), APIKey: viper.GetString("cli.api_key"),
@@ -660,10 +538,6 @@ func GetHeadscaleConfig() (*Config, error) {
Insecure: viper.GetBool("cli.insecure"), Insecure: viper.GetBool("cli.insecure"),
}, },
Log: GetLogConfig(), ACL: GetACLConfig(),
}, nil }, nil
} }
func IsCLIConfigured() bool {
return viper.GetString("cli.address") != "" && viper.GetString("cli.api_key") != ""
}
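
The prefix handling in `GetHeadscaleConfig` above warns when a configured `ip_prefixes` entry falls outside Tailscale's CGNAT (100.64.0.0/10) or ULA ranges. A minimal, standalone sketch of that check, reusing the same `go4.org/netipx` and `tailscale.com/net/tsaddr` helpers the new code imports (the viper/config wiring is omitted, and the sample prefixes are only illustrative):

```go
package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
	"tailscale.com/net/tsaddr"
)

// containedIn reports whether prefix lies entirely inside outer,
// mirroring the IPSetBuilder check used when parsing ip_prefixes.
func containedIn(outer, prefix netip.Prefix) bool {
	var builder netipx.IPSetBuilder
	builder.AddPrefix(outer)
	ipSet, _ := builder.IPSet()

	return ipSet.ContainsPrefix(prefix)
}

func main() {
	for _, raw := range []string{"100.64.0.0/10", "10.0.0.0/8"} {
		prefix := netip.MustParsePrefix(raw)

		switch {
		case prefix.Addr().Is4() && !containedIn(tsaddr.CGNATRange(), prefix):
			fmt.Printf("%s is outside %s: unsupported configuration\n", raw, tsaddr.CGNATRange())
		case prefix.Addr().Is6() && !containedIn(tsaddr.TailscaleULARange(), prefix):
			fmt.Printf("%s is outside %s: unsupported configuration\n", raw, tsaddr.TailscaleULARange())
		default:
			fmt.Printf("%s accepted\n", raw)
		}
	}
}
```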

304
db.go Normal file
View File

@@ -0,0 +1,304 @@
package headscale
import (
"context"
"database/sql/driver"
"encoding/json"
"errors"
"fmt"
"time"
"github.com/glebarez/sqlite"
"github.com/rs/zerolog/log"
"gorm.io/driver/postgres"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"inet.af/netaddr"
"tailscale.com/tailcfg"
)
const (
dbVersion = "1"
errValueNotFound = Error("not found")
)
// KV is a key-value store in a psql table. For future use...
type KV struct {
Key string
Value string
}
func (h *Headscale) initDB() error {
db, err := h.openDB()
if err != nil {
return err
}
h.db = db
if h.dbType == Postgres {
db.Exec(`create extension if not exists "uuid-ossp";`)
}
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
// GivenName is used as the primary source of DNS names, so make sure
// the field is populated and normalized if it was not when the
// machine was registered.
_ = db.Migrator().RenameColumn(&Machine{}, "nickname", "given_name")
// If the Machine table has a column for registered,
// find all occurrences of "false" and drop them. Then
// remove the column.
if db.Migrator().HasColumn(&Machine{}, "registered") {
log.Info().
Msg(`Database has legacy "registered" column in machine, removing...`)
machines := Machines{}
if err := h.db.Not("registered").Find(&machines).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for _, machine := range machines {
log.Info().
Str("machine", machine.Hostname).
Str("machine_key", machine.MachineKey).
Msg("Deleting unregistered machine")
if err := h.db.Delete(&Machine{}, machine.ID).Error; err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Str("machine_key", machine.MachineKey).
Msg("Error deleting unregistered machine")
}
}
err := db.Migrator().DropColumn(&Machine{}, "registered")
if err != nil {
log.Error().Err(err).Msg("Error dropping registered column")
}
}
err = db.AutoMigrate(&Machine{})
if err != nil {
return err
}
if db.Migrator().HasColumn(&Machine{}, "given_name") {
machines := Machines{}
if err := h.db.Find(&machines).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for item, machine := range machines {
if machine.GivenName == "" {
normalizedHostname, err := NormalizeToFQDNRules(
machine.Hostname,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
log.Error().
Caller().
Str("hostname", machine.Hostname).
Err(err).
Msg("Failed to normalize machine hostname in DB migration")
}
err = h.RenameMachine(&machines[item], normalizedHostname)
if err != nil {
log.Error().
Caller().
Str("hostname", machine.Hostname).
Err(err).
Msg("Failed to save normalized machine name in DB migration")
}
}
}
}
err = db.AutoMigrate(&KV{})
if err != nil {
return err
}
err = db.AutoMigrate(&Namespace{})
if err != nil {
return err
}
err = db.AutoMigrate(&PreAuthKey{})
if err != nil {
return err
}
_ = db.Migrator().DropTable("shared_machines")
err = db.AutoMigrate(&APIKey{})
if err != nil {
return err
}
err = h.setValue("db_version", dbVersion)
return err
}
func (h *Headscale) openDB() (*gorm.DB, error) {
var db *gorm.DB
var err error
var log logger.Interface
if h.dbDebug {
log = logger.Default
} else {
log = logger.Default.LogMode(logger.Silent)
}
switch h.dbType {
case Sqlite:
db, err = gorm.Open(
sqlite.Open(h.dbString+"?_synchronous=1&_journal_mode=WAL"),
&gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
},
)
db.Exec("PRAGMA foreign_keys=ON")
// The pure Go SQLite library does not handle locking in
// the same way as the C-based one, so we can't use the gorm
// connection pool as of 2022/02/23.
sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(1)
sqlDB.SetMaxOpenConns(1)
sqlDB.SetConnMaxIdleTime(time.Hour)
case Postgres:
db, err = gorm.Open(postgres.Open(h.dbString), &gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
})
}
if err != nil {
return nil, err
}
return db, nil
}
// getValue returns the value for the given key in KV.
func (h *Headscale) getValue(key string) (string, error) {
var row KV
if result := h.db.First(&row, "key = ?", key); errors.Is(
result.Error,
gorm.ErrRecordNotFound,
) {
return "", errValueNotFound
}
return row.Value, nil
}
// setValue sets the value for the given key in KV.
func (h *Headscale) setValue(key string, value string) error {
keyValue := KV{
Key: key,
Value: value,
}
if _, err := h.getValue(key); err == nil {
h.db.Model(&keyValue).Where("key = ?", key).Update("value", value)
return nil
}
if err := h.db.Create(keyValue).Error; err != nil {
return fmt.Errorf("failed to create key value pair in the database: %w", err)
}
return nil
}
func (h *Headscale) pingDB() error {
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
db, err := h.db.DB()
if err != nil {
return err
}
return db.PingContext(ctx)
}
// HostInfo is a "wrapper" type around Tailscale's
// Hostinfo that lets us add database "serialization"
// methods. This allows us to use typed values throughout
// the code and not have to marshal/unmarshal and error
// check all over the code.
type HostInfo tailcfg.Hostinfo
func (hi *HostInfo) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, hi)
case string:
return json.Unmarshal([]byte(value), hi)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value returns the JSON value, implementing the driver.Valuer interface.
func (hi HostInfo) Value() (driver.Value, error) {
bytes, err := json.Marshal(hi)
return string(bytes), err
}
type IPPrefixes []netaddr.IPPrefix
func (i *IPPrefixes) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value returns the JSON value, implementing the driver.Valuer interface.
func (i IPPrefixes) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
type StringList []string
func (i *StringList) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value returns the JSON value, implementing the driver.Valuer interface.
func (i StringList) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
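
HostInfo, IPPrefixes and StringList above all follow the same pattern: the value is stored as JSON in a single column, and the type satisfies `sql.Scanner`/`driver.Valuer` so GORM can (de)serialize it transparently. A minimal sketch of the same pattern for a hypothetical `Tags` type (not part of headscale), with a round trip to show both halves:

```go
package main

import (
	"database/sql/driver"
	"encoding/json"
	"fmt"
)

// Tags is a hypothetical type using the same JSON-in-a-column trick.
type Tags []string

// Scan implements sql.Scanner: decode the stored JSON back into the type.
func (t *Tags) Scan(destination interface{}) error {
	switch value := destination.(type) {
	case []byte:
		return json.Unmarshal(value, t)
	case string:
		return json.Unmarshal([]byte(value), t)
	default:
		return fmt.Errorf("unexpected data type %T", destination)
	}
}

// Value implements driver.Valuer: encode the type as JSON for storage.
func (t Tags) Value() (driver.Value, error) {
	bytes, err := json.Marshal(t)

	return string(bytes), err
}

func main() {
	stored, _ := Tags{"tag:prod", "tag:web"}.Value()

	var restored Tags
	_ = restored.Scan(stored.(string))

	fmt.Println(restored) // [tag:prod tag:web]
}
```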

View File

@@ -1,16 +1,17 @@
package derp package headscale
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"io" "io"
"io/ioutil"
"net/http" "net/http"
"net/url" "net/url"
"os" "os"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v2"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
) )
@@ -31,16 +32,16 @@ func loadDERPMapFromPath(path string) (*tailcfg.DERPMap, error) {
} }
func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) { func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
ctx, cancel := context.WithTimeout(context.Background(), types.HTTPReadTimeout) ctx, cancel := context.WithTimeout(context.Background(), HTTPReadTimeout)
defer cancel() defer cancel()
req, err := http.NewRequestWithContext(ctx, http.MethodGet, addr.String(), nil) req, err := http.NewRequestWithContext(ctx, "GET", addr.String(), nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
client := http.Client{ client := http.Client{
Timeout: types.HTTPReadTimeout, Timeout: HTTPReadTimeout,
} }
resp, err := client.Do(req) resp, err := client.Do(req)
@@ -49,7 +50,7 @@ func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
} }
defer resp.Body.Close() defer resp.Body.Close()
body, err := io.ReadAll(resp.Body) body, err := ioutil.ReadAll(resp.Body)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -80,7 +81,7 @@ func mergeDERPMaps(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
return &result return &result
} }
func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap { func GetDERPMap(cfg DERPConfig) *tailcfg.DERPMap {
derpMaps := make([]*tailcfg.DERPMap, 0) derpMaps := make([]*tailcfg.DERPMap, 0)
for _, path := range cfg.Paths { for _, path := range cfg.Paths {
@@ -132,3 +133,26 @@ func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap {
return derpMap return derpMap
} }
func (h *Headscale) scheduledDERPMapUpdateWorker(cancelChan <-chan struct{}) {
log.Info().
Dur("frequency", h.cfg.DERP.UpdateFrequency).
Msg("Setting up a DERPMap update worker")
ticker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
for {
select {
case <-cancelChan:
return
case <-ticker.C:
log.Info().Msg("Fetching DERPMap updates")
h.DERPMap = GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled {
h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
}
h.setLastStateChangeToNow()
}
}
}
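
`GetDERPMap` above collects DERP maps from every configured path and URL and folds them together via `mergeDERPMaps`, whose body is elided in this diff. A minimal sketch of what such a merge has to do, assuming a last-writer-wins policy per region ID; the actual headscale implementation may resolve conflicts differently:

```go
package main

import (
	"fmt"

	"tailscale.com/tailcfg"
)

// mergeRegions folds several DERP maps into one; a later map wins when
// two maps define the same RegionID.
func mergeRegions(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
	result := tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{}}

	for _, derpMap := range derpMaps {
		for id, region := range derpMap.Regions {
			result.Regions[id] = region
		}
	}

	return &result
}

func main() {
	fromFile := &tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{
		900: {RegionID: 900, RegionCode: "custom-file"},
	}}
	fromURL := &tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{
		900: {RegionID: 900, RegionCode: "custom-url"},
		901: {RegionID: 901, RegionCode: "extra"},
	}}

	merged := mergeRegions([]*tailcfg.DERPMap{fromFile, fromURL})
	fmt.Println(len(merged.Regions), merged.Regions[900].RegionCode) // 2 custom-url
}
```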

View File

@@ -1,4 +1,4 @@
package server package headscale
import ( import (
"context" "context"
@@ -6,13 +6,11 @@ import (
"fmt" "fmt"
"net" "net"
"net/http" "net/http"
"net/netip"
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"time" "time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"tailscale.com/derp" "tailscale.com/derp"
"tailscale.com/net/stun" "tailscale.com/net/stun"
@@ -27,30 +25,23 @@ import (
const fastStartHeader = "Derp-Fast-Start" const fastStartHeader = "Derp-Fast-Start"
type DERPServer struct { type DERPServer struct {
serverURL string
key key.NodePrivate
cfg *types.DERPConfig
tailscaleDERP *derp.Server tailscaleDERP *derp.Server
region tailcfg.DERPRegion
} }
func NewDERPServer( func (h *Headscale) NewDERPServer() (*DERPServer, error) {
serverURL string,
derpKey key.NodePrivate,
cfg *types.DERPConfig,
) (*DERPServer, error) {
log.Trace().Caller().Msg("Creating new embedded DERP server") log.Trace().Caller().Msg("Creating new embedded DERP server")
server := derp.NewServer(derpKey, log.Debug().Msgf) server := derp.NewServer(key.NodePrivate(*h.privateKey), log.Info().Msgf)
region, err := h.generateRegionLocalDERP()
if err != nil {
return nil, err
}
return &DERPServer{ return &DERPServer{server, region}, nil
serverURL: serverURL,
key: derpKey,
cfg: cfg,
tailscaleDERP: server,
}, nil
} }
func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) { func (h *Headscale) generateRegionLocalDERP() (tailcfg.DERPRegion, error) {
serverURL, err := url.Parse(d.serverURL) serverURL, err := url.Parse(h.cfg.ServerURL)
if err != nil { if err != nil {
return tailcfg.DERPRegion{}, err return tailcfg.DERPRegion{}, err
} }
@@ -73,21 +64,21 @@ func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
} }
localDERPregion := tailcfg.DERPRegion{ localDERPregion := tailcfg.DERPRegion{
RegionID: d.cfg.ServerRegionID, RegionID: h.cfg.DERP.ServerRegionID,
RegionCode: d.cfg.ServerRegionCode, RegionCode: h.cfg.DERP.ServerRegionCode,
RegionName: d.cfg.ServerRegionName, RegionName: h.cfg.DERP.ServerRegionName,
Avoid: false, Avoid: false,
Nodes: []*tailcfg.DERPNode{ Nodes: []*tailcfg.DERPNode{
{ {
Name: fmt.Sprintf("%d", d.cfg.ServerRegionID), Name: fmt.Sprintf("%d", h.cfg.DERP.ServerRegionID),
RegionID: d.cfg.ServerRegionID, RegionID: h.cfg.DERP.ServerRegionID,
HostName: host, HostName: host,
DERPPort: port, DERPPort: port,
}, },
}, },
} }
_, portSTUNStr, err := net.SplitHostPort(d.cfg.STUNAddr) _, portSTUNStr, err := net.SplitHostPort(h.cfg.DERP.STUNAddr)
if err != nil { if err != nil {
return tailcfg.DERPRegion{}, err return tailcfg.DERPRegion{}, err
} }
@@ -102,18 +93,15 @@ func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
return localDERPregion, nil return localDERPregion, nil
} }
func (d *DERPServer) DERPHandler( func (h *Headscale) DERPHandler(
writer http.ResponseWriter, writer http.ResponseWriter,
req *http.Request, req *http.Request,
) { ) {
log.Trace().Caller().Msgf("/derp request from %v", req.RemoteAddr) log.Trace().Caller().Msgf("/derp request from %v", req.RemoteAddr)
upgrade := strings.ToLower(req.Header.Get("Upgrade")) up := strings.ToLower(req.Header.Get("Upgrade"))
if up != "websocket" && up != "derp" {
if upgrade != "websocket" && upgrade != "derp" { if up != "" {
if upgrade != "" { log.Warn().Caller().Msgf("Weird websockets connection upgrade: %q", up)
log.Warn().
Caller().
Msg("No Upgrade header in DERP server request. If headscale is behind a reverse proxy, make sure it is configured to pass WebSockets through.")
} }
writer.Header().Set("Content-Type", "text/plain") writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusUpgradeRequired) writer.WriteHeader(http.StatusUpgradeRequired)
@@ -164,28 +152,28 @@ func (d *DERPServer) DERPHandler(
log.Trace().Caller().Msgf("Hijacked connection from %v", req.RemoteAddr) log.Trace().Caller().Msgf("Hijacked connection from %v", req.RemoteAddr)
if !fastStart { if !fastStart {
pubKey := d.key.Public() pubKey := h.privateKey.Public()
pubKeyStr, _ := pubKey.MarshalText() //nolint pubKeyStr := pubKey.UntypedHexString() // nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+ fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+ "Upgrade: DERP\r\n"+
"Connection: Upgrade\r\n"+ "Connection: Upgrade\r\n"+
"Derp-Version: %v\r\n"+ "Derp-Version: %v\r\n"+
"Derp-Public-Key: %s\r\n\r\n", "Derp-Public-Key: %s\r\n\r\n",
derp.ProtocolVersion, derp.ProtocolVersion,
string(pubKeyStr)) pubKeyStr)
} }
d.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String()) h.DERPServer.tailscaleDERP.Accept(netConn, conn, netConn.RemoteAddr().String())
} }
// DERPProbeHandler is the endpoint that js/wasm clients hit to measure // DERPProbeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries. // DERP latency, since they can't do UDP STUN queries.
func DERPProbeHandler( func (h *Headscale) DERPProbeHandler(
writer http.ResponseWriter, writer http.ResponseWriter,
req *http.Request, req *http.Request,
) { ) {
switch req.Method { switch req.Method {
case http.MethodHead, http.MethodGet: case "HEAD", "GET":
writer.Header().Set("Access-Control-Allow-Origin", "*") writer.Header().Set("Access-Control-Allow-Origin", "*")
writer.WriteHeader(http.StatusOK) writer.WriteHeader(http.StatusOK)
default: default:
@@ -207,47 +195,43 @@ func DERPProbeHandler(
// The initial implementation is here https://github.com/tailscale/tailscale/pull/1406 // The initial implementation is here https://github.com/tailscale/tailscale/pull/1406
// They have a cache, but not clear if that is really necessary at Headscale, uh, scale. // They have a cache, but not clear if that is really necessary at Headscale, uh, scale.
// An example implementation is found here https://derp.tailscale.com/bootstrap-dns // An example implementation is found here https://derp.tailscale.com/bootstrap-dns
func DERPBootstrapDNSHandler( func (h *Headscale) DERPBootstrapDNSHandler(
derpMap *tailcfg.DERPMap, writer http.ResponseWriter,
) func(http.ResponseWriter, *http.Request) { req *http.Request,
return func( ) {
writer http.ResponseWriter, dnsEntries := make(map[string][]net.IP)
req *http.Request,
) {
dnsEntries := make(map[string][]net.IP)
resolvCtx, cancel := context.WithTimeout(req.Context(), time.Minute) resolvCtx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel() defer cancel()
var resolver net.Resolver var resolver net.Resolver
for _, region := range derpMap.Regions { for _, region := range h.DERPMap.Regions {
for _, node := range region.Nodes { // we don't care if we override some nodes for _, node := range region.Nodes { // we don't care if we override some nodes
addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName) addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName)
if err != nil { if err != nil {
log.Trace(). log.Trace().
Caller(). Caller().
Err(err). Err(err).
Msgf("bootstrap DNS lookup failed %q", node.HostName) Msgf("bootstrap DNS lookup failed %q", node.HostName)
continue continue
}
dnsEntries[node.HostName] = addrs
} }
dnsEntries[node.HostName] = addrs
} }
writer.Header().Set("Content-Type", "application/json") }
writer.WriteHeader(http.StatusOK) writer.Header().Set("Content-Type", "application/json")
err := json.NewEncoder(writer).Encode(dnsEntries) writer.WriteHeader(http.StatusOK)
if err != nil { err := json.NewEncoder(writer).Encode(dnsEntries)
log.Error(). if err != nil {
Caller(). log.Error().
Err(err). Caller().
Msg("Failed to write response") Err(err).
} Msg("Failed to write response")
} }
} }
// ServeSTUN starts a STUN server on the configured addr. // ServeSTUN starts a STUN server on the configured addr.
func (d *DERPServer) ServeSTUN() { func (h *Headscale) ServeSTUN() {
packetConn, err := net.ListenPacket("udp", d.cfg.STUNAddr) packetConn, err := net.ListenPacket("udp", h.cfg.DERP.STUNAddr)
if err != nil { if err != nil {
log.Fatal().Msgf("failed to open STUN listener: %v", err) log.Fatal().Msgf("failed to open STUN listener: %v", err)
} }
@@ -292,8 +276,7 @@ func serverSTUNListener(ctx context.Context, packetConn *net.UDPConn) {
continue continue
} }
addr, _ := netip.AddrFromSlice(udpAddr.IP) res := stun.Response(txid, udpAddr.IP, uint16(udpAddr.Port))
res := stun.Response(txid, netip.AddrPortFrom(addr, uint16(udpAddr.Port)))
_, err = packetConn.WriteTo(res, udpAddr) _, err = packetConn.WriteTo(res, udpAddr)
if err != nil { if err != nil {
log.Trace().Caller().Err(err).Msgf("Issue writing to UDP") log.Trace().Caller().Err(err).Msgf("Issue writing to UDP")
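
`ServeSTUN`/`serverSTUNListener` above run a tiny STUN responder next to the embedded DERP server so clients can discover their public endpoint. A stripped-down sketch of that loop using the same `tailscale.com/net/stun` helpers as the new code; the listen address and buffer size here are assumptions, since headscale takes the address from `cfg.DERP.STUNAddr`:

```go
package main

import (
	"log"
	"net"
	"net/netip"

	"tailscale.com/net/stun"
)

func main() {
	// ":3478" is the standard STUN port, chosen here only for the sketch.
	packetConn, err := net.ListenPacket("udp", ":3478")
	if err != nil {
		log.Fatalf("failed to open STUN listener: %v", err)
	}
	udpConn := packetConn.(*net.UDPConn)

	buf := make([]byte, 64<<10)
	for {
		bytesRead, udpAddr, err := udpConn.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		if !stun.Is(buf[:bytesRead]) {
			continue // not a STUN packet, ignore it
		}
		txid, err := stun.ParseBindingRequest(buf[:bytesRead])
		if err != nil {
			continue
		}

		// Answer with the source address we observed, as in serverSTUNListener.
		addr, _ := netip.AddrFromSlice(udpAddr.IP)
		res := stun.Response(txid, netip.AddrPortFrom(addr, uint16(udpAddr.Port)))
		_, _ = udpConn.WriteTo(res, udpAddr)
	}
}
```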

View File

@@ -1,87 +1,23 @@
package util package headscale
import ( import (
"errors"
"fmt" "fmt"
"net/netip"
"regexp"
"strings" "strings"
"github.com/spf13/viper" mapset "github.com/deckarep/golang-set/v2"
"go4.org/netipx" "inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/util/dnsname" "tailscale.com/util/dnsname"
) )
const ( const (
ByteSize = 8 ByteSize = 8
ipv4AddressLength = 32
ipv6AddressLength = 128
// value related to RFC 1123 and 952.
LabelHostnameLength = 63
) )
var invalidCharsInUserRegex = regexp.MustCompile("[^a-z0-9-.]+") const (
ipv4AddressLength = 32
var ErrInvalidUserName = errors.New("invalid user name") ipv6AddressLength = 128
)
func NormalizeToFQDNRulesConfigFromViper(name string) (string, error) {
strip := viper.GetBool("oidc.strip_email_domain")
return NormalizeToFQDNRules(name, strip)
}
// NormalizeToFQDNRules will replace forbidden chars in user
// it can also return an error if the user doesn't respect RFC 952 and 1123.
func NormalizeToFQDNRules(name string, stripEmailDomain bool) (string, error) {
name = strings.ToLower(name)
name = strings.ReplaceAll(name, "'", "")
atIdx := strings.Index(name, "@")
if stripEmailDomain && atIdx > 0 {
name = name[:atIdx]
} else {
name = strings.ReplaceAll(name, "@", ".")
}
name = invalidCharsInUserRegex.ReplaceAllString(name, "-")
for _, elt := range strings.Split(name, ".") {
if len(elt) > LabelHostnameLength {
return "", fmt.Errorf(
"label %v is more than 63 chars: %w",
elt,
ErrInvalidUserName,
)
}
}
return name, nil
}
func CheckForFQDNRules(name string) error {
if len(name) > LabelHostnameLength {
return fmt.Errorf(
"DNS segment must not be over 63 chars. %v doesn't comply with this rule: %w",
name,
ErrInvalidUserName,
)
}
if strings.ToLower(name) != name {
return fmt.Errorf(
"DNS segment should be lowercase. %v doesn't comply with this rule: %w",
name,
ErrInvalidUserName,
)
}
if invalidCharsInUserRegex.MatchString(name) {
return fmt.Errorf(
"DNS segment should only be composed of lowercase ASCII letters numbers, hyphen and dots. %v doesn't comply with theses rules: %w",
name,
ErrInvalidUserName,
)
}
return nil
}
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`. // generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
// This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS // This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
@@ -103,11 +39,11 @@ func CheckForFQDNRules(name string) error {
// From the netmask we can find out the wildcard bits (the bits that are not set in the netmask). // From the netmask we can find out the wildcard bits (the bits that are not set in the netmask).
// This allows us to then calculate the subnets included in the subsequent class block and generate the entries. // This allows us to then calculate the subnets included in the subsequent class block and generate the entries.
func GenerateMagicDNSRootDomains(ipPrefixes []netip.Prefix) []dnsname.FQDN { func generateMagicDNSRootDomains(ipPrefixes []netaddr.IPPrefix) []dnsname.FQDN {
fqdns := make([]dnsname.FQDN, 0, len(ipPrefixes)) fqdns := make([]dnsname.FQDN, 0, len(ipPrefixes))
for _, ipPrefix := range ipPrefixes { for _, ipPrefix := range ipPrefixes {
var generateDNSRoot func(netip.Prefix) []dnsname.FQDN var generateDNSRoot func(netaddr.IPPrefix) []dnsname.FQDN
switch ipPrefix.Addr().BitLen() { switch ipPrefix.IP().BitLen() {
case ipv4AddressLength: case ipv4AddressLength:
generateDNSRoot = generateIPv4DNSRootDomain generateDNSRoot = generateIPv4DNSRootDomain
@@ -118,7 +54,7 @@ func GenerateMagicDNSRootDomains(ipPrefixes []netip.Prefix) []dnsname.FQDN {
panic( panic(
fmt.Sprintf( fmt.Sprintf(
"unsupported IP version with address length %d", "unsupported IP version with address length %d",
ipPrefix.Addr().BitLen(), ipPrefix.IP().BitLen(),
), ),
) )
} }
@@ -129,9 +65,9 @@ func GenerateMagicDNSRootDomains(ipPrefixes []netip.Prefix) []dnsname.FQDN {
return fqdns return fqdns
} }
func generateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN { func generateIPv4DNSRootDomain(ipPrefix netaddr.IPPrefix) []dnsname.FQDN {
// Conversion to the std lib net.IPnet, a bit easier to operate // Conversion to the std lib net.IPnet, a bit easier to operate
netRange := netipx.PrefixIPNet(ipPrefix) netRange := ipPrefix.IPNet()
maskBits, _ := netRange.Mask.Size() maskBits, _ := netRange.Mask.Size()
// lastOctet is the last IP byte covered by the mask // lastOctet is the last IP byte covered by the mask
@@ -165,11 +101,11 @@ func generateIPv4DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
return fqdns return fqdns
} }
func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN { func generateIPv6DNSRootDomain(ipPrefix netaddr.IPPrefix) []dnsname.FQDN {
const nibbleLen = 4 const nibbleLen = 4
maskBits, _ := netipx.PrefixIPNet(ipPrefix).Mask.Size() maskBits, _ := ipPrefix.IPNet().Mask.Size()
expanded := ipPrefix.Addr().StringExpanded() expanded := ipPrefix.IP().StringExpanded()
nibbleStr := strings.Map(func(r rune) rune { nibbleStr := strings.Map(func(r rune) rune {
if r == ':' { if r == ':' {
return -1 return -1
@@ -214,3 +150,38 @@ func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
return fqdns return fqdns
} }
func getMapResponseDNSConfig(
dnsConfigOrig *tailcfg.DNSConfig,
baseDomain string,
machine Machine,
peers Machines,
) *tailcfg.DNSConfig {
var dnsConfig *tailcfg.DNSConfig
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
// Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
dnsConfig = dnsConfigOrig.Clone()
dnsConfig.Domains = append(
dnsConfig.Domains,
fmt.Sprintf(
"%s.%s",
machine.Namespace.Name,
baseDomain,
),
)
namespaceSet := mapset.NewSet[Namespace]()
namespaceSet.Add(machine.Namespace)
for _, p := range peers {
namespaceSet.Add(p.Namespace)
}
for _, namespace := range namespaceSet.ToSlice() {
dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
dnsConfig.Routes[dnsRoute] = nil
}
} else {
dnsConfig = dnsConfigOrig
}
return dnsConfig
}
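
For MagicDNS, the root-domain generator shown above turns each IP prefix into the set of reverse-DNS zones that clients should route to headscale. For an IPv4 prefix whose mask ends inside the second octet, that means enumerating every value of that octet the prefix covers. A simplified sketch of the idea; it only handles masks between /9 and /16, whereas the real generateIPv4DNSRootDomain covers all lengths:

```go
package main

import (
	"fmt"
	"net/netip"
)

// reverseZonesIPv4 sketches the reverse-zone enumeration for prefixes whose
// mask ends inside the second octet (/9../16), such as 100.64.0.0/10:
// emit one in-addr.arpa zone per covered value of that octet.
func reverseZonesIPv4(prefix netip.Prefix) []string {
	addr := prefix.Addr().As4()
	wildcardBits := 16 - prefix.Bits() // free bits within the second octet
	min := int(addr[1])
	max := min + (1 << wildcardBits) - 1

	zones := make([]string, 0, max-min+1)
	for octet := min; octet <= max; octet++ {
		zones = append(zones, fmt.Sprintf("%d.%d.in-addr.arpa.", octet, addr[0]))
	}

	return zones
}

func main() {
	zones := reverseZonesIPv4(netip.MustParsePrefix("100.64.0.0/10"))
	fmt.Println(zones[0], zones[len(zones)-1]) // 64.100.in-addr.arpa. 127.100.in-addr.arpa.
}
```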

386
dns_test.go Normal file
View File

@@ -0,0 +1,386 @@
package headscale
import (
"fmt"
"gopkg.in/check.v1"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("100.64.0.0/10"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "64.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "100.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "127.100.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
func (s *Suite) TestMagicDNSRootDomains172(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("172.16.0.0/16"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "0.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
found = false
for _, domain := range domains {
if domain == "255.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
// Happens when netmask is a multiple of 4 bits (sounds likely).
func (s *Suite) TestMagicDNSRootDomainsIPv6Single(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("fd7a:115c:a1e0::/48"),
}
domains := generateMagicDNSRootDomains(prefixes)
c.Assert(len(domains), check.Equals, 1)
c.Assert(
domains[0].WithTrailingDot(),
check.Equals,
"0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa.",
)
}
func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("fd7a:115c:a1e0::/50"),
}
domains := generateMagicDNSRootDomains(prefixes)
yieldsRoot := func(dom string) bool {
for _, candidate := range domains {
if candidate.WithTrailingDot() == dom {
return true
}
}
return false
}
c.Assert(len(domains), check.Equals, 4)
c.Assert(yieldsRoot("0.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("1.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("2.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("3.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
namespaceShared1, err := app.CreateNamespace("shared1")
c.Assert(err, check.IsNil)
namespaceShared2, err := app.CreateNamespace("shared2")
c.Assert(err, check.IsNil)
namespaceShared3, err := app.CreateNamespace("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
PreAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Hostname: "test_get_shared_nodes_1",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_2",
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_3",
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_4",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.4")},
AuthKeyID: uint(PreAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]*dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: true,
}
peersOfMachineInShared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachineInShared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 3)
domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
_, ok := dnsConfig.Routes[domainRouteShared1]
c.Assert(ok, check.Equals, true)
domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared2]
c.Assert(ok, check.Equals, true)
domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared3]
c.Assert(ok, check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
namespaceShared1, err := app.CreateNamespace("shared1")
c.Assert(err, check.IsNil)
namespaceShared2, err := app.CreateNamespace("shared2")
c.Assert(err, check.IsNil)
namespaceShared3, err := app.CreateNamespace("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
preAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Hostname: "test_get_shared_nodes_1",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_2",
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_3",
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_4",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.4")},
AuthKeyID: uint(preAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
Routes: make(map[string][]*dnstype.Resolver),
Domains: []string{baseDomain},
Proxied: false,
}
peersOfMachine1Shared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachine1Shared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 0)
c.Assert(len(dnsConfig.Domains), check.Equals, 1)
}

53
docs/README.md Normal file
View File

@@ -0,0 +1,53 @@
# headscale documentation
This page contains the official and community-contributed documentation for `headscale`.
If you have trouble following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
## Official documentation
### How-to
- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)
### References
- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)
## Community documentation
Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might be missing necessary steps**.
- [Running headscale in a container](running-headscale-container.md)
- [Running headscale on OpenBSD](running-headscale-openbsd.md)
## Misc
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups, you must
use namespaces (which are the equivalent of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/ and `./tests/acls/` in this repo for working examples.
When using ACLs, Namespace borders no longer apply. All machines,
regardless of Namespace, can communicate with other hosts as
long as the ACLs permit the exchange.
The [ACLs](acls.md) document walks through a fictional case of setting
up ACLs in a small company. All concepts presented in that document can also be
applied outside of business-oriented usage.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.

View File

@@ -1,15 +1,4 @@
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment. # ACLs use case example
For instance, instead of referring to users when defining groups you must
use users (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACL's the User borders are no longer applied. All machines
whichever the User have the ability to communicate with other hosts as
long as the ACL's permits this exchange.
## ACLs use case example
Let's build an example use case for a small business (It may be the place where Let's build an example use case for a small business (It may be the place where
ACL's are the most useful). ACL's are the most useful).
@@ -40,21 +29,19 @@ servers.
## ACL setup ## ACL setup
Note: Users will be created automatically when users authenticate with the Note: Namespaces will be created automatically when users authenticate with the
Headscale server. Headscale server.
ACLs could be written either on [huJSON](https://github.com/tailscale/hujson) ACLs could be written either on [huJSON](https://github.com/tailscale/hujson)
or YAML. Check the [test ACLs](../tests/acls) for further information. or YAML. Check the [test ACLs](../tests/acls) for further information.
When registering the servers we will need to add the flag When registering the servers we will need to add the flag
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is `--advertised-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
registering the server should be allowed to do it. Since anyone can add tags to registering the server should be allowed to do it. Since anyone can add tags to
a server they can register, the check of the tags is done on headscale server a server they can register, the check of the tags is done on headscale server
and only valid tags are applied. A tag is valid if the user that is and only valid tags are applied. A tag is valid if the namespace that is
registering it is allowed to do it. registering it is allowed to do it.
To use ACLs in headscale, you must edit your config.yaml file. In there you will find a `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
Here are the ACL's to implement the same permissions as above: Here are the ACL's to implement the same permissions as above:
```json ```json
@@ -177,8 +164,8 @@ Here are the ACL's to implement the same permissions as above:
"dst": ["tag:dev-app-servers:80,443"] "dst": ["tag:dev-app-servers:80,443"]
}, },
// We still have to allow internal users communications since nothing guarantees that each user have // We still have to allow internal namespaces communications since nothing guarantees that each user have
// their own users. // their own namespaces.
{ "action": "accept", "src": ["boss"], "dst": ["boss:*"] }, { "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
{ "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] }, { "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
{ "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] }, { "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },

Some files were not shown because too many files have changed in this diff.