Compare commits


6 Commits

Author            SHA1        Message                                Date
Kristoffer Dalby  86768c8ac4  take 3                                 2024-02-03 17:00:36 +01:00
Kristoffer Dalby  88af29d5f5  derp                                   2024-02-03 16:55:35 +01:00
Kristoffer Dalby  9cedc2942b  verbose and test                       2024-02-03 16:34:13 +01:00
Kristoffer Dalby  062b9a5611  test                                   2024-02-03 15:49:02 +01:00
Kristoffer Dalby  887302e8f1  setup ko image builder for goreleaser  2024-02-03 15:38:33 +01:00
Kristoffer Dalby  b1b90d165d  make dockerfiles testing only note     2024-02-03 15:36:41 +01:00

All commits are signed off by Kristoffer Dalby <kristoffer@tailscale.com>.
258 changed files with 12527 additions and 15247 deletions


@@ -1,15 +0,0 @@
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "en-GB"
early_access: false
reviews:
profile: "chill"
request_changes_workflow: false
high_level_summary: true
poem: true
review_status: true
collapse_walkthrough: false
auto_review:
enabled: true
drafts: true
chat:
auto_reply: true

.github/ISSUE_TEMPLATE/bug_report.md (new file, 52 lines)

@@ -0,0 +1,52 @@
---
name: "Bug report"
about: "Create a bug report to help us improve"
title: ""
labels: ["bug"]
assignees: ""
---
<!--
Before posting a bug report, discuss the behaviour you are expecting with the Discord community
to make sure that it is truly a bug.
The issue tracker is not the place to ask for support or how to set up Headscale.
Bug reports without sufficient information will be closed.
Headscale is a multinational community across the globe. Our language is English.
All bug reports need to be in English.
-->
## Bug description
<!-- A clear and concise description of what the bug is. Describe the expected behavior
and how it is currently different. If you are unsure if it is a bug, consider discussing
it on our Discord server first. -->
## Environment
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->
- OS:
- Headscale version:
- Tailscale version:
<!--
We do not support running Headscale in a container nor behind a (reverse) proxy.
If either of these is true for your environment, ask the community in Discord
instead of filing a bug report.
-->
- [ ] Headscale is behind a (reverse) proxy
- [ ] Headscale runs in a container
## To Reproduce
<!-- Steps to reproduce the behavior. -->


@@ -1,83 +0,0 @@
name: 🐞 Bug
description: File a bug/issue
title: "[Bug] <title>"
labels: ["bug", "needs triage"]
body:
- type: checkboxes
attributes:
label: Is this a support request?
description: This issue tracker is for bugs and feature requests only. If you need help, please ask in our Discord community
options:
- label: This is not a support request
required: true
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the bug you encountered.
options:
- label: I have searched the existing issues
required: true
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: A concise description of what you expected to happen.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
1. With this config...
1. Run '...'
1. See error...
validations:
required: true
- type: textarea
attributes:
label: Environment
description: |
examples:
- **OS**: Ubuntu 20.04
- **Headscale version**: 0.22.3
- **Tailscale version**: 1.64.0
value: |
- OS:
- Headscale version:
- Tailscale version:
render: markdown
validations:
required: true
- type: checkboxes
attributes:
label: Runtime environment
options:
- label: Headscale is behind a (reverse) proxy
required: false
- label: Headscale runs in a container
required: false
- type: textarea
attributes:
label: Anything else?
description: |
Links? References? Anything that will give us more context about the issue you are encountering!
- Client netmap dump (see below)
- ACL configuration
- Headscale configuration
Dump the netmap of tailscale clients:
`tailscale debug netmap > DESCRIPTIVE_NAME.json`
Please provide information describing the netmap, which client, which headscale version etc.
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
validations:
required: false


@@ -0,0 +1,26 @@
---
name: "Feature request"
about: "Suggest an idea for headscale"
title: ""
labels: ["enhancement"]
assignees: ""
---
<!--
We typically have a clear roadmap for what we want to improve and reserve the right
to close feature requests that do not fit the roadmap or the scope of the project,
or that we want to implement ourselves.
Headscale is a multinational community across the globe. Our language is English.
All feature requests need to be in English.
-->
## Why
<!-- Include the reason why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->
## Description
<!-- A clear and precise description of what new or changed feature you want. -->


@@ -1,36 +0,0 @@
name: 🚀 Feature Request
description: Suggest an idea for Headscale
title: "[Feature] <title>"
labels: [enhancement]
body:
- type: textarea
attributes:
label: Use case
description: Please describe the use case for this feature.
placeholder: |
<!-- Include the reason, why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->
validations:
required: true
- type: textarea
attributes:
label: Description
description: A clear and precise description of what new or changed feature you want.
validations:
required: true
- type: checkboxes
attributes:
label: Contribution
description: Are you willing to contribute to the implementation of this feature?
options:
- label: I can write the design doc for this feature
required: false
- label: I can contribute this feature
required: false
- type: textarea
attributes:
label: How can it be implemented?
description: Free text for your ideas on how this feature could be implemented.
validations:
required: false


@@ -12,7 +12,7 @@ If you find mistakes in the documentation, please submit a fix to the documentat
 <!-- Please tick if the following things apply. You… -->
-- [ ] have read the [CONTRIBUTING.md](./CONTRIBUTING.md) file
+- [ ] read the [CONTRIBUTING guidelines](README.md#contributing)
 - [ ] raised a GitHub issue or discussed it on the projects chat beforehand
 - [ ] added unit tests
 - [ ] added integration tests


@@ -16,29 +16,31 @@ jobs:
 build:
 runs-on: ubuntu-latest
 permissions: write-all
 steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v3
 with:
 fetch-depth: 2
 - name: Get changed files
 id: changed-files
-uses: dorny/paths-filter@v3
+uses: tj-actions/changed-files@v34
 with:
-filters: |
-files:
-- '*.nix'
-- 'go.*'
-- '**/*.go'
-- 'integration_test/'
-- 'config-example.yaml'
+files: |
+*.nix
+go.*
+**/*.go
+integration_test/
+config-example.yaml
 - uses: DeterminateSystems/nix-installer-action@main
-if: steps.changed-files.outputs.files == 'true'
+if: steps.changed-files.outputs.any_changed == 'true'
 - uses: DeterminateSystems/magic-nix-cache-action@main
-if: steps.changed-files.outputs.files == 'true'
+if: steps.changed-files.outputs.any_changed == 'true'
 - name: Run build
 id: build
-if: steps.changed-files.outputs.files == 'true'
+if: steps.changed-files.outputs.any_changed == 'true'
 run: |
 nix build |& tee build-result
 BUILD_STATUS="${PIPESTATUS[0]}"
@@ -64,8 +66,8 @@ jobs:
 body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
 })
-- uses: actions/upload-artifact@v4
-if: steps.changed-files.outputs.files == 'true'
+- uses: actions/upload-artifact@v3
+if: steps.changed-files.outputs.any_changed == 'true'
 with:
 name: headscale-linux
 path: result/bin/headscale

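The build step above pipes nix build through tee so the full log is captured in build-result while the build's own exit code is still recoverable; with a plain pipe the step would only ever see tee's (always zero) status. A minimal bash sketch of that pattern, outside any workflow; the failure handling here is illustrative, the real job goes on to parse build-result for a vendorSha256 mismatch:

# Mirror build output to a file and keep the build's exit code, not tee's.
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
if [ "$BUILD_STATUS" -ne 0 ]; then
  # Illustrative handling; the workflow instead inspects build-result
  # for the old/new vendor hash before failing.
  echo "nix build failed with status ${BUILD_STATUS}" >&2
  exit "$BUILD_STATUS"
fi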

@@ -1,41 +0,0 @@
name: Check integration tests workflow
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- name: Generate and check integration tests
if: steps.changed-files.outputs.files == 'true'
run: |
nix develop --command bash -c "cd cmd/gh-action-integration-generator/ && go generate"
git diff --exit-code .github/workflows/test-integration.yaml
- name: Show missing tests
if: failure()
run: |
git diff .github/workflows/test-integration.yaml

.github/workflows/contributors.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: Contributors
on:
push:
branches:
- main
workflow_dispatch:
jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Delete upstream contributor branch
# Allow continue on failure to account for when the
# upstream branch is deleted or does not exist.
continue-on-error: true
run: git push origin --delete update-contributors
- name: Create up-to-date contributors branch
run: git checkout -B update-contributors
- name: Push empty contributors branch
run: git push origin update-contributors
- name: Switch back to main
run: git checkout main
- uses: BobAnkh/add-contributors@v0.2.2
with:
CONTRIBUTOR: "## Contributors"
COLUMN_PER_ROW: "6"
ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
IMG_WIDTH: "100"
FONT_SIZE: "14"
PATH: "/README.md"
COMMIT_MESSAGE: "docs(README): update contributors"
AVATAR_SHAPE: "round"
BRANCH: "update-contributors"
PULL_REQUEST: "main"


@@ -1,27 +0,0 @@
name: Test documentation build
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict


@@ -1,5 +1,4 @@
 name: Build documentation
 on:
 push:
 branches:
@@ -16,7 +15,7 @@ jobs:
 runs-on: ubuntu-latest
 steps:
 - name: Checkout repository
-uses: actions/checkout@v4
+uses: actions/checkout@v3
 - name: Install python
 uses: actions/setup-python@v4
 with:
@@ -31,22 +30,16 @@ jobs:
 - name: Build docs
 run: mkdocs build --strict
 - name: Upload artifact
-uses: actions/upload-pages-artifact@v3
+uses: actions/upload-pages-artifact@v1
 with:
 path: ./site
 deploy:
 environment:
 name: github-pages
 url: ${{ steps.deployment.outputs.page_url }}
-permissions:
-pages: write
-id-token: write
 runs-on: ubuntu-latest
 needs: build
 steps:
-- name: Configure Pages
-uses: actions/configure-pages@v4
 - name: Deploy to GitHub Pages
 id: deployment
-uses: actions/deploy-pages@v4
+uses: actions/deploy-pages@v1


@@ -1,5 +1,6 @@
 name: GitHub Actions Version Updater
+# Controls when the action will run.
 on:
 schedule:
 # Automatically run on every Sunday
@@ -10,13 +11,13 @@ jobs:
 runs-on: ubuntu-latest
 steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v2
 with:
 # [Required] Access token with `workflow` scope.
 token: ${{ secrets.WORKFLOW_SECRET }}
 - name: Run GitHub Actions Version Updater
-uses: saadmk11/github-actions-version-updater@v0.8.1
+uses: saadmk11/github-actions-version-updater@v0.7.1
 with:
 # [Required] Access token with `workflow` scope.
 token: ${{ secrets.WORKFLOW_SECRET }}


@@ -1,6 +1,7 @@
+---
 name: Lint
-on: [pull_request]
+on: [push, pull_request]
 concurrency:
 group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -10,66 +11,70 @@ jobs:
 golangci-lint:
 runs-on: ubuntu-latest
 steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v3
 with:
 fetch-depth: 2
 - name: Get changed files
 id: changed-files
-uses: dorny/paths-filter@v3
+uses: tj-actions/changed-files@v34
 with:
-filters: |
-files:
-- '*.nix'
-- 'go.*'
-- '**/*.go'
-- 'integration_test/'
-- 'config-example.yaml'
-- uses: DeterminateSystems/nix-installer-action@main
-if: steps.changed-files.outputs.files == 'true'
-- uses: DeterminateSystems/magic-nix-cache-action@main
-if: steps.changed-files.outputs.files == 'true'
+files: |
+*.nix
+go.*
+**/*.go
+integration_test/
+config-example.yaml
 - name: golangci-lint
-if: steps.changed-files.outputs.files == 'true'
-run: nix develop --command -- golangci-lint run --new-from-rev=${{github.event.pull_request.base.sha}} --out-format=colored-line-number
+if: steps.changed-files.outputs.any_changed == 'true'
+uses: golangci/golangci-lint-action@v2
+with:
+version: v1.51.2
+# Only block PRs on new problems.
+# If this is not enabled, we will end up having PRs
+# blocked because new linters has appared and other
+# parts of the code is affected.
+only-new-issues: true
 prettier-lint:
 runs-on: ubuntu-latest
 steps:
-- uses: actions/checkout@v4
+- uses: actions/checkout@v2
 with:
 fetch-depth: 2
 - name: Get changed files
 id: changed-files
-uses: dorny/paths-filter@v3
+uses: tj-actions/changed-files@v14.1
 with:
-filters: |
-files:
-- '*.nix'
-- '**/*.md'
-- '**/*.yml'
-- '**/*.yaml'
-- '**/*.ts'
-- '**/*.js'
-- '**/*.sass'
-- '**/*.css'
-- '**/*.scss'
-- '**/*.html'
-- uses: DeterminateSystems/nix-installer-action@main
-if: steps.changed-files.outputs.files == 'true'
-- uses: DeterminateSystems/magic-nix-cache-action@main
-if: steps.changed-files.outputs.files == 'true'
+files: |
+*.nix
+**/*.md
+**/*.yml
+**/*.yaml
+**/*.ts
+**/*.js
+**/*.sass
+**/*.css
+**/*.scss
+**/*.html
 - name: Prettify code
-if: steps.changed-files.outputs.files == 'true'
-run: nix develop --command -- prettier --no-error-on-unmatched-pattern --ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
+if: steps.changed-files.outputs.any_changed == 'true'
+uses: creyD/prettier_action@v4.3
+with:
+prettier_options: >-
+--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
+only_changed: false
+dry: true
 proto-lint:
 runs-on: ubuntu-latest
 steps:
-- uses: actions/checkout@v4
-- uses: DeterminateSystems/nix-installer-action@main
-- uses: DeterminateSystems/magic-nix-cache-action@main
-- name: Buf lint
-run: nix develop --command -- buf lint proto
+- uses: actions/checkout@v2
+- uses: bufbuild/buf-setup-action@v1.7.0
+- uses: bufbuild/buf-lint-action@v1
+with:
+input: "proto"

.github/workflows/release-docker.yml (new file, 138 lines)

@@ -0,0 +1,138 @@
---
name: Release Docker
on:
push:
tags:
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug

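Both release jobs above end by rotating the buildx layer cache: each build reads from the previous cache and writes to a fresh *-new directory, which then replaces the old one so the directory saved by actions/cache does not grow from run to run. A standalone sketch of the same rotation; the docker buildx invocation is an assumption about what docker/build-push-action@v2 runs under the hood, and the tag is illustrative:

# Read the previous local cache, write the refreshed cache next to it.
docker buildx build \
  --cache-from type=local,src=/tmp/.buildx-cache \
  --cache-to type=local,dest=/tmp/.buildx-cache-new \
  --tag headscale:dev .
# Swap the refreshed cache into place, dropping layers that are no longer used.
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache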

@@ -12,27 +12,14 @@ jobs:
 runs-on: ubuntu-latest
 steps:
 - name: Checkout
-uses: actions/checkout@v4
+uses: actions/checkout@v3
 with:
 fetch-depth: 0
-- name: Login to DockerHub
-uses: docker/login-action@v3
-with:
-username: ${{ secrets.DOCKERHUB_USERNAME }}
-password: ${{ secrets.DOCKERHUB_TOKEN }}
-- name: Login to GHCR
-uses: docker/login-action@v3
-with:
-registry: ghcr.io
-username: ${{ github.repository_owner }}
-password: ${{ secrets.GITHUB_TOKEN }}
 - uses: DeterminateSystems/nix-installer-action@main
 - uses: DeterminateSystems/magic-nix-cache-action@main
 - name: Run goreleaser
-run: nix develop --command -- goreleaser release --clean
+run: nix develop --command -- goreleaser release --clean --verbose
 env:
 GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -1,5 +1,4 @@
 name: Close inactive issues
 on:
 schedule:
 - cron: "30 1 * * *"
@@ -11,7 +10,7 @@ jobs:
 issues: write
 pull-requests: write
 steps:
-- uses: actions/stale@v9
+- uses: actions/stale@v5
 with:
 days-before-issue-stale: 90
 days-before-issue-close: 7
@@ -20,5 +19,4 @@ jobs:
 close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
 days-before-pr-stale: -1
 days-before-pr-close: -1
-exempt-issue-labels: "no-stale-bot"
 repo-token: ${{ secrets.GITHUB_TOKEN }}


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowStarDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLAllowStarDst:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLAllowStarDst
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowStarDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

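The Integration Test v2 workflow above, and the near-identical ones that follow, are generated rather than hand-edited: each file carries the DO NOT EDIT header pointing at cmd/gh-action-integration-generator/. The removed check-tests workflow earlier in this diff shows the intended loop: regenerate, then fail if the committed workflows have drifted. A hedged sketch of that loop (the check-tests job only diffs .github/workflows/test-integration.yaml; diffing the whole workflows directory here is a loosening for illustration):

# Regenerate the workflow definitions from the Go generator...
nix develop --command bash -c "cd cmd/gh-action-integration-generator/ && go generate"
# ...then fail if the generated output differs from what is committed.
git diff --exit-code .github/workflows/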

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUser80Dst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLAllowUser80Dst:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLAllowUser80Dst
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUser80Dst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUserDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLAllowUserDst:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLAllowUserDst
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUserDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDenyAllPort80
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLDenyAllPort80:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLDenyAllPort80
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDenyAllPort80$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDevice1CanAccessDevice2
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLDevice1CanAccessDevice2:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLDevice1CanAccessDevice2
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDevice1CanAccessDevice2$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLHostsInNetMapTable
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLHostsInNetMapTable:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLHostsInNetMapTable
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLHostsInNetMapTable$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReach
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLNamedHostsCanReach:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLNamedHostsCanReach
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReach$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLNamedHostsCanReachBySubnet:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLNamedHostsCanReachBySubnet
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReachBySubnet$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestApiKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestApiKeyCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestApiKeyCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestApiKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestAuthKeyLogoutAndRelogin:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestAuthKeyLogoutAndRelogin
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthKeyLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestAuthWebFlowAuthenticationPingAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestAuthWebFlowAuthenticationPingAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestAuthWebFlowLogoutAndRelogin:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestAuthWebFlowLogoutAndRelogin
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestCreateTailscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestCreateTailscale:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestCreateTailscale
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestCreateTailscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestDERPServerScenario
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestDERPServerScenario:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestDERPServerScenario
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestDERPServerScenario$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnableDisableAutoApprovedRoute
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestEnableDisableAutoApprovedRoute:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestEnableDisableAutoApprovedRoute
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnableDisableAutoApprovedRoute$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnablingRoutes
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestEnablingRoutes:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestEnablingRoutes
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnablingRoutes$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestEphemeral:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestEphemeral
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestExpireNode
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestExpireNode:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestExpireNode
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestExpireNode$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHASubnetRouterFailover
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestHASubnetRouterFailover:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestHASubnetRouterFailover
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHASubnetRouterFailover$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHeadscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestHeadscale:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestHeadscale
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHeadscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"


@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeAdvertiseTagNoACLCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeAdvertiseTagNoACLCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeAdvertiseTagNoACLCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeAdvertiseTagNoACLCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeAdvertiseTagWithACLCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeAdvertiseTagWithACLCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeAdvertiseTagWithACLCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeAdvertiseTagWithACLCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeExpireCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeExpireCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeExpireCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeExpireCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeMoveCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeMoveCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeMoveCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeMoveCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeOnlineLastSeenStatus
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeOnlineLastSeenStatus:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeOnlineLastSeenStatus
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeOnlineLastSeenStatus$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeRenameCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeRenameCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeRenameCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeRenameCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeTagCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeTagCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeTagCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeTagCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestOIDCAuthenticationPingAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestOIDCAuthenticationPingAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestOIDCExpireNodesBasedOnTokenExpiry:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestOIDCExpireNodesBasedOnTokenExpiry
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByHostname
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPingAllByHostname:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPingAllByHostname
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByHostname$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByIP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPingAllByIP:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPingAllByIP
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPreAuthKeyCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPreAuthKeyCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPreAuthKeyCommandReusableEphemeral:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPreAuthKeyCommandReusableEphemeral
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandReusableEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPreAuthKeyCommandWithoutExpiry:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPreAuthKeyCommandWithoutExpiry
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandWithoutExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestResolveMagicDNS
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestResolveMagicDNS:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestResolveMagicDNS
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestResolveMagicDNS$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHIsBlockedInACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHIsBlockedInACL:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHIsBlockedInACL
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHIsBlockedInACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHMultipleUsersAllToAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHMultipleUsersAllToAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleUsersAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHNoSSHConfigured
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHNoSSHConfigured:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHNoSSHConfigured
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHNoSSHConfigured$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneUserToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHOneUserToAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHOneUserToAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneUserToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHUserOnlyIsolation
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHUserOnlyIsolation:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHUserOnlyIsolation
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHUserOnlyIsolation$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSubnetRouteACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSubnetRouteACL:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSubnetRouteACL
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSubnetRouteACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTaildrop
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestTaildrop:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestTaildrop
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTaildrop$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestTailscaleNodesJoiningHeadcale:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestTailscaleNodesJoiningHeadcale
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTailscaleNodesJoiningHeadcale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestUserCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestUserCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestUserCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestUserCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,123 +0,0 @@
name: Integration Tests
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
integration-test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
test:
- TestACLHostsInNetMapTable
- TestACLAllowUser80Dst
- TestACLDenyAllPort80
- TestACLAllowUserDst
- TestACLAllowStarDst
- TestACLNamedHostsCanReachBySubnet
- TestACLNamedHostsCanReach
- TestACLDevice1CanAccessDevice2
- TestPolicyUpdateWhileRunningWithCLIInDatabase
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndRelogin
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
- TestPreAuthKeyCommandReusableEphemeral
- TestPreAuthKeyCorrectUserLoggedInCommand
- TestApiKeyCommand
- TestNodeTagCommand
- TestNodeAdvertiseTagNoACLCommand
- TestNodeAdvertiseTagWithACLCommand
- TestNodeCommand
- TestNodeExpireCommand
- TestNodeRenameCommand
- TestNodeMoveCommand
- TestPolicyCommand
- TestPolicyBrokenConfigCommand
- TestResolveMagicDNS
- TestValidateResolvConf
- TestDERPServerScenario
- TestPingAllByIP
- TestPingAllByIPPublicDERP
- TestAuthKeyLogoutAndRelogin
- TestEphemeral
- TestEphemeralInAlternateTimezone
- TestEphemeral2006DeletedTooQuickly
- TestPingAllByHostname
- TestTaildrop
- TestExpireNode
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- Test2118DeletingOnlineNodePanics
- TestEnablingRoutes
- TestHASubnetRouterFailover
- TestEnableDisableAutoApprovedRoute
- TestAutoApprovedSubRoute2068
- TestSubnetRouteACL
- TestHeadscale
- TestCreateTailscale
- TestTailscaleNodesJoiningHeadcale
- TestSSHOneUserToAll
- TestSSHMultipleUsersAllToAll
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
database: [postgres, sqlite]
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@v3
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.files == 'true'
- uses: satackey/action-docker-layer-caching@main
if: steps.changed-files.outputs.files == 'true'
continue-on-error: true
- name: Run Integration Test
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.files == 'true'
env:
USE_POSTGRES: ${{ matrix.database == 'postgres' && '1' || '0' }}
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
--env HEADSCALE_INTEGRATION_POSTGRES=${{env.USE_POSTGRES}} \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^${{ matrix.test }}$"
- uses: actions/upload-artifact@v4
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ matrix.test }}-${{matrix.database}}-logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v4
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ matrix.test }}-${{matrix.database}}-pprof
path: "control_logs/*.pprof.tar"

View File

@@ -11,27 +11,26 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v3
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: dorny/paths-filter@v3
+        uses: tj-actions/changed-files@v34
         with:
-          filters: |
-            files:
-              - '*.nix'
-              - 'go.*'
-              - '**/*.go'
-              - 'integration_test/'
-              - 'config-example.yaml'
+          files: |
+            *.nix
+            go.*
+            **/*.go
+            integration_test/
+            config-example.yaml
       - uses: DeterminateSystems/nix-installer-action@main
-        if: steps.changed-files.outputs.files == 'true'
+        if: steps.changed-files.outputs.any_changed == 'true'
       - uses: DeterminateSystems/magic-nix-cache-action@main
-        if: steps.changed-files.outputs.files == 'true'
+        if: steps.changed-files.outputs.any_changed == 'true'
       - name: Run tests
-        if: steps.changed-files.outputs.files == 'true'
-        run: nix develop --command -- gotestsum
+        if: steps.changed-files.outputs.any_changed == 'true'
+        run: nix develop --check

View File

@@ -9,7 +9,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v3
       - name: Install Nix
         uses: DeterminateSystems/nix-installer-action@main
       - name: Update flake.lock

.gitignore
View File

@@ -22,7 +22,6 @@ dist/
 /headscale
 config.json
 config.yaml
-config*.yaml
 derp.yaml
 *.hujson
 *.key

View File

@@ -12,13 +12,19 @@ linters:
   disable:
     - depguard
+    - exhaustivestruct
     - revive
     - lll
+    - interfacer
+    - scopelint
+    - maligned
+    - golint
     - gofmt
     - gochecknoglobals
     - gochecknoinits
     - gocognit
     - funlen
+    - exhaustivestruct
     - tagliatelle
     - godox
     - ireturn
@@ -28,6 +34,13 @@
     - musttag # causes issues with imported libs
     - depguard
+    # deprecated
+    - structcheck # replaced by unused
+    - ifshort # deprecated by the owner
+    - varcheck # replaced by unused
+    - nosnakecase # replaced by revive
+    - deadcode # replaced by unused
     # We should strive to enable these:
     - wrapcheck
     - dupl

View File

@@ -1,8 +1,7 @@
 ---
-version: 2
 before:
   hooks:
-    - go mod tidy -compat=1.22
+    - go mod tidy -compat=1.20
     - go mod vendor
 release:
@@ -82,12 +81,8 @@ nfpms:
 kos:
   - id: ghcr
-    repository: ghcr.io/juanfont/headscale
-    # bare tells KO to only use the repository
-    # for tagging and naming the container.
-    bare: true
-    base_image: gcr.io/distroless/base-debian12
+    repository: ghcr.io/kradalby/headscale
+    base_image: gcr.io/distroless/base-debian11
     build: headscale
     main: ./cmd/headscale
     env:
@@ -97,95 +92,35 @@ kos:
       - linux/386
       - linux/arm64
      - linux/arm/v7
+      - linux/arm/v6
+      - linux/arm/v5
     tags:
-      - "{{ if not .Prerelease }}latest{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
-      - "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
-      - "{{ .Tag }}"
-      - '{{ trimprefix .Tag "v" }}'
-      - "sha-{{ .ShortCommit }}"
+      - latest
+      - '{{ .Tag }}'
+      - '{{ .Major }}.{{ .Minor }}.{{ .Patch }}'
+      - '{{ .Major }}.{{ .Minor }}'
+      - '{{ .Major }}'
+      - '{{ if not .Prerelease }}stable{{ end }}'
-  - id: dockerhub
-    build: headscale
-    base_image: gcr.io/distroless/base-debian12
-    repository: headscale/headscale
-    bare: true
-    platforms:
-      - linux/amd64
-      - linux/386
-      - linux/arm64
-      - linux/arm/v7
-    tags:
-      - "{{ if not .Prerelease }}latest{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
-      - "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
-      - "{{ .Tag }}"
-      - '{{ trimprefix .Tag "v" }}'
-      - "sha-{{ .ShortCommit }}"
+  # - id: dockerhub
+  #   build: headscale
+  #   base_image: gcr.io/distroless/base-debian11
+  #   repository: headscale/headscale
+  #   platforms:
+  #     - linux/amd64
+  #     - linux/386
+  #     - linux/arm64
+  #     - linux/arm/v7
+  #     - linux/arm/v6
+  #     - linux/arm/v5
+  #   tags:
+  #     - latest
+  #     - '{{.Tag}}'
-  - id: ghcr-debug
-    repository: ghcr.io/juanfont/headscale
-    bare: true
-    base_image: gcr.io/distroless/base-debian12:debug
-    build: headscale
-    main: ./cmd/headscale
-    env:
-      - CGO_ENABLED=0
-    platforms:
-      - linux/amd64
-      - linux/386
-      - linux/arm64
-      - linux/arm/v7
-    tags:
-      - "{{ if not .Prerelease }}latest-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}stable{{ else }}unstable-debug{{ end }}"
-      - "{{ .Tag }}-debug"
-      - '{{ trimprefix .Tag "v" }}-debug'
-      - "sha-{{ .ShortCommit }}-debug"
-  - id: dockerhub-debug
-    build: headscale
-    base_image: gcr.io/distroless/base-debian12:debug
-    repository: headscale/headscale
-    bare: true
-    platforms:
-      - linux/amd64
-      - linux/386
-      - linux/arm64
-      - linux/arm/v7
-    tags:
-      - "{{ if not .Prerelease }}latest-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}stable{{ else }}unstable-debug{{ end }}"
-      - "{{ .Tag }}-debug"
-      - '{{ trimprefix .Tag "v" }}-debug'
-      - "sha-{{ .ShortCommit }}-debug"
 checksum:
   name_template: "checksums.txt"
 snapshot:
-  version_template: "{{ .Tag }}-next"
+  name_template: "{{ .Tag }}-next"
 changelog:
   sort: asc
   filters:
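Since this hunk rewires the `kos` section (image repository, base image and tag templates), the file is worth validating before a push. A hedged sketch, assuming a reasonably recent goreleaser binary is installed locally; `check` only parses the configuration, and a snapshot build compiles the binaries without tagging or publishing anything, so neither step needs registry credentials:

    goreleaser check
    goreleaser build --snapshot --clean

The ko-built container images are only produced during an actual release, so a snapshot run is mainly useful for catching template and schema errors early.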

View File

@@ -1,6 +1 @@
 .github/workflows/test-integration-v2*
-docs/dns-records.md
-docs/running-headscale-container.md
-docs/running-headscale-linux-manual.md
-docs/running-headscale-linux.md
-docs/running-headscale-openbsd.md

View File

@@ -24,57 +24,26 @@ after improving the test harness as part of adopting [#1460](https://github.com/
### BREAKING ### BREAKING
- Code reorganisation, a lot of code has moved, please review the following PRs accordingly [#1473](https://github.com/juanfont/headscale/pull/1473) - Code reorganisation, a lot of code has moved, please review the following PRs accordingly [#1473](https://github.com/juanfont/headscale/pull/1473)
- Change the structure of database configuration, see [config-example.yaml](./config-example.yaml) for the new structure. [#1700](https://github.com/juanfont/headscale/pull/1700)
- Old structure has been remove and the configuration _must_ be converted.
- Adds additional configuration for PostgreSQL for setting max open, idle connection and idle connection lifetime.
- API: Machine is now Node [#1553](https://github.com/juanfont/headscale/pull/1553) - API: Machine is now Node [#1553](https://github.com/juanfont/headscale/pull/1553)
- Remove support for older Tailscale clients [#1611](https://github.com/juanfont/headscale/pull/1611) - Remove support for older Tailscale clients [#1611](https://github.com/juanfont/headscale/pull/1611)
- The oldest supported client is 1.42 - The latest supported client is 1.36
- Headscale checks that _at least_ one DERP is defined at start [#1564](https://github.com/juanfont/headscale/pull/1564) - Headscale checks that _at least_ one DERP is defined at start [#1564](https://github.com/juanfont/headscale/pull/1564)
- If no DERP is configured, the server will fail to start, this can be because it cannot load the DERPMap from file or url. - If no DERP is configured, the server will fail to start, this can be because it cannot load the DERPMap from file or url.
- Embedded DERP server requires a private key [#1611](https://github.com/juanfont/headscale/pull/1611) - Embedded DERP server requires a private key [#1611](https://github.com/juanfont/headscale/pull/1611)
- Add a filepath entry to [`derp.server.private_key_path`](https://github.com/juanfont/headscale/blob/b35993981297e18393706b2c963d6db882bba6aa/config-example.yaml#L95) - Add a filepath entry to [`derp.server.private_key_path`](https://github.com/juanfont/headscale/blob/b35993981297e18393706b2c963d6db882bba6aa/config-example.yaml#L95)
- Docker images are now built with goreleaser (ko) [#1716](https://github.com/juanfont/headscale/pull/1716) [#1763](https://github.com/juanfont/headscale/pull/1763)
- Entrypoint of container image has changed from shell to headscale, require change from `headscale serve` to `serve`
- `/var/lib/headscale` and `/var/run/headscale` is no longer created automatically, see [container docs](./docs/running-headscale-container.md)
- Prefixes are now defined per v4 and v6 range. [#1756](https://github.com/juanfont/headscale/pull/1756)
- `ip_prefixes` option is now `prefixes.v4` and `prefixes.v6`
- `prefixes.allocation` can be set to assign IPs at `sequential` or `random`. [#1869](https://github.com/juanfont/headscale/pull/1869)
- MagicDNS domains no longer contain usernames []()
- This is in preperation to fix Headscales implementation of tags which currently does not correctly remove the link between a tagged device and a user. As tagged devices will not have a user, this will require a change to the DNS generation, removing the username, see [#1369](https://github.com/juanfont/headscale/issues/1369) for more information.
- `use_username_in_magic_dns` can be used to turn this behaviour on again, but note that this option _will be removed_ when tags are fixed.
- dns.base_domain can no longer be the same as (or part of) server_url.
- This option brings Headscales behaviour in line with Tailscale.
- YAML files are no longer supported for headscale policy. [#1792](https://github.com/juanfont/headscale/pull/1792)
- HuJSON is now the only supported format for policy.
- DNS configuration has been restructured [#2034](https://github.com/juanfont/headscale/pull/2034)
- Please review the new [config-example.yaml](./config-example.yaml) for the new structure.
### Changes
- Use versioned migrations [#1644](https://github.com/juanfont/headscale/pull/1644)
- Make the OIDC callback page better [#1484](https://github.com/juanfont/headscale/pull/1484)
- SSH support [#1487](https://github.com/juanfont/headscale/pull/1487)
- State management has been improved [#1492](https://github.com/juanfont/headscale/pull/1492)
- Use error group handling to ensure tests actually pass [#1535](https://github.com/juanfont/headscale/pull/1535) based on [#1460](https://github.com/juanfont/headscale/pull/1460)
- Fix hang on SIGTERM [#1492](https://github.com/juanfont/headscale/pull/1492) taken from [#1480](https://github.com/juanfont/headscale/pull/1480)
- Send logs to stderr by default [#1524](https://github.com/juanfont/headscale/pull/1524)
- Fix [TS-2023-006](https://tailscale.com/security-bulletins/#ts-2023-006) security UPnP issue [#1563](https://github.com/juanfont/headscale/pull/1563)
- Turn off gRPC logging [#1640](https://github.com/juanfont/headscale/pull/1640) fixes [#1259](https://github.com/juanfont/headscale/issues/1259)
- Added the possibility to manually create a DERP-map entry which can be customized, instead of automatically creating it. [#1565](https://github.com/juanfont/headscale/pull/1565)
- Add support for deleting api keys [#1702](https://github.com/juanfont/headscale/pull/1702)
- Add command to backfill IP addresses for nodes missing IPs from configured prefixes. [#1869](https://github.com/juanfont/headscale/pull/1869)
- Log available update as warning [#1877](https://github.com/juanfont/headscale/pull/1877)
- Add `autogroup:internet` to Policy [#1917](https://github.com/juanfont/headscale/pull/1917)
- Restore foreign keys and add constraints [#1562](https://github.com/juanfont/headscale/pull/1562)
- Make registration page easier to use on mobile devices
- Make write-ahead-log default on and configurable for SQLite [#1985](https://github.com/juanfont/headscale/pull/1985) (a minimal sketch of enabling WAL follows this list)
- Add APIs for managing headscale policy. [#1792](https://github.com/juanfont/headscale/pull/1792)
- Fix for registering nodes using preauthkeys when running on a postgres database in a non-UTC timezone. [#764](https://github.com/juanfont/headscale/issues/764)
- Make sure integration tests cover postgres for all scenarios
- CLI commands (all except `serve`) only require minimal configuration, no more errors or warnings from unset settings [#2109](https://github.com/juanfont/headscale/pull/2109)
- CLI results are now consistently sent to stdout and errors to stderr [#2109](https://github.com/juanfont/headscale/pull/2109)
- Fix issue where shutting down headscale would hang [#2113](https://github.com/juanfont/headscale/pull/2113)
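As a minimal sketch of the write-ahead-log item above (not Headscale's actual database layer, which configures this through its own config file; the SQLite driver import is an assumption), WAL is typically enabled with a `PRAGMA`:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver; Headscale's actual dependency may differ
)

func main() {
	db, err := sql.Open("sqlite3", "/var/lib/headscale/db.sqlite")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Switch the journal to write-ahead logging so readers do not block the writer.
	if _, err := db.Exec("PRAGMA journal_mode=WAL;"); err != nil {
		log.Fatal(err)
	}
}
```

With WAL enabled, readers no longer block the single writer, which is the usual motivation for making it the default.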
## 0.22.3 (2023-05-12)
@@ -87,7 +56,7 @@ after improving the test harness as part of adopting [#1460](https://github.com/
### Changes
- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
  - Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
- Replace node filter logic, ensuring nodes with access can see each other [#1381](https://github.com/juanfont/headscale/pull/1381)
@@ -178,7 +147,7 @@ after improving the test harness as part of adopting [#1460](https://github.com/
- SSH ACLs status:
  - Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
  - Rejecting connections **is not supported**, meaning that if you enable SSH, then assume that _all_ `ssh` connections **will be allowed**.
    - If you decide to try this feature, please carefully manage permissions by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your clients.
    - We are currently improving our testing of the SSH ACLs, help us get an overview by testing and giving feedback.
  - This feature should be considered dangerous and it is disabled by default. Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
@@ -228,7 +197,7 @@ after improving the test harness as part of adopting [#1460](https://github.com/
### Changes
- Updated dependencies (including the library that lacked armhf support) [#722](https://github.com/juanfont/headscale/pull/722)
- Fix missing group expansion in function `excludeCorrectlyTaggedNodes` [#563](https://github.com/juanfont/headscale/issues/563)
- Improve registration protocol implementation and switch to NodeKey as main identifier [#725](https://github.com/juanfont/headscale/pull/725)
- Add ability to connect to PostgreSQL via unix socket [#734](https://github.com/juanfont/headscale/pull/734)
@@ -248,7 +217,7 @@ after improving the test harness as part of adopting [#1460](https://github.com/
- Fix send on closed channel crash in polling [#542](https://github.com/juanfont/headscale/pull/542)
- Fixed spurious calls to setLastStateChangeToNow from ephemeral nodes [#566](https://github.com/juanfont/headscale/pull/566)
- Add command for moving nodes between namespaces [#362](https://github.com/juanfont/headscale/issues/362)
- Added more configuration parameters for OpenID Connect (scopes, free-form parameters, domain and user allowlist)
- Add command to set tags on a node [#525](https://github.com/juanfont/headscale/issues/525)
- Add command to view tags of nodes [#356](https://github.com/juanfont/headscale/issues/356)
- Add --all (-a) flag to enable routes command [#360](https://github.com/juanfont/headscale/issues/360)
@@ -296,10 +265,10 @@ after improving the test harness as part of adopting [#1460](https://github.com/
- Fix a bug where the same IP could be assigned to multiple hosts if joined in quick succession [#346](https://github.com/juanfont/headscale/pull/346)
- Simplify the code behind registration of machines [#366](https://github.com/juanfont/headscale/pull/366)
  - Nodes are now only written to the database if they are registered successfully
- Fix a limitation in the ACLs that prevented users from writing rules with `*` as source [#374](https://github.com/juanfont/headscale/issues/374)
- Reduce the overhead of marshal/unmarshal for Hostinfo, routes and endpoints by using specific types in Machine [#371](https://github.com/juanfont/headscale/pull/371)
- Apply normalization function to FQDN on hostnames when hosts register and retrieve information [#363](https://github.com/juanfont/headscale/issues/363)
- Fix a bug that prevented the use of `tailscale logout` with OIDC [#508](https://github.com/juanfont/headscale/issues/508)
- Added Tailscale repo HEAD and unstable releases channel to the integration tests targets [#513](https://github.com/juanfont/headscale/pull/513)
View File
@@ -1,34 +0,0 @@
# Contributing
Headscale is "Open Source, acknowledged contribution": any contribution has to be discussed with the maintainers before it is added to the project.
This model has been chosen to reduce the risk of burnout by limiting the maintenance overhead of reviewing and validating third-party code.
## Why do we have this model?
Headscale has a small maintainer team that tries to balance working on the project, fixing bugs and reviewing contributions.
When we work on issues ourselves, we develop first hand knowledge of the code and it makes it possible for us to maintain and own the code as the project develops.
Code contributions are seen as a positive thing. People enjoy and engage with our project, but it also comes with some challenges; we have to understand the code, we have to understand the feature, we might have to become familiar with external libraries or services, and we have to think about security implications. All those steps are required during the reviewing process. After the code has been merged, the feature has to be maintained. Any changes reliant on external services must be updated and expanded accordingly.
The review and day-1 maintenance add a significant burden on the maintainers. Often we hope that the contributor will help out, but we have found that most of the time, they disappear after their new feature has been added.
This means that when someone contributes, we are mostly happy about it, but we do have to run it through a series of checks to establish whether we can actually maintain this feature.
## What do we require?
A general description is provided here and an explicit list is provided in our pull request template.
All new features have to start out with a design document, which should be discussed on the issue tracker (not discord). It should include a use case for the feature, how it can be implemented, who will implement it and a plan for maintaining it.
All features have to be end-to-end tested (integration tests) and have good unit test coverage to ensure that they work as expected. This will also ensure that the feature continues to work as expected over time. If a change cannot be tested, a strong case for why this is not possible needs to be presented.
The contributor should help to maintain the feature over time. If the feature is not maintained properly, the maintainers reserve the right to remove features they deem unmaintainable. This should help to improve the quality of the software and keep it in a maintainable state.
## Bug fixes
Headscale is open to code contributions for bug fixes without discussion.
## Documentation
If you find mistakes in the documentation, please submit a fix to the documentation.
33
Dockerfile Normal file
View File
@@ -0,0 +1,33 @@
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM docker.io/golang:1.21-bookworm AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale
# Production image
FROM docker.io/debian:bookworm-slim
RUN apt-get update \
&& apt-get install -y ca-certificates \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
RUN mkdir -p /var/run/headscale
EXPOSE 8080/tcp
CMD ["headscale"]
View File
@@ -2,24 +2,31 @@
# and are in no way endorsed by Headscale's maintainers as an # and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution. # official nor supported release or distribution.
FROM docker.io/golang:1.23-bookworm FROM docker.io/golang:1.21-bookworm AS build
ARG VERSION=dev ARG VERSION=dev
ENV GOPATH /go ENV GOPATH /go
WORKDIR /go/src/headscale WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN test -e /go/bin/headscale
# Debug image
FROM docker.io/golang:1.21-bookworm
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
RUN apt-get update \ RUN apt-get update \
&& apt-get install --no-install-recommends --yes less jq \ && apt-get install --no-install-recommends --yes less jq \
&& rm -rf /var/lib/apt/lists/* \ && rm -rf /var/lib/apt/lists/* \
&& apt-get clean && apt-get clean
RUN mkdir -p /var/run/headscale RUN mkdir -p /var/run/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale && test -e /go/bin/headscale
# Need to reset the entrypoint or everything will run as a busybox script # Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT [] ENTRYPOINT []
EXPOSE 8080/tcp EXPOSE 8080/tcp
View File
@@ -1,43 +1,21 @@
# Copyright (c) Tailscale Inc & AUTHORS # This Dockerfile and the images produced are for testing headscale,
# SPDX-License-Identifier: BSD-3-Clause # and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
# This Dockerfile is more or less lifted from tailscale/tailscale FROM golang:latest
# to ensure a similar build process when testing the HEAD of tailscale.
FROM golang:1.23-alpine AS build-env RUN apt-get update \
&& apt-get install -y dnsutils git iptables ssh ca-certificates \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /go/src RUN useradd --shell=/bin/bash --create-home ssh-it-user
RUN apk add --no-cache git
# Replace `RUN git...` with `COPY` and a local checked out version of Tailscale in `./tailscale`
# to test specific commits of the Tailscale client. This is useful when trying to find out why
# something specific broke between two versions of Tailscale with for example `git bisect`.
# COPY ./tailscale .
RUN git clone https://github.com/tailscale/tailscale.git RUN git clone https://github.com/tailscale/tailscale.git
WORKDIR /go/src/tailscale WORKDIR /go/tailscale
RUN git checkout main \
# see build_docker.sh && sh build_dist.sh tailscale.com/cmd/tailscale \
ARG VERSION_LONG="" && sh build_dist.sh tailscale.com/cmd/tailscaled \
ENV VERSION_LONG=$VERSION_LONG && cp tailscale /usr/local/bin/ \
ARG VERSION_SHORT="" && cp tailscaled /usr/local/bin/
ENV VERSION_SHORT=$VERSION_SHORT
ARG VERSION_GIT_HASH=""
ENV VERSION_GIT_HASH=$VERSION_GIT_HASH
ARG TARGETARCH
RUN GOARCH=$TARGETARCH go install -ldflags="\
-X tailscale.com/version.longStamp=$VERSION_LONG \
-X tailscale.com/version.shortStamp=$VERSION_SHORT \
-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/
# For compat with the previous run.sh, although ideally you should be
# using build_docker.sh which sets an entrypoint for the image.
RUN mkdir /tailscale && ln -s /usr/local/bin/containerboot /tailscale/run.sh
View File
@@ -31,7 +31,6 @@ test_integration:
--name headscale-test-suite \ --name headscale-test-suite \
-v $$PWD:$$PWD -w $$PWD/integration \ -v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \ -v /var/run/docker.sock:/var/run/docker.sock \
-v $$PWD/control_logs:/tmp/control \
golang:1 \ golang:1 \
go run gotest.tools/gotestsum@latest -- -failfast ./... -timeout 120m -parallel 8 go run gotest.tools/gotestsum@latest -- -failfast ./... -timeout 120m -parallel 8
970
README.md
File diff suppressed because it is too large
View File
@@ -6,10 +6,108 @@ import (
"bytes" "bytes"
"fmt" "fmt"
"log" "log"
"os"
"os/exec" "os/exec"
"path"
"path/filepath"
"strings" "strings"
"text/template"
) )
var (
githubWorkflowPath = "../../.github/workflows/"
jobFileNameTemplate = `test-integration-v2-%s.yaml`
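// jobTemplate renders one standalone GitHub Actions workflow per integration test.
// Each generated workflow runs on pull requests, is skipped unless Go sources, Nix
// files, integration test files or the example config changed, retries the test up
// to five times, and uploads the control logs and pprof data as artifacts.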
jobTemplate = template.Must(
template.New("jobTemplate").
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - {{.Name}}
on: [pull_request]
concurrency:
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
cancel-in-progress: true
jobs:
{{.Name}}:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run {{.Name}}
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^{{.Name}}$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"
`),
)
)
const workflowFilePerm = 0o600
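// removeTests deletes previously generated per-test workflow files so that
// workflows for renamed or removed tests do not linger in .github/workflows.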
func removeTests() {
glob := fmt.Sprintf(jobFileNameTemplate, "*")
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
if err != nil {
log.Fatalf("failed to find test files")
}
for _, file := range files {
err := os.Remove(file)
if err != nil {
log.Printf("failed to remove: %s", err)
}
}
}
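// findTests shells out to ripgrep (rg) to collect the names of the integration
// tests that workflows should be generated for.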
func findTests() []string {
rgBin, err := exec.LookPath("rg") rgBin, err := exec.LookPath("rg")
if err != nil { if err != nil {
@@ -26,44 +124,50 @@ func findTests() []string {
"--no-heading", "--no-heading",
} }
cmd := exec.Command(rgBin, args...) log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
var out bytes.Buffer
cmd.Stdout = &out ripgrep := exec.Command(
err = cmd.Run() rgBin,
args...,
)
result, err := ripgrep.CombinedOutput()
if err != nil { if err != nil {
log.Fatalf("failed to run command: %s", err) log.Printf("out: %s", result)
log.Fatalf("failed to run ripgrep: %s", err)
} }
tests := strings.Split(strings.TrimSpace(out.String()), "\n") tests := strings.Split(string(result), "\n")
tests = tests[:len(tests)-1]
return tests return tests
} }
func updateYAML(tests []string) { func main() {
testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", ")) type testConfig struct {
Name string
yqCommand := fmt.Sprintf(
"yq eval '.jobs.integration-test.strategy.matrix.test = %s' ../../.github/workflows/test-integration.yaml -i",
testsForYq,
)
cmd := exec.Command("bash", "-c", yqCommand)
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
log.Fatalf("failed to run yq command: %s", err)
} }
fmt.Println("YAML file updated successfully")
}
func main() {
tests := findTests() tests := findTests()
quotedTests := make([]string, len(tests)) removeTests()
for i, test := range tests {
quotedTests[i] = fmt.Sprintf("\"%s\"", test) for _, test := range tests {
log.Printf("generating workflow for %s", test)
var content bytes.Buffer
if err := jobTemplate.Execute(&content, testConfig{
Name: test,
}); err != nil {
log.Fatalf("failed to render template: %s", err)
} }
updateYAML(quotedTests) testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
if err != nil {
log.Fatalf("failed to write github job: %s", err)
}
}
} }
View File
@@ -29,16 +29,11 @@ func init() {
apiKeysCmd.AddCommand(createAPIKeyCmd) apiKeysCmd.AddCommand(createAPIKeyCmd)
expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix") expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil { err := expireAPIKeyCmd.MarkFlagRequired("prefix")
if err != nil {
log.Fatal().Err(err).Msg("") log.Fatal().Err(err).Msg("")
} }
apiKeysCmd.AddCommand(expireAPIKeyCmd) apiKeysCmd.AddCommand(expireAPIKeyCmd)
deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(deleteAPIKeyCmd)
} }
var apiKeysCmd = &cobra.Command{ var apiKeysCmd = &cobra.Command{
@@ -54,7 +49,7 @@ var listAPIKeys = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -67,10 +62,14 @@ var listAPIKeys = &cobra.Command{
fmt.Sprintf("Error getting the list of keys: %s", err), fmt.Sprintf("Error getting the list of keys: %s", err),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response.GetApiKeys(), "", output) SuccessOutput(response.GetApiKeys(), "", output)
return
} }
tableData := pterm.TableData{ tableData := pterm.TableData{
@@ -98,6 +97,8 @@ var listAPIKeys = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err), fmt.Sprintf("Failed to render pterm table: %s", err),
output, output,
) )
return
} }
}, },
} }
@@ -113,6 +114,9 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
log.Trace().
Msg("Preparing to create ApiKey")
request := &v1.CreateApiKeyRequest{} request := &v1.CreateApiKeyRequest{}
durationStr, _ := cmd.Flags().GetString("expiration") durationStr, _ := cmd.Flags().GetString("expiration")
@@ -124,13 +128,19 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
fmt.Sprintf("Could not parse duration: %s\n", err), fmt.Sprintf("Could not parse duration: %s\n", err),
output, output,
) )
return
} }
expiration := time.Now().UTC().Add(time.Duration(duration)) expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace().
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration) request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -141,6 +151,8 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
fmt.Sprintf("Cannot create Api Key: %s\n", err), fmt.Sprintf("Cannot create Api Key: %s\n", err),
output, output,
) )
return
} }
SuccessOutput(response.GetApiKey(), response.GetApiKey(), output) SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
@@ -161,9 +173,11 @@ var expireAPIKeyCmd = &cobra.Command{
fmt.Sprintf("Error getting prefix from CLI flag: %s", err), fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output, output,
) )
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -178,45 +192,10 @@ var expireAPIKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot expire Api Key: %s\n", err), fmt.Sprintf("Cannot expire Api Key: %s\n", err),
output, output,
) )
return
} }
SuccessOutput(response, "Key expired", output) SuccessOutput(response, "Key expired", output)
}, },
} }
var deleteAPIKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete an ApiKey",
Aliases: []string{"remove", "del"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
prefix, err := cmd.Flags().GetString("prefix")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output,
)
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.DeleteApiKeyRequest{
Prefix: prefix,
}
response, err := client.DeleteApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Api Key: %s\n", err),
output,
)
}
SuccessOutput(response, "Key deleted", output)
},
}
View File
@@ -14,7 +14,7 @@ var configTestCmd = &cobra.Command{
Short: "Test the configuration.", Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.", Long: "Run a test of the configuration and exit.",
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
_, err := newHeadscaleServerWithConfig() _, err := getHeadscaleApp()
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing") log.Fatal().Caller().Err(err).Msg("Error initializing")
} }
View File
@@ -64,9 +64,11 @@ var createNodeCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -77,6 +79,8 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting node from flag: %s", err), fmt.Sprintf("Error getting node from flag: %s", err),
output, output,
) )
return
} }
machineKey, err := cmd.Flags().GetString("key") machineKey, err := cmd.Flags().GetString("key")
@@ -86,6 +90,8 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting key from flag: %s", err), fmt.Sprintf("Error getting key from flag: %s", err),
output, output,
) )
return
} }
var mkey key.MachinePublic var mkey key.MachinePublic
@@ -96,6 +102,8 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Failed to parse machine key from flag: %s", err), fmt.Sprintf("Failed to parse machine key from flag: %s", err),
output, output,
) )
return
} }
routes, err := cmd.Flags().GetStringSlice("route") routes, err := cmd.Flags().GetStringSlice("route")
@@ -105,6 +113,8 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting routes from flag: %s", err), fmt.Sprintf("Error getting routes from flag: %s", err),
output, output,
) )
return
} }
request := &v1.DebugCreateNodeRequest{ request := &v1.DebugCreateNodeRequest{
@@ -121,6 +131,8 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()), fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()),
output, output,
) )
return
} }
SuccessOutput(response.GetNode(), "Node created", output) SuccessOutput(response.GetNode(), "Node created", output)
View File
@@ -4,7 +4,6 @@ import (
"fmt" "fmt"
"log" "log"
"net/netip" "net/netip"
"slices"
"strconv" "strconv"
"strings" "strings"
"time" "time"
@@ -98,8 +97,6 @@ func init() {
tagCmd.Flags(). tagCmd.Flags().
StringSliceP("tags", "t", []string{}, "List of tags to add to the node") StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
nodeCmd.AddCommand(tagCmd) nodeCmd.AddCommand(tagCmd)
nodeCmd.AddCommand(backfillNodeIPsCmd)
} }
var nodeCmd = &cobra.Command{ var nodeCmd = &cobra.Command{
@@ -116,9 +113,11 @@ var registerNodeCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -129,6 +128,8 @@ var registerNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting node key from flag: %s", err), fmt.Sprintf("Error getting node key from flag: %s", err),
output, output,
) )
return
} }
request := &v1.RegisterNodeRequest{ request := &v1.RegisterNodeRequest{
@@ -146,6 +147,8 @@ var registerNodeCmd = &cobra.Command{
), ),
output, output,
) )
return
} }
SuccessOutput( SuccessOutput(
@@ -163,13 +166,17 @@ var listNodesCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
} }
showTags, err := cmd.Flags().GetBool("tags") showTags, err := cmd.Flags().GetBool("tags")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output)
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -184,15 +191,21 @@ var listNodesCmd = &cobra.Command{
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()), fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response.GetNodes(), "", output) SuccessOutput(response.GetNodes(), "", output)
return
} }
tableData, err := nodesToPtables(user, showTags, response.GetNodes()) tableData, err := nodesToPtables(user, showTags, response.GetNodes())
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
} }
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render() err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
@@ -202,6 +215,8 @@ var listNodesCmd = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err), fmt.Sprintf("Failed to render pterm table: %s", err),
output, output,
) )
return
} }
}, },
} }
@@ -225,7 +240,7 @@ var expireNodeCmd = &cobra.Command{
return return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -268,7 +283,7 @@ var renameNodeCmd = &cobra.Command{
return return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -317,7 +332,7 @@ var deleteNodeCmd = &cobra.Command{
return return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -417,7 +432,7 @@ var moveNodeCmd = &cobra.Command{
return return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -462,57 +477,6 @@ var moveNodeCmd = &cobra.Command{
}, },
} }
var backfillNodeIPsCmd = &cobra.Command{
Use: "backfillips",
Short: "Backfill IPs missing from nodes",
Long: `
Backfill IPs can be used to add/remove IPs from nodes
based on the current configuration of Headscale.
If there are nodes that does not have IPv4 or IPv6
even if prefixes for both are configured in the config,
this command can be used to assign IPs of the sort to
all nodes that are missing.
If you remove IPv4 or IPv6 prefixes from the config,
it can be run to remove the IPs that should no longer
be assigned to nodes.`,
Run: func(cmd *cobra.Command, args []string) {
var err error
output, _ := cmd.Flags().GetString("output")
confirm := false
prompt := &survey.Confirm{
Message: "Are you sure that you want to assign/remove IPs to/from nodes?",
}
err = survey.AskOne(prompt, &confirm)
if err != nil {
return
}
if confirm {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Error backfilling IPs: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(changes, "Node IPs backfilled successfully", output)
}
},
}
func nodesToPtables( func nodesToPtables(
currentUser string, currentUser string,
showTags bool, showTags bool,
@@ -600,14 +564,14 @@ func nodesToPtables(
forcedTags = strings.TrimLeft(forcedTags, ",") forcedTags = strings.TrimLeft(forcedTags, ",")
var invalidTags string var invalidTags string
for _, tag := range node.GetInvalidTags() { for _, tag := range node.GetInvalidTags() {
if !slices.Contains(node.GetForcedTags(), tag) { if !contains(node.GetForcedTags(), tag) {
invalidTags += "," + pterm.LightRed(tag) invalidTags += "," + pterm.LightRed(tag)
} }
} }
invalidTags = strings.TrimLeft(invalidTags, ",") invalidTags = strings.TrimLeft(invalidTags, ",")
var validTags string var validTags string
for _, tag := range node.GetValidTags() { for _, tag := range node.GetValidTags() {
if !slices.Contains(node.GetForcedTags(), tag) { if !contains(node.GetForcedTags(), tag) {
validTags += "," + pterm.LightGreen(tag) validTags += "," + pterm.LightGreen(tag)
} }
} }
@@ -663,7 +627,7 @@ var tagCmd = &cobra.Command{
Aliases: []string{"tags", "t"}, Aliases: []string{"tags", "t"},
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
View File
@@ -1,87 +0,0 @@
package cli
import (
"fmt"
"io"
"os"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(policyCmd)
policyCmd.AddCommand(getPolicy)
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
if err := setPolicy.MarkFlagRequired("file"); err != nil {
log.Fatal().Err(err).Msg("")
}
policyCmd.AddCommand(setPolicy)
}
var policyCmd = &cobra.Command{
Use: "policy",
Short: "Manage the Headscale ACL Policy",
}
var getPolicy = &cobra.Command{
Use: "get",
Short: "Print the current ACL Policy",
Aliases: []string{"show", "view", "fetch"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.GetPolicyRequest{}
response, err := client.GetPolicy(ctx, request)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
}
// TODO(pallabpain): Maybe print this better?
// This does not pass output as we dont support yaml, json or json-line
// output for this command. It is HuJSON already.
SuccessOutput("", response.GetPolicy(), "")
},
}
var setPolicy = &cobra.Command{
Use: "set",
Short: "Updates the ACL Policy",
Long: `
Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
Aliases: []string{"put", "update"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
policyPath, _ := cmd.Flags().GetString("file")
f, err := os.Open(policyPath)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
}
defer f.Close()
policyBytes, err := io.ReadAll(f)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
}
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
if _, err := client.SetPolicy(ctx, request); err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
SuccessOutput(nil, "Policy updated.", "")
},
}
View File
@@ -60,9 +60,11 @@ var listPreAuthKeys = &cobra.Command{
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -83,6 +85,8 @@ var listPreAuthKeys = &cobra.Command{
if output != "" { if output != "" {
SuccessOutput(response.GetPreAuthKeys(), "", output) SuccessOutput(response.GetPreAuthKeys(), "", output)
return
} }
tableData := pterm.TableData{ tableData := pterm.TableData{
@@ -103,6 +107,13 @@ var listPreAuthKeys = &cobra.Command{
expiration = ColourTime(key.GetExpiration().AsTime()) expiration = ColourTime(key.GetExpiration().AsTime())
} }
var reusable string
if key.GetEphemeral() {
reusable = "N/A"
} else {
reusable = fmt.Sprintf("%v", key.GetReusable())
}
aclTags := "" aclTags := ""
for _, tag := range key.GetAclTags() { for _, tag := range key.GetAclTags() {
@@ -114,7 +125,7 @@ var listPreAuthKeys = &cobra.Command{
tableData = append(tableData, []string{ tableData = append(tableData, []string{
key.GetId(), key.GetId(),
key.GetKey(), key.GetKey(),
strconv.FormatBool(key.GetReusable()), reusable,
strconv.FormatBool(key.GetEphemeral()), strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()), strconv.FormatBool(key.GetUsed()),
expiration, expiration,
@@ -130,6 +141,8 @@ var listPreAuthKeys = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err), fmt.Sprintf("Failed to render pterm table: %s", err),
output, output,
) )
return
} }
}, },
} }
@@ -144,12 +157,20 @@ var createPreAuthKeyCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
} }
reusable, _ := cmd.Flags().GetBool("reusable") reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral") ephemeral, _ := cmd.Flags().GetBool("ephemeral")
tags, _ := cmd.Flags().GetStringSlice("tags") tags, _ := cmd.Flags().GetStringSlice("tags")
log.Trace().
Bool("reusable", reusable).
Bool("ephemeral", ephemeral).
Str("user", user).
Msg("Preparing to create preauthkey")
request := &v1.CreatePreAuthKeyRequest{ request := &v1.CreatePreAuthKeyRequest{
User: user, User: user,
Reusable: reusable, Reusable: reusable,
@@ -166,6 +187,8 @@ var createPreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Could not parse duration: %s\n", err), fmt.Sprintf("Could not parse duration: %s\n", err),
output, output,
) )
return
} }
expiration := time.Now().UTC().Add(time.Duration(duration)) expiration := time.Now().UTC().Add(time.Duration(duration))
@@ -176,7 +199,7 @@ var createPreAuthKeyCmd = &cobra.Command{
request.Expiration = timestamppb.New(expiration) request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -187,6 +210,8 @@ var createPreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err), fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
output, output,
) )
return
} }
SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output) SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
@@ -209,9 +234,11 @@ var expirePreAuthKeyCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user") user, err := cmd.Flags().GetString("user")
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -227,6 +254,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err), fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
output, output,
) )
return
} }
SuccessOutput(response, "Key expired", output) SuccessOutput(response, "Key expired", output)
View File
@@ -9,7 +9,6 @@ import (
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/tcnksm/go-latest" "github.com/tcnksm/go-latest"
) )
@@ -50,21 +49,26 @@ func initConfig() {
} }
} }
cfg, err := types.GetHeadscaleConfig()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Failed to get headscale configuration")
}
machineOutput := HasMachineOutputFlag() machineOutput := HasMachineOutputFlag()
zerolog.SetGlobalLevel(cfg.Log.Level)
// If the user has requested a "node" readable format, // If the user has requested a "node" readable format,
// then disable login so the output remains valid. // then disable login so the output remains valid.
if machineOutput { if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled) zerolog.SetGlobalLevel(zerolog.Disabled)
} }
// logFormat := viper.GetString("log.format") if cfg.Log.Format == types.JSONLogFormat {
// if logFormat == types.JSONLogFormat { log.Logger = log.Output(os.Stdout)
// log.Logger = log.Output(os.Stdout) }
// }
disableUpdateCheck := viper.GetBool("disable_check_updates") if !cfg.DisableUpdateCheck && !machineOutput {
if !disableUpdateCheck && !machineOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") && if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
Version != "dev" { Version != "dev" {
githubTag := &latest.GithubTag{ githubTag := &latest.GithubTag{
@@ -74,7 +78,7 @@ func initConfig() {
res, err := latest.Check(githubTag, Version) res, err := latest.Check(githubTag, Version)
if err == nil && res.Outdated { if err == nil && res.Outdated {
//nolint //nolint
log.Warn().Msgf( fmt.Printf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n", "An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current, res.Current,
Version, Version,
View File
@@ -64,9 +64,11 @@ var listRoutesCmd = &cobra.Command{
fmt.Sprintf("Error getting machine id from flag: %s", err), fmt.Sprintf("Error getting machine id from flag: %s", err),
output, output,
) )
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -80,10 +82,14 @@ var listRoutesCmd = &cobra.Command{
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()), fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response.GetRoutes(), "", output) SuccessOutput(response.GetRoutes(), "", output)
return
} }
routes = response.GetRoutes() routes = response.GetRoutes()
@@ -97,10 +103,14 @@ var listRoutesCmd = &cobra.Command{
fmt.Sprintf("Cannot get routes for node %d: %s", machineID, status.Convert(err).Message()), fmt.Sprintf("Cannot get routes for node %d: %s", machineID, status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response.GetRoutes(), "", output) SuccessOutput(response.GetRoutes(), "", output)
return
} }
routes = response.GetRoutes() routes = response.GetRoutes()
@@ -109,6 +119,8 @@ var listRoutesCmd = &cobra.Command{
tableData := routesToPtables(routes) tableData := routesToPtables(routes)
if err != nil { if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output) ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
} }
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render() err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
@@ -118,6 +130,8 @@ var listRoutesCmd = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err), fmt.Sprintf("Failed to render pterm table: %s", err),
output, output,
) )
return
} }
}, },
} }
@@ -136,9 +150,11 @@ var enableRouteCmd = &cobra.Command{
fmt.Sprintf("Error getting machine id from flag: %s", err), fmt.Sprintf("Error getting machine id from flag: %s", err),
output, output,
) )
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -151,10 +167,14 @@ var enableRouteCmd = &cobra.Command{
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()), fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response, "", output) SuccessOutput(response, "", output)
return
} }
}, },
} }
@@ -173,9 +193,11 @@ var disableRouteCmd = &cobra.Command{
fmt.Sprintf("Error getting machine id from flag: %s", err), fmt.Sprintf("Error getting machine id from flag: %s", err),
output, output,
) )
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -188,10 +210,14 @@ var disableRouteCmd = &cobra.Command{
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()), fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response, "", output) SuccessOutput(response, "", output)
return
} }
}, },
} }
@@ -210,9 +236,11 @@ var deleteRouteCmd = &cobra.Command{
fmt.Sprintf("Error getting machine id from flag: %s", err), fmt.Sprintf("Error getting machine id from flag: %s", err),
output, output,
) )
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -225,10 +253,14 @@ var deleteRouteCmd = &cobra.Command{
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()), fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response, "", output) SuccessOutput(response, "", output)
return
} }
}, },
} }
View File
@@ -1,9 +1,6 @@
package cli package cli
import ( import (
"errors"
"net/http"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
@@ -19,14 +16,14 @@ var serveCmd = &cobra.Command{
return nil return nil
}, },
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
app, err := newHeadscaleServerWithConfig() app, err := getHeadscaleApp()
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing") log.Fatal().Caller().Err(err).Msg("Error initializing")
} }
err = app.Serve() err = app.Serve()
if err != nil && !errors.Is(err, http.ErrServerClosed) { if err != nil {
log.Fatal().Caller().Err(err).Msg("Headscale ran into an error and had to shut down.") log.Fatal().Caller().Err(err).Msg("Error starting server")
} }
}, },
} }
View File
@@ -44,7 +44,7 @@ var createUserCmd = &cobra.Command{
userName := args[0] userName := args[0]
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -63,6 +63,8 @@ var createUserCmd = &cobra.Command{
), ),
output, output,
) )
return
} }
SuccessOutput(response.GetUser(), "User created", output) SuccessOutput(response.GetUser(), "User created", output)
@@ -89,7 +91,7 @@ var destroyUserCmd = &cobra.Command{
Name: userName, Name: userName,
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -100,6 +102,8 @@ var destroyUserCmd = &cobra.Command{
fmt.Sprintf("Error: %s", status.Convert(err).Message()), fmt.Sprintf("Error: %s", status.Convert(err).Message()),
output, output,
) )
return
} }
confirm := false confirm := false
@@ -130,6 +134,8 @@ var destroyUserCmd = &cobra.Command{
), ),
output, output,
) )
return
} }
SuccessOutput(response, "User destroyed", output) SuccessOutput(response, "User destroyed", output)
} else { } else {
@@ -145,7 +151,7 @@ var listUsersCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -158,10 +164,14 @@ var listUsersCmd = &cobra.Command{
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()), fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
output, output,
) )
return
} }
if output != "" { if output != "" {
SuccessOutput(response.GetUsers(), "", output) SuccessOutput(response.GetUsers(), "", output)
return
} }
tableData := pterm.TableData{{"ID", "Name", "Created"}} tableData := pterm.TableData{{"ID", "Name", "Created"}}
@@ -182,6 +192,8 @@ var listUsersCmd = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err), fmt.Sprintf("Failed to render pterm table: %s", err),
output, output,
) )
return
} }
}, },
} }
@@ -201,7 +213,7 @@ var renameUserCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) { Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output") output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel() defer cancel()
defer conn.Close() defer conn.Close()
@@ -220,6 +232,8 @@ var renameUserCmd = &cobra.Command{
), ),
output, output,
) )
return
} }
SuccessOutput(response.GetUser(), "User renamed", output) SuccessOutput(response.GetUser(), "User renamed", output)
View File
@@ -6,9 +6,11 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"os" "os"
"reflect"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol" "github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util" "github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
@@ -23,8 +25,8 @@ const (
SocketWritePermissions = 0o666 SocketWritePermissions = 0o666
) )
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) { func getHeadscaleApp() (*hscontrol.Headscale, error) {
cfg, err := types.LoadServerConfig() cfg, err := types.GetHeadscaleConfig()
if err != nil { if err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf(
"failed to load configuration while creating headscale instance: %w", "failed to load configuration while creating headscale instance: %w",
@@ -37,11 +39,26 @@ func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
return nil, err return nil, err
} }
// We are doing this here, as in the future could be cool to have it also hot-reload
if cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath)
if err != nil {
log.Fatal().
Str("path", aclPath).
Err(err).
Msg("Could not load the ACL policy")
}
app.ACLPolicy = pol
}
return app, nil return app, nil
} }
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) { func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg, err := types.LoadCLIConfig() cfg, err := types.GetHeadscaleConfig()
if err != nil { if err != nil {
log.Fatal(). log.Fatal().
Err(err). Err(err).
@@ -72,7 +89,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
// Try to give the user better feedback if we cannot write to the headscale // Try to give the user better feedback if we cannot write to the headscale
// socket. // socket.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) // nolint socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
if err != nil { if err != nil {
if os.IsPermission(err) { if os.IsPermission(err) {
log.Fatal(). log.Fatal().
@@ -130,7 +147,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
return ctx, client, conn, cancel return ctx, client, conn, cancel
} }
func output(result interface{}, override string, outputFormat string) string { func SuccessOutput(result interface{}, override string, outputFormat string) {
var jsonBytes []byte var jsonBytes []byte
var err error var err error
switch outputFormat { switch outputFormat {
@@ -150,27 +167,22 @@ func output(result interface{}, override string, outputFormat string) string {
log.Fatal().Err(err).Msg("failed to unmarshal output") log.Fatal().Err(err).Msg("failed to unmarshal output")
} }
default: default:
// nolint //nolint
return override fmt.Println(override)
return
} }
return string(jsonBytes) //nolint
fmt.Println(string(jsonBytes))
} }
// SuccessOutput prints the result to stdout and exits with status code 0.
func SuccessOutput(result interface{}, override string, outputFormat string) {
fmt.Println(output(result, override, outputFormat))
os.Exit(0)
}
// ErrorOutput prints an error message to stderr and exits with status code 1.
func ErrorOutput(errResult error, override string, outputFormat string) { func ErrorOutput(errResult error, override string, outputFormat string) {
type errOutput struct { type errOutput struct {
Error string `json:"error"` Error string `json:"error"`
} }
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errResult.Error()}, override, outputFormat)) SuccessOutput(errOutput{errResult.Error()}, override, outputFormat)
os.Exit(1)
} }
func HasMachineOutputFlag() bool { func HasMachineOutputFlag() bool {
@@ -200,3 +212,13 @@ func (t tokenAuth) GetRequestMetadata(
func (tokenAuth) RequireTransportSecurity() bool { func (tokenAuth) RequireTransportSecurity() bool {
return true return true
} }
func contains[T string](ts []T, t T) bool {
for _, v := range ts {
if reflect.DeepEqual(v, t) {
return true
}
}
return false
}
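For context — and not part of this diff — here is a minimal sketch of how a command handler might consume these output helpers. It assumes the handler lives in the same `cli` package and that an `--output` string flag is registered; the command wiring, the `exampleResult` type and the `loadExample` helper are invented for illustration.

```go
package cli

import (
	"github.com/spf13/cobra"
)

// exampleResult stands in for whatever a real command would return.
type exampleResult struct {
	Name string `json:"name"`
}

func loadExample() (*exampleResult, error) {
	return &exampleResult{Name: "demo"}, nil
}

// runExample is a hypothetical handler: ErrorOutput reports a failure in the
// requested format, while SuccessOutput renders the result in the requested
// machine-readable format or prints the override string otherwise.
func runExample(cmd *cobra.Command, args []string) {
	outputFormat, _ := cmd.Flags().GetString("output")

	result, err := loadExample()
	if err != nil {
		ErrorOutput(err, "could not load example", outputFormat)

		return
	}

	SuccessOutput(result, "loaded example", outputFormat)
}
```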


@@ -4,13 +4,27 @@ import (
"os" "os"
"time" "time"
"github.com/jagottsicher/termcolor" "github.com/efekarakus/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli" "github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/pkg/profile"
"github.com/rs/zerolog" "github.com/rs/zerolog"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
) )
func main() { func main() {
if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
err := os.MkdirAll(profilePath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}
defer profile.Start(profile.ProfilePath(profilePath)).Stop()
} else {
defer profile.Start().Stop()
}
}
var colors bool var colors bool
switch l := termcolor.SupportLevel(os.Stderr); l { switch l := termcolor.SupportLevel(os.Stderr); l {
case termcolor.Level16M: case termcolor.Level16M:


@@ -4,6 +4,7 @@ import (
"io/fs" "io/fs"
"os" "os"
"path/filepath" "path/filepath"
"strings"
"testing" "testing"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/types"
@@ -57,11 +58,12 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite") c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite") c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "") c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http") c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert( c.Assert(
util.GetFileMode("unix_socket_permission"), util.GetFileMode("unix_socket_permission"),
check.Equals, check.Equals,
@@ -99,11 +101,12 @@ func (*Suite) TestConfigLoading(c *check.C) {
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080") c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080") c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090") c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite") c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite") c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "") c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http") c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01") c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert( c.Assert(
util.GetFileMode("unix_socket_permission"), util.GetFileMode("unix_socket_permission"),
check.Equals, check.Equals,
@@ -112,3 +115,93 @@ func (*Suite) TestConfigLoading(c *check.C) {
c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false) c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false) c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false)
} }
func (*Suite) TestDNSConfigLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
if err != nil {
c.Fatal(err)
}
defer os.RemoveAll(tmpDir)
path, err := os.Getwd()
if err != nil {
c.Fatal(err)
}
// Symlink the example config file
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
filepath.Join(tmpDir, "config.yaml"),
)
if err != nil {
c.Fatal(err)
}
// Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil)
dnsConfig, baseDomain := types.GetDNSConfig()
c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1")
c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1")
c.Assert(dnsConfig.Proxied, check.Equals, true)
c.Assert(baseDomain, check.Equals, "example.com")
}
func writeConfig(c *check.C, tmpDir string, configYaml []byte) {
// Populate a custom config file
configFile := filepath.Join(tmpDir, "config.yaml")
err := os.WriteFile(configFile, configYaml, 0o600)
if err != nil {
c.Fatalf("Couldn't write file %s", configFile)
}
}
func (*Suite) TestTLSConfigValidation(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
if err != nil {
c.Fatal(err)
}
// defer os.RemoveAll(tmpDir)
configYaml := []byte(`---
tls_letsencrypt_hostname: example.com
tls_letsencrypt_challenge_type: ""
tls_cert_path: abc.pem
noise:
private_key_path: noise_private.key`)
writeConfig(c, tmpDir, configYaml)
// Check configuration validation errors (1)
err = types.LoadConfig(tmpDir, false)
c.Assert(err, check.NotNil)
// check.Matches can not handle multiline strings
tmp := strings.ReplaceAll(err.Error(), "\n", "***")
c.Assert(
tmp,
check.Matches,
".*Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both.*",
)
c.Assert(
tmp,
check.Matches,
".*Fatal config error: the only supported values for tls_letsencrypt_challenge_type are.*",
)
c.Assert(
tmp,
check.Matches,
".*Fatal config error: server_url must start with https:// or http://.*",
)
// Check configuration validation errors (2)
configYaml = []byte(`---
noise:
private_key_path: noise_private.key
server_url: http://127.0.0.1:8080
tls_letsencrypt_hostname: example.com
tls_letsencrypt_challenge_type: TLS-ALPN-01
`)
writeConfig(c, tmpDir, configYaml)
err = types.LoadConfig(tmpDir, false)
c.Assert(err, check.IsNil)
}
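As a quick orientation for readers unfamiliar with these helpers, the sketch below strings together the calls exercised by the tests above — `types.LoadConfig` and `types.GetDNSConfig` as they appear on the corresponding side of this diff. The config directory is illustrative and error handling is minimal.

```go
package main

import (
	"fmt"

	"github.com/juanfont/headscale/hscontrol/types"
)

func main() {
	// Mirror the tests above: point LoadConfig at a directory containing
	// config.yaml. "/etc/headscale" is just an illustrative location.
	if err := types.LoadConfig("/etc/headscale", false); err != nil {
		panic(err)
	}

	// Same accessor the DNS test uses: returns the DNS config and base domain.
	dnsConfig, baseDomain := types.GetDNSConfig()
	fmt.Println("base domain:", baseDomain)

	if len(dnsConfig.Nameservers) > 0 {
		fmt.Println("first nameserver:", dnsConfig.Nameservers[0].String())
	}
}
```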


@@ -57,14 +57,9 @@ noise:
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71 # IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33 # IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# Any other range is NOT supported, and it will cause unexpected issues. # Any other range is NOT supported, and it will cause unexpected issues.
prefixes: ip_prefixes:
v6: fd7a:115c:a1e0::/48 - fd7a:115c:a1e0::/48
v4: 100.64.0.0/10 - 100.64.0.0/10
# Strategy used for allocation of IPs to nodes, available options:
# - sequential (default): assigns the next free IP from the previous given IP.
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
allocation: sequential
# DERP is a relay system that Tailscale uses when a direct # DERP is a relay system that Tailscale uses when a direct
# connection cannot be established. # connection cannot be established.
@@ -105,7 +100,7 @@ derp:
automatically_add_embedded_derp_region: true automatically_add_embedded_derp_region: true
# For better connection stability (especially when using an Exit-Node and DNS is not working), # For better connection stability (especially when using an Exit-Node and DNS is not working),
# it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using: # it is possible to optionall add the public IPv4 and IPv6 address to the Derp-Map using:
ipv4: 1.2.3.4 ipv4: 1.2.3.4
ipv6: 2001:db8::1 ipv6: 2001:db8::1
@@ -137,54 +132,30 @@ disable_check_updates: false
# Time before an inactive ephemeral node is deleted? # Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m ephemeral_node_inactivity_timeout: 30m
database: # Period to check for node updates within the tailnet. A value too low will severely affect
# Database type. Available options: sqlite, postgres # CPU consumption of Headscale. A value too high (over 60s) will cause problems
# Please note that using Postgres is highly discouraged as it is only supported for legacy reasons. # for the nodes, as they won't get updates or keep alive messages frequently enough.
# All new development, testing and optimisations are done with SQLite in mind. # In case of doubts, do not touch the default 10s.
type: sqlite node_update_check_interval: 10s
# Enable debug mode. This setting requires the log.level to be set to "debug" or "trace". # SQLite config
debug: false db_type: sqlite3
# GORM configuration settings. # For production:
gorm: db_path: /var/lib/headscale/db.sqlite
# Enable prepared statements.
prepare_stmt: true
# Enable parameterized queries. # # Postgres config
parameterized_queries: true # If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
# db_type: postgres
# db_host: localhost
# db_port: 5432
# db_name: headscale
# db_user: foo
# db_pass: bar
# Skip logging "record not found" errors. # If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
skip_err_record_not_found: true # in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
# db_ssl: false
# Threshold for slow queries in milliseconds.
slow_threshold: 1000
# SQLite config
sqlite:
path: /var/lib/headscale/db.sqlite
# Enable WAL mode for SQLite. This is recommended for production environments.
# https://www.sqlite.org/wal.html
write_ahead_log: true
# # Postgres config
# Please note that using Postgres is highly discouraged as it is only supported for legacy reasons.
# See database.type for more information.
# postgres:
# # If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
# host: localhost
# port: 5432
# name: headscale
# user: foo
# pass: bar
# max_open_conns: 10
# max_idle_conns: 10
# conn_max_idle_time_secs: 3600
# # If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
# # in the 'ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
# ssl: false
### TLS configuration ### TLS configuration
# #
@@ -225,17 +196,10 @@ log:
format: text format: text
level: info level: info
## Policy # Path to a file containg ACL policies.
# headscale supports Tailscale's ACL policies. # ACLs can be defined as YAML or HUJSON.
# Please have a look to their KB to better # https://tailscale.com/kb/1018/acls/
# understand the concepts: https://tailscale.com/kb/1018/acls/ acl_policy_path: ""
policy:
# The mode can be "file" or "database" that defines
# where the ACL policies are stored and read from.
mode: file
# If the mode is set to "file", the path to a
# HuJSON file containing ACL policies.
path: ""
## DNS ## DNS
# #
@@ -246,60 +210,43 @@ policy:
# - https://tailscale.com/kb/1081/magicdns/ # - https://tailscale.com/kb/1081/magicdns/
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/ # - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
# #
# Please note that for the DNS configuration to have any effect, dns_config:
# clients must have the `--accept-dns=true` option enabled. This is the # Whether to prefer using Headscale provided DNS or use local.
# default for the Tailscale client. This option is enabled by default override_local_dns: true
# in the Tailscale client.
#
# Setting _any_ of the configuration and `--accept-dns=true` on the
# clients will integrate with the DNS manager on the client or
# overwrite /etc/resolv.conf.
# https://tailscale.com/kb/1235/resolv-conf
#
# If you want stop Headscale from managing the DNS configuration
# all the fields under `dns` should be set to empty values.
dns:
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Only works if there is at least a nameserver defined.
magic_dns: true
# Defines the base domain to create the hostnames for MagicDNS.
# This domain _must_ be different from the server_url domain.
# `base_domain` must be a FQDN, without the trailing dot.
# The FQDN of the hosts will be
# `hostname.base_domain` (e.g., _myhost.example.com_).
base_domain: example.com
# List of DNS servers to expose to clients. # List of DNS servers to expose to clients.
nameservers: nameservers:
global:
- 1.1.1.1 - 1.1.1.1
- 1.0.0.1
- 2606:4700:4700::1111
- 2606:4700:4700::1001
# NextDNS (see https://tailscale.com/kb/1218/nextdns/). # NextDNS (see https://tailscale.com/kb/1218/nextdns/).
# "abc123" is example NextDNS ID, replace with yours. # "abc123" is example NextDNS ID, replace with yours.
#
# With metadata sharing:
# nameservers:
# - https://dns.nextdns.io/abc123 # - https://dns.nextdns.io/abc123
#
# Without metadata sharing:
# nameservers:
# - 2a07:a8c0::ab:c123
# - 2a07:a8c1::ab:c123
# Split DNS (see https://tailscale.com/kb/1054/dns/), # Split DNS (see https://tailscale.com/kb/1054/dns/),
# a map of domains and which DNS server to use for each. # list of search domains and the DNS to query for each one.
split: #
{} # restricted_nameservers:
# foo.bar.com: # foo.bar.com:
# - 1.1.1.1 # - 1.1.1.1
# darp.headscale.net: # darp.headscale.net:
# - 1.1.1.1 # - 1.1.1.1
# - 8.8.8.8 # - 8.8.8.8
# Set custom DNS search domains. With MagicDNS enabled, # Search domains to inject.
# your tailnet base_domain is always the first search domain. domains: []
search_domains: []
# Extra DNS records # Extra DNS records
# so far only A-records are supported (on the tailscale side) # so far only A-records are supported (on the tailscale side)
# See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations # See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
extra_records: [] # extra_records:
# - name: "grafana.myvpn.example.com" # - name: "grafana.myvpn.example.com"
# type: "A" # type: "A"
# value: "100.64.0.3" # value: "100.64.0.3"
@@ -307,14 +254,15 @@ dns:
# # you can also put it in one line # # you can also put it in one line
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" } # - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
# DEPRECATED # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Use the username as part of the DNS name for nodes, with this option enabled: # Only works if there is at least a nameserver defined.
# node1.username.example.com magic_dns: true
# while when this is disabled:
# node1.example.com # Defines the base domain to create the hostnames for MagicDNS.
# This is a legacy option as Headscale has have this wrongly implemented # `base_domain` must be a FQDNs, without the trailing dot.
# while in upstream Tailscale, the username is not included. # The FQDN of the hosts will be
use_username_in_magic_dns: false # `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
base_domain: example.com
# Unix socket used for the CLI to connect without authentication # Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like: # Note: for production you will want to set this to something like:
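As a side note on the `database:` block earlier in this config hunk: it surfaces through viper as dotted keys. The key names below are the ones asserted in the config tests earlier in this diff; the viper setup and paths are purely illustrative.

```go
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	// Minimal viper setup; headscale's real loader adds defaults and
	// validation on top of this.
	viper.SetConfigName("config")
	viper.AddConfigPath("/etc/headscale")
	if err := viper.ReadInConfig(); err != nil {
		panic(err)
	}

	// Keys for the nested `database:` block; the other side of this hunk
	// uses the flat `db_type` / `db_path` keys instead.
	fmt.Println(viper.GetString("database.type"))        // e.g. "sqlite"
	fmt.Println(viper.GetString("database.sqlite.path")) // e.g. "/var/lib/headscale/db.sqlite"
}
```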


@@ -3,7 +3,7 @@ Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-
For instance, instead of referring to users when defining groups you must For instance, instead of referring to users when defining groups you must
use users (which are the equivalent to user/logins in Tailscale.com). use users (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/ for further information. Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACL's the User borders are no longer applied. All machines When using ACL's the User borders are no longer applied. All machines
whichever the User have the ability to communicate with other hosts as whichever the User have the ability to communicate with other hosts as
@@ -43,7 +43,8 @@ servers.
Note: Users will be created automatically when users authenticate with the Note: Users will be created automatically when users authenticate with the
Headscale server. Headscale server.
ACLs have to be written in [huJSON](https://github.com/tailscale/hujson). ACLs could be written either on [huJSON](https://github.com/tailscale/hujson)
or YAML. Check the [test ACLs](../tests/acls) for further information.
When registering the servers we will need to add the flag When registering the servers we will need to add the flag
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is `--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
@@ -52,7 +53,7 @@ a server they can register, the check of the tags is done on headscale server
and only valid tags are applied. A tag is valid if the user that is and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it. registering it is allowed to do it.
To use ACLs in headscale, you must edit your `config.yaml` file. In there you will find a `policy.path` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/). To use ACLs in headscale, you must edit your config.yaml file. In there you will find a `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
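For reference, the `cmd/headscale` hunk earlier in this diff loads that policy file at startup; a trimmed sketch of the same flow follows. The `policy` import path, the example file name and the logging are assumptions for illustration, not taken verbatim from the diff.

```go
package main

import (
	"fmt"
	"log"

	"github.com/juanfont/headscale/hscontrol/policy"
	"github.com/juanfont/headscale/hscontrol/util"
)

func main() {
	// Resolve the configured policy path relative to the config file, then
	// parse it, mirroring the startup code in the cmd/headscale hunk above.
	aclPath := util.AbsolutePathFromConfigPath("./acl.hujson") // illustrative path

	_, err := policy.LoadACLPolicyFromPath(aclPath)
	if err != nil {
		log.Fatalf("could not load ACL policy: %v", err)
	}

	fmt.Printf("ACL policy loaded from %s\n", aclPath)
}
```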
Here are the ACL's to implement the same permissions as above: Here are the ACL's to implement the same permissions as above:


@@ -8,9 +8,12 @@ This documentation has the goal of showing how a user can use the official Andro
Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/). Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/).
Ensure that the installed version is at least 1.30.0, as that is the first release to support custom URLs.
## Configuring the headscale URL ## Configuring the headscale URL
- Open the app and select the settings menu in the upper-right corner After opening the app, the kebab menu icon (three dots) on the top bar on the right must be repeatedly opened and closed until the _Change server_ option appears in the menu. This is where you can enter your headscale URL.
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an alternate server` A screen recording of this process can be seen in the `tailscale-android` PR which implemented this functionality: <https://github.com/tailscale/tailscale-android/pull/55>
- Enter your server URL (e.g `https://headscale.example.com`) and follow the instructions
After saving and restarting the app, selecting the regular _Sign in_ option (non-SSO) should open up the headscale authentication page.


@@ -1,51 +0,0 @@
# Connecting an Apple client
## Goal
This documentation has the goal of showing how a user can use the official iOS and macOS [Tailscale](https://tailscale.com) clients with `headscale`.
!!! info "Instructions on your headscale instance"
An endpoint with information on how to connect your Apple device
is also available at `/apple` on your running instance.
## iOS
### Installation
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
### Configuring the headscale URL
- Open Tailscale and make sure you are _not_ logged in to any account
- Open Settings on the iOS device
- Scroll down to the `third party apps` section, under `Game Center` or `TV Provider`
- Find Tailscale and select it
- If the iOS device was previously logged into Tailscale, switch the `Reset Keychain` toggle to `on`
- Enter the URL of your headscale instance (e.g `https://headscale.example.com`) under `Alternate Coordination Server URL`
- Restart the app by closing it from the iOS app switcher, open the app and select the regular sign in option
_(non-SSO)_. It should open up to the headscale authentication page.
- Enter your credentials and log in. Headscale should now be working on your iOS device.
## macOS
### Installation
Choose one of the available [Tailscale clients for macOS](https://tailscale.com/kb/1065/macos-variants) and install it.
### Configuring the headscale URL
#### Command line
Use Tailscale's login command to connect with your headscale instance (e.g `https://headscale.example.com`):
```
tailscale login --login-server <YOUR_HEADSCALE_URL>
```
#### GUI
- ALT + Click the Tailscale icon in the menu and hover over the Debug menu
- Under `Custom Login Server`, select `Add Account...`
- Enter the URL of your headscale instance (e.g `https://headscale.example.com`) and press `Add Account`
- Follow the login procedure in the browser


@@ -18,8 +18,8 @@ An example use case is to serve apps on the same host via a reverse proxy like N
1. Change the `config.yaml` to contain the desired records like so: 1. Change the `config.yaml` to contain the desired records like so:
```yaml ```yaml
dns: dns_config:
... ...
extra_records: extra_records:
- name: "prometheus.myvpn.example.com" - name: "prometheus.myvpn.example.com"
@@ -30,13 +30,11 @@ An example use case is to serve apps on the same host via a reverse proxy like N
type: "A" type: "A"
value: "100.64.0.3" value: "100.64.0.3"
... ...
``` ```
1. Restart your headscale instance. 2. Restart your headscale instance.
!!! warning Beware of the limitations listed later on!
Beware of the limitations listed later on!
### 2. Verify that the records are set ### 2. Verify that the records are set


@@ -5,7 +5,7 @@
Register the node and make it advertise itself as an exit node: Register the node and make it advertise itself as an exit node:
```console ```console
$ sudo tailscale up --login-server https://headscale.example.com --advertise-exit-node $ sudo tailscale up --login-server https://my-server.com --advertise-exit-node
``` ```
If the node is already registered, it can advertise exit capabilities like this: If the node is already registered, it can advertise exit capabilities like this:
@@ -14,26 +14,24 @@ If the node is already registered, it can advertise exit capabilities like this:
$ sudo tailscale set --advertise-exit-node $ sudo tailscale set --advertise-exit-node
``` ```
To use a node as an exit node, IP forwarding must be enabled on the node. Check the official [Tailscale documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to enable IP forwarding. To use a node as an exit node, IP forwarding must be enabled on the node. Check the official [Tailscale documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to enable IP fowarding.
## On the control server ## On the control server
```console ```console
$ # list nodes $ # list nodes
$ headscale routes list $ headscale routes list
ID | Node | Prefix | Advertised | Enabled | Primary ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | - 1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | - 2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | false | - 3 | phobos | 0.0.0.0/0 | true | false | -
4 | phobos | ::/0 | true | false | - 4 | phobos | ::/0 | true | false | -
$ # enable routes for phobos $ # enable routes for phobos
$ headscale routes enable -r 3 $ headscale routes enable -r 3
$ headscale routes enable -r 4 $ headscale routes enable -r 4
$ # Check node list again. The routes are now enabled. $ # Check node list again. The routes are now enabled.
$ headscale routes list $ headscale routes list
ID | Node | Prefix | Advertised | Enabled | Primary ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | - 1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | - 2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | true | - 3 | phobos | 0.0.0.0/0 | true | true | -
@@ -48,4 +46,4 @@ The exit node can now be used with:
$ sudo tailscale set --exit-node phobos $ sudo tailscale set --exit-node phobos
``` ```
Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes#use-the-exit-node) for how to do it on your device. Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes/?q=exit#step-3-use-the-exit-node) for how to do it on your device.


@@ -31,12 +31,12 @@ We are more than happy to exchange emails, or to have dedicated calls before a P
## When/Why is Feature X going to be implemented? ## When/Why is Feature X going to be implemented?
We don't know. We might be working on it. If you're interested in contributing, please post a feature request about it. We don't know. We might be working on it. If you want to help, please send us a PR.
Please be aware that there are a number of reasons why we might not accept specific contributions: Please be aware that there are a number of reasons why we might not accept specific contributions:
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment. - It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves. - Given that we are reverse-engineering Tailscale to satify our own curiosity, we might be interested in implementing the feature ourselves.
- You are not sending unit and integration tests with it. - You are not sending unit and integration tests with it.
## Do you support Y method of deploying Headscale? ## Do you support Y method of deploying Headscale?
@@ -51,7 +51,3 @@ For convenience, we also build Docker images with `headscale`. But **please be a
## Why is my reverse proxy not working with Headscale? ## Why is my reverse proxy not working with Headscale?
We don't know. We don't use reverse proxies with `headscale` ourselves, so we don't have any experience with them. We have [community documentation](https://headscale.net/reverse-proxy/) on how to configure various reverse proxies, and a dedicated [Discord channel](https://discord.com/channels/896711691637780480/1070619818346164324) where you can ask for help to the community. We don't know. We don't use reverse proxies with `headscale` ourselves, so we don't have any experience with them. We have [community documentation](https://headscale.net/reverse-proxy/) on how to configure various reverse proxies, and a dedicated [Discord channel](https://discord.com/channels/896711691637780480/1070619818346164324) where you can ask for help to the community.
## Can I use headscale and tailscale on the same machine?
Running headscale on a machine that is also in the tailnet can cause problems with subnet routers, traffic relay nodes, and MagicDNS. It might work, but it is not supported.

docs/glossary.md (new file)

@@ -0,0 +1,6 @@
# Glossary
| Term | Description |
| --------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
| Namespace | A namespace was a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User (This is now called user) |

docs/iOS-client.md (new file)

@@ -0,0 +1,30 @@
# Connecting an iOS client
## Goal
This documentation has the goal of showing how a user can use the official iOS [Tailscale](https://tailscale.com) client with `headscale`.
## Installation
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
Ensure that the installed version is at least 1.38.1, as that is the first release to support alternate control servers.
## Configuring the headscale URL
!!! info "Apple devices"
An endpoint with information on how to connect your Apple devices
(currently macOS only) is available at `/apple` on your running instance.
Ensure that the tailscale app is logged out before proceeding.
Go to iOS settings, scroll down past game center and tv provider to the tailscale app and select it. The headscale URL can be entered into the _"ALTERNATE COORDINATION SERVER URL"_ box.
> **Note**
>
> If the app was previously logged into tailscale, toggle on the _Reset Keychain_ switch.
Restart the app by closing it from the iOS app switcher, open the app and select the regular _Sign in_ option (non-SSO), and it should open up to the headscale authentication page.
Enter your credentials and log in. Headscale should now be working on your iOS device.

Two binary image files (35 KiB each) were removed and are not shown.

Some files were not shown because too many files have changed in this diff.