Compare commits


127 Commits

Author SHA1 Message Date
Juan Font
9f7c25e853 Refactor unit tests 2023-05-01 14:53:23 +00:00
Juan Font
851da9d674 Refactored integration tests 2023-05-01 14:52:48 +00:00
Juan Font
83b4389090 Refactored app code with Node 2023-05-01 14:52:03 +00:00
Juan Font
89fffeab31 Deleted old pb machine stuff 2023-05-01 14:51:01 +00:00
Juan Font
46221cc220 Updated CLI entries 2023-05-01 14:50:38 +00:00
Juan Font
cf22604a4b Changed DB objects and added migrations 2023-05-01 14:49:31 +00:00
Juan Font
ae03f440ee Rename machine in protos and gen code 2023-05-01 14:14:07 +00:00
Juan Font
47bc930ace Rename files 2023-05-01 10:30:43 +00:00
Juan Font
a2b760834f Fix extra space 2023-04-30 23:28:16 +02:00
loprima-l
493bcfcf18 Update mkdocs.yml
Co-authored-by: Juan Font <juanfontalonso@gmail.com>
2023-04-30 23:28:16 +02:00
loprima-l
df72508089 Fix: Change master branch to main
This fix should change the edit branch to main in the documentation
2023-04-30 23:28:16 +02:00
loprima-l
0f8d8fc2d8 Fix: Updating the doc path
Update the doc path to point to the documentation website URL, as it is a better documentation tool
2023-04-30 22:56:38 +02:00
Jonathan Wright
744e5a11b6 Update CHANGELOG.md
Co-authored-by: Juan Font <juanfontalonso@gmail.com>
2023-04-30 18:25:43 +02:00
Jonathan Wright
3ea1750ea0 Update CHANGELOG.md 2023-04-30 18:25:43 +02:00
Jonathan Wright
a45777d22e Put systemd service file in proper location 2023-04-30 18:25:43 +02:00
Kristoffer Dalby
56dd734300 Add go profiling flag, and enable on integration tests (#1382) 2023-04-27 16:57:11 +02:00
Philipp Krivanec
d0113732fe optimize generateACLPeerCacheMap (#1377) 2023-04-26 06:02:54 +02:00
Kristoffer Dalby
6215eb6471 update flake hash (#1376) 2023-04-24 15:52:15 +02:00
Juan Font
1d2b4bca8a Remove legacy DERP tests 2023-04-24 12:35:29 +02:00
Juan Font
96f9680afd Reuse Ping function for DERP ping 2023-04-24 12:17:24 +02:00
Juan Font
b465592c07 Do not use host networking in embedded DERP tests
fixed linting
2023-04-24 12:17:24 +02:00
Juan Font
991ff25362 Added workflow for embedded derp 2023-04-24 12:17:24 +02:00
Juan Font
eacd687dbf Added DERP integration tests
Linting fixes

Set listen addr to :8443
2023-04-24 12:17:24 +02:00
Juan Font
549f5a164d Expand surface of hsic for better TLS support 2023-04-24 12:17:24 +02:00
Juan Font
bb07aec82c Expand tsic to offer PingViaDerp 2023-04-24 12:17:24 +02:00
Kristoffer Dalby
a5afe4bd06 Add more capabilities for systemd
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 15:53:19 +02:00
Kristoffer Dalby
a71cc81fe7 fix
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 12:05:57 +02:00
Kristoffer Dalby
679305c3e4 Add version to binary release
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 12:05:57 +02:00
Kristoffer Dalby
c0680f34f1 fix issue where binaries are not released
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 11:10:27 +02:00
Kristoffer Dalby
64ebe6b0c8 change date in changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 08:13:38 +02:00
Kristoffer Dalby
e6b26499f7 release source code with vendored dependencies
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 08:13:38 +02:00
Kristoffer Dalby
977eb1dee3 Update flakes, add some quality of life improvements (#1346) 2023-04-20 07:56:53 +02:00
Kristoffer Dalby
b2e2b02210 set release date
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:47:31 +02:00
Kristoffer Dalby
2abff4bb08 update changelog for #1339
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:45:27 +02:00
Kristoffer Dalby
54c00645d1 update changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
cad5ce0ebd lint fix
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
b12a167fa2 remove rpm, might add back later
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
667295e15e add new documentation on how to install on debian/ubuntu
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
bea52678e3 move current linux documentation into "manual"
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
307cfc3304 add systemd enable to postinstall script
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
5e74ca9414 Fix IPv6 in ACLs (#1339) 2023-04-16 12:26:35 +02:00
Juan Font
9836b097a4 Make sure all clients of a user are ready (#1335) 2023-04-12 09:25:51 +02:00
Juan Font
d0b3b1bfc4 Fix binary releases 2023-04-08 09:21:27 +02:00
Juan Font
6eea96eabc Added 1.38.4 in the new tests 2023-04-07 19:45:46 +02:00
github-actions[bot]
d08fee78c3 docs(README): update contributors (#1325) 2023-04-07 17:31:29 +02:00
Andriy Kushnir (Orhideous)
bb5f0d456c Change primary color for light mode to white 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
c186c49e25 Removed custom accents, going with defaults 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
4ec6894773 Build with strict mode 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
dd9b4b1cb7 Move examples out of docs/ directory 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
a43bb9c958 Replace placeholder link with actual one 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
ba905ff6fc Add GHA CI to build and deploy docs 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
99bd09f688 Add new index page 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
a6bc792a61 Move admonitions to relevant sections 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
6381d3660a Add admonitions marking community-provided docs 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
66c5f74d78 Add admonitions marking community-provided docs 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
1723a6bf40 Configure MkDocs Material scaffold 2023-04-07 15:24:13 +02:00
Juan Font
353f191e4f Update changelog 2023-04-07 13:25:34 +02:00
Juan Font
8d865bb61b Target Go 1.20 in flake.nix 2023-04-07 13:25:34 +02:00
Juan Font
c6815c5334 Target Go 1.20 and Tailscale 1.38 2023-04-07 13:25:34 +02:00
Kristoffer Dalby
b684ac0668 Simplify goreleaser, package deb and rpm
This commit simplifies the goreleaser configuration and adds nfpm
support, which allows us to build .deb and .rpm packages for each of the
architectures we support.

The deb and rpm packages add systemd services and users, create
directories, etc., and should in general give the user a working
environment. We should be able to remove a lot of the complicated,
PEBCAK-inducing documentation after this.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-07 11:06:42 +02:00
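For reference, a minimal sketch of what an nfpms section in .goreleaser.yaml can look like for this kind of packaging; the package name, file paths and postinstall script below are illustrative assumptions, not the exact configuration introduced by this commit:

nfpms:
  - package_name: headscale
    formats:
      - deb
      - rpm
    bindir: /usr/bin
    contents:
      # illustrative paths; the real unit file and config locations may differ
      - src: ./packaging/headscale.service
        dst: /usr/lib/systemd/system/headscale.service
      - src: ./config-example.yaml
        dst: /etc/headscale/config.yaml
        type: config
    scripts:
      # e.g. create the headscale system user and enable the systemd service
      postinstall: ./packaging/postinstall.sh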
Juan Font
dfc5d861c7 Fix CIDR calculation in expandACLPeerAddr 2023-04-05 09:44:46 +02:00
Juan Font
50b706eeed Remove deprecated linters + one causing issues with imports 2023-04-04 22:37:27 +02:00
Sean Reifschneider
036ff1cbb9 Adding PowerShell commands to Windows instructions (#1299)
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
2023-04-04 08:58:32 +02:00
Kristoffer Dalby
ceeef40cdf Add tests to verify "Hosts" aliases in ACL (#1304) 2023-04-03 10:08:48 +02:00
Julien Zweverink
681c86cc95 ACL docs (#1288) 2023-03-28 18:41:23 +02:00
Kristoffer Dalby
c7b459b615 Fix issue where ACL * would filter out returning connections (#1279) 2023-03-27 19:19:32 +02:00
Gabe Cook
56a7b1e349 Add SVG logos (#1286) 2023-03-27 15:33:25 +02:00
Antonio Fernandez
f1eee841cb updated to ACL doc (#1278) 2023-03-27 11:25:55 +02:00
Stefan Majer
45fbd34480 Do not use yaml.v2 and yaml.v3 as direct dependency (#1281) 2023-03-27 10:48:39 +02:00
Juan Font
248abcf353 Add missing entry to changelog and prepare for 0.22
2023-03-20 13:48:56 +01:00
Kurnia D Win
2560c32378 adding some sleep on re-registration after machine expired (#1256) 2023-03-20 11:14:34 +01:00
Kristoffer Dalby
e38efd3cfa Add ACL test for limiting a single port. (#1258) 2023-03-20 08:52:52 +01:00
Moritz Poldrack
d12f247490 document running exit nodes
Currently the only kind of documentation is #210, which is outdated. To remedy
this, add a document describing the process.
2023-03-19 11:02:01 +01:00
nicholas-yap
003036a779 Update iOS compatibility and added iOS docs (#1264) 2023-03-17 15:56:15 +01:00
github-actions[bot]
ed79f977a7 docs(README): update contributors (#1262) 2023-03-17 10:07:39 +01:00
Kristoffer Dalby
8012e1cbd2 Add instructions on how to login to iOS (#1261) 2023-03-15 11:31:38 +00:00
Kristoffer Dalby
a5562850a7 MapResponse optimisations, peer list integration tests (#1254)
Co-authored-by: Allen <979347228@qq.com>
2023-03-06 17:50:26 +01:00
Stefan Majer
bb786ac8e4 github.com/gofrs/uuid/v5 is now go modules compatible, use it (#1224) 2023-03-06 09:54:24 +01:00
Juan Font
ea82035222 Allow to delete routes (#1244) 2023-03-06 09:05:40 +01:00
Albert Copeland
c9ecdd6ef1 Add Graphical Control Panels section to README (#1226) 2023-03-03 19:05:12 +01:00
Juan Font
54f5c249f1 Fix various linting issues + golang-lint upgrade (#1245) 2023-03-03 18:22:47 +01:00
dnaq
a82a603db6 Return 404 on unmatched routes (#1201) 2023-03-03 17:14:30 +01:00
Sean Reifschneider
f49930c514 Add "configtest" CLI command. (#1230)
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
Fixes https://github.com/juanfont/headscale/issues/1229
2023-03-03 14:55:29 +01:00
Kristoffer Dalby
2baeb79aa0 changelog: prep for 0.21 (#1246) 2023-03-03 13:42:45 +01:00
Kristoffer Dalby
b3f78a209a Post PR comment when nix vendor sum breaks
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-03-02 18:17:59 +01:00
Josh Taylor
5e6868a858 Run prettier 2023-02-27 10:28:49 +01:00
Josh Taylor
5caf848f94 Add steps for Google OAuth for OIDC 2023-02-27 10:28:49 +01:00
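For the commit above, the relevant piece of headscale configuration is the oidc block; a minimal sketch using Google as the provider, with placeholder credentials (the exact set of supported keys may differ between versions):

oidc:
  # Google's OpenID Connect issuer
  issuer: https://accounts.google.com
  # placeholders; taken from the OAuth client created in the Google Cloud console
  client_id: your-client-id.apps.googleusercontent.com
  client_secret: your-client-secret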
Juan Font
3e097123bf Target ts 1.36 in integration tests 2023-02-26 15:35:27 +01:00
Juan Font
74447b02e8 Target Tailscale 1.36 when building 2023-02-26 15:35:27 +01:00
Juan Font
20e96de963 Update dependencies 20230226 2023-02-26 14:39:37 +01:00
Juan Font
7c765fb3dc Update prettier action
2023-02-26 13:54:30 +01:00
Michael Savage
dcc246c869 Fix OpenBSD build docs
- OpenBSD 7.2 installs Go 1.19 by default
- Nix is unlikely to run there, so skip the Makefile/Nix build and just run go build directly
2023-02-26 12:54:47 +01:00
Àlex Torregrosa
cf7767d8f9 Add css to the /windows and /apple templates (#1211)
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
2023-02-09 08:49:05 -05:00
Maxim Gajdaj
61c578f82b Update running-headscale-linux.md
Added the WorkingDirectory option to headscale.service
2023-02-09 08:17:28 -05:00
Kristoffer Dalby
6950ff7841 Add list of talks to the readme
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-07 17:04:14 +01:00
Kristoffer Dalby
e65ce17f7b Add documentation to integration test framework
covering tsic, hsic and scenario

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 16:25:58 +01:00
Kristoffer Dalby
b190ec8edc Add section about running locally
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 16:25:58 +01:00
Kristoffer Dalby
c39085911f Add node expiry test
This commit adds a test to verify that nodes get updated if a node in
their network expires.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
3c20d2a178 Update changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
9187e4287c Remove unused components from old integration tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
2b7bcb77a5 Stop using deprecated string function
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
97a909866d Use pingAll helper for all integration pinging
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
feeb5d334b Populate the tags field on node
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
a840a2e6ee Sort tailcfg.Node creation as upstream
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
4183345020 Do not collect services, we don't support it
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
50fb7ad6ce Add TODOs for only sending patch updates
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
88a9f4b44c Send control time in map response
This gives all the nodes the same constant time to work from

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
00fbd8dd93 Remove all tests before generating new ones
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-02 17:55:19 +01:00
Kristoffer Dalby
ce587d2421 Update test workflows
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-01 10:58:37 +01:00
Kristoffer Dalby
e1eb30084d Remove new line at start of test template
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-01 10:58:37 +01:00
Kristoffer Dalby
673638afe7 Use ripgrep to find list of tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-01 10:58:37 +01:00
Kristoffer Dalby
da48cf64b3 Set OpenID Connect Expiry
This commit adds a default OpenID Connect expiry of 180d to align with
Tailscale SaaS (previously infinite or based on token expiry).

In addition, it adds an option to use the expiry time from the token sent
by the OpenID provider. This will typically result in a very short expiry,
so only turn on this option if that is what you want.

This fixes #1176.

Co-authored-by: Even Holthe <even.holthe@bekk.no>
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-31 18:55:16 +01:00
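A minimal sketch of how this can look in the headscale configuration; the key names below are assumptions based on the commit description, not verified against the shipped config-example.yaml:

oidc:
  # assumed default added by this commit: sessions expire after 180 days
  expiry: 180d
  # assumed opt-in to use the expiry of the token from the OpenID provider instead
  use_expiry_from_token: false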
Dominic Bevacqua
385fd93e73 Update changelog 2023-01-31 00:15:48 +01:00
Dominic Bevacqua
26edf24477 Allow split DNS configuration without requiring global nameservers
Align behaviour of dns_config.restricted_nameservers with Tailscale.

Tailscale allows split DNS configuration without requiring global nameservers.

In addition, as per [the docs](https://tailscale.com/kb/1054/dns/#using-dns-settings-in-the-admin-console):

> These nameservers also configure search domains for your devices

This commit aligns headscale with Tailscale by:

 * honouring dns_config.restricted_nameservers regardless of whether any global resolvers are configured
 * adding a search domain for each restricted_nameserver
2023-01-31 00:15:48 +01:00
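A minimal sketch of a split DNS configuration that this change makes possible; the domain and resolver address are illustrative, only the dns_config.restricted_nameservers key is taken from the commit message:

dns_config:
  # no global nameservers are required any more
  restricted_nameservers:
    # queries for internal.example.com go to this resolver, and
    # internal.example.com is also added as a search domain
    internal.example.com:
      - 10.0.0.53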
Kristoffer Dalby
83a538cc95 Rename IP specific function, add missing test case
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-30 15:56:38 +01:00
Kristoffer Dalby
cffa040474 Cancel old builds if new commits appear
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-30 14:57:10 +01:00
Kristoffer Dalby
727d95b477 Improve generated integration tests
- Save logs from control(headscale) on every run to tmp
- Upgrade nix-actions
- Cancel builds if new commit is pushed
- Fix a sorting bug in user command test

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-30 14:43:03 +01:00
Juan Font
640bb94119 Do not show IsPrimary field as false in exit nodes 2023-01-29 14:54:09 +01:00
Juan Font
0f65918a25 Update tests
Fixed linting
2023-01-29 12:25:37 +01:00
Juan Font
3ac2e0b253 Enable both exit node routes (IPv4 and IPv6) at the same time.
As indicated by bradfitz in https://github.com/juanfont/headscale/issues/804#issuecomment-1399314002,
both routes for the exit node must be enabled at the same time. If a user tries to enable one of the exit node routes,
the other gets activated too.

This commit also reduces the API surface, making private a method that didn't need to be exposed.
2023-01-29 12:25:37 +01:00
Juan Font
b322cdf251 Updated changelog for v0.20.0 2023-01-29 11:46:37 +01:00
Johan Siebens
e128796b59 use smallzstd and sync pool 2023-01-27 12:03:24 +01:00
Kristoffer Dalby
6d669c6b9c Migrate namespace_id to user_id column in machine and pak
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-26 11:07:26 +01:00
Kristoffer Dalby
8dadb045cf Mark -n and --namespace as deprecated
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-26 10:22:38 +01:00
Christian Heusel
9f6e546522 modify the test to reflect the changes in the web interface
related to 2d44a1c99c17

Signed-off-by: Christian Heusel <christian@heusel.eu>
2023-01-26 08:33:44 +01:00
Juan Font
9714900db9 Target Tailscale 1.36.0 2023-01-26 07:50:03 +01:00
Jan Hartkopf
cb25f0d650 Add hint for reverse proxying with Apache 2023-01-23 15:51:20 +01:00
179 changed files with 11705 additions and 7339 deletions

View File

@@ -8,9 +8,14 @@ on:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
permissions: write-all
steps:
- uses: actions/checkout@v3
@@ -32,10 +37,34 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run build
id: build
if: steps.changed-files.outputs.any_changed == 'true'
run: nix build
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
exit $BUILD_STATUS
- name: Nix gosum diverging
uses: actions/github-script@v6
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
github.rest.pulls.createReviewComment({
pull_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
- uses: actions/upload-artifact@v2
- uses: actions/upload-artifact@v3
if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux

.github/workflows/docs.yml (new file, 45 lines)
View File

@@ -0,0 +1,45 @@
name: Build documentation
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
- name: Build docs
run: mkdocs build --strict
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: ./site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1

View File

@@ -3,6 +3,10 @@ name: Lint
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
golangci-lint:
runs-on: ubuntu-latest
@@ -26,7 +30,7 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
uses: golangci/golangci-lint-action@v2
with:
version: v1.49.0
version: v1.51.2
# Only block PRs on new problems.
# If this is not enabled, we will end up having PRs
@@ -59,7 +63,7 @@ jobs:
- name: Prettify code
if: steps.changed-files.outputs.any_changed == 'true'
uses: creyD/prettier_action@v4.0
uses: creyD/prettier_action@v4.3
with:
prettier_options: >-
--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}

.github/workflows/release-docker.yml (new file, 138 lines)
View File

@@ -0,0 +1,138 @@
---
name: Release Docker
on:
push:
tags:
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug

View File

@@ -19,135 +19,6 @@ jobs:
- uses: cachix/install-nix-action@v16
- name: Run goreleaser
run: nix develop --command -- goreleaser release --rm-dist
run: nix develop --command -- goreleaser release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug

View File

@@ -1,35 +0,0 @@
name: Integration Test DERP
on: [pull_request]
jobs:
integration-test-derp:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Set Swap Space
uses: pierotofy/set-swap-space@master
with:
swap-size-gb: 10
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run Embedded DERP server integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --command -- make test_integration_derp

View File

@@ -1,11 +1,14 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleNamespacesAllToAll
name: Integration Test v2 - TestACLAllowStarDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,10 +41,23 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleNamespacesAllToAll$"
-run "^TestACLAllowStarDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUser80Dst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUser80Dst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUserDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUserDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDenyAllPort80
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDenyAllPort80$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDevice1CanAccessDevice2
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDevice1CanAccessDevice2$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLHostsInNetMapTable
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLHostsInNetMapTable$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReach
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReach$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReachBySubnet$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestAuthKeyLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestCreateTailscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestCreateTailscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestDERPServerScenario
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestDERPServerScenario$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestEnablingRoutes
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestEnablingRoutes$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,11 +1,14 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneNamespaceAllToAll
name: Integration Test v2 - TestEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,10 +41,23 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneNamespaceAllToAll$"
-run "^TestEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,11 +1,14 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNamespaceCommand
name: Integration Test v2 - TestExpireNode
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,10 +41,23 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNamespaceCommand$"
-run "^TestExpireNode$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestHeadscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestHeadscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestOIDCAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestOIDCAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestPingAllByHostname
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByHostname$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestPingAllByIP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestPreAuthKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandReusableEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandWithoutExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestResolveMagicDNS
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestResolveMagicDNS$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestSSHIsBlockedInACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestSSHIsBlockedInACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleUsersAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestSSHNoSSHConfigured
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestSSHNoSSHConfigured$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneUserAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneUserAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,47 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSNamespaceOnlyIsolation
on: [pull_request]
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSNamespaceOnlyIsolation$"

View File

@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSUserOnlyIsolation
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSUserOnlyIsolation$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestTaildrop
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestTaildrop$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,4 +1,3 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
@@ -6,6 +5,10 @@ name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,6 +41,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -45,3 +49,15 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^TestTailscaleNodesJoiningHeadcale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -1,11 +1,14 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodes
name: Integration Test v2 - TestUserCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -26,8 +29,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -38,10 +41,23 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodes$"
-run "^TestUserCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -2,6 +2,10 @@ name: Tests
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest

12
.gitignore
View File

@@ -1,3 +1,5 @@
ignored/
# Binaries for programs and plugins
*.exe
*.exe~
@@ -12,8 +14,9 @@
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
vendor/
dist/
/headscale
config.json
config.yaml
@@ -26,10 +29,15 @@ derp.yaml
# Exclude Jetbrains Editors
.idea
test_output/
test_output/
control_logs/
# Nix build output
result
.direnv/
integration_test/etc/config.dump.yaml
# MkDocs
.cache
/site

View File

@@ -29,6 +29,14 @@ linters:
- execinquery
- exhaustruct
- nolintlint
- musttag # causes issues with imported libs
# deprecated
- structcheck # replaced by unused
- ifshort # deprecated by the owner
- varcheck # replaced by unused
- nosnakecase # replaced by revive
- deadcode # replaced by unused
# We should strive to enable these:
- wrapcheck
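The linter selection above is normally exercised through the repository's lint target (see the Makefile diff further down); a minimal local invocation mirroring that target:

# Run the same linters locally with the Makefile's flags.
golangci-lint run --fix --timeout 10m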

View File

@@ -1,21 +1,28 @@
---
before:
hooks:
- go mod tidy -compat=1.19
- go mod tidy -compat=1.20
- go mod vendor
release:
prerelease: auto
builds:
- id: darwin-amd64
- id: headscale
main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- darwin
goarch:
- amd64
targets:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_386
- linux_amd64
- linux_arm64
- linux_arm_5
- linux_arm_6
- linux_arm_7
flags:
- -mod=readonly
ldflags:
@@ -23,60 +30,56 @@ builds:
tags:
- ts2019
- id: darwin-arm64
main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- darwin
goarch:
- arm64
flags:
- -mod=readonly
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
- id: linux-amd64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- amd64
main: ./cmd/headscale/headscale.go
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
- id: linux-arm64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- arm64
main: ./cmd/headscale/headscale.go
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
archives:
- id: golang-cross
builds:
- darwin-amd64
- darwin-arm64
- linux-amd64
- linux-arm64
name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
format: binary
source:
enabled: true
name_template: "{{ .ProjectName }}_{{ .Version }}"
format: tar.gz
files:
- "vendor/"
nfpms:
# Configure nFPM for .deb and .rpm releases
#
# See https://nfpm.goreleaser.com/configuration/
# and https://goreleaser.com/customization/nfpm/
#
# Useful tools for debugging .debs:
# List file contents: dpkg -c dist/headscale...deb
# Package metadata: dpkg --info dist/headscale....deb
#
- builds:
- headscale
package_name: headscale
priority: optional
vendor: headscale
maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
homepage: https://github.com/juanfont/headscale
license: BSD
bindir: /usr/bin
formats:
- deb
# - rpm
contents:
- src: ./config-example.yaml
dst: /etc/headscale/config.yaml
type: config|noreplace
file_info:
mode: 0644
- src: ./docs/packaging/headscale.systemd.service
dst: /usr/lib/systemd/system/headscale.service
- dst: /var/lib/headscale
type: dir
- dst: /var/run/headscale
type: dir
scripts:
postinstall: ./docs/packaging/postinstall.sh
postremove: ./docs/packaging/postremove.sh
checksum:
name_template: "checksums.txt"
snapshot:
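The comments in the nfpm section already point at the dpkg commands for checking a built package. A minimal sketch of producing and inspecting a local snapshot build; the goreleaser flags are an assumption and not part of this diff:

# Build snapshot artifacts locally (assumed invocation).
goreleaser release --snapshot --clean
# Inspect the resulting .deb, as suggested by the comments above.
dpkg -c dist/headscale*.deb        # list file contents
dpkg --info dist/headscale*.deb    # show package metadata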

View File

@@ -1,13 +1,60 @@
# CHANGELOG
## 0.19.0 (2022-11-26)
## 0.23.0 (2023-XX-XX)
### Changes
- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
- Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
## 0.22.1 (2023-04-20)
### Changes
- Fix issue where SystemD could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
## 0.22.0 (2023-04-20)
### Changes
- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
## 0.21.0 (2023-03-20)
### Changes
- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
## 0.20.0 (2023-02-03)
### Changes
- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
- defaults to 180 days like Tailscale SaaS
- adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
## 0.19.0 (2023-01-29)
### BREAKING
- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
- **BACKUP your database before upgrading**
- Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
## 0.18.0 (2022-01-14)
## 0.18.0 (2023-01-14)
### Changes

View File

@@ -1,5 +1,5 @@
# Builder image
FROM docker.io/golang:1.19-bullseye AS build
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale

View File

@@ -1,5 +1,5 @@
# Builder image
FROM docker.io/golang:1.19-bullseye AS build
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
@@ -13,7 +13,7 @@ RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.c
RUN test -e /go/bin/headscale
# Debug image
FROM docker.io/golang:1.19.0-bullseye
FROM docker.io/golang:1.20.0-bullseye
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC

View File

@@ -36,17 +36,7 @@ test_integration_cli:
-v ~/.cache/hs-integration-go:/go \
-v $$PWD:$$PWD -w $$PWD \
-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
test_integration_derp:
docker network rm $$(docker network ls --filter name=headscale --quiet) || true
docker network create headscale-test || true
docker run -t --rm \
--network headscale-test \
-v ~/.cache/hs-integration-go:/go \
-v $$PWD:$$PWD -w $$PWD \
-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationDERP ./...
go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
test_integration_v2_general:
docker run \
@@ -56,13 +46,7 @@ test_integration_v2_general:
-v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \
golang:1 \
go test $(TAGS) -failfast ./... -timeout 120m -parallel 8
coverprofile_func:
go tool cover -func=coverage.out
coverprofile_html:
go tool cover -html=coverage.out
go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8
lint:
golangci-lint run --fix --timeout 10m
@@ -80,11 +64,4 @@ compress: build
generate:
rm -rf gen
go run github.com/bufbuild/buf/cmd/buf generate proto
install-protobuf-plugins:
go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc
buf generate proto
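With the Makefile now driving the suites through gotestsum, here is a minimal sketch of running a single integration test locally, combining the make target with the -run filter used by the generated workflows above (assumes Docker is reachable by the tests, as in the workflows' docker run step):

# Full v2 integration suite, exactly as the Makefile target runs it.
make test_integration_v2_general
# Or narrow to one test from inside the integration/ directory.
cd integration
go run gotest.tools/gotestsum@latest -- -tags ts2019 -failfast \
  -timeout 120m -parallel 1 -run "^TestPingAllByIP$" ./...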

341
README.md
View File

@@ -38,7 +38,6 @@ implements a _single_ Tailnet, which is typically what a single organisation, or
home/personal setup would use.
`headscale` uses terms that map to Tailscale's control server; consult the
[glossary](./docs/glossary.md) for explanations.
## Support
@@ -75,11 +74,25 @@ one of the maintainers.
| macOS | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes [docs](./docs/windows-client.md) |
| Android | Yes [docs](./docs/android-client.md) |
| iOS | Not yet |
| iOS | Yes [docs](./docs/iOS-client.md) |
## Running headscale
Please have a look at the documentation under [`docs/`](docs/).
Please have a look at the [`documentation`](https://headscale.net/).
## Graphical Control Panels
Headscale provides an API for complete management of your Tailnet.
These are community projects not directly affiliated with the Headscale project.
| Name | Repository Link | Description | Status |
| --------------- | ---------------------------------------------------- | ------------------------------------------------------ | ------ |
| headscale-webui | [GitHub](https://github.com/ifargle/headscale-webui) | A simple Headscale web UI for small-scale deployments. | Alpha |
## Talks
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
- presented by Juan Font Alonso and Kristoffer Dalby
## Disclaimer
@@ -174,13 +187,6 @@ make build
<sub style="font-size:14px"><b>Juan Font</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cure>
<img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/>
@@ -202,8 +208,6 @@ make build
<sub style="font-size:14px"><b>Benjamin Roberts</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/reynico>
<img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/>
@@ -211,6 +215,15 @@ make build
<sub style="font-size:14px"><b>Nico</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/evenh>
<img src=https://avatars.githubusercontent.com/u/2701536?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Even Holthe/>
<br />
<sub style="font-size:14px"><b>Even Holthe</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/e-zk>
<img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/>
@@ -239,6 +252,15 @@ make build
<sub style="font-size:14px"><b>unreality</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
<br />
<sub style="font-size:14px"><b>Moritz Poldrack</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
@@ -246,13 +268,11 @@ make build
<sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Moritz Poldrack</b></sub>
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -262,6 +282,13 @@ make build
<sub style="font-size:14px"><b>GrigoriyMikhalkin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/christian-heusel>
<img src=https://avatars.githubusercontent.com/u/26827864?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Christian Heusel/>
<br />
<sub style="font-size:14px"><b>Christian Heusel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mike-lloyd03>
<img src=https://avatars.githubusercontent.com/u/49411532?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mike Lloyd/>
@@ -276,6 +303,8 @@ make build
<sub style="font-size:14px"><b>Anton Schubert</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Niek>
<img src=https://avatars.githubusercontent.com/u/213140?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Niek van der Maas/>
@@ -290,8 +319,6 @@ make build
<sub style="font-size:14px"><b>Eugen Biegler</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/617a7a>
<img src=https://avatars.githubusercontent.com/u/67651251?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Azz/>
@@ -299,13 +326,6 @@ make build
<sub style="font-size:14px"><b>Azz</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/evenh>
<img src=https://avatars.githubusercontent.com/u/2701536?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Even Holthe/>
<br />
<sub style="font-size:14px"><b>Even Holthe</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/qbit>
<img src=https://avatars.githubusercontent.com/u/68368?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aaron Bieber/>
@@ -327,6 +347,15 @@ make build
<sub style="font-size:14px"><b>Laurent Marchaud</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fdelucchijr>
<img src=https://avatars.githubusercontent.com/u/69133647?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Fernando De Lucchi/>
@@ -334,8 +363,6 @@ make build
<sub style="font-size:14px"><b>Fernando De Lucchi</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/OrvilleQ>
<img src=https://avatars.githubusercontent.com/u/21377465?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Orville Q. Song/>
@@ -357,6 +384,15 @@ make build
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/bravechamp>
<img src=https://avatars.githubusercontent.com/u/48980452?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=bravechamp/>
<br />
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/deonthomasgy>
<img src=https://avatars.githubusercontent.com/u/150036?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Deon Thomas/>
@@ -378,8 +414,6 @@ make build
<sub style="font-size:14px"><b>ChibangLW</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mevansam>
<img src=https://avatars.githubusercontent.com/u/403630?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mevan Samaratunga/>
@@ -401,6 +435,8 @@ make build
<sub style="font-size:14px"><b>Paul Tötterman</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/samson4649>
<img src=https://avatars.githubusercontent.com/u/12725953?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Samuel Lock/>
@@ -408,13 +444,6 @@ make build
<sub style="font-size:14px"><b>Samuel Lock</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kevin1sMe>
<img src=https://avatars.githubusercontent.com/u/6886076?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=kevinlin/>
@@ -422,8 +451,13 @@ make build
<sub style="font-size:14px"><b>kevinlin</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/QZAiXH>
<img src=https://avatars.githubusercontent.com/u/23068780?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Snack/>
<br />
<sub style="font-size:14px"><b>Snack</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/artemklevtsov>
<img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/>
@@ -438,6 +472,22 @@ make build
<sub style="font-size:14px"><b>Casey Marshall</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dbevacqua>
<img src=https://avatars.githubusercontent.com/u/6534306?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dbevacqua/>
<br />
<sub style="font-size:14px"><b>dbevacqua</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/joshuataylor>
<img src=https://avatars.githubusercontent.com/u/225131?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Josh Taylor/>
<br />
<sub style="font-size:14px"><b>Josh Taylor</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/CNLHC>
<img src=https://avatars.githubusercontent.com/u/21005146?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=LiuHanCheng/>
@@ -445,6 +495,13 @@ make build
<sub style="font-size:14px"><b>LiuHanCheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/motiejus>
<img src=https://avatars.githubusercontent.com/u/107720?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Motiejus Jakštys/>
<br />
<sub style="font-size:14px"><b>Motiejus Jakštys</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pvinis>
<img src=https://avatars.githubusercontent.com/u/100233?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pavlos Vinieratos/>
@@ -475,13 +532,6 @@ make build
<sub style="font-size:14px"><b>Victor Freire</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lachy2849>
<img src=https://avatars.githubusercontent.com/u/98844035?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lachy2849/>
<br />
<sub style="font-size:14px"><b>lachy2849</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/t56k>
<img src=https://avatars.githubusercontent.com/u/12165422?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=thomas/>
@@ -489,6 +539,13 @@ make build
<sub style="font-size:14px"><b>thomas</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/linsomniac>
<img src=https://avatars.githubusercontent.com/u/466380?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Sean Reifschneider/>
<br />
<sub style="font-size:14px"><b>Sean Reifschneider</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aberoham>
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
@@ -496,6 +553,13 @@ make build
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/iFargle>
<img src=https://avatars.githubusercontent.com/u/124551390?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Albert Copeland/>
<br />
<sub style="font-size:14px"><b>Albert Copeland</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/puzpuzpuz>
<img src=https://avatars.githubusercontent.com/u/37772591?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andrei Pechkurov/>
@@ -503,6 +567,15 @@ make build
<sub style="font-size:14px"><b>Andrei Pechkurov</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/theryecatcher>
<img src=https://avatars.githubusercontent.com/u/16442416?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anoop Sundaresh/>
<br />
<sub style="font-size:14px"><b>Anoop Sundaresh</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/apognu>
<img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/>
@@ -510,8 +583,13 @@ make build
<sub style="font-size:14px"><b>Antoine POPINEAU</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tony1661>
<img src=https://avatars.githubusercontent.com/u/5287266?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antonio Fernandez/>
<br />
<sub style="font-size:14px"><b>Antonio Fernandez</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aofei>
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
@@ -527,12 +605,14 @@ make build
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/awoimbee>
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
<a href=https://github.com/avirut>
<img src=https://avatars.githubusercontent.com/u/27095602?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Avirut Mehta/>
<br />
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
<sub style="font-size:14px"><b>Avirut Mehta</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stensonb>
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
@@ -554,8 +634,13 @@ make build
<sub style="font-size:14px"><b>kundel</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fatih-acar>
<img src=https://avatars.githubusercontent.com/u/15028881?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=fatih-acar/>
<br />
<sub style="font-size:14px"><b>fatih-acar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fkr>
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
@@ -570,6 +655,15 @@ make build
<sub style="font-size:14px"><b>Felix Yan</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gabe565>
<img src=https://avatars.githubusercontent.com/u/7717888?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Gabe Cook/>
<br />
<sub style="font-size:14px"><b>Gabe Cook</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JJGadgets>
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
@@ -577,6 +671,13 @@ make build
<sub style="font-size:14px"><b>JJGadgets</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hrtkpf>
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/>
<br />
<sub style="font-size:14px"><b>hrtkpf</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimt>
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
@@ -584,6 +685,22 @@ make build
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jsiebens>
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
<br />
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/johnae>
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
<br />
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ShadowJonathan>
<img src=https://avatars.githubusercontent.com/u/22740616?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jonathan de Jong/>
@@ -591,6 +708,43 @@ make build
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JulienFloris>
<img src=https://avatars.githubusercontent.com/u/20380255?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Julien Zweverink/>
<br />
<sub style="font-size:14px"><b>Julien Zweverink</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/win-t>
<img src=https://avatars.githubusercontent.com/u/1589120?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kurnia D Win/>
<br />
<sub style="font-size:14px"><b>Kurnia D Win</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/foxtrot>
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
<br />
<sub style="font-size:14px"><b>Marc</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magf>
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
<br />
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mikejsavage>
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
<br />
<sub style="font-size:14px"><b>Michael Savage</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/piec>
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
@@ -598,8 +752,6 @@ make build
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Donran>
<img src=https://avatars.githubusercontent.com/u/4838348?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pontus N/>
@@ -635,6 +787,8 @@ make build
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a>
</td>
</tr>
<tr>
[Contributors grid (HTML avatar table) — diff hunks condensed: cells were added and renamed across the table, notably Zachary N. now listed as Zachary Newell, pernila now listed as Tommi Pernila, and nicholas-yap added, alongside surrounding entries such as Teteros, caelansar, dnaq, jimyag, ma6174 and Àlex Torregrosa.]

224
acls.go
View File

@@ -13,7 +13,9 @@ import (
"time"
"github.com/rs/zerolog/log"
"github.com/samber/lo"
"github.com/tailscale/hujson"
"go4.org/netipx"
"gopkg.in/yaml.v3"
"tailscale.com/envknob"
"tailscale.com/tailcfg"
@@ -117,7 +119,7 @@ func (h *Headscale) LoadACLPolicy(path string) error {
}
func (h *Headscale) UpdateACLRules() error {
machines, err := h.ListMachines()
nodes, err := h.ListNodes()
if err != nil {
return err
}
@@ -126,13 +128,21 @@ func (h *Headscale) UpdateACLRules() error {
return errEmptyPolicy
}
rules, err := generateACLRules(machines, *h.aclPolicy, h.cfg.OIDC.StripEmaildomain)
rules, err := generateACLRules(nodes, *h.aclPolicy, h.cfg.OIDC.StripEmaildomain)
if err != nil {
return err
}
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
h.aclRules = rules
// Precompute a map of which sources can reach each destination. This is
// used to provide a quicker lookup when we calculate the peer list for the
// map response sent to nodes.
aclPeerCacheMap := generateACLPeerCacheMap(rules)
h.aclPeerCacheMapRW.Lock()
h.aclPeerCacheMap = aclPeerCacheMap
h.aclPeerCacheMapRW.Unlock()
if featureEnableSSH() {
sshRules, err := h.generateSSHRules()
if err != nil {
@@ -150,7 +160,75 @@ func (h *Headscale) UpdateACLRules() error {
return nil
}
func generateACLRules(machines []Machine, aclPolicy ACLPolicy, stripEmaildomain bool) ([]tailcfg.FilterRule, error) {
// generateACLPeerCacheMap takes a list of Tailscale filter rules and generates a map
// of which Sources ("*" and IPs) can access destinations. This is to speed up the
// process of generating MapResponses when deciding which Peers to inform nodes about.
func generateACLPeerCacheMap(rules []tailcfg.FilterRule) map[string][]string {
aclCachePeerMap := make(map[string][]string)
for _, rule := range rules {
for _, srcIP := range rule.SrcIPs {
for _, ip := range expandACLPeerAddr(srcIP) {
if data, ok := aclCachePeerMap[ip]; ok {
for _, dstPort := range rule.DstPorts {
data = append(data, dstPort.IP)
}
aclCachePeerMap[ip] = data
} else {
dstPortsMap := make([]string, 0)
for _, dstPort := range rule.DstPorts {
dstPortsMap = append(dstPortsMap, dstPort.IP)
}
aclCachePeerMap[ip] = dstPortsMap
}
}
}
}
log.Trace().Interface("ACL Cache Map", aclCachePeerMap).Msg("ACL Peer Cache Map generated")
return aclCachePeerMap
}
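// Illustrative example (addresses made up, not part of the change): a single
// rule with SrcIPs = ["100.64.0.1", "*"] and DstPorts = [{IP: "100.64.0.2"}]
// produces
//
//	map[string][]string{
//		"100.64.0.1": {"100.64.0.2"},
//		"*":          {"100.64.0.2"},
//	}
//
// i.e. each expanded source keys the list of destination IPs it may reach.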
// expandACLPeerAddr takes a "tailcfg.FilterRule" "IP" and expands it into
// something our cache logic can look up, which is "*" or single IP addresses.
// This is probably quite inefficient, but it is a result of
// "make it work, then make it fast", and a lot of the ACL stuff does not
// work, but people have tried to make it fast.
func expandACLPeerAddr(srcIP string) []string {
if ip, err := netip.ParseAddr(srcIP); err == nil {
return []string{ip.String()}
}
if cidr, err := netip.ParsePrefix(srcIP); err == nil {
addrs := []string{}
ipRange := netipx.RangeOfPrefix(cidr)
from := ipRange.From()
too := ipRange.To()
if from == too {
return []string{from.String()}
}
for from != too && from.Less(too) {
addrs = append(addrs, from.String())
from = from.Next()
}
addrs = append(addrs, too.String()) // Add the last IP address in the range
return addrs
}
// probably "*" or other string based "IP"
return []string{srcIP}
}
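// Worked example (values illustrative): expandACLPeerAddr("10.0.0.1") returns
// ["10.0.0.1"]; expandACLPeerAddr("10.0.0.0/30") walks the prefix range and
// returns ["10.0.0.0", "10.0.0.1", "10.0.0.2", "10.0.0.3"]; anything that is
// neither an address nor a prefix, such as "*", is passed through unchanged.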
func generateACLRules(
nodes []Node,
aclPolicy ACLPolicy,
stripEmaildomain bool,
) ([]tailcfg.FilterRule, error) {
rules := []tailcfg.FilterRule{}
for index, acl := range aclPolicy.ACLs {
@@ -160,7 +238,7 @@ func generateACLRules(machines []Machine, aclPolicy ACLPolicy, stripEmaildomain
srcIPs := []string{}
for innerIndex, src := range acl.Sources {
srcs, err := generateACLPolicySrcIP(machines, aclPolicy, src, stripEmaildomain)
srcs, err := generateACLPolicySrc(nodes, aclPolicy, src, stripEmaildomain)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
@@ -181,7 +259,7 @@ func generateACLRules(machines []Machine, aclPolicy ACLPolicy, stripEmaildomain
destPorts := []tailcfg.NetPortRange{}
for innerIndex, dest := range acl.Destinations {
dests, err := generateACLPolicyDest(
machines,
nodes,
aclPolicy,
dest,
needsWildcard,
@@ -213,7 +291,7 @@ func (h *Headscale) generateSSHRules() ([]*tailcfg.SSHRule, error) {
return nil, errEmptyPolicy
}
machines, err := h.ListMachines()
nodes, err := h.ListNodes()
if err != nil {
return nil, err
}
@@ -261,7 +339,7 @@ func (h *Headscale) generateSSHRules() ([]*tailcfg.SSHRule, error) {
principals := make([]*tailcfg.SSHPrincipal, 0, len(sshACL.Sources))
for innerIndex, rawSrc := range sshACL.Sources {
expandedSrcs, err := expandAlias(
machines,
nodes,
*h.aclPolicy,
rawSrc,
h.cfg.OIDC.StripEmaildomain,
@@ -311,31 +389,56 @@ func sshCheckAction(duration string) (*tailcfg.SSHAction, error) {
}, nil
}
func generateACLPolicySrcIP(
machines []Machine,
func generateACLPolicySrc(
nodes []Node,
aclPolicy ACLPolicy,
src string,
stripEmaildomain bool,
) ([]string, error) {
return expandAlias(machines, aclPolicy, src, stripEmaildomain)
return expandAlias(nodes, aclPolicy, src, stripEmaildomain)
}
func generateACLPolicyDest(
machines []Machine,
nodes []Node,
aclPolicy ACLPolicy,
dest string,
needsWildcard bool,
stripEmaildomain bool,
) ([]tailcfg.NetPortRange, error) {
tokens := strings.Split(dest, ":")
var tokens []string
log.Trace().Str("destination", dest).Msg("generating policy destination")
// Check if there is an IPv4/6:port combination; IPv6 has more than
// three ":".
tokens = strings.Split(dest, ":")
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
return nil, errInvalidPortFormat
port := tokens[len(tokens)-1]
maybeIPv6Str := strings.TrimSuffix(dest, ":"+port)
log.Trace().Str("maybeIPv6Str", maybeIPv6Str).Msg("")
if maybeIPv6, err := netip.ParseAddr(maybeIPv6Str); err != nil && !maybeIPv6.Is6() {
log.Trace().Err(err).Msg("trying to parse as IPv6")
return nil, fmt.Errorf(
"failed to parse destination, tokens %v: %w",
tokens,
errInvalidPortFormat,
)
} else {
tokens = []string{maybeIPv6Str, port}
}
}
log.Trace().Strs("tokens", tokens).Msg("generating policy destination")
var alias string
// We can have here stuff like:
// git-server:*
// 192.168.1.0/24:22
// fd7a:115c:a1e0::2:22
// fd7a:115c:a1e0::2/128:22
// tag:montreal-webserver:80,443
// tag:api-server:443
// example-host-1:*
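// Worked example for the parsing above: the destination "fd7a:115c:a1e0::2:22"
// splits into more than three ":"-separated tokens, so the trailing "22" is
// peeled off as the port, the remainder "fd7a:115c:a1e0::2" parses as an IPv6
// address, and the tokens collapse to ["fd7a:115c:a1e0::2", "22"].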
@@ -346,7 +449,7 @@ func generateACLPolicyDest(
}
expanded, err := expandAlias(
machines,
nodes,
aclPolicy,
alias,
stripEmaildomain,
@@ -427,9 +530,12 @@ func parseProtocol(protocol string) ([]int, bool, error) {
// - a user
// - a group
// - a tag
// - a host
// - an ip
// - a cidr
// and transform these in IPAddresses.
func expandAlias(
machines []Machine,
nodes Nodes,
aclPolicy ACLPolicy,
alias string,
stripEmailDomain bool,
@@ -449,7 +555,7 @@ func expandAlias(
return ips, err
}
for _, n := range users {
nodes := filterMachinesByUser(machines, n)
nodes := filterNodesByUser(nodes, n)
for _, node := range nodes {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
@@ -460,9 +566,9 @@ func expandAlias(
if strings.HasPrefix(alias, "tag:") {
// check for forced tags
for _, machine := range machines {
if contains(machine.ForcedTags, alias) {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
for _, node := range nodes {
if contains(node.ForcedTags, alias) {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
}
@@ -484,13 +590,13 @@ func expandAlias(
}
}
// filter out machines per tag owner
// filter out nodes per tag owner
for _, user := range owners {
machines := filterMachinesByUser(machines, user)
for _, machine := range machines {
hi := machine.GetHostInfo()
nodes := filterNodesByUser(nodes, user)
for _, node := range nodes {
hi := node.GetHostInfo()
if contains(hi.RequestTags, alias) {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
}
}
@@ -499,10 +605,10 @@ func expandAlias(
}
// if alias is a user
nodes := filterMachinesByUser(machines, alias)
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)
filteredNodes := filterNodesByUser(nodes, alias)
filteredNodes = excludeCorrectlyTaggedNodes(aclPolicy, filteredNodes, alias, stripEmailDomain)
for _, n := range nodes {
for _, n := range filteredNodes {
ips = append(ips, n.IPAddresses.ToStringSlice()...)
}
if len(ips) > 0 {
@@ -511,19 +617,40 @@ func expandAlias(
// if alias is a host
if h, ok := aclPolicy.Hosts[alias]; ok {
return []string{h.String()}, nil
log.Trace().Str("host", h.String()).Msg("expandAlias got hosts entry")
return expandAlias(filteredNodes, aclPolicy, h.String(), stripEmailDomain)
}
// if alias is an IP
ip, err := netip.ParseAddr(alias)
if err == nil {
return []string{ip.String()}, nil
if ip, err := netip.ParseAddr(alias); err == nil {
log.Trace().Str("ip", ip.String()).Msg("expandAlias got ip")
ips := []string{ip.String()}
matches := nodes.FilterByIP(ip)
for _, node := range matches {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
return lo.Uniq(ips), nil
}
// if alias is a CIDR
cidr, err := netip.ParsePrefix(alias)
if err == nil {
return []string{cidr.String()}, nil
if cidr, err := netip.ParsePrefix(alias); err == nil {
log.Trace().Str("cidr", cidr.String()).Msg("expandAlias got cidr")
val := []string{cidr.String()}
// This is suboptimal and quite expensive, but if we only add the cidr, we will miss all the relevant IPv6
// addresses for the hosts that belong to tailscale. This doesn't really affect stuff like subnet routers.
for _, node := range nodes {
for _, ip := range node.IPAddresses {
// log.Trace().
// Msgf("checking if node ip (%s) is part of cidr (%s): %v, is single ip cidr (%v), addr: %s", ip.String(), cidr.String(), cidr.Contains(ip), cidr.IsSingleIP(), cidr.Addr().String())
if cidr.Contains(ip) {
val = append(val, node.IPAddresses.ToStringSlice()...)
}
}
}
return lo.Uniq(val), nil
}
log.Warn().Msgf("No IPs found with the alias %v", alias)
@@ -536,11 +663,11 @@ func expandAlias(
// we assume in this function that we only have nodes from 1 user.
func excludeCorrectlyTaggedNodes(
aclPolicy ACLPolicy,
nodes []Machine,
nodes []Node,
user string,
stripEmailDomain bool,
) []Machine {
out := []Machine{}
) []Node {
out := []Node{}
tags := []string{}
for tag := range aclPolicy.TagOwners {
owners, _ := expandTagOwners(aclPolicy, user, stripEmailDomain)
@@ -549,9 +676,9 @@ func excludeCorrectlyTaggedNodes(
tags = append(tags, tag)
}
}
// for each machine if tag is in tags list, don't append it.
for _, machine := range nodes {
hi := machine.GetHostInfo()
// for each node if tag is in tags list, don't append it.
for _, node := range nodes {
hi := node.GetHostInfo()
found := false
for _, t := range hi.RequestTags {
@@ -561,11 +688,11 @@ func excludeCorrectlyTaggedNodes(
break
}
}
if len(machine.ForcedTags) > 0 {
if len(node.ForcedTags) > 0 {
found = true
}
if !found {
out = append(out, machine)
out = append(out, node)
}
}
@@ -585,6 +712,7 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
ports := []tailcfg.PortRange{}
for _, portStr := range strings.Split(portsStr, ",") {
log.Trace().Msgf("parsing portstring: %s", portStr)
rang := strings.Split(portStr, "-")
switch len(rang) {
case 1:
@@ -619,11 +747,11 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
return &ports, nil
}
func filterMachinesByUser(machines []Machine, user string) []Machine {
out := []Machine{}
for _, machine := range machines {
if machine.User.Name == user {
out = append(out, machine)
func filterNodesByUser(nodes []Node, user string) []Node {
out := []Node{}
for _, node := range nodes {
if node.User.Name == user {
out = append(out, node)
}
}

File diff suppressed because it is too large.

View File

@@ -51,7 +51,7 @@ type AutoApprovers struct {
ExitNode []string `json:"exitNode" yaml:"exitNode"`
}
// SSH controls who can ssh into which machines.
// SSH controls who can ssh into which nodes.
type SSH struct {
Action string `json:"action" yaml:"action"`
Sources []string `json:"src" yaml:"src"`

6
api.go
View File

@@ -20,7 +20,7 @@ const (
RegisterMethodOIDC = "oidc"
RegisterMethodCLI = "cli"
ErrRegisterMethodCLIDoesNotSupportExpire = Error(
"machines registered with CLI does not support expire",
"node registered with CLI does not support expire",
)
)
@@ -74,9 +74,9 @@ var registerWebAPITemplate = template.Must(
</head>
<body>
<h1>headscale</h1>
<h2>Machine registration</h2>
<h2>Node registration</h2>
<p>
Run the command below in the headscale server to add this machine to your network:
Run the command below in the headscale server to add this node to your network:
</p>
<pre><code>headscale nodes register --user USERNAME --key {{.Key}}</code></pre>
</body>

View File

@@ -1,19 +1,21 @@
package headscale
import (
"time"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
)
func (h *Headscale) generateMapResponse(
mapRequest tailcfg.MapRequest,
machine *Machine,
node *Node,
) (*tailcfg.MapResponse, error) {
log.Trace().
Str("func", "generateMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
Str("node", mapRequest.Hostinfo.Hostname).
Msg("Creating Map response")
node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
tailNode, err := h.toNode(*node, h.cfg.BaseDomain, h.cfg.DNSConfig)
if err != nil {
log.Error().
Caller().
@@ -24,7 +26,7 @@ func (h *Headscale) generateMapResponse(
return nil, err
}
peers, err := h.getValidPeers(machine)
peers, err := h.getValidPeers(node)
if err != nil {
log.Error().
Caller().
@@ -35,7 +37,7 @@ func (h *Headscale) generateMapResponse(
return nil, err
}
profiles := h.getMapResponseUserProfiles(*machine, peers)
profiles := h.getMapResponseUserProfiles(*node, peers)
nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
if err != nil {
@@ -51,20 +53,50 @@ func (h *Headscale) generateMapResponse(
dnsConfig := getMapResponseDNSConfig(
h.cfg.DNSConfig,
h.cfg.BaseDomain,
*machine,
*node,
peers,
)
now := time.Now()
resp := tailcfg.MapResponse{
KeepAlive: false,
Node: node,
Peers: nodePeers,
DNSConfig: dnsConfig,
Domain: h.cfg.BaseDomain,
KeepAlive: false,
Node: tailNode,
// TODO: Only send if updated
DERPMap: h.DERPMap,
// TODO: Only send if updated
Peers: nodePeers,
// TODO(kradalby): Implement:
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
// PeersChanged
// PeersRemoved
// PeersChangedPatch
// PeerSeenChange
// OnlineChange
// TODO: Only send if updated
DNSConfig: dnsConfig,
// TODO: Only send if updated
Domain: h.cfg.BaseDomain,
// Do not instruct clients to collect services, we do not
// support or do anything with them
CollectServices: "false",
// TODO: Only send if updated
PacketFilter: h.aclRules,
SSHPolicy: h.sshPolicy,
DERPMap: h.DERPMap,
UserProfiles: profiles,
// TODO: Only send if updated
SSHPolicy: h.sshPolicy,
ControlTime: &now,
Debug: &tailcfg.Debug{
DisableLogTail: !h.cfg.LogTail.Enabled,
RandomizeClientPort: h.cfg.RandomizeClientPort,
@@ -73,7 +105,7 @@ func (h *Headscale) generateMapResponse(
log.Trace().
Str("func", "generateMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
Str("node", mapRequest.Hostinfo.Hostname).
// Interface("payload", resp).
Msgf("Generated map response: %s", tailMapResponseToString(resp))

68
app.go
View File

@@ -84,9 +84,11 @@ type Headscale struct {
DERPMap *tailcfg.DERPMap
DERPServer *DERPServer
aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
sshPolicy *tailcfg.SSHPolicy
aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
aclPeerCacheMapRW sync.RWMutex
aclPeerCacheMap map[string][]string
sshPolicy *tailcfg.SSHPolicy
lastStateChange *xsync.MapOf[string, time.Time]
@@ -209,7 +211,7 @@ func (h *Headscale) redirect(w http.ResponseWriter, req *http.Request) {
http.Redirect(w, req, target, http.StatusFound)
}
// expireEphemeralNodes deletes ephemeral machine records that have not been
// expireEphemeralNodes deletes ephemeral node records that have not been
// seen for longer than h.cfg.EphemeralNodeInactivityTimeout.
func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
@@ -218,12 +220,12 @@ func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
}
}
// expireExpiredMachines expires machines that have an explicit expiry set
// expireExpiredNodes expires nodes that have an explicit expiry set
// after that expiry time has passed.
func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
func (h *Headscale) expireExpiredNodes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
h.expireExpiredMachinesWorker()
h.expireExpiredNodesWorker()
}
}
@@ -246,32 +248,32 @@ func (h *Headscale) expireEphemeralNodesWorker() {
}
for _, user := range users {
machines, err := h.ListMachinesByUser(user.Name)
nodes, err := h.ListNodesByUser(user.Name)
if err != nil {
log.Error().
Err(err).
Str("user", user.Name).
Msg("Error listing machines in user")
Msg("Error listing nodes in user")
return
}
expiredFound := false
for _, machine := range machines {
if machine.isEphemeral() && machine.LastSeen != nil &&
for _, node := range nodes {
if node.isEphemeral() && node.LastSeen != nil &&
time.Now().
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
After(node.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
expiredFound = true
log.Info().
Str("machine", machine.Hostname).
Str("node", node.Hostname).
Msg("Ephemeral client removed from database")
err = h.db.Unscoped().Delete(machine).Error
err = h.db.Unscoped().Delete(node).Error
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Msg("🤮 Cannot delete ephemeral machine from the database")
Str("node", node.Hostname).
Msg("Cannot delete ephemeral node from the database")
}
}
}
@@ -282,7 +284,7 @@ func (h *Headscale) expireEphemeralNodesWorker() {
}
}
func (h *Headscale) expireExpiredMachinesWorker() {
func (h *Headscale) expireExpiredNodesWorker() {
users, err := h.ListUsers()
if err != nil {
log.Error().Err(err).Msg("Error listing users")
@@ -291,34 +293,34 @@ func (h *Headscale) expireExpiredMachinesWorker() {
}
for _, user := range users {
machines, err := h.ListMachinesByUser(user.Name)
nodes, err := h.ListNodesByUser(user.Name)
if err != nil {
log.Error().
Err(err).
Str("user", user.Name).
Msg("Error listing machines in user")
Msg("Error listing nodes in user")
return
}
expiredFound := false
for index, machine := range machines {
if machine.isExpired() &&
machine.Expiry.After(h.getLastStateChange(user)) {
for index, node := range nodes {
if node.isExpired() &&
node.Expiry.After(h.getLastStateChange(user)) {
expiredFound = true
err := h.ExpireMachine(&machines[index])
err := h.ExpireNode(&nodes[index])
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Str("name", machine.GivenName).
Msg("🤮 Cannot expire machine")
Str("node", node.Hostname).
Str("name", node.GivenName).
Msg("Cannot expire node")
} else {
log.Info().
Str("machine", machine.Hostname).
Str("name", machine.GivenName).
Msg("Machine successfully expired")
Str("node", node.Hostname).
Str("name", node.GivenName).
Msg("Node successfully expired")
}
}
}
@@ -521,7 +523,7 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
apiRouter.Use(h.httpAuthenticationMiddleware)
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
router.PathPrefix("/").HandlerFunc(stdoutHandler)
router.PathPrefix("/").HandlerFunc(notFoundHandler)
return router
}
@@ -550,7 +552,7 @@ func (h *Headscale) Serve() error {
}
go h.expireEphemeralNodes(updateInterval)
go h.expireExpiredMachines(updateInterval)
go h.expireExpiredNodes(updateInterval)
go h.failoverSubnetRoutes(updateInterval)
@@ -818,7 +820,6 @@ func (h *Headscale) Serve() error {
// And we're done:
cancel()
os.Exit(0)
}
}
}
@@ -957,7 +958,7 @@ func (h *Headscale) getLastStateChange(users ...User) time.Time {
}
}
func stdoutHandler(
func notFoundHandler(
writer http.ResponseWriter,
req *http.Request,
) {
@@ -969,6 +970,7 @@ func stdoutHandler(
Interface("url", req.URL).
Bytes("body", body).
Msg("Request did not match")
writer.WriteHeader(http.StatusNotFound)
}
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {

View File

@@ -7,19 +7,29 @@ import (
"fmt"
"log"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"text/template"
)
var (
githubWorkflowPath = "../../.github/workflows/"
jobFileNameTemplate = `test-integration-v2-%s.yaml`
jobTemplate = template.Must(template.New("jobTemplate").Parse(`
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
jobTemplate = template.Must(
template.New("jobTemplate").
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - {{.Name}}
on: [pull_request]
concurrency:
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
@@ -40,8 +50,8 @@ jobs:
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v16
if: steps.changed-files.outputs.any_changed == 'true'
- uses: cachix/install-nix-action@v18
if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
@@ -52,6 +62,7 @@ jobs:
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
@@ -59,43 +70,87 @@ jobs:
-timeout 120m \
-parallel 1 \
-run "^{{.Name}}$"
`))
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"
`),
)
)
const workflowFilePerm = 0o600
func removeTests() {
glob := fmt.Sprintf(jobFileNameTemplate, "*")
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
if err != nil {
log.Fatalf("failed to find test files")
}
for _, file := range files {
err := os.Remove(file)
if err != nil {
log.Printf("failed to remove: %s", err)
}
}
}
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
ripgrep := exec.Command(
rgBin,
args...,
)
result, err := ripgrep.CombinedOutput()
if err != nil {
log.Printf("out: %s", result)
log.Fatalf("failed to run ripgrep: %s", err)
}
tests := strings.Split(string(result), "\n")
tests = tests[:len(tests)-1]
return tests
}
func main() {
type testConfig struct {
Name string
}
// TODO(kradalby): automatic fetch tests at runtime
tests := []string{
"TestAuthKeyLogoutAndRelogin",
"TestAuthWebFlowAuthenticationPingAll",
"TestAuthWebFlowLogoutAndRelogin",
"TestCreateTailscale",
"TestEnablingRoutes",
"TestHeadscale",
"TestUserCommand",
"TestOIDCAuthenticationPingAll",
"TestOIDCExpireNodes",
"TestPingAllByHostname",
"TestPingAllByIP",
"TestPreAuthKeyCommand",
"TestPreAuthKeyCommandReusableEphemeral",
"TestPreAuthKeyCommandWithoutExpiry",
"TestResolveMagicDNS",
"TestSSHIsBlockedInACL",
"TestSSHMultipleUsersAllToAll",
"TestSSHNoSSHConfigured",
"TestSSHOneUserAllToAll",
"TestSSUserOnlyIsolation",
"TestTaildrop",
"TestTailscaleNodesJoiningHeadcale",
}
tests := findTests()
removeTests()
for _, test := range tests {
log.Printf("generating workflow for %s", test)
var content bytes.Buffer
if err := jobTemplate.Execute(&content, testConfig{
@@ -104,9 +159,9 @@ func main() {
log.Fatalf("failed to render template: %s", err)
}
path := "../../.github/workflows/" + fmt.Sprintf(jobFileNameTemplate, test)
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
err := os.WriteFile(path, content.Bytes(), workflowFilePerm)
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
if err != nil {
log.Fatalf("failed to write github job: %s", err)
}
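For reference, a minimal standalone sketch of the extraction findTests delegates to ripgrep: the same pattern and "$1" replacement applied with Go's regexp package (the test line below is illustrative).

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the rg invocation in findTests; the replacement keeps
	// only the captured test name.
	re := regexp.MustCompile(`func (Test.+)\(.*`)

	line := "func TestPingAllByIP(t *testing.T) {"
	fmt.Println(re.ReplaceAllString(line, "$1")) // TestPingAllByIP
}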

View File

@@ -0,0 +1,22 @@
package cli
import (
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(configTestCmd)
}
var configTestCmd = &cobra.Command{
Use: "configtest",
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
Run: func(cmd *cobra.Command, args []string) {
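// getHeadscaleApp loads and validates the configuration; if that fails the
// command logs fatally and exits non-zero, so `headscale configtest` can be
// used as a pre-flight check before (re)starting the server.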
_, err := getHeadscaleApp()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
},
}

View File

@@ -27,7 +27,13 @@ func init() {
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().StringP("user", "n", "", "User")
createNodeCmd.Flags().StringP("user", "u", "", "User")
createNodeCmd.Flags().StringP("namespace", "n", "", "User")
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
createNodeNamespaceFlag.Hidden = true
err = createNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
@@ -51,7 +57,7 @@ var debugCmd = &cobra.Command{
var createNodeCmd = &cobra.Command{
Use: "create-node",
Short: "Create a node (machine) that can be registered with `nodes register <>` command",
Short: "Create a node that can be registered with `nodes register <>` command",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -77,7 +83,7 @@ var createNodeCmd = &cobra.Command{
return
}
machineKey, err := cmd.Flags().GetString("key")
nodeKey, err := cmd.Flags().GetString("key")
if err != nil {
ErrorOutput(
err,
@@ -87,7 +93,7 @@ var createNodeCmd = &cobra.Command{
return
}
if !headscale.NodePublicKeyRegex.Match([]byte(machineKey)) {
if !headscale.NodePublicKeyRegex.Match([]byte(nodeKey)) {
err = errPreAuthKeyMalformed
ErrorOutput(
err,
@@ -109,24 +115,24 @@ var createNodeCmd = &cobra.Command{
return
}
request := &v1.DebugCreateMachineRequest{
Key: machineKey,
request := &v1.DebugCreateNodeRequest{
Key: nodeKey,
Name: name,
User: user,
Routes: routes,
}
response, err := client.DebugCreateMachine(ctx, request)
response, err := client.DebugCreateNode(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create machine: %s", status.Convert(err).Message()),
fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()),
output,
)
return
}
SuccessOutput(response.Machine, "Machine created", output)
SuccessOutput(response.Node, "Node created", output)
},
}

View File

@@ -19,11 +19,23 @@ import (
func init() {
rootCmd.AddCommand(nodeCmd)
listNodesCmd.Flags().StringP("user", "n", "", "Filter by user")
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
listNodesNamespaceFlag.Hidden = true
nodeCmd.AddCommand(listNodesCmd)
registerNodeCmd.Flags().StringP("user", "n", "", "User")
registerNodeCmd.Flags().StringP("user", "u", "", "User")
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
registerNodeNamespaceFlag.Hidden = true
err := registerNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatalf(err.Error())
@@ -63,7 +75,12 @@ func init() {
log.Fatalf(err.Error())
}
moveNodeCmd.Flags().StringP("user", "n", "", "New user")
moveNodeCmd.Flags().StringP("user", "u", "", "New user")
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
moveNodeNamespaceFlag.Hidden = true
err = moveNodeCmd.MarkFlagRequired("user")
if err != nil {
@@ -90,7 +107,7 @@ var nodeCmd = &cobra.Command{
var registerNodeCmd = &cobra.Command{
Use: "register",
Short: "Registers a machine to your network",
Short: "Registers a node to your network",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetString("user")
@@ -115,12 +132,12 @@ var registerNodeCmd = &cobra.Command{
return
}
request := &v1.RegisterMachineRequest{
request := &v1.RegisterNodeRequest{
Key: machineKey,
User: user,
}
response, err := client.RegisterMachine(ctx, request)
response, err := client.RegisterNode(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -135,8 +152,8 @@ var registerNodeCmd = &cobra.Command{
}
SuccessOutput(
response.Machine,
fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
response.Node,
fmt.Sprintf("Node %s registered", response.Node.GivenName), output)
},
}
@@ -163,11 +180,11 @@ var listNodesCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.ListMachinesRequest{
request := &v1.ListNodesRequest{
User: user,
}
response, err := client.ListMachines(ctx, request)
response, err := client.ListNodes(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -179,12 +196,12 @@ var listNodesCmd = &cobra.Command{
}
if output != "" {
SuccessOutput(response.Machines, "", output)
SuccessOutput(response.Nodes, "", output)
return
}
tableData, err := nodesToPtables(user, showTags, response.Machines)
tableData, err := nodesToPtables(user, showTags, response.Nodes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -227,11 +244,11 @@ var expireNodeCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.ExpireMachineRequest{
MachineId: identifier,
request := &v1.ExpireNodeRequest{
NodeId: identifier,
}
response, err := client.ExpireMachine(ctx, request)
response, err := client.ExpireNode(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -245,7 +262,7 @@ var expireNodeCmd = &cobra.Command{
return
}
SuccessOutput(response.Machine, "Machine expired", output)
SuccessOutput(response.Node, "Node expired", output)
},
}
@@ -274,12 +291,12 @@ var renameNodeCmd = &cobra.Command{
if len(args) > 0 {
newName = args[0]
}
request := &v1.RenameMachineRequest{
MachineId: identifier,
NewName: newName,
request := &v1.RenameNodeRequest{
NodeId: identifier,
NewName: newName,
}
response, err := client.RenameMachine(ctx, request)
response, err := client.RenameNode(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -293,7 +310,7 @@ var renameNodeCmd = &cobra.Command{
return
}
SuccessOutput(response.Machine, "Machine renamed", output)
SuccessOutput(response.Node, "Node renamed", output)
},
}
@@ -319,11 +336,11 @@ var deleteNodeCmd = &cobra.Command{
defer cancel()
defer conn.Close()
getRequest := &v1.GetMachineRequest{
MachineId: identifier,
getRequest := &v1.GetNodeRequest{
NodeId: identifier,
}
getResponse, err := client.GetMachine(ctx, getRequest)
getResponse, err := client.GetNode(ctx, getRequest)
if err != nil {
ErrorOutput(
err,
@@ -337,8 +354,8 @@ var deleteNodeCmd = &cobra.Command{
return
}
deleteRequest := &v1.DeleteMachineRequest{
MachineId: identifier,
deleteRequest := &v1.DeleteNodeRequest{
NodeId: identifier,
}
confirm := false
@@ -347,7 +364,7 @@ var deleteNodeCmd = &cobra.Command{
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetMachine().Name,
getResponse.GetNode().Name,
),
}
err = survey.AskOne(prompt, &confirm)
@@ -357,7 +374,7 @@ var deleteNodeCmd = &cobra.Command{
}
if confirm || force {
response, err := client.DeleteMachine(ctx, deleteRequest)
response, err := client.DeleteNode(ctx, deleteRequest)
if output != "" {
SuccessOutput(response, "", output)
@@ -419,11 +436,11 @@ var moveNodeCmd = &cobra.Command{
defer cancel()
defer conn.Close()
getRequest := &v1.GetMachineRequest{
MachineId: identifier,
getRequest := &v1.GetNodeRequest{
NodeId: identifier,
}
_, err = client.GetMachine(ctx, getRequest)
_, err = client.GetNode(ctx, getRequest)
if err != nil {
ErrorOutput(
err,
@@ -437,12 +454,12 @@ var moveNodeCmd = &cobra.Command{
return
}
moveRequest := &v1.MoveMachineRequest{
MachineId: identifier,
User: user,
moveRequest := &v1.MoveNodeRequest{
NodeId: identifier,
User: user,
}
moveResponse, err := client.MoveMachine(ctx, moveRequest)
moveResponse, err := client.MoveNode(ctx, moveRequest)
if err != nil {
ErrorOutput(
err,
@@ -456,14 +473,14 @@ var moveNodeCmd = &cobra.Command{
return
}
SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
SuccessOutput(moveResponse.Node, "Node moved to another user", output)
},
}
func nodesToPtables(
currentUser string,
showTags bool,
machines []*v1.Machine,
nodes []*v1.Node,
) (pterm.TableData, error) {
tableHeader := []string{
"ID",
@@ -488,23 +505,23 @@ func nodesToPtables(
}
tableData := pterm.TableData{tableHeader}
for _, machine := range machines {
for _, node := range nodes {
var ephemeral bool
if machine.PreAuthKey != nil && machine.PreAuthKey.Ephemeral {
if node.PreAuthKey != nil && node.PreAuthKey.Ephemeral {
ephemeral = true
}
var lastSeen time.Time
var lastSeenTime string
if machine.LastSeen != nil {
lastSeen = machine.LastSeen.AsTime()
if node.LastSeen != nil {
lastSeen = node.LastSeen.AsTime()
lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
}
var expiry time.Time
var expiryTime string
if machine.Expiry != nil {
expiry = machine.Expiry.AsTime()
if node.Expiry != nil {
expiry = node.Expiry.AsTime()
expiryTime = expiry.Format("2006-01-02 15:04:05")
} else {
expiryTime = "N/A"
@@ -512,7 +529,7 @@ func nodesToPtables(
var machineKey key.MachinePublic
err := machineKey.UnmarshalText(
[]byte(headscale.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
[]byte(headscale.MachinePublicKeyEnsurePrefix(node.MachineKey)),
)
if err != nil {
machineKey = key.MachinePublic{}
@@ -520,14 +537,14 @@ func nodesToPtables(
var nodeKey key.NodePublic
err = nodeKey.UnmarshalText(
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
[]byte(headscale.NodePublicKeyEnsurePrefix(node.NodeKey)),
)
if err != nil {
return nil, err
}
var online string
if machine.Online {
if node.Online {
online = pterm.LightGreen("online")
} else {
online = pterm.LightRed("offline")
@@ -541,36 +558,36 @@ func nodesToPtables(
}
var forcedTags string
for _, tag := range machine.ForcedTags {
for _, tag := range node.ForcedTags {
forcedTags += "," + tag
}
forcedTags = strings.TrimLeft(forcedTags, ",")
var invalidTags string
for _, tag := range machine.InvalidTags {
if !contains(machine.ForcedTags, tag) {
for _, tag := range node.InvalidTags {
if !contains(node.ForcedTags, tag) {
invalidTags += "," + pterm.LightRed(tag)
}
}
invalidTags = strings.TrimLeft(invalidTags, ",")
var validTags string
for _, tag := range machine.ValidTags {
if !contains(machine.ForcedTags, tag) {
for _, tag := range node.ValidTags {
if !contains(node.ForcedTags, tag) {
validTags += "," + pterm.LightGreen(tag)
}
}
validTags = strings.TrimLeft(validTags, ",")
var user string
if currentUser == "" || (currentUser == machine.User.Name) {
user = pterm.LightMagenta(machine.User.Name)
if currentUser == "" || (currentUser == node.User.Name) {
user = pterm.LightMagenta(node.User.Name)
} else {
// Shared into this user
user = pterm.LightYellow(machine.User.Name)
user = pterm.LightYellow(node.User.Name)
}
var IPV4Address string
var IPV6Address string
for _, addr := range machine.IpAddresses {
for _, addr := range node.IpAddresses {
if netip.MustParseAddr(addr).Is4() {
IPV4Address = addr
} else {
@@ -579,9 +596,9 @@ func nodesToPtables(
}
nodeData := []string{
strconv.FormatUint(machine.Id, headscale.Base10),
machine.Name,
machine.GetGivenName(),
strconv.FormatUint(node.Id, headscale.Base10),
node.Name,
node.GetGivenName(),
machineKey.ShortString(),
nodeKey.ShortString(),
user,
@@ -638,8 +655,8 @@ var tagCmd = &cobra.Command{
// Sending tags to machine
request := &v1.SetTagsRequest{
MachineId: identifier,
Tags: tagsToSet,
NodeId: identifier,
Tags: tagsToSet,
}
resp, err := client.SetTags(ctx, request)
if err != nil {
@@ -654,8 +671,8 @@ var tagCmd = &cobra.Command{
if resp != nil {
SuccessOutput(
resp.GetMachine(),
"Machine updated",
resp.GetNode(),
"Node updated",
output,
)
}

View File

@@ -20,7 +20,13 @@ const (
func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().StringP("user", "n", "", "User")
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
pakNamespaceFlag.Hidden = true
err := preauthkeysCmd.MarkPersistentFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")

View File

@@ -12,10 +12,15 @@ import (
"github.com/tcnksm/go-latest"
)
const (
deprecateNamespaceMessage = "use --user"
)
var cfgFile string = ""
func init() {
if len(os.Args) > 1 && (os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
if len(os.Args) > 1 &&
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
return
}

View File

@@ -3,8 +3,10 @@ package cli
import (
"fmt"
"log"
"net/netip"
"strconv"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
@@ -33,6 +35,13 @@ func init() {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(disableRouteCmd)
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = deleteRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(deleteRouteCmd)
}
var routesCmd = &cobra.Command{
@@ -48,11 +57,11 @@ var listRoutesCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
machineID, err := cmd.Flags().GetUint64("identifier")
nodeID, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
fmt.Sprintf("Error getting node id from flag: %s", err),
output,
)
@@ -65,7 +74,7 @@ var listRoutesCmd = &cobra.Command{
var routes []*v1.Route
if machineID == 0 {
if nodeID == 0 {
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
if err != nil {
ErrorOutput(
@@ -85,13 +94,13 @@ var listRoutesCmd = &cobra.Command{
routes = response.Routes
} else {
response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
MachineId: machineID,
response, err := client.GetNodeRoutes(ctx, &v1.GetNodeRoutesRequest{
NodeId: nodeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
fmt.Sprintf("Cannot get routes for node %d: %s", nodeID, status.Convert(err).Message()),
output,
)
@@ -138,7 +147,7 @@ var enableRouteCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
fmt.Sprintf("Error getting node id from flag: %s", err),
output,
)
@@ -181,7 +190,7 @@ var disableRouteCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
fmt.Sprintf("Error getting node id from flag: %s", err),
output,
)
@@ -198,7 +207,50 @@ var disableRouteCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
var deleteRouteCmd = &cobra.Command{
Use: "delete",
Short: "Delete a given route",
Long: `This command will delete a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting node id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output,
)
@@ -215,17 +267,30 @@ var disableRouteCmd = &cobra.Command{
// routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes []*v1.Route) pterm.TableData {
tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}
tableData := pterm.TableData{{"ID", "Node", "Prefix", "Advertised", "Enabled", "Primary"}}
for _, route := range routes {
var isPrimaryStr string
prefix, err := netip.ParsePrefix(route.Prefix)
if err != nil {
log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
continue
}
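// ExitRouteV4/ExitRouteV6 (0.0.0.0/0 and ::/0) are shown with "-" in the
// Primary column rather than true/false, since primary-route failover is not
// meaningful for exit routes.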
if prefix == headscale.ExitRouteV4 || prefix == headscale.ExitRouteV6 {
isPrimaryStr = "-"
} else {
isPrimaryStr = strconv.FormatBool(route.IsPrimary)
}
tableData = append(tableData,
[]string{
strconv.FormatUint(route.Id, Base10),
route.Machine.GivenName,
route.Node.GivenName,
route.Prefix,
strconv.FormatBool(route.Advertised),
strconv.FormatBool(route.Enabled),
strconv.FormatBool(route.IsPrimary),
isPrimaryStr,
})
}

View File

@@ -27,7 +27,7 @@ const (
var userCmd = &cobra.Command{
Use: "users",
Short: "Manage the users of Headscale",
Aliases: []string{"user", "namespace", "ns"},
Aliases: []string{"user", "namespace", "namespaces", "ns"},
}
var createUserCmd = &cobra.Command{

View File

@@ -14,7 +14,7 @@ import (
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
)
const (

View File

@@ -6,11 +6,25 @@ import (
"github.com/efekarakus/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/pkg/profile"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
)
func main() {
if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
err := os.MkdirAll(profilePath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}
defer profile.Start(profile.ProfilePath(profilePath)).Stop()
} else {
defer profile.Start().Stop()
}
}
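// In practice: starting headscale with HEADSCALE_PROFILING_ENABLED set turns
// on pkg/profile's default (CPU) profiling, and HEADSCALE_PROFILING_PATH
// additionally pins the output directory; the profile is written when the
// deferred Stop runs as main returns.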
var colors bool
switch l := termcolor.SupportLevel(os.Stderr); l {
case termcolor.Level16M:

View File

@@ -58,7 +58,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "./db.sqlite")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
@@ -101,7 +101,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "./db.sqlite")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")

View File

@@ -44,9 +44,7 @@ grpc_allow_insecure: false
# and Tailscale clients.
# The private key file will be autogenerated if it's missing.
#
# For production:
# /var/lib/headscale/private.key
private_key_path: ./private.key
private_key_path: /var/lib/headscale/private.key
# The Noise section includes specific configuration for the
# TS2021 Noise protocol
@@ -55,10 +53,7 @@ noise:
# traffic between headscale and Tailscale clients when
# using the new Noise-based protocol. It must be different
# from the legacy private key.
#
# For production:
# private_key_path: /var/lib/headscale/noise_private.key
private_key_path: ./noise_private.key
private_key_path: /var/lib/headscale/noise_private.key
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
@@ -137,8 +132,7 @@ node_update_check_interval: 10s
db_type: sqlite3
# For production:
# db_path: /var/lib/headscale/db.sqlite
db_path: ./db.sqlite
db_path: /var/lib/headscale/db.sqlite
# # Postgres config
# If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
@@ -172,8 +166,7 @@ tls_letsencrypt_hostname: ""
# Path to store certificates and metadata needed by
# letsencrypt
# For production:
# tls_letsencrypt_cache_dir: /var/lib/headscale/cache
tls_letsencrypt_cache_dir: ./cache
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
@@ -263,8 +256,7 @@ dns_config:
# Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like:
# unix_socket: /var/run/headscale.sock
unix_socket: ./headscale.sock
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"
#
# headscale supports experimental OpenID connect support,
@@ -282,27 +274,38 @@ unix_socket_permission: "0770"
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
# # client_secret and client_secret_path are mutually exclusive.
#
# Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
# # The amount of time from when a node is authenticated with OpenID until it
# # expires and needs to reauthenticate.
# # Setting the value to "0" will mean no expiry.
# expiry: 180d
#
# # Use the expiry from the token received from OpenID when the user logged
# # in; this will typically lead to a frequent need to reauthenticate and should
# # only be enabled if you know what you are doing.
# # Note: enabling this will cause `oidc.expiry` to be ignored.
# use_expiry_from_token: false
#
# # Customize the scopes used in the OIDC flow; they default to "openid", "profile" and "email".
# # Custom query parameters can also be added to the Authorize Endpoint request.
#
# scope: ["openid", "profile", "email", "custom"]
# extra_params:
# domain_hint: example.com
#
# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# authentication request will be rejected.
# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# # authentication request will be rejected.
#
# allowed_domains:
# - example.com
# Groups from keycloak have a leading '/'
# # Note: Groups from keycloak have a leading '/'
# allowed_groups:
# - /headscale
# allowed_users:
# - alice@example.com
#
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
# If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
# # If `strip_email_domain` is set to `false` the domain part will NOT be removed, resulting in the following
# user: `first-name.last-name.example.com`
#
# strip_email_domain: true

View File

@@ -11,6 +11,7 @@ import (
"time"
"github.com/coreos/go-oidc/v3/oidc"
"github.com/prometheus/common/model"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
@@ -25,9 +26,14 @@ const (
JSONLogFormat = "json"
TextLogFormat = "text"
defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days
maxDuration time.Duration = 1<<63 - 1
)
var errOidcMutuallyExclusive = errors.New("oidc_client_secret and oidc_client_secret_path are mutually exclusive")
var errOidcMutuallyExclusive = errors.New(
"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
)
// Config contains the initial Headscale configuration.
type Config struct {
@@ -101,6 +107,8 @@ type OIDCConfig struct {
AllowedUsers []string
AllowedGroups []string
StripEmaildomain bool
Expiry time.Duration
UseExpiryFromToken bool
}
type DERPConfig struct {
@@ -180,6 +188,8 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
viper.SetDefault("oidc.strip_email_domain", true)
viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
viper.SetDefault("oidc.expiry", "180d")
viper.SetDefault("oidc.use_expiry_from_token", false)
viper.SetDefault("logtail.enabled", false)
viper.SetDefault("randomize_client_port", false)
@@ -411,34 +421,32 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
}
if viper.IsSet("dns_config.restricted_nameservers") {
if len(dnsConfig.Resolvers) > 0 {
dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
domains := []string{}
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
)
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
dnsConfig.Routes[domain] = restrictedResolvers
}
} else {
log.Warn().
Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
dnsConfig.Routes[domain] = restrictedResolvers
domains = append(domains, domain)
}
dnsConfig.Domains = domains
}
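// Example (illustrative values): with
//
//	dns_config.restricted_nameservers:
//	  example.com: ["1.1.1.1"]
//
// this block yields dnsConfig.Routes["example.com"] = []*dnstype.Resolver{{Addr: "1.1.1.1"}}
// and adds "example.com" to dnsConfig.Domains, i.e. split DNS for that domain.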
if viper.IsSet("dns_config.domains") {
@@ -603,6 +611,22 @@ func GetHeadscaleConfig() (*Config, error) {
AllowedUsers: viper.GetStringSlice("oidc.allowed_users"),
AllowedGroups: viper.GetStringSlice("oidc.allowed_groups"),
StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
Expiry: func() time.Duration {
// if set to 0, we assume no expiry
if value := viper.GetString("oidc.expiry"); value == "0" {
return maxDuration
} else {
expiry, err := model.ParseDuration(value)
if err != nil {
log.Warn().Msg("failed to parse oidc.expiry, defaulting back to 180 days")
return defaultOIDCExpiryTime
}
return time.Duration(expiry)
}
}(),
UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
},
LogTail: logConfig,
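For reference, a minimal standalone sketch (illustrative only, not headscale's actual `GetDNSConfig`) of the restricted-nameserver mapping in the hunk above: a `domain -> nameservers` map is turned into per-domain resolver routes, and each routed domain is collected into the match-domain list.

```go
// Illustrative sketch only: map restricted nameservers per domain into
// dnstype.Resolver routes, collecting the routed domains as well.
package main

import (
	"fmt"
	"net/netip"

	"tailscale.com/types/dnstype"
)

func buildRestrictedRoutes(
	restricted map[string][]string,
) (map[string][]*dnstype.Resolver, []string) {
	routes := make(map[string][]*dnstype.Resolver)
	domains := []string{}

	for domain, nameservers := range restricted {
		resolvers := make([]*dnstype.Resolver, 0, len(nameservers))
		for _, nameserverStr := range nameservers {
			addr, err := netip.ParseAddr(nameserverStr)
			if err != nil {
				// Skip nameservers that are not valid IP addresses.
				continue
			}
			resolvers = append(resolvers, &dnstype.Resolver{Addr: addr.String()})
		}

		routes[domain] = resolvers
		domains = append(domains, domain)
	}

	return routes, domains
}

func main() {
	routes, domains := buildRestrictedRoutes(map[string][]string{
		"internal.example.com": {"10.0.0.53"},
	})
	fmt.Println(domains, routes["internal.example.com"][0].Addr)
}
```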

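Similarly, a self-contained sketch (helper name is hypothetical, not the actual headscale code) of the `oidc.expiry` handling added above: `"0"` means "never expire", anything else is parsed with `model.ParseDuration`, and parse failures fall back to the 180-day default.

```go
// Standalone sketch of the oidc.expiry parsing: "180d"-style values go
// through prometheus/common/model, "0" is treated as "no expiry".
package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

const (
	defaultOIDCExpiryTime               = 180 * 24 * time.Hour
	maxDuration           time.Duration = 1<<63 - 1
)

func oidcExpiryFromString(value string) time.Duration {
	// "0" is treated as "no expiry", mirroring the hunk above.
	if value == "0" {
		return maxDuration
	}

	expiry, err := model.ParseDuration(value)
	if err != nil {
		// Fall back to the 180 day default on parse errors.
		return defaultOIDCExpiryTime
	}

	return time.Duration(expiry)
}

func main() {
	fmt.Println(oidcExpiryFromString("180d")) // 4320h0m0s
	fmt.Println(oidcExpiryFromString("0"))    // effectively never expires
}
```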
97
db.go
View File

@@ -43,46 +43,53 @@ func (h *Headscale) initDB() error {
_ = db.Migrator().RenameTable("namespaces", "users")
// the big rename from Machine to Node
_ = db.Migrator().RenameTable("machines", "nodes")
_ = db.Migrator().RenameColumn(&Route{}, "machine_id", "node_id")
err = db.AutoMigrate(&User{})
if err != nil {
return err
}
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
_ = db.Migrator().RenameColumn(&Node{}, "namespace_id", "user_id")
_ = db.Migrator().RenameColumn(&PreAuthKey{}, "namespace_id", "user_id")
_ = db.Migrator().RenameColumn(&Node{}, "ip_address", "ip_addresses")
_ = db.Migrator().RenameColumn(&Node{}, "name", "hostname")
// GivenName is used as the primary source of DNS names, make sure
// the field is populated and normalized if it was not when the
// machine was registered.
_ = db.Migrator().RenameColumn(&Machine{}, "nickname", "given_name")
// node was registered.
_ = db.Migrator().RenameColumn(&Node{}, "nickname", "given_name")
// If the Machine table has a column for registered,
// If the Node table has a column for registered,
// find all occurrences of "false" and drop them. Then
// remove the column.
if db.Migrator().HasColumn(&Machine{}, "registered") {
if db.Migrator().HasColumn(&Node{}, "registered") {
log.Info().
Msg(`Database has legacy "registered" column in machine, removing...`)
Msg(`Database has legacy "registered" column in node, removing...`)
machines := Machines{}
if err := h.db.Not("registered").Find(&machines).Error; err != nil {
nodes := Nodes{}
if err := h.db.Not("registered").Find(&nodes).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for _, machine := range machines {
for _, node := range nodes {
log.Info().
Str("machine", machine.Hostname).
Str("machine_key", machine.MachineKey).
Msg("Deleting unregistered machine")
if err := h.db.Delete(&Machine{}, machine.ID).Error; err != nil {
Str("node", node.Hostname).
Str("machine_key", node.MachineKey).
Msg("Deleting unregistered node")
if err := h.db.Delete(&Node{}, node.ID).Error; err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Str("machine_key", machine.MachineKey).
Msg("Error deleting unregistered machine")
Str("node", node.Hostname).
Str("machine_key", node.MachineKey).
Msg("Error deleting unregistered node")
}
}
err := db.Migrator().DropColumn(&Machine{}, "registered")
err := db.Migrator().DropColumn(&Node{}, "registered")
if err != nil {
log.Error().Err(err).Msg("Error dropping registered column")
}
@@ -93,21 +100,21 @@ func (h *Headscale) initDB() error {
return err
}
if db.Migrator().HasColumn(&Machine{}, "enabled_routes") {
log.Info().Msgf("Database has legacy enabled_routes column in machine, migrating...")
if db.Migrator().HasColumn(&Node{}, "enabled_routes") {
log.Info().Msgf("Database has legacy enabled_routes column in node, migrating...")
type MachineAux struct {
type NodeAux struct {
ID uint64
EnabledRoutes IPPrefixes
}
machinesAux := []MachineAux{}
err := db.Table("machines").Select("id, enabled_routes").Scan(&machinesAux).Error
nodesAux := []NodeAux{}
err := db.Table("nodes").Select("id, enabled_routes").Scan(&nodesAux).Error
if err != nil {
log.Fatal().Err(err).Msg("Error accessing db")
}
for _, machine := range machinesAux {
for _, prefix := range machine.EnabledRoutes {
for _, node := range nodesAux {
for _, prefix := range node.EnabledRoutes {
if err != nil {
log.Error().
Err(err).
@@ -117,8 +124,8 @@ func (h *Headscale) initDB() error {
continue
}
err = db.Preload("Machine").
Where("machine_id = ? AND prefix = ?", machine.ID, IPPrefix(prefix)).
err = db.Preload("Node").
Where("node_id = ? AND prefix = ?", node.ID, IPPrefix(prefix)).
First(&Route{}).
Error
if err == nil {
@@ -130,7 +137,7 @@ func (h *Headscale) initDB() error {
}
route := Route{
MachineID: machine.ID,
NodeID: node.ID,
Advertised: true,
Enabled: true,
Prefix: IPPrefix(prefix),
@@ -139,51 +146,51 @@ func (h *Headscale) initDB() error {
log.Error().Err(err).Msg("Error creating route")
} else {
log.Info().
Uint64("machine_id", route.MachineID).
Uint64("node_id", route.NodeID).
Str("prefix", prefix.String()).
Msg("Route migrated")
}
}
}
err = db.Migrator().DropColumn(&Machine{}, "enabled_routes")
err = db.Migrator().DropColumn(&Node{}, "enabled_routes")
if err != nil {
log.Error().Err(err).Msg("Error dropping enabled_routes column")
}
}
err = db.AutoMigrate(&Machine{})
err = db.AutoMigrate(&Node{})
if err != nil {
return err
}
if db.Migrator().HasColumn(&Machine{}, "given_name") {
machines := Machines{}
if err := h.db.Find(&machines).Error; err != nil {
if db.Migrator().HasColumn(&Node{}, "given_name") {
nodes := Nodes{}
if err := h.db.Find(&nodes).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for item, machine := range machines {
if machine.GivenName == "" {
for item, node := range nodes {
if node.GivenName == "" {
normalizedHostname, err := NormalizeToFQDNRules(
machine.Hostname,
node.Hostname,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
log.Error().
Caller().
Str("hostname", machine.Hostname).
Str("hostname", node.Hostname).
Err(err).
Msg("Failed to normalize machine hostname in DB migration")
Msg("Failed to normalize node hostname in DB migration")
}
err = h.RenameMachine(&machines[item], normalizedHostname)
err = h.RenameNode(&nodes[item], normalizedHostname)
if err != nil {
log.Error().
Caller().
Str("hostname", machine.Hostname).
Str("hostname", node.Hostname).
Err(err).
Msg("Failed to save normalized machine name in DB migration")
Msg("Failed to save normalized node name in DB migration")
}
}
}
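As an aside, a minimal sketch (illustrative struct and database names, not headscale's `initDB`) of the GORM migration pattern used above: best-effort renames of legacy tables and columns, followed by `AutoMigrate` to bring the schema up to date.

```go
// Minimal GORM migration sketch: rename legacy names if present, then
// AutoMigrate the current struct definition.
package main

import (
	"log"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Node struct {
	ID       uint64
	Hostname string
}

func main() {
	db, err := gorm.Open(sqlite.Open("file::memory:?cache=shared"), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// Best-effort renames: errors are ignored when the legacy names do not exist.
	_ = db.Migrator().RenameTable("machines", "nodes")
	_ = db.Migrator().RenameColumn(&Node{}, "name", "hostname")

	// Bring the schema up to date for the current struct definition.
	if err := db.AutoMigrate(&Node{}); err != nil {
		log.Fatal(err)
	}
}
```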
@@ -321,7 +328,7 @@ func (hi *HostInfo) Scan(destination interface{}) error {
return json.Unmarshal([]byte(value), hi)
default:
return fmt.Errorf("%w: unexpected data type %T", ErrMachineAddressesInvalid, destination)
return fmt.Errorf("%w: unexpected data type %T", ErrNodeAddressesInvalid, destination)
}
}
@@ -367,7 +374,7 @@ func (i *IPPrefixes) Scan(destination interface{}) error {
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", ErrMachineAddressesInvalid, destination)
return fmt.Errorf("%w: unexpected data type %T", ErrNodeAddressesInvalid, destination)
}
}
@@ -389,7 +396,7 @@ func (i *StringList) Scan(destination interface{}) error {
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", ErrMachineAddressesInvalid, destination)
return fmt.Errorf("%w: unexpected data type %T", ErrNodeAddressesInvalid, destination)
}
}
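For context, a self-contained sketch (hypothetical type, not the headscale implementation) of the `database/sql` Scanner/Valuer pattern these hunks touch, where a Go slice is stored as JSON text in a single column:

```go
// Scanner/Valuer sketch: a slice type serialized to and from JSON text.
package main

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
	"fmt"
)

var errAddressesInvalid = errors.New("failed to parse addresses")

type StringList []string

// Scan decodes a JSON value read from the database into the slice.
func (l *StringList) Scan(destination interface{}) error {
	switch value := destination.(type) {
	case []byte:
		return json.Unmarshal(value, l)
	case string:
		return json.Unmarshal([]byte(value), l)
	default:
		return fmt.Errorf("%w: unexpected data type %T", errAddressesInvalid, destination)
	}
}

// Value encodes the slice as JSON before it is written to the database.
func (l StringList) Value() (driver.Value, error) {
	bytes, err := json.Marshal(l)
	return string(bytes), err
}

func main() {
	var l StringList
	_ = l.Scan(`["a","b"]`)
	fmt.Println(l)

	v, _ := l.Value()
	fmt.Println(v)
}
```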

View File

@@ -10,7 +10,7 @@ import (
"time"
"github.com/rs/zerolog/log"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
"tailscale.com/tailcfg"
)

View File

@@ -157,14 +157,14 @@ func (h *Headscale) DERPHandler(
if !fastStart {
pubKey := h.privateKey.Public()
pubKeyStr := pubKey.UntypedHexString() //nolint
pubKeyStr, _ := pubKey.MarshalText() //nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+
"Connection: Upgrade\r\n"+
"Derp-Version: %v\r\n"+
"Derp-Public-Key: %s\r\n\r\n",
derp.ProtocolVersion,
pubKeyStr)
string(pubKeyStr))
}
h.DERPServer.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())
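A quick illustration of the two key encodings swapped in this hunk, assuming `tailscale.com/types/key`; the exact prefixed textual form is an assumption:

```go
// Compare the bare-hex and text-marshaled forms of a machine public key.
package main

import (
	"fmt"

	"tailscale.com/types/key"
)

func main() {
	pub := key.NewMachine().Public()

	// Bare hexadecimal form, as used before this change.
	fmt.Println(pub.UntypedHexString())

	// Textual form from encoding.TextMarshaler, as used after this change;
	// note the []byte result must be converted to string for %s formatting.
	text, err := pub.MarshalText()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(text))
}
```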

22
dns.go
View File

@@ -159,22 +159,22 @@ func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
}
// If any nextdns DoH resolvers are present in the list of resolvers it will
// take metadata from the machine metadata and instruct tailscale to add it
// take metadata from the node metadata and instruct tailscale to add it
// to the requests. This makes it possible to identify from which device the
// requests come in the NextDNS dashboard.
//
// This will produce a resolver like:
// `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
func addNextDNSMetadata(resolvers []*dnstype.Resolver, machine Machine) {
func addNextDNSMetadata(resolvers []*dnstype.Resolver, node Node) {
for _, resolver := range resolvers {
if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
attrs := url.Values{
"device_name": []string{machine.Hostname},
"device_model": []string{machine.HostInfo.OS},
"device_name": []string{node.Hostname},
"device_model": []string{node.HostInfo.OS},
}
if len(machine.IPAddresses) > 0 {
attrs.Add("device_ip", machine.IPAddresses[0].String())
if len(node.IPAddresses) > 0 {
attrs.Add("device_ip", node.IPAddresses[0].String())
}
resolver.Addr = fmt.Sprintf("%s?%s", resolver.Addr, attrs.Encode())
@@ -185,8 +185,8 @@ func addNextDNSMetadata(resolvers []*dnstype.Resolver, machine Machine) {
func getMapResponseDNSConfig(
dnsConfigOrig *tailcfg.DNSConfig,
baseDomain string,
machine Machine,
peers Machines,
node Node,
peers Nodes,
) *tailcfg.DNSConfig {
var dnsConfig *tailcfg.DNSConfig = dnsConfigOrig.Clone()
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
@@ -195,13 +195,13 @@ func getMapResponseDNSConfig(
dnsConfig.Domains,
fmt.Sprintf(
"%s.%s",
machine.User.Name,
node.User.Name,
baseDomain,
),
)
userSet := mapset.NewSet[User]()
userSet.Add(machine.User)
userSet.Add(node.User)
for _, p := range peers {
userSet.Add(p.User)
}
@@ -213,7 +213,7 @@ func getMapResponseDNSConfig(
dnsConfig = dnsConfigOrig
}
addNextDNSMetadata(dnsConfig.Resolvers, machine)
addNextDNSMetadata(dnsConfig.Resolvers, node)
return dnsConfig
}
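A standalone sketch (hypothetical helper, not `addNextDNSMetadata` itself) of how a NextDNS DoH resolver address gets tagged with device metadata as query parameters:

```go
// Append device metadata to a NextDNS DoH resolver URL, producing e.g.
// https://dns.nextdns.io/<id>?device_ip=...&device_model=...&device_name=...
package main

import (
	"fmt"
	"net/url"
	"strings"
)

const nextDNSDoHPrefix = "https://dns.nextdns.io"

func addDeviceMetadata(addr, hostname, osName, ip string) string {
	if !strings.HasPrefix(addr, nextDNSDoHPrefix) {
		return addr // only NextDNS resolvers are rewritten
	}

	attrs := url.Values{
		"device_name":  []string{hostname},
		"device_model": []string{osName},
	}
	if ip != "" {
		attrs.Add("device_ip", ip)
	}

	return fmt.Sprintf("%s?%s", addr, attrs.Encode())
}

func main() {
	fmt.Println(addDeviceMetadata(
		"https://dns.nextdns.io/abc123", "node1", "linux", "100.64.0.1"))
}
```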

View File

@@ -157,10 +157,10 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
_, err = app.GetNode(userShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
nodesInShared1 := &Node{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
@@ -172,12 +172,12 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
app.db.Save(nodesInShared1)
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
_, err = app.GetNode(userShared1.Name, nodesInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
nodesInShared2 := &Node{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
@@ -189,12 +189,12 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
app.db.Save(nodesInShared2)
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
_, err = app.GetNode(userShared2.Name, nodesInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
nodesInShared3 := &Node{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
@@ -206,12 +206,12 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
app.db.Save(nodesInShared3)
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
_, err = app.GetNode(userShared3.Name, nodesInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
nodes2InShared1 := &Node{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
@@ -223,7 +223,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
AuthKeyID: uint(PreAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
app.db.Save(nodes2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
@@ -232,14 +232,14 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
Proxied: true,
}
peersOfMachineInShared1, err := app.getPeers(machineInShared1)
peersOfNodeInShared1, err := app.getPeers(nodesInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachineInShared1,
*nodesInShared1,
peersOfNodeInShared1,
)
c.Assert(dnsConfig, check.NotNil)
@@ -304,10 +304,10 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
_, err = app.GetNode(userShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
nodesInShared1 := &Node{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
@@ -319,12 +319,12 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
app.db.Save(nodesInShared1)
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
_, err = app.GetNode(userShared1.Name, nodesInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
nodesInShared2 := &Node{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
@@ -336,12 +336,12 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
app.db.Save(nodesInShared2)
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
_, err = app.GetNode(userShared2.Name, nodesInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
nodesInShared3 := &Node{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
@@ -353,12 +353,12 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
app.db.Save(nodesInShared3)
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
_, err = app.GetNode(userShared3.Name, nodesInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
nodes2InShared1 := &Node{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
@@ -370,7 +370,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
AuthKeyID: uint(preAuthKey2InShared1.ID),
}
app.db.Save(machine2InShared1)
app.db.Save(nodes2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
@@ -379,14 +379,14 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
Proxied: false,
}
peersOfMachine1Shared1, err := app.getPeers(machineInShared1)
peersOfNode1Shared1, err := app.getPeers(nodesInShared1)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachine1Shared1,
*nodesInShared1,
peersOfNode1Shared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 0)

View File

@@ -1,56 +0,0 @@
# headscale documentation
This page contains the official and community contributed documentation for `headscale`.
If you are having trouble with following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
## Official documentation
### How-to
- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)
- [Configuring OIDC](oidc.md)
### References
- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)
## Community documentation
Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
- [Running headscale in a container](running-headscale-container.md)
- [Running headscale on OpenBSD](running-headscale-openbsd.md)
- [Running headscale behind a reverse proxy](reverse-proxy.md)
- [Set Custom DNS records](dns-records.md)
## Misc
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups you must
use users (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACL's the User borders are no longer applied. All machines
whichever the User have the ability to communicate with other hosts as
long as the ACL's permits this exchange.
The [ACLs](acls.md) document should help understand a fictional case of setting
up ACLs in a small company. All concepts presented in this document could be
applied outside of business oriented usage.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.

View File

@@ -1,4 +1,15 @@
# ACLs use case example
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, when defining groups you refer to headscale users (the equivalent of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When ACLs are used, User boundaries no longer apply: any machine, regardless of its User, can communicate with any other host as long as the ACLs permit the exchange.
## ACLs use case example
Let's build an example use case for a small business (it may be the place where
ACLs are the most useful).
@@ -42,6 +53,8 @@ a server they can register, the check of the tags is done on headscale server
and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.
To use ACLs in headscale, you must edit your config.yaml file. In there you will find an `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
Here are the ACLs to implement the same permissions as above:
```json

View File

@@ -1,5 +1,12 @@
# Setting custom DNS records
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
This documentation has the goal of showing how a user can set custom DNS records with `headscale`'s MagicDNS.

47
docs/exit-node.md Normal file
View File

@@ -0,0 +1,47 @@
# Exit Nodes
## On the node
Register the node and make it advertise itself as an exit node:
```console
$ sudo tailscale up --login-server https://my-server.com --advertise-exit-node
```
If the node is already registered, it can advertise exit capabilities like this:
```console
$ sudo tailscale set --advertise-exit-node
```
## On the control server
```console
$ # list nodes
$ headscale routes list
ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | false | -
4 | phobos | ::/0 | true | false | -
$ # enable routes for phobos
$ headscale routes enable -r 3
$ headscale routes enable -r 4
$ # Check node list again. The routes are now enabled.
$ headscale routes list
ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | true | -
4 | phobos | ::/0 | true | true | -
```
## On the client
The exit node can now be used with:
```console
$ sudo tailscale set --exit-node phobos
```
Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes/?q=exit#step-3-use-the-exit-node) for how to do it on your device.

30
docs/iOS-client.md Normal file
View File

@@ -0,0 +1,30 @@
# Connecting an iOS client
## Goal
This documentation has the goal of showing how a user can use the official iOS [Tailscale](https://tailscale.com) client with `headscale`.
## Installation
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
Ensure that the installed version is at least 1.38.1, as that is the first release to support alternate control servers.
## Configuring the headscale URL
!!! info "Apple devices"
An endpoint with information on how to connect your Apple devices
(currently macOS only) is available at `/apple` on your running instance.
Ensure that the tailscale app is logged out before proceeding.
Go to iOS Settings, scroll down past Game Center and TV Provider to the Tailscale app and select it. The headscale URL can be entered into the _"ALTERNATE COORDINATION SERVER URL"_ box.
> **Note**
>
> If the app was previously logged into tailscale, toggle on the _Reset Keychain_ switch.
Restart the app by closing it from the iOS app switcher, open the app and select the regular _Sign in_ option (non-SSO), and it should open up to the headscale authentication page.
Enter your credentials and log in. Headscale should now be working on your iOS device.

12
docs/index.md Normal file
View File

@@ -0,0 +1,12 @@
---
hide:
- navigation
- toc
---
# headscale documentation
This site contains the official and community contributed documentation for `headscale`.
If you are having trouble following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.

View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>

File diff suppressed because one or more lines are too long


View File

@@ -139,3 +139,34 @@ oidc:
# Optional: Force the Azure AD account picker
prompt: select_account
```
## Google OAuth Example
In order to integrate Headscale with Google, you'll need to have a [Google Cloud Console](https://console.cloud.google.com) account.
Google OAuth has a [verification process](https://support.google.com/cloud/answer/9110914?hl=en) if you need to have users from outside of your domain authenticate. If you only need to authenticate users from your domain name (i.e. `@example.com`), you don't need to go through the verification process.
However if you don't have a domain, or need to add users outside of your domain, you can manually add emails via Google Console.
### Steps
1. Go to [Google Console](https://console.cloud.google.com) and login or create an account if you don't have one.
2. Create a project (if you don't already have one).
3. On the left hand menu, go to `APIs and services` -> `Credentials`
4. Click `Create Credentials` -> `OAuth client ID`
5. Under `Application Type`, choose `Web Application`
6. For `Name`, enter whatever you like
7. Under `Authorised redirect URIs`, use `https://example.com/oidc/callback`, replacing example.com with your Headscale URL.
8. Click `Save` at the bottom of the form
9. Take note of the `Client ID` and `Client secret`; you can also download them for reference if you need to.
10. Edit your headscale config, under `oidc`, filling in your `client_id` and `client_secret`:
```yaml
oidc:
issuer: "https://accounts.google.com"
client_id: ""
client_secret: ""
scope: ["openid", "profile", "email"]
```
You can also use `allowed_domains` and `allowed_users` to restrict the users who can authenticate.

5
docs/packaging/README.md Normal file
View File

@@ -0,0 +1,5 @@
# Packaging
We use [nFPM](https://nfpm.goreleaser.com/) for building the `.deb`, `.rpm` and `.apk` packages.
This folder contains files we need to package with these releases.

View File

@@ -0,0 +1,52 @@
[Unit]
After=syslog.target
After=network.target
Description=headscale coordination server for Tailscale
X-Restart-Triggers=/etc/headscale/config.yaml
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/bin/headscale serve
Restart=always
RestartSec=5
WorkingDirectory=/var/lib/headscale
ReadWritePaths=/var/lib/headscale /var/run
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_CHOWN
CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_CHOWN
LockPersonality=true
NoNewPrivileges=true
PrivateDevices=true
PrivateMounts=true
PrivateTmp=true
ProcSubset=pid
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectProc=invisible
ProtectSystem=strict
RemoveIPC=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true
RestrictRealtime=true
RestrictSUIDSGID=true
RuntimeDirectory=headscale
RuntimeDirectoryMode=0750
StateDirectory=headscale
StateDirectoryMode=0750
SystemCallArchitectures=native
SystemCallFilter=@chown
SystemCallFilter=@system-service
SystemCallFilter=~@privileged
UMask=0077
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,86 @@
#!/bin/sh
# Determine OS platform
# shellcheck source=/dev/null
. /etc/os-release
HEADSCALE_EXE="/usr/bin/headscale"
BSD_HIER=""
HEADSCALE_RUN_DIR="/var/run/headscale"
HEADSCALE_USER="headscale"
HEADSCALE_GROUP="headscale"
ensure_sudo() {
if [ "$(id -u)" = "0" ]; then
echo "Sudo permissions detected"
else
echo "No sudo permission detected, please run as sudo"
exit 1
fi
}
ensure_headscale_path() {
if [ ! -f "$HEADSCALE_EXE" ]; then
echo "headscale not in default path, exiting..."
exit 1
fi
printf "Found headscale %s\n" "$HEADSCALE_EXE"
}
create_headscale_user() {
printf "PostInstall: Adding headscale user %s\n" "$HEADSCALE_USER"
useradd -s /bin/sh -c "headscale default user" headscale
}
create_headscale_group() {
if command -V systemctl >/dev/null 2>&1; then
printf "PostInstall: Adding headscale group %s\n" "$HEADSCALE_GROUP"
groupadd "$HEADSCALE_GROUP"
printf "PostInstall: Adding headscale user %s to group %s\n" "$HEADSCALE_USER" "$HEADSCALE_GROUP"
usermod -a -G "$HEADSCALE_GROUP" "$HEADSCALE_USER"
fi
if [ "$ID" = "alpine" ]; then
printf "PostInstall: Adding headscale group %s\n" "$HEADSCALE_GROUP"
addgroup "$HEADSCALE_GROUP"
printf "PostInstall: Adding headscale user %s to group %s\n" "$HEADSCALE_USER" "$HEADSCALE_GROUP"
addgroup "$HEADSCALE_USER" "$HEADSCALE_GROUP"
fi
}
create_run_dir() {
printf "PostInstall: Creating headscale run directory \n"
mkdir -p "$HEADSCALE_RUN_DIR"
printf "PostInstall: Modifying group ownership of headscale run directory \n"
chown "$HEADSCALE_USER":"$HEADSCALE_GROUP" "$HEADSCALE_RUN_DIR"
}
summary() {
echo "----------------------------------------------------------------------"
echo " headscale package has been successfully installed."
echo ""
echo " Please follow the next steps to start the software:"
echo ""
echo " sudo systemctl enable headscale"
echo " sudo systemctl start headscale"
echo ""
echo " Configuration settings can be adjusted here:"
echo " ${BSD_HIER}/etc/headscale/config.yaml"
echo ""
echo "----------------------------------------------------------------------"
}
#
# Main body of the script
#
{
ensure_sudo
ensure_headscale_path
create_headscale_user
create_headscale_group
create_run_dir
summary
}

View File

@@ -0,0 +1,15 @@
#!/bin/sh
# Determine OS platform
# shellcheck source=/dev/null
. /etc/os-release
if command -V systemctl >/dev/null 2>&1; then
echo "Stop and disable headscale service"
systemctl stop headscale >/dev/null 2>&1 || true
systemctl disable headscale >/dev/null 2>&1 || true
echo "Running daemon-reload"
systemctl daemon-reload || true
fi
echo "Removing run directory"
rm -rf "/var/run/headscale.sock"

View File

@@ -1,5 +1,12 @@
# Running headscale behind a reverse proxy
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
Running headscale behind a reverse proxy is useful when running multiple applications on the same server, and you want to reuse the same external IP and port - usually tcp/443 for HTTPS.
### WebSockets
@@ -115,6 +122,17 @@ For a slightly more complex configuration which utilizes Docker containers to ma
## Apache
Apache is NOT supported. It will not work. Apache [overwrites](https://github.com/svn2github/apache-httpd/blob/82779fce1be478e2333afc9fef86d34f88db718b/modules/proxy/mod_proxy_wstunnel.c#L354) the custom upgrade header of the WebSockets connection, which is required for the Tailscale TS2021 protocol.
The following minimal Apache config will proxy traffic to the Headscale instance on `<IP:PORT>`. Note that `upgrade=any` is required as a parameter for `ProxyPass` so that WebSockets traffic whose `Upgrade` header value is not equal to `WebSocket` (i.e. Tailscale Control Protocol) is forwarded correctly. See the [Apache docs](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html) for more information on this.
Please use any other reverse proxy.
```
<VirtualHost *:443>
ServerName <YOUR_SERVER_NAME>
ProxyPreserveHost On
ProxyPass / http://<IP:PORT>/ upgrade=any
SSLEngine On
SSLCertificateFile <PATH_TO_CERT>
SSLCertificateKeyFile <PATH_CERT_KEY>
</VirtualHost>
```

View File

@@ -1,7 +1,11 @@
# Running headscale in a container
**Note:** the container documentation is maintained by the _community_ and there is no guarantee
it is up to date, or working.
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
@@ -24,7 +28,7 @@ cd ./headscale
touch ./config/db.sqlite
```
3. **(Strongly Recommended)** Download a copy of the [example configuration](../config-example.yaml) from the [headscale repository](https://github.com/juanfont/headscale/).
3. **(Strongly Recommended)** Download a copy of the [example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) from the headscale repository.
Using wget:

View File

@@ -0,0 +1,198 @@
# Running headscale on Linux
## Note: Outdated and "advanced"
This documentation is considered the "legacy"/advanced/manual version. You most likely do not
want to use it; have a look at the distro-specific documentation instead (TODO LINK)[].
## Goal
This documentation has the goal of showing a user how-to set up and run `headscale` on Linux.
In addition to the "get up and running" section, there is an optional [SystemD section](#running-headscale-in-the-background-with-systemd)
describing how to make `headscale` run properly in a server environment.
## Configure and run `headscale`
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
```
2. Make `headscale` executable:
```shell
chmod +x /usr/local/bin/headscale
```
3. Prepare a directory to hold `headscale` configuration and the [SQLite](https://www.sqlite.org/) database:
```shell
# Directory for configuration
mkdir -p /etc/headscale
# Directory for Database, and other variable data (like certificates)
mkdir -p /var/lib/headscale
# or if you create a headscale user:
useradd \
--create-home \
--home-dir /var/lib/headscale/ \
--system \
--user-group \
--shell /usr/bin/nologin \
headscale
```
4. Create an empty SQLite database:
```shell
touch /var/lib/headscale/db.sqlite
```
5. Create a `headscale` configuration:
```shell
touch /etc/headscale/config.yaml
```
**(Strongly Recommended)** Download a copy of the [example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) from the headscale repository.
6. Start the headscale server:
```shell
headscale serve
```
This command will start `headscale` in the current terminal session.
---
To continue the tutorial, open a new terminal and let it run in the background.
Alternatively use terminal emulators like [tmux](https://github.com/tmux/tmux) or [screen](https://www.gnu.org/software/screen/).
To run `headscale` in the background, please follow the steps in the [SystemD section](#running-headscale-in-the-background-with-systemd) before continuing.
7. Verify `headscale` is running:
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
8. Create a user ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
headscale users create myfirstuser
```
### Register a machine (normal login)
On a client machine, execute the `tailscale` login command:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
```
Register the machine:
```shell
headscale --user myfirstuser nodes register --key <YOUR_MACHINE_KEY>
```
### Register machine using a pre authenticated key
Generate a key using the command line:
```shell
headscale --user myfirstuser preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` during the `tailscale` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Running `headscale` in the background with SystemD
:warning: **Deprecated**: This part is very outdated and you should use the [pre-packaged Headscale](./running-headscale-linux.md) for this.
This section demonstrates how to run `headscale` as a service in the background with [SystemD](https://www.freedesktop.org/wiki/Software/systemd/).
This should work on most modern Linux distributions.
1. Create a SystemD service configuration at `/etc/systemd/system/headscale.service` containing:
```systemd
[Unit]
Description=headscale controller
After=syslog.target
After=network.target
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/local/bin/headscale serve
Restart=always
RestartSec=5
# Optional security enhancements
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
WorkingDirectory=/var/lib/headscale
ReadWritePaths=/var/lib/headscale /var/run/headscale
AmbientCapabilities=CAP_NET_BIND_SERVICE
RuntimeDirectory=headscale
[Install]
WantedBy=multi-user.target
```
Note that when running as the headscale user, you should either add your current user to the headscale group:
```shell
usermod -a -G headscale current_user
```
or run all headscale commands as the headscale user:
```shell
su - headscale
```
2. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with a path that is writable by the `headscale` user or group:
```yaml
unix_socket: /var/run/headscale/headscale.sock
```
3. Reload SystemD to load the new configuration file:
```shell
systemctl daemon-reload
```
4. Enable and start the new `headscale` service:
```shell
systemctl enable --now headscale
```
5. Verify the headscale service:
```shell
systemctl status headscale
```
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
`headscale` will now run in the background and start at boot.

View File

@@ -1,84 +1,65 @@
# Running headscale on Linux
## Requirements
- Ubuntu 20.04 or newer, Debian 11 or newer.
## Goal
This documentation has the goal of showing a user how-to set up and run `headscale` on Linux.
In additional to the "get up and running section", there is an optional [SystemD section](#running-headscale-in-the-background-with-systemd)
describing how to make `headscale` run properly in a server environment.
Get Headscale up and running.
## Configure and run `headscale`
This includes running Headscale with SystemD.
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
## Migrating from manual install
If you are migrating from the old manual install, the best thing would be to remove
the files installed by following [the guide in reverse](./running-headscale-linux-manual.md).
You should _not_ delete the database (`/var/lib/headscale/db.sqlite`) or the
configuration (`/etc/headscale/config.yaml`).
## Installation
1. Download the latest Headscale package for your platform (`.deb` for Ubuntu and Debian) from the [Headscale releases page](https://github.com/juanfont/headscale/releases):
```shell
wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
wget --output-document=headscale.deb \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>.deb
```
2. Make `headscale` executable:
2. Install Headscale:
```shell
chmod +x /usr/local/bin/headscale
sudo dpkg --install headscale.deb
```
3. Prepare a directory to hold `headscale` configuration and the [SQLite](https://www.sqlite.org/) database:
3. Enable the Headscale service; this will start Headscale at boot:
```shell
# Directory for configuration
mkdir -p /etc/headscale
# Directory for Database, and other variable data (like certificates)
mkdir -p /var/lib/headscale
# or if you create a headscale user:
useradd \
--create-home \
--home-dir /var/lib/headscale/ \
--system \
--user-group \
--shell /usr/bin/nologin \
headscale
sudo systemctl enable headscale
```
4. Create an empty SQLite database:
4. Configure Headscale by editing the configuration file:
```shell
touch /var/lib/headscale/db.sqlite
nano /etc/headscale/config.yaml
```
5. Create a `headscale` configuration:
5. Start Headscale:
```shell
touch /etc/headscale/config.yaml
sudo systemctl start headscale
```
It is **strongly recommended** to copy and modify the [example configuration](../config-example.yaml)
from the [headscale repository](../)
6. Start the headscale server:
6. Check that Headscale is running as intended:
```shell
headscale serve
systemctl status headscale
```
This command will start `headscale` in the current terminal session.
## Using Headscale
---
To continue the tutorial, open a new terminal and let it run in the background.
Alternatively use terminal emulators like [tmux](https://github.com/tmux/tmux) or [screen](https://www.gnu.org/software/screen/).
To run `headscale` in the background, please follow the steps in the [SystemD section](#running-headscale-in-the-background-with-systemd) before continuing.
7. Verify `headscale` is running:
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
8. Create a user ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
### Create a user
```shell
headscale users create myfirstuser
@@ -86,16 +67,16 @@ headscale users create myfirstuser
### Register a machine (normal login)
On a client machine, execute the `tailscale` login command:
On a client machine, run the `tailscale` login command:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
tailscale up --login-server <YOUR_HEADSCALE_URL>
```
Register the machine:
```shell
headscale --user myfirstuser nodes register --key <YOU_+MACHINE_KEY>
headscale --user myfirstuser nodes register --key <YOUR_MACHINE_KEY>
```
### Register machine using a pre authenticated key
@@ -106,86 +87,9 @@ Generate a key using the command line:
headscale --user myfirstuser preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` during the `tailscale` command:
This will return a pre-authenticated key that is used to
connect a node to `headscale` during the `tailscale` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Running `headscale` in the background with SystemD
This section demonstrates how to run `headscale` as a service in the background with [SystemD](https://www.freedesktop.org/wiki/Software/systemd/).
This should work on most modern Linux distributions.
1. Create a SystemD service configuration at `/etc/systemd/system/headscale.service` containing:
```systemd
[Unit]
Description=headscale controller
After=syslog.target
After=network.target
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/local/bin/headscale serve
Restart=always
RestartSec=5
# Optional security enhancements
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/headscale /var/run/headscale
AmbientCapabilities=CAP_NET_BIND_SERVICE
RuntimeDirectory=headscale
[Install]
WantedBy=multi-user.target
```
Note that when running as the headscale user ensure that, either you add your current user to the headscale group:
```shell
usermod -a -G headscale current_user
```
or run all headscale commands as the headscale user:
```shell
su - headscale
```
2. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with path that is writable by the `headscale` user or group:
```yaml
unix_socket: /var/run/headscale/headscale.sock
```
3. Reload SystemD to load the new configuration file:
```shell
systemctl daemon-reload
```
4. Enable and start the new `headscale` service:
```shell
systemctl enable --now headscale
```
5. Verify the headscale service:
```shell
systemctl status headscale
```
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
`headscale` will now run in the background and start at boot.

View File

@@ -1,5 +1,12 @@
# Running headscale on OpenBSD
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
This documentation has the goal of showing a user how-to install and run `headscale` on OpenBSD 7.1.
@@ -10,17 +17,14 @@ describing how to make `headscale` run properly in a server environment.
1. Install from ports (Not Recommend)
As of OpenBSD 7.1, there's a headscale in ports collection, however, it's severely outdated(v0.12.4).
As of OpenBSD 7.2, there's a headscale package in the ports collection; however, it's severely outdated (v0.12.4).
You can install it via `pkg_add headscale`.
2. Install from source on OpenBSD 7.1
2. Install from source on OpenBSD 7.2
```shell
# Install prerequisites
# 1. go v1.19+: headscale newer than 0.17 needs go 1.19+ to compile
# 2. gmake: Makefile in the headscale repo is written in GNU make syntax
pkg_add -D snap go
pkg_add gmake
pkg_add go
git clone https://github.com/juanfont/headscale.git
@@ -33,7 +37,7 @@ latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
git checkout $latestTag
gmake build
go build -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$latestTag" github.com/juanfont/headscale
# make it executable
chmod a+x headscale
@@ -46,7 +50,7 @@ cp headscale /usr/local/sbin
```shell
# Install prerequisites
# 1. go v1.19+: headscale newer than 0.17 needs go 1.19+ to compile
# 1. go v1.20+: headscale newer than 0.21 needs go 1.20+ to compile
# 2. gmake: Makefile in the headscale repo is written in GNU make syntax
git clone https://github.com/juanfont/headscale.git
@@ -90,8 +94,7 @@ touch /var/lib/headscale/db.sqlite
touch /etc/headscale/config.yaml
```
It is **strongly recommended** to copy and modify the [example configuration](../config-example.yaml)
from the [headscale repository](../)
**(Strongly Recommended)** Download a copy of the [example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) from the headscale repository.
4. Start the headscale server:

View File

@@ -11,8 +11,17 @@ To make the Windows client behave as expected and to run well with `headscale`,
- `HKLM:\SOFTWARE\Tailscale IPN\UnattendedMode` must be set to `always` as a `string` type, to allow Tailscale to run properly in the background
- `HKLM:\SOFTWARE\Tailscale IPN\LoginURL` must be set to `<YOUR HEADSCALE URL>` as a `string` type, to ensure Tailscale contacts the correct control server.
You can set these using the Windows Registry Editor:
![windows-registry](./images/windows-registry.png)
Or via the following Powershell commands (right click Powershell icon and select "Run as administrator"):
```
New-ItemProperty -Path 'HKLM:\Software\Tailscale IPN' -Name UnattendedMode -PropertyType String -Value always
New-ItemProperty -Path 'HKLM:\Software\Tailscale IPN' -Name LoginURL -PropertyType String -Value https://YOUR-HEADSCALE-URL
```
The Tailscale Windows client has been observed to reset its configuration on logout/reboot, and these two keys [resolve that issue](https://github.com/tailscale/tailscale/issues/2798).
For a guide on how to edit registry keys, [check out Computer Hope](https://www.computerhope.com/issues/ch001348.htm).

Some files were not shown because too many files have changed in this diff.