Compare commits


197 Commits

Author SHA1 Message Date
Kristoffer Dalby
64ebe6b0c8 change date in changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 08:13:38 +02:00
Kristoffer Dalby
e6b26499f7 release source code with vendored dependencies
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-20 08:13:38 +02:00
Kristoffer Dalby
977eb1dee3 Update flakes, add some quality of life improvements (#1346) 2023-04-20 07:56:53 +02:00
Kristoffer Dalby
b2e2b02210 set release date
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:47:31 +02:00
Kristoffer Dalby
2abff4bb08 update changelog for #1339
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:45:27 +02:00
Kristoffer Dalby
54c00645d1 update changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
cad5ce0ebd lint fix
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
b12a167fa2 remove rpm, might add back later
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
667295e15e add new documentation on how to install on debian/ubuntu
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
bea52678e3 move current linux documentation into "manual"
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
307cfc3304 add systemd enable to postinstall script
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-19 20:04:58 +02:00
Kristoffer Dalby
5e74ca9414 Fix IPv6 in ACLs (#1339) 2023-04-16 12:26:35 +02:00
Juan Font
9836b097a4 Make sure all clients of a user are ready (#1335) 2023-04-12 09:25:51 +02:00
Juan Font
d0b3b1bfc4 Fix binary releases 2023-04-08 09:21:27 +02:00
Juan Font
6eea96eabc Added 1.38.4 in the new tests 2023-04-07 19:45:46 +02:00
github-actions[bot]
d08fee78c3 docs(README): update contributors (#1325) 2023-04-07 17:31:29 +02:00
Andriy Kushnir (Orhideous)
bb5f0d456c Change primary color for light mode to white 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
c186c49e25 Removed custom accents, going with defaults 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
4ec6894773 Build with strict mode 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
dd9b4b1cb7 Move examples out of docs/ directory 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
a43bb9c958 Replace placeholder link with actual one 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
ba905ff6fc Add GHA CI to build and deploy docs 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
99bd09f688 Add new index page 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
a6bc792a61 Move admonitions to relevant sections 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
6381d3660a Add admonitions marking community-provided docs 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
66c5f74d78 Add admonitions marking community-provided docs 2023-04-07 15:24:13 +02:00
Andriy Kushnir (Orhideous)
1723a6bf40 Configure MkDocs Material scaffold 2023-04-07 15:24:13 +02:00
Juan Font
353f191e4f Update changelog 2023-04-07 13:25:34 +02:00
Juan Font
8d865bb61b Target Go 1.20 in flake.nix 2023-04-07 13:25:34 +02:00
Juan Font
c6815c5334 Target Go 1.20 and Tailscale 1.38 2023-04-07 13:25:34 +02:00
Kristoffer Dalby
b684ac0668 Simplify goreleaser, package deb and rpm
This commit simplifies the goreleaser configuration and then adds nfpm
support, which allows us to build .deb and .rpm packages for each
architecture we support.

The deb and rpm packages add systemd services and users, create
directories, etc., and should in general give the user a working
environment. We should be able to remove a lot of the complicated,
PEBCAK-inducing documentation after this.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-04-07 11:06:42 +02:00
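As a rough illustration of what nfpm packaging in goreleaser looks like, a minimal nfpms section in .goreleaser.yml could be sketched as below; the paths and script names are invented for the example and are not headscale's actual configuration:

nfpms:
  - package_name: headscale
    formats:
      - deb
    bindir: /usr/bin
    contents:
      # example: ship a systemd unit with the package (path is illustrative)
      - src: ./packaging/headscale.service
        dst: /etc/systemd/system/headscale.service
    scripts:
      # example postinstall hook that creates the headscale user and enables the service
      postinstall: ./packaging/postinstall.sh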
Juan Font
dfc5d861c7 Fix CIDR calculation in expandACLPeerAddr 2023-04-05 09:44:46 +02:00
Juan Font
50b706eeed Remove deprecated linters + one causing issues with imports 2023-04-04 22:37:27 +02:00
Sean Reifschneider
036ff1cbb9 Adding Powershell commands to Windows instructions (#1299)
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
2023-04-04 08:58:32 +02:00
Kristoffer Dalby
ceeef40cdf Add tests to verify "Hosts" aliases in ACL (#1304) 2023-04-03 10:08:48 +02:00
Julien Zweverink
681c86cc95 ACL docs (#1288) 2023-03-28 18:41:23 +02:00
Kristoffer Dalby
c7b459b615 Fix issue where ACL * would filter out returning connections (#1279) 2023-03-27 19:19:32 +02:00
Gabe Cook
56a7b1e349 Add SVG logos (#1286) 2023-03-27 15:33:25 +02:00
Antonio Fernandez
f1eee841cb Updates to the ACL doc (#1278) 2023-03-27 11:25:55 +02:00
Stefan Majer
45fbd34480 Do not use yaml.v2 and yaml.v3 as direct dependency (#1281) 2023-03-27 10:48:39 +02:00
Juan Font
248abcf353 Add missing entry to changelog and prepare for 0.22
Add missing entry to changelog
2023-03-20 13:48:56 +01:00
Kurnia D Win
2560c32378 adding some sleep on re-registration after machine expired (#1256) 2023-03-20 11:14:34 +01:00
Kristoffer Dalby
e38efd3cfa Add ACL test for limiting a single port. (#1258) 2023-03-20 08:52:52 +01:00
Moritz Poldrack
d12f247490 document running exit nodes
Currently the only kind-of documentation is #210 which is outdated. To remedy
this, add a document describing the process.
2023-03-19 11:02:01 +01:00
nicholas-yap
003036a779 Update iOS compatibility and added iOS docs (#1264) 2023-03-17 15:56:15 +01:00
github-actions[bot]
ed79f977a7 docs(README): update contributors (#1262) 2023-03-17 10:07:39 +01:00
Kristoffer Dalby
8012e1cbd2 Add instructions on how to login to iOS (#1261) 2023-03-15 11:31:38 +00:00
Kristoffer Dalby
a5562850a7 MapResponse optimisations, peer list integration tests (#1254)
Co-authored-by: Allen <979347228@qq.com>
2023-03-06 17:50:26 +01:00
Stefan Majer
bb786ac8e4 github.com/gofrs/uuid/v5 is now go modules compatible, use it (#1224) 2023-03-06 09:54:24 +01:00
Juan Font
ea82035222 Allow to delete routes (#1244) 2023-03-06 09:05:40 +01:00
Albert Copeland
c9ecdd6ef1 Add Graphical Control Panels section to README (#1226) 2023-03-03 19:05:12 +01:00
Juan Font
54f5c249f1 Fix various linting issues + golang-lint upgrade (#1245) 2023-03-03 18:22:47 +01:00
dnaq
a82a603db6 Return 404 on unmatched routes (#1201) 2023-03-03 17:14:30 +01:00
Sean Reifschneider
f49930c514 Add "configtest" CLI command. (#1230)
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
Fixes https://github.com/juanfont/headscale/issues/1229
2023-03-03 14:55:29 +01:00
Kristoffer Dalby
2baeb79aa0 changelog: prep for 0.21 (#1246) 2023-03-03 13:42:45 +01:00
Kristoffer Dalby
b3f78a209a Post PR comment when nix vendor sum breaks
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-03-02 18:17:59 +01:00
Josh Taylor
5e6868a858 Run prettier 2023-02-27 10:28:49 +01:00
Josh Taylor
5caf848f94 Add steps for Google OAuth for OIDC 2023-02-27 10:28:49 +01:00
Juan Font
3e097123bf Target ts 1.36 in integration tests 2023-02-26 15:35:27 +01:00
Juan Font
74447b02e8 Target Tailscale 1.36 when building 2023-02-26 15:35:27 +01:00
Juan Font
20e96de963 Update dependencies 20230226 2023-02-26 14:39:37 +01:00
Juan Font
7c765fb3dc Update prettier action
Update prettier action
2023-02-26 13:54:30 +01:00
Michael Savage
dcc246c869 Fix OpenBSD build docs
- OpenBSD 7.2 installs go 1.19 by default
- It's doubtful you can run nix, so skip the makefile/nix build and just run go build directly
2023-02-26 12:54:47 +01:00
Àlex Torregrosa
cf7767d8f9 Add css to the /windows and /apple templates (#1211)
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
2023-02-09 08:49:05 -05:00
Maxim Gajdaj
61c578f82b Update running-headscale-linux.md
Option WorkingDirectory for headscale.service added
2023-02-09 08:17:28 -05:00
Kristoffer Dalby
6950ff7841 Add list of talks to the readme
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-07 17:04:14 +01:00
Kristoffer Dalby
e65ce17f7b Add documentation to integration test framework
covering tsic, hsic and scenario

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 16:25:58 +01:00
Kristoffer Dalby
b190ec8edc Add section about running locally
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 16:25:58 +01:00
Kristoffer Dalby
c39085911f Add node expiry test
This commit adds a test to verify that nodes get updated if a node in
their network expires.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
3c20d2a178 Update changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
9187e4287c Remove unused components from old integration tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
2b7bcb77a5 Stop using deprecated string function
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
97a909866d Use pingAll helper for all integration pinging
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
feeb5d334b Populate the tags field on node
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
a840a2e6ee Sort tailcfg.Node creation as upstream
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
4183345020 Do not collect services, we don't support it
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
50fb7ad6ce Add TODOs for only sending patch updates
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
88a9f4b44c Send control time in map response
This gives all the nodes the same constant time to work from

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-03 09:26:22 +01:00
Kristoffer Dalby
00fbd8dd93 Remove all tests before generating new ones
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-02 17:55:19 +01:00
Kristoffer Dalby
ce587d2421 Update test workflows
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-01 10:58:37 +01:00
Kristoffer Dalby
e1eb30084d Remove new line at start of test template
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-01 10:58:37 +01:00
Kristoffer Dalby
673638afe7 Use ripgrep to find list of tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-02-01 10:58:37 +01:00
Kristoffer Dalby
da48cf64b3 Set OpenID Connect Expiry
This commit adds a default OpenID Connect expiry of 180 days to align with
Tailscale SaaS (previously infinite or based on token expiry).

In addition, it adds an option to use the expiry time from the token sent
by the OpenID provider. This will typically cause a really short expiry,
and you should only turn on this option if you know what you are
doing.

This fixes #1176.

Co-authored-by: Even Holthe <even.holthe@bekk.no>
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-31 18:55:16 +01:00
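A hedged sketch of the corresponding config.yaml options; the key names are assumed from headscale's config-example.yaml and should be verified there:

oidc:
  # default expiry for OIDC-authenticated nodes (assumed key name)
  expiry: 180d
  # use the expiry from the OIDC token instead; typically results in very short sessions
  use_expiry_from_token: false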
Dominic Bevacqua
385fd93e73 Update changelog 2023-01-31 00:15:48 +01:00
Dominic Bevacqua
26edf24477 Allow split DNS configuration without requiring global nameservers
Align behaviour of dns_config.restricted_nameservers to tailscale.

Tailscale allows split DNS configuration without requiring global nameservers.

In addition, as per [the docs](https://tailscale.com/kb/1054/dns/#using-dns-settings-in-the-admin-console):

> These nameservers also configure search domains for your devices

This commit aligns headscale to tailscale by:

 * honouring dns_config.restricted_nameservers regardless of whether any global resolvers are configured
 * adding a search domain for each restricted_nameserver
2023-01-31 00:15:48 +01:00
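An illustrative config.yaml fragment for split DNS without global nameservers; the domain and resolver address are example values:

dns_config:
  # no global nameservers required
  nameservers: []
  # split DNS: queries for this domain go to the listed resolver,
  # and the domain is also added as a search domain
  restricted_nameservers:
    example.internal:
      - 10.0.0.53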
Kristoffer Dalby
83a538cc95 Rename IP specific function, add missing test case
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-30 15:56:38 +01:00
Kristoffer Dalby
cffa040474 Cancel old builds if new commits appear
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-30 14:57:10 +01:00
Kristoffer Dalby
727d95b477 Improve generated integration tests
- Save logs from control(headscale) on every run to tmp
- Upgrade nix-actions
- Cancel builds if new commit is pushed
- Fix a sorting bug in user command test

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-30 14:43:03 +01:00
Juan Font
640bb94119 Do not show IsPrimary field as false in exit nodes 2023-01-29 14:54:09 +01:00
Juan Font
0f65918a25 Update tests
Fixed linting
2023-01-29 12:25:37 +01:00
Juan Font
3ac2e0b253 Enable both exit node routes (IPv4 and IPv6) at the same time.
As indicated by bradfitz in https://github.com/juanfont/headscale/issues/804#issuecomment-1399314002,
both routes for the exit node must be enabled at the same time. If a user tries to enable one of the exit node routes,
the other gets activated too.

This commit also reduces the API surface, making private a method that didn't need to be exposed.
2023-01-29 12:25:37 +01:00
Juan Font
b322cdf251 Updated changelog for v0.20.0 2023-01-29 11:46:37 +01:00
Johan Siebens
e128796b59 use smallzstd and sync pool 2023-01-27 12:03:24 +01:00
Kristoffer Dalby
6d669c6b9c Migrate namespace_id to user_id column in machine and pak
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-26 11:07:26 +01:00
Kristoffer Dalby
8dadb045cf Mark -n and --namespace as deprecated
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-26 10:22:38 +01:00
Christian Heusel
9f6e546522 modify the test to reflect the changes on the webinterface
related to 2d44a1c99c17

Signed-off-by: Christian Heusel <christian@heusel.eu>
2023-01-26 08:33:44 +01:00
Juan Font
9714900db9 Target Tailscale 1.36.0 2023-01-26 07:50:03 +01:00
Jan Hartkopf
cb25f0d650 Add hint for reverse proxying with Apache 2023-01-23 15:51:20 +01:00
caelansar
9c2e580ab5 put Where before Find 2023-01-20 10:50:29 +01:00
Christian Heusel
0ffff2c994 Update the node join instruction to reference "username"
related to https://github.com/juanfont/headscale/pull/1144

Signed-off-by: Christian Heusel <christian@heusel.eu>
2023-01-20 09:50:49 +01:00
Christian Heusel
c720af66d6 permalink in the limitations section to tailscale
The relative link was broken after one commit to the file

Signed-off-by: Christian Heusel <christian@heusel.eu>
2023-01-20 09:19:26 +01:00
Kristoffer Dalby
86a7129027 Update changelog, more explicit backup note
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-19 12:54:34 +01:00
Kristoffer Dalby
9eaa8dd049 Migrate DB: rename table is plural, order matters
The AutoMigrate calls for the other models that refer to users will
create the table and break; the rename needs to be done before
everything else.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-19 12:54:34 +01:00
Kristoffer Dalby
81441afe70 update changelog
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
f19e8aa7f0 Fix failing tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
90287a6735 gofumpt
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
fb3e2dcf10 Rename namespace to user in docs
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
bf0b85f382 Rename acl test file
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
5da0963aac Migrate DB: rename namespace, automigrate user
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
da5c051d73 Lint fix
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
b98bf199dd Regenerate go from proto
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
428d7c86ce Rename namespace in protobuf files
While this truly breaks the point of the backwards compatible stuff with
protobuf, it does not seem worth it to attempt to glue together a
compatible API.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
af1ec5a593 Rename .go namespace files
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Kristoffer Dalby
e3a2593344 Rename [Nn]amespace -> [Uu]ser in go code
Use gopls, ag and perl to rename all occurrences of Namespace

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-18 15:40:04 +01:00
Motiejus Jakštys
bafb6791d3 oidc: allow reading the client secret from a file
Currently the most "secret" way to specify the oidc client secret is via
an environment variable `OIDC_CLIENT_SECRET`, which is problematic[1].
Let's allow reading the oidc client secret from a file. For extra
convenience, environment variables in the path to the secret will be resolved.

[1]: https://systemd.io/CREDENTIALS/
2023-01-14 17:03:57 +01:00
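An illustrative config.yaml fragment; the key name client_secret_path is taken from this change, and the path shown (using a systemd credentials directory) is only an example:

oidc:
  # read the client secret from a file; environment variables in the path are resolved
  client_secret_path: ${CREDENTIALS_DIRECTORY}/oidc_client_secret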
Motiejus Jakštys
6edac4863a Makefile: remove a missing target
test_integration_oidc was removed in 0525bea593
2023-01-14 13:42:48 +01:00
Even Holthe
e27e01c09f nodes list: expose expiration time 2023-01-12 13:43:21 +01:00
Even Holthe
dd173ecc1f Refresh machines with correct new expiry 2023-01-12 13:43:21 +01:00
Kristoffer Dalby
8ca0fb7ed0 update ip_prefixes docs
We can't actually have arbitrary IP ranges; add a note about that.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-12 11:39:39 +01:00
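For reference, the prefixes have to come from the Tailscale-reserved ranges rather than arbitrary networks; headscale's defaults look roughly like this (shown here as an illustration):

ip_prefixes:
  - fd7a:115c:a1e0::/48
  - 100.64.0.0/10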
Juan Font
6c714e88ee Added entry for performance improvements in ACLs 2023-01-11 08:58:03 +01:00
Allen
a6c8718a97 ToStringSlice will lead to high CPU usage, early conversion can reduce cpu usage 2023-01-11 08:45:54 +01:00
Even Holthe
26282b7a54 Fix SIGSEGV crash related to map of state changes
See https://github.com/juanfont/headscale/issues/1114#issuecomment-1373698441
2023-01-10 22:26:21 +01:00
Kristoffer Dalby
93aca81c1c Read integration test config from Env
This commit sets the Headscale config from env instead of file for
integration tests. The main point is to make sure that, when we add
per-test config, it properly replaces the config key rather than
appending to it or something similar.

Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 23:06:43 +01:00
Kristoffer Dalby
81254cdf7a Limit run regex for generated workflows
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 18:36:31 +01:00
Kristoffer Dalby
b3a0c4a63b Add integration readme
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 12:32:24 +01:00
Kristoffer Dalby
376235c9de make prettier ignore generated test flows
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 12:32:24 +01:00
Kristoffer Dalby
7274fdacc6 Generate github action jobs for integration tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 12:32:24 +01:00
Kristoffer Dalby
91c1f54b49 Remove "run all v2 job"
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 12:32:24 +01:00
Kristoffer Dalby
efd0f79fbc Add script to generate integration test gitjobs
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2023-01-06 12:32:24 +01:00
Juan Font
2084464225 Update CHANGELOG.md
Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
2023-01-05 14:59:02 +01:00
Juan Font
66ebbf3ecb Preload AuthKey in machine getters 2023-01-05 14:59:02 +01:00
Juan Font
55a3885614 Added integration tests for ephemeral nodes
Fetch the machines from headscale
2023-01-05 14:59:02 +01:00
Juan Font
afae1ff7b6 Delete ephemeral machines on logout
Update changelog

Use dedicated method to delete
2023-01-05 14:59:02 +01:00
Juan Font
4de49f5f49 Add isEphemeral() method to Machine 2023-01-05 14:59:02 +01:00
Even Holthe
6db9656008 oidc: update changelog 2023-01-04 09:23:52 +01:00
Even Holthe
fecb13b24b oidc: add basic docs 2023-01-04 09:23:52 +01:00
Even Holthe
23a595c26f oidc: add test for expiring nodes after token expiration 2023-01-04 09:23:52 +01:00
Even Holthe
085912cfb4 expire machines after db expiry 2023-01-04 09:23:52 +01:00
Even Holthe
7157e14aff add expiration from OIDC token to machine 2023-01-04 09:23:52 +01:00
Allen
4e2c4f92d3 reflect.DeepEqual is a value copy that causes golang to continuously allocate memory 2023-01-03 18:09:18 +01:00
Juan Font
893b0de8fa Added tests on allowedip field for routing 2023-01-03 13:34:55 +01:00
Juan Font
9b98c3b79f Send in AllowedIPs both primary routes AND enabled exit routes 2023-01-03 13:34:55 +01:00
Even Holthe
6de26b1d7c Remove Tailscale v1.18.2 from test matrix 2023-01-02 16:06:12 +01:00
Christian Heusel
1f1931fb00 fix spelling mistakes 2023-01-01 22:45:16 +01:00
Christian Heusel
1f4efbcd3b add changelog entry 2023-01-01 22:45:16 +01:00
Christian Heusel
711fe1d806 enumerate the config 2023-01-01 22:45:16 +01:00
Christian Heusel
e2c62a7b0c document how to add new DNS records via extra_records 2023-01-01 22:45:16 +01:00
Christian Heusel
ab6565723e add the possibility for custom DNS records
related to https://github.com/juanfont/headscale/issues/762

Co-Authored-By: Jamie Greeff <jamie@greeff.me>
Signed-off-by: Christian Heusel <christian@heusel.eu>
2023-01-01 22:45:16 +01:00
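An illustrative extra_records fragment for config.yaml; the hostname and IP are example values:

dns_config:
  extra_records:
    - name: grafana.example.com
      type: A
      value: "100.64.0.3"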
John Axel Eriksson
7bb6f1a7eb domains/restricted_nameservers: check dnsConfig.Resolvers instead of dnsConfig.Nameservers 2022-12-31 19:06:32 +01:00
Avirut Mehta
549b82df11 Add Caddy instructions to reverse_proxy.md 2022-12-27 23:08:34 +01:00
Marc
036cdf922f templates: fix typo "custm" -> "custom" 2022-12-27 12:02:33 +01:00
jimyag
b4ff22935c update macos check 2022-12-25 15:45:45 +01:00
ma6174
5feadbf3fc fix goroutine leak 2022-12-25 14:11:16 +01:00
Juan Font
3e9ee816f9 Add integration tests for logout with authkey 2022-12-22 20:02:18 +01:00
Juan Font
2494e27a73 Make WaitForTailscaleLogout a Scenario method 2022-12-22 20:02:18 +01:00
Juan Font
8e8b65bb84 Add ko-fi sponsor button 2022-12-22 17:25:49 +01:00
Juan Font
b7d7fc57c4 Add logout method to tsic 2022-12-22 00:09:21 +01:00
Juan Font
b54c0e3d22 Add integration tests that check logout and relogin 2022-12-21 20:52:08 +01:00
Juan Font
593040b73d Run the Noise handlers under a new struct so we can access the noiseConn from the handlers
In TS2021 the MachineKey can be obtained from noiseConn.Peer() - contrary to what I thought before,
where I assumed MachineKey was dropped in TS2021.

By having a ts2021App and hanging the TS2021 handlers from it, we can fetch the MachineKey again.
2022-12-21 20:52:08 +01:00
Juan Font
6e890afc5f Minor linting fixes 2022-12-21 08:28:53 +01:00
Fatih Acar
2afba0233b fix(routes): ensure routes are correctly propagated
When using Tailscale v1.34.1, enabling or disabling a route does not
effectively add or remove the route from the node's routing table.
We must restart tailscale on the node to have a netmap update.

Fix this by refreshing last state change so that a netmap diff is sent.

Also do not include secondary routes in allowedIPs, otherwise secondary
routes might be used by nodes instead of the primary route.

Signed-off-by: Fatih Acar <facar@scaleway.com>
2022-12-20 15:39:59 +01:00
Anoop Sundaresh
91900b7310 Update remote-cli.md
Fixing the local binary path
2022-12-19 19:16:48 +01:00
Juan Font
55b198a16a Clients are offline when expired 2022-12-19 15:56:12 +01:00
Juan Font
ca37dc6268 Update changelog 2022-12-15 00:13:53 -08:00
Juan Font
000c02dad9 Show online in CLI & API when isOnline() reports so 2022-12-15 00:13:53 -08:00
Juan Font
4532915be1 Refresh autogenerated grpc stuff 2022-12-15 00:13:53 -08:00
Juan Font
4b8d6e7c64 Include online field in proto for machine 2022-12-15 00:13:53 -08:00
Kristoffer Dalby
579c5827b3 regenerate proto with new plugin
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-14 00:05:29 -08:00
Kristoffer Dalby
01628f76ff upgrade grpc-gateway plugin
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-14 00:05:29 -08:00
Kristoffer Dalby
53858a32f1 don't fail docker if nothing to delete
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-13 23:13:23 -08:00
Juan Font Alonso
2bf576ea8a Disable Tailscale 1.16 in integration tests 2022-12-09 19:11:24 +01:00
github-actions[bot]
1faac0b3d7 docs(README): update contributors 2022-12-07 15:18:37 +01:00
Kristoffer Dalby
134c72f4fb Set db_ssl to false by default, fixes #1043
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-07 14:58:47 +01:00
Zachary Newell
70f2f5d750 Added an OIDC AllowGroups option for authorization. 2022-12-07 08:53:16 +01:00
Kristoffer Dalby
4453728614 Murder docker container and network before run
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-06 08:52:21 +01:00
Juan Font
34107f9a0f Updated changelog 2022-12-06 08:17:14 +01:00
Juan Font
52862b8a22 Port integration tests routes CLI to v2
Fix options signature
2022-12-06 08:17:14 +01:00
Juan Font
946d38e5d7 Minor linting fixes
Remove magic number (base10...)
2022-12-06 08:17:14 +01:00
Juan Font
78819be03c Use the new routes API from the CLI 2022-12-06 08:17:14 +01:00
Juan Font
34631dfcf5 Refactored route grpc glue code 2022-12-06 08:17:14 +01:00
Juan Font
8fa9755b55 Updated generated pb code
Update swagger
2022-12-06 08:17:14 +01:00
Juan Font
1b557ac1ea Update protobuf definitions + support methods for the API
Add more logging

Updated protos with new routes API
2022-12-06 08:17:14 +01:00
Juan Font
8170f5e693 Removed unused code and linting fixes
Another bunch of gosec/golint related fixes

Remove method no longer used
2022-12-06 08:17:14 +01:00
Juan Font
a506d0fcc8 Run handlePrimarySubnetFailover() with a ticker when Serve 2022-12-06 08:17:14 +01:00
Juan Font
6718ff71d3 Added helper methods for subnet failover + unit tests
Added method to perform subnet failover

Added tests for subnet failover
2022-12-06 08:17:14 +01:00
Juan Font
b62acff2e3 Refactor machine.go, and move functionality to routes.go + unit tests
Port routes tests to new model

Mark as primary the first instance of subnet + tests

In preparation for subnet failover, mark the initial occurrence of a subnet as the primary one.
2022-12-06 08:17:14 +01:00
Juan Font
ac8bff716d Call processMachineRoutes when a new Map is received 2022-12-06 08:17:14 +01:00
Juan Font
fba77de4eb Add Route DB model and migration from existing field
Add migration from Machine.EnabledRoutes to the new Route table

Cleanup route.go and add helper methods to process HostInfo
2022-12-06 08:17:14 +01:00
github-actions[bot]
d1bca105ef docs(README): update contributors 2022-12-05 22:05:25 +01:00
Juan Font
6c2d6fa302 Do not explicitly set the protocols when omitted in ACL 2022-12-05 21:45:18 +01:00
Kristoffer Dalby
1015bc3e02 Upgrade to Tailscale 1.34.0
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
Kristoffer Dalby
68c72d03b5 Prep changelog for new release
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
Kristoffer Dalby
bd4b2da06e Add changelog entry to correct version
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
Kristoffer Dalby
7b8cf5ef1a Add 1.34.0 to integration tests
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
Kristoffer Dalby
638a3d48ec fix nix run
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
Kristoffer Dalby
4de676c64e Add instructions for macOS GUI
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
Kristoffer Dalby
a58a552f0e Update macos/windows doc
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2022-12-05 20:41:15 +01:00
189 changed files with 13496 additions and 5729 deletions

.github/FUNDING.yml (new file, 3 lines)

@@ -0,0 +1,3 @@
# These are supported funding model platforms
ko_fi: headscale


@@ -8,9 +8,14 @@ on:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
permissions: write-all
steps:
- uses: actions/checkout@v3
@@ -32,10 +37,34 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run build
id: build
if: steps.changed-files.outputs.any_changed == 'true'
run: nix build
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
- uses: actions/upload-artifact@v2
OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
exit $BUILD_STATUS
- name: Nix gosum diverging
uses: actions/github-script@v6
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
github.rest.pulls.createReviewComment({
pull_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
- uses: actions/upload-artifact@v3
if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux

.github/workflows/docs.yml (new file, 45 lines)

@@ -0,0 +1,45 @@
name: Build documentation
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
- name: Build docs
run: mkdocs build --strict
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: ./site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1


@@ -3,6 +3,10 @@ name: Lint
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
golangci-lint:
runs-on: ubuntu-latest
@@ -26,7 +30,7 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
uses: golangci/golangci-lint-action@v2
with:
version: v1.49.0
version: v1.51.2
# Only block PRs on new problems.
# If this is not enabled, we will end up having PRs
@@ -59,7 +63,7 @@ jobs:
- name: Prettify code
if: steps.changed-files.outputs.any_changed == 'true'
uses: creyD/prettier_action@v4.0
uses: creyD/prettier_action@v4.3
with:
prettier_options: >-
--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}

.github/workflows/release-docker.yml (new file, 138 lines)

@@ -0,0 +1,138 @@
---
name: Release Docker
on:
push:
tags:
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug


@@ -19,135 +19,6 @@ jobs:
- uses: cachix/install-nix-action@v16
- name: Run goreleaser
run: nix develop --command -- goreleaser release --rm-dist
run: nix develop --command -- goreleaser release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
suffix=-debug,onlatest=true
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=sha
type=raw,value=develop
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowStarDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowStarDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUser80Dst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUser80Dst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUserDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUserDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDenyAllPort80
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDenyAllPort80$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDevice1CanAccessDevice2
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDevice1CanAccessDevice2$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLHostsInNetMapTable
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLHostsInNetMapTable$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReach
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReach$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReachBySubnet$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthKeyLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestCreateTailscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestCreateTailscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnablingRoutes
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnablingRoutes$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"


@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestExpireNode
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestExpireNode$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHeadscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHeadscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByHostname
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByHostname$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByIP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandReusableEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandWithoutExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestResolveMagicDNS
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestResolveMagicDNS$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHIsBlockedInACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHIsBlockedInACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleUsersAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHNoSSHConfigured
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHNoSSHConfigured$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneUserAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneUserAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSUserOnlyIsolation
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSUserOnlyIsolation$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTaildrop
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTaildrop$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTailscaleNodesJoiningHeadcale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"

View File

@@ -0,0 +1,57 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestUserCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestUserCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
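Each of these suites can also be reproduced locally with essentially the same command the workflow runs; a sketch, assuming Docker is available and the repository's Nix dev shell is used as in CI (substitute any of the test names above in the -run pattern):

  nix develop --command -- docker run \
    --tty --rm \
    --volume ~/.cache/hs-integration-go:/go \
    --volume $PWD:$PWD -w $PWD/integration \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --volume $PWD/control_logs:/tmp/control \
    golang:1 \
    go test ./... -tags ts2019 -failfast -timeout 120m -parallel 1 -run "^TestUserCommand$"

Headscale control-plane logs end up in ./control_logs, the same directory the workflows upload as an artifact.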

View File

@@ -1,27 +0,0 @@
name: Integration Test v2 - kradalby
on: [pull_request]
jobs:
integration-test-v2-kradalby:
runs-on: [self-hosted, linux, x64, nixos, docker]
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --command -- make test_integration_v2_general

View File

@@ -2,6 +2,10 @@ name: Tests
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest

12
.gitignore vendored
View File

@@ -1,3 +1,5 @@
ignored/
# Binaries for programs and plugins
*.exe
*.exe~
@@ -12,8 +14,9 @@
*.out
# Dependency directories (remove the comment below to include it)
# vendor/
vendor/
dist/
/headscale
config.json
config.yaml
@@ -26,10 +29,15 @@ derp.yaml
# Exclude Jetbrains Editors
.idea
test_output/
test_output/
control_logs/
# Nix build output
result
.direnv/
integration_test/etc/config.dump.yaml
# MkDocs
.cache
/site

View File

@@ -29,6 +29,14 @@ linters:
- execinquery
- exhaustruct
- nolintlint
- musttag # causes issues with imported libs
# deprecated
- structcheck # replaced by unused
- ifshort # deprecated by the owner
- varcheck # replaced by unused
- nosnakecase # replaced by revive
- deadcode # replaced by unused
# We should strive to enable these:
- wrapcheck

View File

@@ -1,21 +1,28 @@
---
before:
hooks:
- go mod tidy -compat=1.19
- go mod tidy -compat=1.20
- go mod vendor
release:
prerelease: auto
builds:
- id: darwin-amd64
- id: headscale
main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- darwin
goarch:
- amd64
targets:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_386
- linux_amd64
- linux_arm64
- linux_arm_5
- linux_arm_6
- linux_arm_7
flags:
- -mod=readonly
ldflags:
@@ -23,60 +30,66 @@ builds:
tags:
- ts2019
- id: darwin-arm64
main: ./cmd/headscale/headscale.go
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- darwin
goarch:
- arm64
flags:
- -mod=readonly
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
- id: linux-amd64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- amd64
main: ./cmd/headscale/headscale.go
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
- id: linux-arm64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- arm64
main: ./cmd/headscale/headscale.go
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
archives:
- id: golang-cross
builds:
- darwin-amd64
- darwin-arm64
- freebsd-amd64
- linux-386
- linux-amd64
- linux-arm64
- linux-arm-5
- linux-arm-6
- linux-arm-7
name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
format: binary
source:
enabled: true
name_template: "{{ .ProjectName }}_{{ .Version }}"
format: tar.gz
files:
- "vendor/"
nfpms:
# Configure nFPM for .deb and .rpm releases
#
# See https://nfpm.goreleaser.com/configuration/
# and https://goreleaser.com/customization/nfpm/
#
# Useful tools for debugging .debs:
# List file contents: dpkg -c dist/headscale...deb
# Package metadata: dpkg --info dist/headscale....deb
#
- builds:
- headscale
package_name: headscale
priority: optional
vendor: headscale
maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
homepage: https://github.com/juanfont/headscale
license: BSD
bindir: /usr/bin
formats:
- deb
# - rpm
contents:
- src: ./config-example.yaml
dst: /etc/headscale/config.yaml
type: config|noreplace
file_info:
mode: 0644
- src: ./docs/packaging/headscale.systemd.service
dst: /etc/systemd/system/headscale.service
- dst: /var/lib/headscale
type: dir
- dst: /var/run/headscale
type: dir
scripts:
postinstall: ./docs/packaging/postinstall.sh
postremove: ./docs/packaging/postremove.sh
checksum:
name_template: "checksums.txt"
snapshot:
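With the nfpm section above in place, a release or snapshot build produces .deb packages under dist/ that can be inspected and installed directly. A sketch, assuming goreleaser 1.x is installed locally (flag names may differ between goreleaser versions):

  # build packages locally without publishing (assumption: goreleaser 1.x flags)
  goreleaser release --snapshot --skip-publish

  # inspect the package, as suggested in the comments above
  dpkg -c dist/headscale*.deb        # list file contents
  dpkg --info dist/headscale*.deb    # show package metadata

  # install on a Debian/Ubuntu host
  sudo apt install ./dist/headscale*.deb

Per the contents and scripts sections, the package ships config-example.yaml as /etc/headscale/config.yaml (config|noreplace), installs a systemd unit, and runs the listed postinstall/postremove scripts at install and removal time.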

1
.prettierignore Normal file
View File

@@ -0,0 +1 @@
.github/workflows/test-integration-v2*

View File

@@ -1,10 +1,70 @@
# CHANGELOG
## 0.17.1 (2022-x-x)
## 0.23.0 (2023-XX-XX)
### Changes
## 0.22.0 (2023-04-20)
### Changes
- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using, ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
## 0.21.0 (2023-03-20)
### Changes
- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
## 0.20.0 (2023-02-03)
### Changes
- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
- defaults to 180 days like Tailscale SaaS
- adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
## 0.19.0 (2023-01-29)
### BREAKING
- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
- **BACKUP your database before upgrading**
- Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
## 0.18.0 (2023-01-14)
### Changes
- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
## 0.17.1 (2022-12-05)
### Changes
- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
## 0.17.0 (2022-11-26)
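The wildcard and IPv6 ACL fixes listed under 0.22.0 above can be exercised with a small policy file; a minimal sketch, where the path and addresses are illustrative placeholders and the HuJSON fields follow headscale's ACL policy format:

  cat > ./acl.hujson <<'EOF'
  {
    "acls": [
      // "*" as a source matches all nodes again (#1279)
      { "action": "accept", "src": ["*"], "dst": ["*:*"] },
      // IPv6 prefixes can now be used in ACLs (#1339)
      { "action": "accept", "src": ["fd7a:115c:a1e0::/48"], "dst": ["*:*"] }
    ]
  }
  EOF

Point the ACL policy path option in the headscale configuration (acl_policy_path in config-example.yaml) at this file to load it.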

View File

@@ -1,5 +1,5 @@
# Builder image
FROM docker.io/golang:1.19-bullseye AS build
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale

View File

@@ -1,5 +1,5 @@
# Builder image
FROM docker.io/golang:1.19-bullseye AS build
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
@@ -13,7 +13,7 @@ RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.c
RUN test -e /go/bin/headscale
# Debug image
FROM docker.io/golang:1.19.0-bullseye
FROM docker.io/golang:1.20.0-bullseye
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC

View File

@@ -26,7 +26,7 @@ dev: lint test build
test:
@go test $(TAGS) -short -coverprofile=coverage.out ./...
test_integration: test_integration_cli test_integration_derp test_integration_oidc test_integration_v2_general
test_integration: test_integration_cli test_integration_derp test_integration_v2_general
test_integration_cli:
docker network rm $$(docker network ls --filter name=headscale --quiet) || true
@@ -36,7 +36,7 @@ test_integration_cli:
-v ~/.cache/hs-integration-go:/go \
-v $$PWD:$$PWD -w $$PWD \
-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
test_integration_derp:
docker network rm $$(docker network ls --filter name=headscale --quiet) || true
@@ -46,7 +46,7 @@ test_integration_derp:
-v ~/.cache/hs-integration-go:/go \
-v $$PWD:$$PWD -w $$PWD \
-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationDERP ./...
go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationDERP ./...
test_integration_v2_general:
docker run \
@@ -56,13 +56,7 @@ test_integration_v2_general:
-v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \
golang:1 \
go test $(TAGS) -failfast ./... -timeout 120m -parallel 8
coverprofile_func:
go tool cover -func=coverage.out
coverprofile_html:
go tool cover -html=coverage.out
go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8
lint:
golangci-lint run --fix --timeout 10m
@@ -80,11 +74,4 @@ compress: build
generate:
rm -rf gen
go run github.com/bufbuild/buf/cmd/buf generate proto
install-protobuf-plugins:
go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc
buf generate proto
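The integration targets above now invoke the tests through gotestsum, which wraps go test with per-test summary output; everything after the -- separator is passed through to go test unchanged, so the existing -tags, -run and -timeout flags keep working. To run the same command outside make, a sketch using the CLI suite as an example:

  go run gotest.tools/gotestsum@latest -- -tags ts2019 -failfast -timeout 30m -count=1 -run IntegrationCLI ./...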

341
README.md
View File

@@ -75,12 +75,26 @@ one of the maintainers.
| macOS | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes [docs](./docs/windows-client.md) |
| Android | Yes [docs](./docs/android-client.md) |
| iOS | Not yet |
| iOS | Yes [docs](./docs/iOS-client.md) |
## Running headscale
Please have a look at the documentation under [`docs/`](docs/).
## Graphical Control Panels
Headscale provides an API for complete management of your Tailnet.
These are community projects not directly affiliated with the Headscale project.
| Name | Repository Link | Description | Status |
| --------------- | ---------------------------------------------------- | ------------------------------------------------------ | ------ |
| headscale-webui | [Github](https://github.com/ifargle/headscale-webui) | A simple Headscale web UI for small-scale deployments. | Alpha |
## Talks
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
- presented by Juan Font Alonso and Kristoffer Dalby
## Disclaimer
1. We have nothing to do with Tailscale, or Tailscale Inc.
@@ -174,13 +188,6 @@ make build
<sub style="font-size:14px"><b>Juan Font</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cure>
<img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/>
@@ -202,8 +209,6 @@ make build
<sub style="font-size:14px"><b>Benjamin Roberts</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/reynico>
<img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/>
@@ -211,6 +216,15 @@ make build
<sub style="font-size:14px"><b>Nico</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/evenh>
<img src=https://avatars.githubusercontent.com/u/2701536?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Even Holthe/>
<br />
<sub style="font-size:14px"><b>Even Holthe</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/e-zk>
<img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/>
@@ -239,6 +253,15 @@ make build
<sub style="font-size:14px"><b>unreality</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
<br />
<sub style="font-size:14px"><b>Moritz Poldrack</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
@@ -246,13 +269,11 @@ make build
<sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mpldr>
<img src=https://avatars.githubusercontent.com/u/33086936?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Moritz Poldrack/>
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Moritz Poldrack</b></sub>
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
@@ -262,6 +283,13 @@ make build
<sub style="font-size:14px"><b>GrigoriyMikhalkin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/christian-heusel>
<img src=https://avatars.githubusercontent.com/u/26827864?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Christian Heusel/>
<br />
<sub style="font-size:14px"><b>Christian Heusel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mike-lloyd03>
<img src=https://avatars.githubusercontent.com/u/49411532?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mike Lloyd/>
@@ -276,6 +304,8 @@ make build
<sub style="font-size:14px"><b>Anton Schubert</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Niek>
<img src=https://avatars.githubusercontent.com/u/213140?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Niek van der Maas/>
@@ -290,8 +320,6 @@ make build
<sub style="font-size:14px"><b>Eugen Biegler</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/617a7a>
<img src=https://avatars.githubusercontent.com/u/67651251?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Azz/>
@@ -320,6 +348,15 @@ make build
<sub style="font-size:14px"><b>Laurent Marchaud</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fdelucchijr>
<img src=https://avatars.githubusercontent.com/u/69133647?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Fernando De Lucchi/>
@@ -334,8 +371,6 @@ make build
<sub style="font-size:14px"><b>Orville Q. Song</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hdhoang>
<img src=https://avatars.githubusercontent.com/u/12537?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hdhoang/>
@@ -350,6 +385,15 @@ make build
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/bravechamp>
<img src=https://avatars.githubusercontent.com/u/48980452?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=bravechamp/>
<br />
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/deonthomasgy>
<img src=https://avatars.githubusercontent.com/u/150036?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Deon Thomas/>
@@ -378,8 +422,6 @@ make build
<sub style="font-size:14px"><b>Mevan Samaratunga</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dragetd>
<img src=https://avatars.githubusercontent.com/u/3639577?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael G./>
@@ -394,6 +436,8 @@ make build
<sub style="font-size:14px"><b>Paul Tötterman</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/samson4649>
<img src=https://avatars.githubusercontent.com/u/12725953?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Samuel Lock/>
@@ -401,13 +445,6 @@ make build
<sub style="font-size:14px"><b>Samuel Lock</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kevin1sMe>
<img src=https://avatars.githubusercontent.com/u/6886076?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=kevinlin/>
@@ -415,6 +452,13 @@ make build
<sub style="font-size:14px"><b>kevinlin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/QZAiXH>
<img src=https://avatars.githubusercontent.com/u/23068780?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Snack/>
<br />
<sub style="font-size:14px"><b>Snack</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/artemklevtsov>
<img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/>
@@ -422,8 +466,6 @@ make build
<sub style="font-size:14px"><b>Artem Klevtsov</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cmars>
<img src=https://avatars.githubusercontent.com/u/23741?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Casey Marshall/>
@@ -431,6 +473,22 @@ make build
<sub style="font-size:14px"><b>Casey Marshall</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dbevacqua>
<img src=https://avatars.githubusercontent.com/u/6534306?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dbevacqua/>
<br />
<sub style="font-size:14px"><b>dbevacqua</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/joshuataylor>
<img src=https://avatars.githubusercontent.com/u/225131?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Josh Taylor/>
<br />
<sub style="font-size:14px"><b>Josh Taylor</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/CNLHC>
<img src=https://avatars.githubusercontent.com/u/21005146?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=LiuHanCheng/>
@@ -438,6 +496,13 @@ make build
<sub style="font-size:14px"><b>LiuHanCheng</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/motiejus>
<img src=https://avatars.githubusercontent.com/u/107720?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Motiejus Jakštys/>
<br />
<sub style="font-size:14px"><b>Motiejus Jakštys</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pvinis>
<img src=https://avatars.githubusercontent.com/u/100233?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pavlos Vinieratos/>
@@ -459,6 +524,8 @@ make build
<sub style="font-size:14px"><b>Steven Honson</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ratsclub>
<img src=https://avatars.githubusercontent.com/u/25647735?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Victor Freire/>
@@ -466,15 +533,6 @@ make build
<sub style="font-size:14px"><b>Victor Freire</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lachy2849>
<img src=https://avatars.githubusercontent.com/u/98844035?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lachy2849/>
<br />
<sub style="font-size:14px"><b>lachy2849</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/t56k>
<img src=https://avatars.githubusercontent.com/u/12165422?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=thomas/>
@@ -482,6 +540,13 @@ make build
<sub style="font-size:14px"><b>thomas</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/linsomniac>
<img src=https://avatars.githubusercontent.com/u/466380?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Sean Reifschneider/>
<br />
<sub style="font-size:14px"><b>Sean Reifschneider</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aberoham>
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
@@ -489,6 +554,13 @@ make build
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/iFargle>
<img src=https://avatars.githubusercontent.com/u/124551390?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Albert Copeland/>
<br />
<sub style="font-size:14px"><b>Albert Copeland</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/puzpuzpuz>
<img src=https://avatars.githubusercontent.com/u/37772591?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andrei Pechkurov/>
@@ -496,6 +568,15 @@ make build
<sub style="font-size:14px"><b>Andrei Pechkurov</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/theryecatcher>
<img src=https://avatars.githubusercontent.com/u/16442416?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anoop Sundaresh/>
<br />
<sub style="font-size:14px"><b>Anoop Sundaresh</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/apognu>
<img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/>
@@ -503,6 +584,13 @@ make build
<sub style="font-size:14px"><b>Antoine POPINEAU</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tony1661>
<img src=https://avatars.githubusercontent.com/u/5287266?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antonio Fernandez/>
<br />
<sub style="font-size:14px"><b>Antonio Fernandez</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aofei>
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
@@ -510,8 +598,6 @@ make build
<sub style="font-size:14px"><b>Aofei Sheng</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/arnarg>
<img src=https://avatars.githubusercontent.com/u/1291396?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arnar/>
@@ -520,12 +606,14 @@ make build
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/awoimbee>
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
<a href=https://github.com/avirut>
<img src=https://avatars.githubusercontent.com/u/27095602?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Avirut Mehta/>
<br />
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
<sub style="font-size:14px"><b>Avirut Mehta</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stensonb>
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
@@ -547,6 +635,13 @@ make build
<sub style="font-size:14px"><b>kundel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fatih-acar>
<img src=https://avatars.githubusercontent.com/u/15028881?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=fatih-acar/>
<br />
<sub style="font-size:14px"><b>fatih-acar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fkr>
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
@@ -554,8 +649,6 @@ make build
<sub style="font-size:14px"><b>Felix Kronlage-Dammers</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/felixonmars>
<img src=https://avatars.githubusercontent.com/u/1006477?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Yan/>
@@ -563,6 +656,15 @@ make build
<sub style="font-size:14px"><b>Felix Yan</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gabe565>
<img src=https://avatars.githubusercontent.com/u/7717888?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Gabe Cook/>
<br />
<sub style="font-size:14px"><b>Gabe Cook</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JJGadgets>
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
@@ -570,6 +672,13 @@ make build
<sub style="font-size:14px"><b>JJGadgets</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hrtkpf>
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/>
<br />
<sub style="font-size:14px"><b>hrtkpf</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimt>
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
@@ -577,6 +686,22 @@ make build
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jsiebens>
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
<br />
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/johnae>
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
<br />
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ShadowJonathan>
<img src=https://avatars.githubusercontent.com/u/22740616?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jonathan de Jong/>
@@ -584,6 +709,43 @@ make build
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JulienFloris>
<img src=https://avatars.githubusercontent.com/u/20380255?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Julien Zweverink/>
<br />
<sub style="font-size:14px"><b>Julien Zweverink</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/win-t>
<img src=https://avatars.githubusercontent.com/u/1589120?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kurnia D Win/>
<br />
<sub style="font-size:14px"><b>Kurnia D Win</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/foxtrot>
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
<br />
<sub style="font-size:14px"><b>Marc</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magf>
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
<br />
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mikejsavage>
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
<br />
<sub style="font-size:14px"><b>Michael Savage</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/piec>
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
@@ -591,6 +753,13 @@ make build
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Donran>
<img src=https://avatars.githubusercontent.com/u/4838348?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pontus N/>
<br />
<sub style="font-size:14px"><b>Pontus N</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nnsee>
<img src=https://avatars.githubusercontent.com/u/36747857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Rasmus Moorats/>
@@ -598,8 +767,6 @@ make build
<sub style="font-size:14px"><b>Rasmus Moorats</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/rcursaru>
<img src=https://avatars.githubusercontent.com/u/16259641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=rcursaru/>
@@ -621,6 +788,8 @@ make build
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/shaananc>
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
@@ -642,8 +811,6 @@ make build
<sub style="font-size:14px"><b>sophware</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/m-tanner-dev0>
<img src=https://avatars.githubusercontent.com/u/97977342?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tanner/>
@@ -658,6 +825,15 @@ make build
<sub style="font-size:14px"><b>Teteros</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Teteros>
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
<br />
<sub style="font-size:14px"><b>Teteros</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gitter-badger>
<img src=https://avatars.githubusercontent.com/u/8518239?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=The Gitter Badger/>
@@ -686,8 +862,6 @@ make build
<sub style="font-size:14px"><b>Tjerk Woudsma</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/y0ngb1n>
<img src=https://avatars.githubusercontent.com/u/25719408?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yang Bin/>
@@ -702,6 +876,15 @@ make build
<sub style="font-size:14px"><b>Yujie Xia</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/newellz2>
<img src=https://avatars.githubusercontent.com/u/52436542?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zachary Newell/>
<br />
<sub style="font-size:14px"><b>Zachary Newell</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zekker6>
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
@@ -723,6 +906,13 @@ make build
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/caelansar>
<img src=https://avatars.githubusercontent.com/u/31852257?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=caelansar/>
<br />
<sub style="font-size:14px"><b>caelansar</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/derelm>
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
@@ -732,6 +922,13 @@ make build
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dnaq>
<img src=https://avatars.githubusercontent.com/u/1299717?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dnaq/>
<br />
<sub style="font-size:14px"><b>dnaq</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/nning>
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
@@ -746,6 +943,13 @@ make build
<sub style="font-size:14px"><b>ignoramous</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimyag>
<img src=https://avatars.githubusercontent.com/u/69233189?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=jimyag/>
<br />
<sub style="font-size:14px"><b>jimyag</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/magichuihui>
<img src=https://avatars.githubusercontent.com/u/10866198?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=suhelen/>
@@ -760,6 +964,15 @@ make build
<sub style="font-size:14px"><b>sharkonet</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ma6174>
<img src=https://avatars.githubusercontent.com/u/1449133?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ma6174/>
<br />
<sub style="font-size:14px"><b>ma6174</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/manju-rn>
<img src=https://avatars.githubusercontent.com/u/26291847?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=manju-rn/>
@@ -768,14 +981,19 @@ make build
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pernila>
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
<a href=https://github.com/nicholas-yap>
<img src=https://avatars.githubusercontent.com/u/38109533?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=nicholas-yap/>
<br />
<sub style="font-size:14px"><b>pernila</b></sub>
<sub style="font-size:14px"><b>nicholas-yap</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pernila>
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tommi Pernila/>
<br />
<sub style="font-size:14px"><b>Tommi Pernila</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/phpmalik>
<img src=https://avatars.githubusercontent.com/u/26834645?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=phpmalik/>
@@ -790,6 +1008,8 @@ make build
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/xpzouying>
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
@@ -797,5 +1017,12 @@ make build
<sub style="font-size:14px"><b>zy</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/atorregrosa-smd>
<img src=https://avatars.githubusercontent.com/u/78434679?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Àlex Torregrosa/>
<br />
<sub style="font-size:14px"><b>Àlex Torregrosa</b></sub>
</a>
</td>
</tr>
</table>

202
acls.go

@@ -13,7 +13,9 @@ import (
"time"
"github.com/rs/zerolog/log"
"github.com/samber/lo"
"github.com/tailscale/hujson"
"go4.org/netipx"
"gopkg.in/yaml.v3"
"tailscale.com/envknob"
"tailscale.com/tailcfg"
@@ -133,6 +135,14 @@ func (h *Headscale) UpdateACLRules() error {
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
h.aclRules = rules
// Precompute a map of which sources can reach each destination, this is
// to provide quicker lookup when we calculate the peerlist for the map
// response to nodes.
aclPeerCacheMap := generateACLPeerCacheMap(rules)
h.aclPeerCacheMapRW.Lock()
h.aclPeerCacheMap = aclPeerCacheMap
h.aclPeerCacheMapRW.Unlock()
if featureEnableSSH() {
sshRules, err := h.generateSSHRules()
if err != nil {
@@ -150,7 +160,78 @@ func (h *Headscale) UpdateACLRules() error {
return nil
}
func generateACLRules(machines []Machine, aclPolicy ACLPolicy, stripEmaildomain bool) ([]tailcfg.FilterRule, error) {
// generateACLPeerCacheMap takes a list of Tailscale filter rules and generates a map
// of which Sources ("*" and IPs) can access destinations. This is to speed up the
// process of generating MapResponses when deciding which Peers to inform nodes about.
func generateACLPeerCacheMap(rules []tailcfg.FilterRule) map[string]map[string]struct{} {
aclCachePeerMap := make(map[string]map[string]struct{})
for _, rule := range rules {
for _, srcIP := range rule.SrcIPs {
for _, ip := range expandACLPeerAddr(srcIP) {
if data, ok := aclCachePeerMap[ip]; ok {
for _, dstPort := range rule.DstPorts {
for _, dstIP := range expandACLPeerAddr(dstPort.IP) {
data[dstIP] = struct{}{}
}
}
} else {
dstPortsMap := make(map[string]struct{}, len(rule.DstPorts))
for _, dstPort := range rule.DstPorts {
for _, dstIP := range expandACLPeerAddr(dstPort.IP) {
dstPortsMap[dstIP] = struct{}{}
}
}
aclCachePeerMap[ip] = dstPortsMap
}
}
}
}
log.Trace().Interface("ACL Cache Map", aclCachePeerMap).Msg("ACL Peer Cache Map generated")
return aclCachePeerMap
}
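A minimal sketch of the lookup this cache enables, with illustrative data rather than anything taken from the repository: each expanded source maps to a set of expanded destinations, so checking whether one peer may reach another becomes two map lookups instead of re-evaluating every filter rule.

package main

import "fmt"

func main() {
	// Illustrative cache contents: 10.0.0.1 and 10.0.0.2 may reach
	// 10.0.0.3, and a wildcard source may reach 10.0.0.9.
	cache := map[string]map[string]struct{}{
		"10.0.0.1": {"10.0.0.3": {}},
		"10.0.0.2": {"10.0.0.3": {}},
		"*":        {"10.0.0.9": {}},
	}

	dsts, ok := cache["10.0.0.1"]
	if !ok {
		fmt.Println("source has no rules")
		return
	}
	_, canReach := dsts["10.0.0.3"]
	fmt.Println(canReach) // true
}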
// expandACLPeerAddr takes a "tailcfg.FilterRule" "IP" and expands it into
// something our cache logic can look up, which is "*" or single IP addresses.
// This is probably quite inefficient, but it is a result of
// "make it work, then make it fast", and a lot of the ACL stuff does not
// work, but people have tried to make it fast.
func expandACLPeerAddr(srcIP string) []string {
if ip, err := netip.ParseAddr(srcIP); err == nil {
return []string{ip.String()}
}
if cidr, err := netip.ParsePrefix(srcIP); err == nil {
addrs := []string{}
ipRange := netipx.RangeOfPrefix(cidr)
from := ipRange.From()
too := ipRange.To()
if from == too {
return []string{from.String()}
}
for from != too && from.Less(too) {
addrs = append(addrs, from.String())
from = from.Next()
}
addrs = append(addrs, too.String()) // Add the last IP address in the range
return addrs
}
// probably "*" or other string based "IP"
return []string{srcIP}
}
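A standalone sketch of the expansion behaviour, assuming the same go4.org/netipx helper used above; it mirrors the table-driven tests further down, where a /30 expands to its four member addresses while single IPs and "*" pass through unchanged.

package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
)

func expand(src string) []string {
	if ip, err := netip.ParseAddr(src); err == nil {
		return []string{ip.String()}
	}
	if pfx, err := netip.ParsePrefix(src); err == nil {
		out := []string{}
		ipRange := netipx.RangeOfPrefix(pfx)
		for ip := ipRange.From(); ; ip = ip.Next() {
			out = append(out, ip.String())
			if ip == ipRange.To() {
				break
			}
		}
		return out
	}
	// "*" and any other non-IP alias is returned unchanged.
	return []string{src}
}

func main() {
	fmt.Println(expand("10.0.0.1/30")) // [10.0.0.0 10.0.0.1 10.0.0.2 10.0.0.3]
	fmt.Println(expand("10.0.0.1"))    // [10.0.0.1]
	fmt.Println(expand("*"))           // [*]
}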
func generateACLRules(
machines []Machine,
aclPolicy ACLPolicy,
stripEmaildomain bool,
) ([]tailcfg.FilterRule, error) {
rules := []tailcfg.FilterRule{}
for index, acl := range aclPolicy.ACLs {
@@ -160,7 +241,7 @@ func generateACLRules(machines []Machine, aclPolicy ACLPolicy, stripEmaildomain
srcIPs := []string{}
for innerIndex, src := range acl.Sources {
srcs, err := generateACLPolicySrcIP(machines, aclPolicy, src, stripEmaildomain)
srcs, err := generateACLPolicySrc(machines, aclPolicy, src, stripEmaildomain)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
@@ -311,7 +392,7 @@ func sshCheckAction(duration string) (*tailcfg.SSHAction, error) {
}, nil
}
func generateACLPolicySrcIP(
func generateACLPolicySrc(
machines []Machine,
aclPolicy ACLPolicy,
src string,
@@ -327,15 +408,40 @@ func generateACLPolicyDest(
needsWildcard bool,
stripEmaildomain bool,
) ([]tailcfg.NetPortRange, error) {
tokens := strings.Split(dest, ":")
var tokens []string
log.Trace().Str("destination", dest).Msg("generating policy destination")
// Check if there is a IPv4/6:Port combination, IPv6 has more than
// three ":".
tokens = strings.Split(dest, ":")
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
return nil, errInvalidPortFormat
port := tokens[len(tokens)-1]
maybeIPv6Str := strings.TrimSuffix(dest, ":"+port)
log.Trace().Str("maybeIPv6Str", maybeIPv6Str).Msg("")
if maybeIPv6, err := netip.ParseAddr(maybeIPv6Str); err != nil && !maybeIPv6.Is6() {
log.Trace().Err(err).Msg("trying to parse as IPv6")
return nil, fmt.Errorf(
"failed to parse destination, tokens %v: %w",
tokens,
errInvalidPortFormat,
)
} else {
tokens = []string{maybeIPv6Str, port}
}
}
log.Trace().Strs("tokens", tokens).Msg("generating policy destination")
var alias string
// We can have here stuff like:
// git-server:*
// 192.168.1.0/24:22
// fd7a:115c:a1e0::2:22
// fd7a:115c:a1e0::2/128:22
// tag:montreal-webserver:80,443
// tag:api-server:443
// example-host-1:*
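A hedged, standalone sketch of the splitting idea behind the destinations listed above (not the repository code): cut at the final ":" so that everything after it is the port list and everything before it is the address, prefix or alias, which is what makes bare IPv6 destinations such as fd7a:115c:a1e0::2:22 parseable.

package main

import (
	"fmt"
	"strings"
)

// splitDest separates a destination spec into its host/alias part and its
// port part by cutting at the final ":".
func splitDest(dest string) (host, ports string, ok bool) {
	idx := strings.LastIndex(dest, ":")
	if idx < 0 {
		return "", "", false
	}
	return dest[:idx], dest[idx+1:], true
}

func main() {
	for _, dest := range []string{
		"git-server:*",
		"192.168.1.0/24:22",
		"fd7a:115c:a1e0::2:22",
		"tag:montreal-webserver:80,443",
	} {
		host, ports, _ := splitDest(dest)
		fmt.Printf("%-30s host=%-24s ports=%s\n", dest, host, ports)
	}
}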
@@ -386,12 +492,7 @@ func generateACLPolicyDest(
func parseProtocol(protocol string) ([]int, bool, error) {
switch protocol {
case "":
return []int{
protocolICMP,
protocolIPv6ICMP,
protocolTCP,
protocolUDP,
}, false, nil
return nil, false, nil
case "igmp":
return []int{protocolIGMP}, true, nil
case "ipv4", "ip-in-ip":
@@ -429,12 +530,15 @@ func parseProtocol(protocol string) ([]int, bool, error) {
}
// expandalias has an input of either
// - a namespace
// - a user
// - a group
// - a tag
// - a host
// - an ip
// - a cidr
// and transform these in IPAddresses.
func expandAlias(
machines []Machine,
machines Machines,
aclPolicy ACLPolicy,
alias string,
stripEmailDomain bool,
@@ -449,12 +553,12 @@ func expandAlias(
Msg("Expanding")
if strings.HasPrefix(alias, "group:") {
namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
users, err := expandGroup(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
for _, n := range namespaces {
nodes := filterMachinesByNamespace(machines, n)
for _, n := range users {
nodes := filterMachinesByUser(machines, n)
for _, node := range nodes {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
@@ -490,8 +594,8 @@ func expandAlias(
}
// filter out machines per tag owner
for _, namespace := range owners {
machines := filterMachinesByNamespace(machines, namespace)
for _, user := range owners {
machines := filterMachinesByUser(machines, user)
for _, machine := range machines {
hi := machine.GetHostInfo()
if contains(hi.RequestTags, alias) {
@@ -503,8 +607,8 @@ func expandAlias(
return ips, nil
}
// if alias is a namespace
nodes := filterMachinesByNamespace(machines, alias)
// if alias is a user
nodes := filterMachinesByUser(machines, alias)
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)
for _, n := range nodes {
@@ -516,19 +620,40 @@ func expandAlias(
// if alias is an host
if h, ok := aclPolicy.Hosts[alias]; ok {
return []string{h.String()}, nil
log.Trace().Str("host", h.String()).Msg("expandAlias got hosts entry")
return expandAlias(machines, aclPolicy, h.String(), stripEmailDomain)
}
// if alias is an IP
ip, err := netip.ParseAddr(alias)
if err == nil {
return []string{ip.String()}, nil
if ip, err := netip.ParseAddr(alias); err == nil {
log.Trace().Str("ip", ip.String()).Msg("expandAlias got ip")
ips := []string{ip.String()}
matches := machines.FilterByIP(ip)
for _, machine := range matches {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
return lo.Uniq(ips), nil
}
// if alias is an CIDR
cidr, err := netip.ParsePrefix(alias)
if err == nil {
return []string{cidr.String()}, nil
if cidr, err := netip.ParsePrefix(alias); err == nil {
log.Trace().Str("cidr", cidr.String()).Msg("expandAlias got cidr")
val := []string{cidr.String()}
// This is suboptimal and quite expensive, but if we only add the cidr, we will miss all the relevant IPv6
// addresses for the hosts that belong to tailscale. This doesnt really affect stuff like subnet routers.
for _, machine := range machines {
for _, ip := range machine.IPAddresses {
// log.Trace().
// Msgf("checking if machine ip (%s) is part of cidr (%s): %v, is single ip cidr (%v), addr: %s", ip.String(), cidr.String(), cidr.Contains(ip), cidr.IsSingleIP(), cidr.Addr().String())
if cidr.Contains(ip) {
val = append(val, machine.IPAddresses.ToStringSlice()...)
}
}
}
return lo.Uniq(val), nil
}
log.Warn().Msgf("No IPs found with the alias %v", alias)
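A hedged sketch of the behaviour the IP and CIDR branches above aim for, using simplified stand-in types rather than the real Machine model: an alias that matches one address of a dual-stack machine expands to all of that machine's addresses, deduplicated with lo.Uniq, matching the dual-stack test cases further down.

package main

import (
	"fmt"
	"net/netip"

	"github.com/samber/lo"
)

// machine is a simplified stand-in for the Machine type used above.
type machine struct {
	addrs []netip.Addr
}

func expandIPAlias(machines []machine, alias netip.Addr) []string {
	ips := []string{alias.String()}
	for _, m := range machines {
		for _, addr := range m.addrs {
			if addr == alias {
				for _, other := range m.addrs {
					ips = append(ips, other.String())
				}
			}
		}
	}
	return lo.Uniq(ips)
}

func main() {
	dualStack := machine{addrs: []netip.Addr{
		netip.MustParseAddr("10.0.0.1"),
		netip.MustParseAddr("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"),
	}}
	fmt.Println(expandIPAlias([]machine{dualStack}, netip.MustParseAddr("10.0.0.1")))
	// [10.0.0.1 fd7a:115c:a1e0:ab12:4843:2222:6273:2222]
}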
@@ -537,20 +662,20 @@ func expandAlias(
}
// excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
// that are correctly tagged since they should not be listed as being in the namespace
// we assume in this function that we only have nodes from 1 namespace.
// that are correctly tagged since they should not be listed as being in the user
// we assume in this function that we only have nodes from 1 user.
func excludeCorrectlyTaggedNodes(
aclPolicy ACLPolicy,
nodes []Machine,
namespace string,
user string,
stripEmailDomain bool,
) []Machine {
out := []Machine{}
tags := []string{}
for tag := range aclPolicy.TagOwners {
owners, _ := expandTagOwners(aclPolicy, namespace, stripEmailDomain)
ns := append(owners, namespace)
if contains(ns, namespace) {
owners, _ := expandTagOwners(aclPolicy, user, stripEmailDomain)
ns := append(owners, user)
if contains(ns, user) {
tags = append(tags, tag)
}
}
@@ -590,6 +715,7 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
ports := []tailcfg.PortRange{}
for _, portStr := range strings.Split(portsStr, ",") {
log.Trace().Msgf("parsing portstring: %s", portStr)
rang := strings.Split(portStr, "-")
switch len(rang) {
case 1:
@@ -624,10 +750,10 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
return &ports, nil
}
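The expandPorts body only appears in fragments above, so here is a simplified, self-contained sketch of the same comma- and dash-separated parsing; wildcard handling and the needsWildcard validation are deliberately omitted.

package main

import (
	"fmt"
	"strconv"
	"strings"

	"tailscale.com/tailcfg"
)

// parsePorts turns "80,443" or "5000-5010" into tailcfg.PortRange values.
func parsePorts(portsStr string) ([]tailcfg.PortRange, error) {
	ports := []tailcfg.PortRange{}
	for _, portStr := range strings.Split(portsStr, ",") {
		rang := strings.Split(portStr, "-")
		switch len(rang) {
		case 1:
			port, err := strconv.ParseUint(rang[0], 10, 16)
			if err != nil {
				return nil, err
			}
			ports = append(ports, tailcfg.PortRange{First: uint16(port), Last: uint16(port)})
		case 2:
			first, err := strconv.ParseUint(rang[0], 10, 16)
			if err != nil {
				return nil, err
			}
			last, err := strconv.ParseUint(rang[1], 10, 16)
			if err != nil {
				return nil, err
			}
			ports = append(ports, tailcfg.PortRange{First: uint16(first), Last: uint16(last)})
		default:
			return nil, fmt.Errorf("invalid port format: %q", portStr)
		}
	}
	return ports, nil
}

func main() {
	single, _ := parsePorts("80,443")
	ranged, _ := parsePorts("5000-5010")
	fmt.Println(single) // [{80 80} {443 443}]
	fmt.Println(ranged) // [{5000 5010}]
}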
func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
func filterMachinesByUser(machines []Machine, user string) []Machine {
out := []Machine{}
for _, machine := range machines {
if machine.Namespace.Name == namespace {
if machine.User.Name == user {
out = append(out, machine)
}
}
@@ -635,7 +761,7 @@ func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
return out
}
// expandTagOwners will return a list of namespace. An owner can be either a namespace or a group
// expandTagOwners will return a list of user. An owner can be either a user or a group
// a group cannot be composed of groups.
func expandTagOwners(
aclPolicy ACLPolicy,
@@ -666,7 +792,7 @@ func expandTagOwners(
return owners, nil
}
// expandGroup will return the list of namespace inside the group
// expandGroup will return the list of user inside the group
// after some validation.
func expandGroup(
aclPolicy ACLPolicy,

View File

@@ -77,10 +77,10 @@ func (s *Suite) TestInvalidAction(c *check.C) {
func (s *Suite) TestSshRules(c *check.C) {
envknob.Setenv("HEADSCALE_EXPERIMENTAL_FEATURE_SSH", "1")
namespace, err := app.CreateNamespace("user1")
user, err := app.CreateUser("user1")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("user1", "testmachine")
@@ -98,7 +98,7 @@ func (s *Suite) TestSshRules(c *check.C) {
DiscoKey: "faa",
Hostname: "testmachine",
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.1")},
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
AuthKeyID: uint(pak.ID),
HostInfo: HostInfo(hostInfo),
@@ -187,10 +187,10 @@ func (s *Suite) TestInvalidTagOwners(c *check.C) {
// match properly the IP's of the related hosts. The owner is valid and the tag is also valid.
// the tag is matched in the Sources section.
func (s *Suite) TestValidExpandTagOwnersInSources(c *check.C) {
namespace, err := app.CreateNamespace("user1")
user, err := app.CreateUser("user1")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("user1", "testmachine")
@@ -208,7 +208,7 @@ func (s *Suite) TestValidExpandTagOwnersInSources(c *check.C) {
DiscoKey: "faa",
Hostname: "testmachine",
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.1")},
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
AuthKeyID: uint(pak.ID),
HostInfo: HostInfo(hostInfo),
@@ -237,10 +237,10 @@ func (s *Suite) TestValidExpandTagOwnersInSources(c *check.C) {
// match properly the IP's of the related hosts. The owner is valid and the tag is also valid.
// the tag is matched in the Destinations section.
func (s *Suite) TestValidExpandTagOwnersInDestinations(c *check.C) {
namespace, err := app.CreateNamespace("user1")
user, err := app.CreateUser("user1")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("user1", "testmachine")
@@ -258,7 +258,7 @@ func (s *Suite) TestValidExpandTagOwnersInDestinations(c *check.C) {
DiscoKey: "faa",
Hostname: "testmachine",
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.1")},
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
AuthKeyID: uint(pak.ID),
HostInfo: HostInfo(hostInfo),
@@ -284,13 +284,13 @@ func (s *Suite) TestValidExpandTagOwnersInDestinations(c *check.C) {
}
// need a test with:
// tag on a host that isn't owned by a tag owners. So the namespace
// tag on a host that isn't owned by a tag owners. So the user
// of the host should be valid.
func (s *Suite) TestInvalidTagValidNamespace(c *check.C) {
namespace, err := app.CreateNamespace("user1")
func (s *Suite) TestInvalidTagValidUser(c *check.C) {
user, err := app.CreateUser("user1")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("user1", "testmachine")
@@ -308,7 +308,7 @@ func (s *Suite) TestInvalidTagValidNamespace(c *check.C) {
DiscoKey: "faa",
Hostname: "testmachine",
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.1")},
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
AuthKeyID: uint(pak.ID),
HostInfo: HostInfo(hostInfo),
@@ -333,13 +333,13 @@ func (s *Suite) TestInvalidTagValidNamespace(c *check.C) {
}
// tag on a host is owned by a tag owner, the tag is valid.
// an ACL rule is matching the tag to a namespace. It should not be valid since the
// an ACL rule is matching the tag to a user. It should not be valid since the
// host should be tied to the tag now.
func (s *Suite) TestValidTagInvalidNamespace(c *check.C) {
namespace, err := app.CreateNamespace("user1")
func (s *Suite) TestValidTagInvalidUser(c *check.C) {
user, err := app.CreateUser("user1")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("user1", "webserver")
@@ -357,7 +357,7 @@ func (s *Suite) TestValidTagInvalidNamespace(c *check.C) {
DiscoKey: "faa",
Hostname: "webserver",
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.1")},
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
AuthKeyID: uint(pak.ID),
HostInfo: HostInfo(hostInfo),
@@ -376,7 +376,7 @@ func (s *Suite) TestValidTagInvalidNamespace(c *check.C) {
DiscoKey: "faab",
Hostname: "user",
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.2")},
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
AuthKeyID: uint(pak.ID),
HostInfo: HostInfo(hostInfo2),
@@ -467,14 +467,14 @@ func (s *Suite) TestPortWildcardYAML(c *check.C) {
c.Assert(rules[0].SrcIPs[0], check.Equals, "*")
}
func (s *Suite) TestPortNamespace(c *check.C) {
namespace, err := app.CreateNamespace("testnamespace")
func (s *Suite) TestPortUser(c *check.C) {
user, err := app.CreateUser("testuser")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("testnamespace", "testmachine")
_, err = app.GetMachine("testuser", "testmachine")
c.Assert(err, check.NotNil)
ips, _ := app.getAvailableIPs()
machine := Machine{
@@ -483,7 +483,7 @@ func (s *Suite) TestPortNamespace(c *check.C) {
NodeKey: "bar",
DiscoKey: "faa",
Hostname: "testmachine",
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: ips,
AuthKeyID: uint(pak.ID),
@@ -491,7 +491,7 @@ func (s *Suite) TestPortNamespace(c *check.C) {
app.db.Save(&machine)
err = app.LoadACLPolicy(
"./tests/acls/acl_policy_basic_namespace_as_user.hujson",
"./tests/acls/acl_policy_basic_user_as_user.hujson",
)
c.Assert(err, check.IsNil)
@@ -513,13 +513,13 @@ func (s *Suite) TestPortNamespace(c *check.C) {
}
func (s *Suite) TestPortGroup(c *check.C) {
namespace, err := app.CreateNamespace("testnamespace")
user, err := app.CreateUser("testuser")
c.Assert(err, check.IsNil)
pak, err := app.CreatePreAuthKey(namespace.Name, false, false, nil, nil)
pak, err := app.CreatePreAuthKey(user.Name, false, false, nil, nil)
c.Assert(err, check.IsNil)
_, err = app.GetMachine("testnamespace", "testmachine")
_, err = app.GetMachine("testuser", "testmachine")
c.Assert(err, check.NotNil)
ips, _ := app.getAvailableIPs()
machine := Machine{
@@ -528,7 +528,7 @@ func (s *Suite) TestPortGroup(c *check.C) {
NodeKey: "bar",
DiscoKey: "faa",
Hostname: "testmachine",
NamespaceID: namespace.ID,
UserID: user.ID,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: ips,
AuthKeyID: uint(pak.ID),
@@ -689,7 +689,7 @@ func Test_expandTagOwners(t *testing.T) {
wantErr: false,
},
{
name: "expand with namespace and group",
name: "expand with user and group",
args: args{
aclPolicy: ACLPolicy{
Groups: Groups{"group:foo": []string{"user1", "user2"}},
@@ -843,10 +843,10 @@ func Test_expandPorts(t *testing.T) {
}
}
func Test_listMachinesInNamespace(t *testing.T) {
func Test_listMachinesInUser(t *testing.T) {
type args struct {
machines []Machine
namespace string
machines []Machine
user string
}
tests := []struct {
name string
@@ -854,54 +854,54 @@ func Test_listMachinesInNamespace(t *testing.T) {
want []Machine
}{
{
name: "1 machine in namespace",
name: "1 machine in user",
args: args{
machines: []Machine{
{Namespace: Namespace{Name: "joe"}},
{User: User{Name: "joe"}},
},
namespace: "joe",
user: "joe",
},
want: []Machine{
{Namespace: Namespace{Name: "joe"}},
{User: User{Name: "joe"}},
},
},
{
name: "3 machines, 2 in namespace",
name: "3 machines, 2 in user",
args: args{
machines: []Machine{
{ID: 1, Namespace: Namespace{Name: "joe"}},
{ID: 2, Namespace: Namespace{Name: "marc"}},
{ID: 3, Namespace: Namespace{Name: "marc"}},
{ID: 1, User: User{Name: "joe"}},
{ID: 2, User: User{Name: "marc"}},
{ID: 3, User: User{Name: "marc"}},
},
namespace: "marc",
user: "marc",
},
want: []Machine{
{ID: 2, Namespace: Namespace{Name: "marc"}},
{ID: 3, Namespace: Namespace{Name: "marc"}},
{ID: 2, User: User{Name: "marc"}},
{ID: 3, User: User{Name: "marc"}},
},
},
{
name: "5 machines, 0 in namespace",
name: "5 machines, 0 in user",
args: args{
machines: []Machine{
{ID: 1, Namespace: Namespace{Name: "joe"}},
{ID: 2, Namespace: Namespace{Name: "marc"}},
{ID: 3, Namespace: Namespace{Name: "marc"}},
{ID: 4, Namespace: Namespace{Name: "marc"}},
{ID: 5, Namespace: Namespace{Name: "marc"}},
{ID: 1, User: User{Name: "joe"}},
{ID: 2, User: User{Name: "marc"}},
{ID: 3, User: User{Name: "marc"}},
{ID: 4, User: User{Name: "marc"}},
{ID: 5, User: User{Name: "marc"}},
},
namespace: "mickael",
user: "mickael",
},
want: []Machine{},
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
if got := filterMachinesByNamespace(test.args.machines, test.args.namespace); !reflect.DeepEqual(
if got := filterMachinesByUser(test.args.machines, test.args.user); !reflect.DeepEqual(
got,
test.want,
) {
t.Errorf("listMachinesInNamespace() = %v, want %v", got, test.want)
t.Errorf("listMachinesInUser() = %v, want %v", got, test.want)
}
})
}
@@ -947,25 +947,25 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "mickael"},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{
@@ -985,25 +985,25 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "mickael"},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{
@@ -1025,6 +1025,88 @@ func Test_expandAlias(t *testing.T) {
want: []string{"10.0.0.3"},
wantErr: false,
},
{
name: "simple host by ip passed through",
args: args{
alias: "10.0.0.1",
machines: []Machine{},
aclPolicy: ACLPolicy{},
stripEmailDomain: true,
},
want: []string{"10.0.0.1"},
wantErr: false,
},
{
name: "simple host by ipv4 single ipv4",
args: args{
alias: "10.0.0.1",
machines: []Machine{
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("10.0.0.1"),
},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{},
stripEmailDomain: true,
},
want: []string{"10.0.0.1"},
wantErr: false,
},
{
name: "simple host by ipv4 single dual stack",
args: args{
alias: "10.0.0.1",
machines: []Machine{
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("10.0.0.1"),
netip.MustParseAddr("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"),
},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{},
stripEmailDomain: true,
},
want: []string{"10.0.0.1", "fd7a:115c:a1e0:ab12:4843:2222:6273:2222"},
wantErr: false,
},
{
name: "simple host by ipv6 single dual stack",
args: args{
alias: "fd7a:115c:a1e0:ab12:4843:2222:6273:2222",
machines: []Machine{
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("10.0.0.1"),
netip.MustParseAddr("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"),
},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{},
stripEmailDomain: true,
},
want: []string{"fd7a:115c:a1e0:ab12:4843:2222:6273:2222", "10.0.0.1"},
wantErr: false,
},
{
name: "simple host by hostname alias",
args: args{
alias: "testy",
machines: []Machine{},
aclPolicy: ACLPolicy{
Hosts: Hosts{
"testy": netip.MustParsePrefix("10.0.0.132/32"),
},
},
stripEmailDomain: true,
},
want: []string{"10.0.0.132/32"},
wantErr: false,
},
{
name: "private network",
args: args{
@@ -1040,17 +1122,6 @@ func Test_expandAlias(t *testing.T) {
want: []string{"192.168.1.0/24"},
wantErr: false,
},
{
name: "simple host",
args: args{
alias: "10.0.0.1",
machines: []Machine{},
aclPolicy: ACLPolicy{},
stripEmailDomain: true,
},
want: []string{"10.0.0.1"},
wantErr: false,
},
{
name: "simple CIDR",
args: args{
@@ -1071,7 +1142,7 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1082,7 +1153,7 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1093,13 +1164,13 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
aclPolicy: ACLPolicy{
@@ -1119,25 +1190,25 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "mickael"},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{
@@ -1160,27 +1231,27 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
ForcedTags: []string{"tag:hr-webserver"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
ForcedTags: []string{"tag:hr-webserver"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "mickael"},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{},
@@ -1198,14 +1269,14 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
ForcedTags: []string{"tag:hr-webserver"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1216,13 +1287,13 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "mickael"},
User: User{Name: "mickael"},
},
},
aclPolicy: ACLPolicy{
@@ -1236,7 +1307,7 @@ func Test_expandAlias(t *testing.T) {
wantErr: false,
},
{
name: "list host in namespace without correctly tagged servers",
name: "list host in user without correctly tagged servers",
args: args{
alias: "joe",
machines: []Machine{
@@ -1244,7 +1315,7 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1255,7 +1326,7 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1266,13 +1337,13 @@ func Test_expandAlias(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.3"),
},
Namespace: Namespace{Name: "marc"},
User: User{Name: "marc"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
aclPolicy: ACLPolicy{
@@ -1308,7 +1379,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
type args struct {
aclPolicy ACLPolicy
nodes []Machine
namespace string
user string
stripEmailDomain bool
}
tests := []struct {
@@ -1328,7 +1399,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1339,7 +1410,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1350,16 +1421,16 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
namespace: "joe",
user: "joe",
stripEmailDomain: true,
},
want: []Machine{
{
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.4")},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
},
@@ -1379,7 +1450,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1390,7 +1461,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1401,16 +1472,16 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
namespace: "joe",
user: "joe",
stripEmailDomain: true,
},
want: []Machine{
{
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.4")},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
},
@@ -1425,7 +1496,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "foo",
@@ -1436,23 +1507,23 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
ForcedTags: []string{"tag:accountant-webserver"},
},
{
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
namespace: "joe",
user: "joe",
stripEmailDomain: true,
},
want: []Machine{
{
IPAddresses: MachineAddresses{netip.MustParseAddr("100.64.0.4")},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
},
@@ -1467,7 +1538,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "hr-web1",
@@ -1478,7 +1549,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "hr-web2",
@@ -1489,10 +1560,10 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
namespace: "joe",
user: "joe",
stripEmailDomain: true,
},
want: []Machine{
@@ -1500,7 +1571,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.1"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "hr-web1",
@@ -1511,7 +1582,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.2"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
HostInfo: HostInfo{
OS: "centos",
Hostname: "hr-web2",
@@ -1522,7 +1593,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
IPAddresses: MachineAddresses{
netip.MustParseAddr("100.64.0.4"),
},
Namespace: Namespace{Name: "joe"},
User: User{Name: "joe"},
},
},
},
@@ -1532,7 +1603,7 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
got := excludeCorrectlyTaggedNodes(
test.args.aclPolicy,
test.args.nodes,
test.args.namespace,
test.args.user,
test.args.stripEmailDomain,
)
if !reflect.DeepEqual(got, test.want) {
@@ -1541,3 +1612,138 @@ func Test_excludeCorrectlyTaggedNodes(t *testing.T) {
})
}
}
func Test_expandACLPeerAddr(t *testing.T) {
type args struct {
srcIP string
}
tests := []struct {
name string
args args
want []string
}{
{
name: "asterix",
args: args{
srcIP: "*",
},
want: []string{"*"},
},
{
name: "ip",
args: args{
srcIP: "10.0.0.1",
},
want: []string{"10.0.0.1"},
},
{
name: "ip/32",
args: args{
srcIP: "10.0.0.1/32",
},
want: []string{"10.0.0.1"},
},
{
name: "ip/30",
args: args{
srcIP: "10.0.0.1/30",
},
want: []string{
"10.0.0.0",
"10.0.0.1",
"10.0.0.2",
"10.0.0.3",
},
},
{
name: "ip/28",
args: args{
srcIP: "192.168.0.128/28",
},
want: []string{
"192.168.0.128", "192.168.0.129", "192.168.0.130",
"192.168.0.131", "192.168.0.132", "192.168.0.133",
"192.168.0.134", "192.168.0.135", "192.168.0.136",
"192.168.0.137", "192.168.0.138", "192.168.0.139",
"192.168.0.140", "192.168.0.141", "192.168.0.142",
"192.168.0.143",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := expandACLPeerAddr(tt.args.srcIP); !reflect.DeepEqual(got, tt.want) {
t.Errorf("expandACLPeerAddr() = %v, want %v", got, tt.want)
}
})
}
}
func Test_expandACLPeerAddrV6(t *testing.T) {
type args struct {
srcIP string
}
tests := []struct {
name string
args args
want []string
}{
{
name: "asterix",
args: args{
srcIP: "*",
},
want: []string{"*"},
},
{
name: "ipfull",
args: args{
srcIP: "fd7a:115c:a1e0:ab12:4943:cd96:624c:3166",
},
want: []string{"fd7a:115c:a1e0:ab12:4943:cd96:624c:3166"},
},
{
name: "ipzerocompression",
args: args{
srcIP: "fd7a:115c:a1e0:ab12:4943:cd96:624c::",
},
want: []string{"fd7a:115c:a1e0:ab12:4943:cd96:624c:0"},
},
{
name: "ip/128",
args: args{
srcIP: "fd7a:115c:a1e0:ab12:4943:cd96:624c:3166/128",
},
want: []string{"fd7a:115c:a1e0:ab12:4943:cd96:624c:3166"},
},
{
name: "ip/127",
args: args{
srcIP: "fd7a:115c:a1e0:ab12:4943:cd96:624c:0000/127",
},
want: []string{
"fd7a:115c:a1e0:ab12:4943:cd96:624c:0",
"fd7a:115c:a1e0:ab12:4943:cd96:624c:1",
},
},
{
name: "ip/126",
args: args{
srcIP: "fd7a:115c:a1e0:ab12:4943:cd96:624c:0000/126",
},
want: []string{
"fd7a:115c:a1e0:ab12:4943:cd96:624c:0",
"fd7a:115c:a1e0:ab12:4943:cd96:624c:1",
"fd7a:115c:a1e0:ab12:4943:cd96:624c:2",
"fd7a:115c:a1e0:ab12:4943:cd96:624c:3",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := expandACLPeerAddr(tt.args.srcIP); !reflect.DeepEqual(got, tt.want) {
t.Errorf("expandACLPeerAddr() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -34,7 +34,7 @@ type Groups map[string][]string
// Hosts are alias for IP addresses or subnets.
type Hosts map[string]netip.Prefix
// TagOwners specify what users (namespaces?) are allow to use certain tags.
// TagOwners specify what users (or groups) are allowed to use certain tags.
type TagOwners map[string][]string
// ACLTest is not implemented, but should be use to check if a certain rule is allowed.
@@ -44,7 +44,7 @@ type ACLTest struct {
Deny []string `json:"deny,omitempty" yaml:"deny,omitempty"`
}
// AutoApprovers specify which users (namespaces?), groups or tags have their advertised routes
// AutoApprovers specify which users, groups or tags have their advertised routes
// or exit node status automatically enabled.
type AutoApprovers struct {
Routes map[string][]string `json:"routes" yaml:"routes"`
@@ -119,7 +119,7 @@ func (policy ACLPolicy) IsZero() bool {
return false
}
// Returns the list of autoApproving namespaces, groups or tags for a given IPPrefix.
// Returns the list of autoApproving users, groups or tags for a given IPPrefix.
func (autoApprovers *AutoApprovers) GetRouteApprovers(
prefix netip.Prefix,
) ([]string, error) {

2
api.go

@@ -78,7 +78,7 @@ var registerWebAPITemplate = template.Must(
<p>
Run the command below in the headscale server to add this machine to your network:
</p>
<pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
<pre><code>headscale nodes register --user USERNAME --key {{.Key}}</code></pre>
</body>
</html>
`))

View File

@@ -1,6 +1,8 @@
package headscale
import (
"time"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
)
@@ -13,7 +15,7 @@ func (h *Headscale) generateMapResponse(
Str("func", "generateMapResponse").
Str("machine", mapRequest.Hostinfo.Hostname).
Msg("Creating Map response")
node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig)
node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
if err != nil {
log.Error().
Caller().
@@ -37,7 +39,7 @@ func (h *Headscale) generateMapResponse(
profiles := h.getMapResponseUserProfiles(*machine, peers)
nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig)
nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
if err != nil {
log.Error().
Caller().
@@ -55,16 +57,46 @@ func (h *Headscale) generateMapResponse(
peers,
)
now := time.Now()
resp := tailcfg.MapResponse{
KeepAlive: false,
Node: node,
Peers: nodePeers,
DNSConfig: dnsConfig,
Domain: h.cfg.BaseDomain,
KeepAlive: false,
Node: node,
// TODO: Only send if updated
DERPMap: h.DERPMap,
// TODO: Only send if updated
Peers: nodePeers,
// TODO(kradalby): Implement:
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
// PeersChanged
// PeersRemoved
// PeersChangedPatch
// PeerSeenChange
// OnlineChange
// TODO: Only send if updated
DNSConfig: dnsConfig,
// TODO: Only send if updated
Domain: h.cfg.BaseDomain,
// Do not instruct clients to collect services, we do not
// support or do anything with them
CollectServices: "false",
// TODO: Only send if updated
PacketFilter: h.aclRules,
SSHPolicy: h.sshPolicy,
DERPMap: h.DERPMap,
UserProfiles: profiles,
// TODO: Only send if updated
SSHPolicy: h.sshPolicy,
ControlTime: &now,
Debug: &tailcfg.Debug{
DisableLogTail: !h.cfg.LogTail.Enabled,
RandomizeClientPort: h.cfg.RandomizeClientPort,

View File

@@ -29,7 +29,7 @@ type APIKey struct {
LastSeen *time.Time
}
// CreateAPIKey creates a new ApiKey in a namespace, and returns it.
// CreateAPIKey creates a new ApiKey in a user, and returns it.
func (h *Headscale) CreateAPIKey(
expiration *time.Time,
) (string, *APIKey, error) {
@@ -64,7 +64,7 @@ func (h *Headscale) CreateAPIKey(
return keyStr, &key, nil
}
// ListAPIKeys returns the list of ApiKeys for a namespace.
// ListAPIKeys returns the list of ApiKeys for a user.
func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
keys := []APIKey{}
if err := h.db.Find(&keys).Error; err != nil {

138
app.go

@@ -81,14 +81,14 @@ type Headscale struct {
privateKey *key.MachinePrivate
noisePrivateKey *key.MachinePrivate
noiseMux *mux.Router
DERPMap *tailcfg.DERPMap
DERPServer *DERPServer
aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
sshPolicy *tailcfg.SSHPolicy
aclPolicy *ACLPolicy
aclRules []tailcfg.FilterRule
aclPeerCacheMapRW sync.RWMutex
aclPeerCacheMap map[string]map[string]struct{}
sshPolicy *tailcfg.SSHPolicy
lastStateChange *xsync.MapOf[string, time.Time]
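A hedged sketch, not taken from the repository, of how a read path might consult the RWMutex-guarded peer cache added to the struct above: readers take the read lock and then perform the two-level source/destination lookup.

package main

import (
	"fmt"
	"sync"
)

type server struct {
	mu    sync.RWMutex
	cache map[string]map[string]struct{}
}

// canReach reports whether src is allowed to reach dst according to the
// precomputed cache; the write side would hold mu.Lock() while replacing it.
func (s *server) canReach(src, dst string) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	dsts, ok := s.cache[src]
	if !ok {
		return false
	}
	_, ok = dsts[dst]
	return ok
}

func main() {
	s := &server{cache: map[string]map[string]struct{}{
		"100.64.0.1": {"100.64.0.2": {}},
	}}
	fmt.Println(s.canReach("100.64.0.1", "100.64.0.2")) // true
	fmt.Println(s.canReach("100.64.0.2", "100.64.0.1")) // false
}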
@@ -164,6 +164,7 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
aclRules: tailcfg.FilterAllowAll, // default allowall
registrationCache: registrationCache,
pollNetMapStreamWG: sync.WaitGroup{},
lastStateChange: xsync.NewMapOf[time.Time](),
}
err = app.initDB()
@@ -219,29 +220,47 @@ func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
}
}
// expireExpiredMachines expires machines that have an explicit expiry set
// after that expiry time has passed.
func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
h.expireExpiredMachinesWorker()
}
}
func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
err := h.handlePrimarySubnetFailover()
if err != nil {
log.Error().Err(err).Msg("failed to handle primary subnet failover")
}
}
}
func (h *Headscale) expireEphemeralNodesWorker() {
namespaces, err := h.ListNamespaces()
users, err := h.ListUsers()
if err != nil {
log.Error().Err(err).Msg("Error listing namespaces")
log.Error().Err(err).Msg("Error listing users")
return
}
for _, namespace := range namespaces {
machines, err := h.ListMachinesInNamespace(namespace.Name)
for _, user := range users {
machines, err := h.ListMachinesByUser(user.Name)
if err != nil {
log.Error().
Err(err).
Str("namespace", namespace.Name).
Msg("Error listing machines in namespace")
Str("user", user.Name).
Msg("Error listing machines in user")
return
}
expiredFound := false
for _, machine := range machines {
if machine.AuthKey != nil && machine.LastSeen != nil &&
machine.AuthKey.Ephemeral &&
if machine.isEphemeral() && machine.LastSeen != nil &&
time.Now().
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
expiredFound = true
@@ -265,6 +284,53 @@ func (h *Headscale) expireEphemeralNodesWorker() {
}
}
func (h *Headscale) expireExpiredMachinesWorker() {
users, err := h.ListUsers()
if err != nil {
log.Error().Err(err).Msg("Error listing users")
return
}
for _, user := range users {
machines, err := h.ListMachinesByUser(user.Name)
if err != nil {
log.Error().
Err(err).
Str("user", user.Name).
Msg("Error listing machines in user")
return
}
expiredFound := false
for index, machine := range machines {
if machine.isExpired() &&
machine.Expiry.After(h.getLastStateChange(user)) {
expiredFound = true
err := h.ExpireMachine(&machines[index])
if err != nil {
log.Error().
Err(err).
Str("machine", machine.Hostname).
Str("name", machine.GivenName).
Msg("🤮 Cannot expire machine")
} else {
log.Info().
Str("machine", machine.Hostname).
Str("name", machine.GivenName).
Msg("Machine successfully expired")
}
}
}
if expiredFound {
h.setLastStateChangeToNow()
}
}
}
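A small sketch of the expiry predicate used by the worker above, under the assumption that isExpired() means the machine's expiry time has already passed: a machine is only picked up while its expiry is newer than the last recorded state change, which appears to prevent re-announcing machines that were already expired in an earlier pass.

package main

import (
	"fmt"
	"time"
)

// needsExpiry mirrors the check above under the stated assumption.
func needsExpiry(expiry, lastStateChange, now time.Time) bool {
	return expiry.Before(now) && expiry.After(lastStateChange)
}

func main() {
	now := time.Now()
	lastChange := now.Add(-2 * time.Hour)
	fmt.Println(needsExpiry(now.Add(-1*time.Hour), lastChange, now)) // true: expired since the last change
	fmt.Println(needsExpiry(now.Add(-3*time.Hour), lastChange, now)) // false: already handled earlier
	fmt.Println(needsExpiry(now.Add(1*time.Hour), lastChange, now))  // false: not expired yet
}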
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
req interface{},
info *grpc.UnaryServerInfo,
@@ -457,17 +523,7 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
apiRouter.Use(h.httpAuthenticationMiddleware)
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
router.PathPrefix("/").HandlerFunc(stdoutHandler)
return router
}
func (h *Headscale) createNoiseMux() *mux.Router {
router := mux.NewRouter()
router.HandleFunc("/machine/register", h.NoiseRegistrationHandler).
Methods(http.MethodPost)
router.HandleFunc("/machine/map", h.NoisePollNetMapHandler)
router.PathPrefix("/").HandlerFunc(notFoundHandler)
return router
}
@@ -496,6 +552,9 @@ func (h *Headscale) Serve() error {
}
go h.expireEphemeralNodes(updateInterval)
go h.expireExpiredMachines(updateInterval)
go h.failoverSubnetRoutes(updateInterval)
if zl.GlobalLevel() == zl.TraceLevel {
zerolog.RespLog = true
@@ -629,12 +688,6 @@ func (h *Headscale) Serve() error {
// over our main Addr. It also serves the legacy Tailcale API
router := h.createRouter(grpcGatewayMux)
// This router is served only over the Noise connection, and exposes only the new API.
//
// The HTTP2 server that exposes this router is created for
// a single hijacked connection from /ts2021, using netutil.NewOneConnListener
h.noiseMux = h.createNoiseMux()
httpServer := &http.Server{
Addr: h.cfg.Addr,
Handler: router,
@@ -857,31 +910,31 @@ func (h *Headscale) setLastStateChangeToNow() {
now := time.Now().UTC()
namespaces, err := h.ListNamespaces()
users, err := h.ListUsers()
if err != nil {
log.Error().
Caller().
Err(err).
Msg("failed to fetch all namespaces, failing to update last changed state.")
Msg("failed to fetch all users, failing to update last changed state.")
}
for _, namespace := range namespaces {
lastStateUpdate.WithLabelValues(namespace.Name, "headscale").Set(float64(now.Unix()))
for _, user := range users {
lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix()))
if h.lastStateChange == nil {
h.lastStateChange = xsync.NewMapOf[time.Time]()
}
h.lastStateChange.Store(namespace.Name, now)
h.lastStateChange.Store(user.Name, now)
}
}
func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
func (h *Headscale) getLastStateChange(users ...User) time.Time {
times := []time.Time{}
// getLastStateChange takes a list of namespaces as a "filter", if no namespaces
// are passed, then use the entire list of namespaces and look for the last update
if len(namespaces) > 0 {
for _, namespace := range namespaces {
if lastChange, ok := h.lastStateChange.Load(namespace.Name); ok {
// getLastStateChange takes a list of users as a "filter", if no users
// are passed, then use the entire list of users and look for the last update
if len(users) > 0 {
for _, user := range users {
if lastChange, ok := h.lastStateChange.Load(user.Name); ok {
times = append(times, lastChange)
}
}
@@ -906,7 +959,7 @@ func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
}
}
func stdoutHandler(
func notFoundHandler(
writer http.ResponseWriter,
req *http.Request,
) {
@@ -918,6 +971,7 @@ func stdoutHandler(
Interface("url", req.URL).
Bytes("body", body).
Msg("Request did not match")
writer.WriteHeader(http.StatusNotFound)
}
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {

View File

@@ -0,0 +1,163 @@
package main
//go:generate go run ./main.go
import (
"bytes"
"fmt"
"log"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"text/template"
)
var (
githubWorkflowPath = "../../.github/workflows/"
jobFileNameTemplate = `test-integration-v2-%s.yaml`
jobTemplate = template.Must(
template.New("jobTemplate").
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - {{.Name}}
on: [pull_request]
concurrency:
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: cachix/install-nix-action@v18
if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
- name: Run general integration tests
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go test ./... \
-tags ts2019 \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^{{.Name}}$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
`),
)
)
const workflowFilePerm = 0o600
func removeTests() {
glob := fmt.Sprintf(jobFileNameTemplate, "*")
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
if err != nil {
log.Fatalf("failed to find test files")
}
for _, file := range files {
err := os.Remove(file)
if err != nil {
log.Printf("failed to remove: %s", err)
}
}
}
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
ripgrep := exec.Command(
rgBin,
args...,
)
result, err := ripgrep.CombinedOutput()
if err != nil {
log.Printf("out: %s", result)
log.Fatalf("failed to run ripgrep: %s", err)
}
tests := strings.Split(string(result), "\n")
tests = tests[:len(tests)-1]
return tests
}
func main() {
type testConfig struct {
Name string
}
tests := findTests()
removeTests()
for _, test := range tests {
log.Printf("generating workflow for %s", test)
var content bytes.Buffer
if err := jobTemplate.Execute(&content, testConfig{
Name: test,
}); err != nil {
log.Fatalf("failed to render template: %s", err)
}
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
if err != nil {
log.Fatalf("failed to write github job: %s", err)
}
}
}
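One detail worth calling out in the template above: GitHub Actions expression syntax (${{ ... }}) collides with Go's template delimiters, so the generator emits it as a quoted string literal via {{ "..." }}, which text/template renders verbatim. A minimal, self-contained sketch of the same trick (the names here are illustrative only):

package main

import (
	"os"
	"text/template"
)

func main() {
	// {{ "..." }} renders the quoted string as-is, so the GitHub Actions
	// expression survives Go's template processing untouched.
	tmpl := template.Must(template.New("escape").Parse(
		`concurrency: {{ "${{ github.workflow }}" }}` + "\n"))
	_ = tmpl.Execute(os.Stdout, nil) // prints: concurrency: ${{ github.workflow }}
}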

View File

@@ -0,0 +1,22 @@
package cli
import (
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(configTestCmd)
}
var configTestCmd = &cobra.Command{
Use: "configtest",
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
Run: func(cmd *cobra.Command, args []string) {
_, err := getHeadscaleApp()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
},
}
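In practice this gives a cheap preflight check: headscale configtest loads the configuration through the same getHeadscaleApp() path the server uses, and since log.Fatal() exits with a non-zero status, a broken configuration makes the command fail, which is convenient before restarting the service or as a CI step.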

View File

@@ -27,8 +27,14 @@ func init() {
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err = createNodeCmd.MarkFlagRequired("namespace")
createNodeCmd.Flags().StringP("user", "u", "", "User")
createNodeCmd.Flags().StringP("namespace", "n", "", "User")
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
createNodeNamespaceFlag.Hidden = true
err = createNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
}
@@ -55,9 +61,9 @@ var createNodeCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
@@ -110,10 +116,10 @@ var createNodeCmd = &cobra.Command{
}
request := &v1.DebugCreateMachineRequest{
Key: machineKey,
Name: name,
Namespace: namespace,
Routes: routes,
Key: machineKey,
Name: name,
User: user,
Routes: routes,
}
response, err := client.DebugCreateMachine(ctx, request)

View File

@@ -16,10 +16,11 @@ const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
accessTTL = 10 * time.Minute
refreshTTL = 60 * time.Minute
)
var accessTTL = 2 * time.Minute
func init() {
rootCmd.AddCommand(mockOidcCmd)
}
@@ -54,6 +55,16 @@ func mockOIDC() error {
if portStr == "" {
return errMockOidcPortNotDefined
}
accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
if accessTTLOverride != "" {
newTTL, err := time.ParseDuration(accessTTLOverride)
if err != nil {
return err
}
accessTTL = newTTL
}
log.Info().Msgf("Access token TTL: %s", accessTTL)
port, err := strconv.Atoi(portStr)
if err != nil {

View File

@@ -19,12 +19,24 @@ import (
func init() {
rootCmd.AddCommand(nodeCmd)
listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
listNodesNamespaceFlag.Hidden = true
nodeCmd.AddCommand(listNodesCmd)
registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err := registerNodeCmd.MarkFlagRequired("namespace")
registerNodeCmd.Flags().StringP("user", "u", "", "User")
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
registerNodeNamespaceFlag.Hidden = true
err := registerNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatalf(err.Error())
}
@@ -63,9 +75,14 @@ func init() {
log.Fatalf(err.Error())
}
moveNodeCmd.Flags().StringP("namespace", "n", "", "New namespace")
moveNodeCmd.Flags().StringP("user", "u", "", "New user")
err = moveNodeCmd.MarkFlagRequired("namespace")
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
moveNodeNamespaceFlag.Hidden = true
err = moveNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatalf(err.Error())
}
@@ -93,9 +110,9 @@ var registerNodeCmd = &cobra.Command{
Short: "Registers a machine to your network",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
@@ -116,8 +133,8 @@ var registerNodeCmd = &cobra.Command{
}
request := &v1.RegisterMachineRequest{
Key: machineKey,
Namespace: namespace,
Key: machineKey,
User: user,
}
response, err := client.RegisterMachine(ctx, request)
@@ -146,9 +163,9 @@ var listNodesCmd = &cobra.Command{
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
@@ -164,7 +181,7 @@ var listNodesCmd = &cobra.Command{
defer conn.Close()
request := &v1.ListMachinesRequest{
Namespace: namespace,
User: user,
}
response, err := client.ListMachines(ctx, request)
@@ -184,7 +201,7 @@ var listNodesCmd = &cobra.Command{
return
}
tableData, err := nodesToPtables(namespace, showTags, response.Machines)
tableData, err := nodesToPtables(user, showTags, response.Machines)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -388,7 +405,7 @@ var deleteNodeCmd = &cobra.Command{
var moveNodeCmd = &cobra.Command{
Use: "move",
Short: "Move node to another namespace",
Short: "Move node to another user",
Aliases: []string{"mv"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -404,11 +421,11 @@ var moveNodeCmd = &cobra.Command{
return
}
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting namespace: %s", err),
fmt.Sprintf("Error getting user: %s", err),
output,
)
@@ -439,7 +456,7 @@ var moveNodeCmd = &cobra.Command{
moveRequest := &v1.MoveMachineRequest{
MachineId: identifier,
Namespace: namespace,
User: user,
}
moveResponse, err := client.MoveMachine(ctx, moveRequest)
@@ -456,12 +473,12 @@ var moveNodeCmd = &cobra.Command{
return
}
SuccessOutput(moveResponse.Machine, "Node moved to another namespace", output)
SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
},
}
func nodesToPtables(
currentNamespace string,
currentUser string,
showTags bool,
machines []*v1.Machine,
) (pterm.TableData, error) {
@@ -469,11 +486,13 @@ func nodesToPtables(
"ID",
"Hostname",
"Name",
"MachineKey",
"NodeKey",
"Namespace",
"User",
"IP addresses",
"Ephemeral",
"Last seen",
"Expiration",
"Online",
"Expired",
}
@@ -500,12 +519,24 @@ func nodesToPtables(
}
var expiry time.Time
var expiryTime string
if machine.Expiry != nil {
expiry = machine.Expiry.AsTime()
expiryTime = expiry.Format("2006-01-02 15:04:05")
} else {
expiryTime = "N/A"
}
var machineKey key.MachinePublic
err := machineKey.UnmarshalText(
[]byte(headscale.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
)
if err != nil {
machineKey = key.MachinePublic{}
}
var nodeKey key.NodePublic
err := nodeKey.UnmarshalText(
err = nodeKey.UnmarshalText(
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
)
if err != nil {
@@ -513,9 +544,7 @@ func nodesToPtables(
}
var online string
if lastSeen.After(
time.Now().Add(-5 * time.Minute),
) { // TODO: Find a better way to reliably show if online
if machine.Online {
online = pterm.LightGreen("online")
} else {
online = pterm.LightRed("offline")
@@ -548,12 +577,12 @@ func nodesToPtables(
}
validTags = strings.TrimLeft(validTags, ",")
var namespace string
if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
namespace = pterm.LightMagenta(machine.Namespace.Name)
var user string
if currentUser == "" || (currentUser == machine.User.Name) {
user = pterm.LightMagenta(machine.User.Name)
} else {
// Shared into this namespace
namespace = pterm.LightYellow(machine.Namespace.Name)
// Shared into this user
user = pterm.LightYellow(machine.User.Name)
}
var IPV4Address string
@@ -570,11 +599,13 @@ func nodesToPtables(
strconv.FormatUint(machine.Id, headscale.Base10),
machine.Name,
machine.GetGivenName(),
machineKey.ShortString(),
nodeKey.ShortString(),
namespace,
user,
strings.Join([]string{IPV4Address, IPV6Address}, ", "),
strconv.FormatBool(ephemeral),
lastSeenTime,
expiryTime,
online,
expired,
}
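The namespace-to-user rename in these CLI commands follows one pattern throughout: register the new --user flag, keep --namespace registered so existing scripts do not break, and mark the old flag hidden and deprecated so cobra warns on use and drops it from help output. A minimal standalone sketch of that pattern (the command and flag wiring below are illustrative, not headscale code):

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "example",
		Run: func(cmd *cobra.Command, args []string) {
			user, _ := cmd.Flags().GetString("user")
			fmt.Println("user:", user)
		},
	}

	cmd.Flags().StringP("user", "u", "", "User")

	// Keep the old flag as a hidden, deprecated alias.
	cmd.Flags().StringP("namespace", "n", "", "User")
	nsFlag := cmd.Flags().Lookup("namespace")
	nsFlag.Deprecated = "use --user"
	nsFlag.Hidden = true

	_ = cmd.Execute()
}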

View File

@@ -20,8 +20,14 @@ const (
func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
pakNamespaceFlag.Hidden = true
err := preauthkeysCmd.MarkPersistentFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
}
@@ -46,14 +52,14 @@ var preauthkeysCmd = &cobra.Command{
var listPreAuthKeys = &cobra.Command{
Use: "list",
Short: "List the preauthkeys for this namespace",
Short: "List the preauthkeys for this user",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
@@ -63,7 +69,7 @@ var listPreAuthKeys = &cobra.Command{
defer conn.Close()
request := &v1.ListPreAuthKeysRequest{
Namespace: namespace,
User: user,
}
response, err := client.ListPreAuthKeys(ctx, request)
@@ -143,14 +149,14 @@ var listPreAuthKeys = &cobra.Command{
var createPreAuthKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new preauthkey in the specified namespace",
Short: "Creates a new preauthkey in the specified user",
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
@@ -162,11 +168,11 @@ var createPreAuthKeyCmd = &cobra.Command{
log.Trace().
Bool("reusable", reusable).
Bool("ephemeral", ephemeral).
Str("namespace", namespace).
Str("user", user).
Msg("Preparing to create preauthkey")
request := &v1.CreatePreAuthKeyRequest{
Namespace: namespace,
User: user,
Reusable: reusable,
Ephemeral: ephemeral,
AclTags: tags,
@@ -225,9 +231,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
@@ -237,8 +243,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
defer conn.Close()
request := &v1.ExpirePreAuthKeyRequest{
Namespace: namespace,
Key: args[0],
User: user,
Key: args[0],
}
response, err := client.ExpirePreAuthKey(ctx, request)

View File

@@ -12,10 +12,15 @@ import (
"github.com/tcnksm/go-latest"
)
const (
deprecateNamespaceMessage = "use --user"
)
var cfgFile string = ""
func init() {
if len(os.Args) > 1 && (os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
if len(os.Args) > 1 &&
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
return
}

View File

@@ -3,37 +3,45 @@ package cli
import (
"fmt"
"log"
"net/netip"
"strconv"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
const (
Base10 = 10
)
func init() {
rootCmd.AddCommand(routesCmd)
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err := listRoutesCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(listRoutesCmd)
enableRouteCmd.Flags().
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
enableRouteCmd.Flags().BoolP("all", "a", false, "All routes from host")
err = enableRouteCmd.MarkFlagRequired("identifier")
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err := enableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(enableRouteCmd)
nodeCmd.AddCommand(routesCmd)
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = disableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(disableRouteCmd)
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = deleteRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(deleteRouteCmd)
}
var routesCmd = &cobra.Command{
@@ -44,7 +52,7 @@ var routesCmd = &cobra.Command{
var listRoutesCmd = &cobra.Command{
Use: "list",
Short: "List routes advertised and enabled by a given node",
Short: "List all routes",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -64,28 +72,51 @@ var listRoutesCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.GetMachineRouteRequest{
MachineId: machineID,
var routes []*v1.Route
if machineID == 0 {
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
routes = response.Routes
} else {
response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
MachineId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
routes = response.Routes
}
response, err := client.GetMachineRoute(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.Routes, "", output)
return
}
tableData := routesToPtables(response.Routes)
tableData := routesToPtables(routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
@@ -107,16 +138,12 @@ var listRoutesCmd = &cobra.Command{
var enableRouteCmd = &cobra.Command{
Use: "enable",
Short: "Set the enabled routes for a given node",
Long: `This command will take a list of routes that will _replace_
the current set of routes on a given node.
If you would like to disable a route, simply run the command again, but
omit the route you do not want to enable.
`,
Short: "Set a route as enabled",
Long: `This command will enable a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
machineID, err := cmd.Flags().GetUint64("identifier")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
@@ -131,52 +158,13 @@ omit the route you do not want to enable.
defer cancel()
defer conn.Close()
var routes []string
isAll, _ := cmd.Flags().GetBool("all")
if isAll {
response, err := client.GetMachineRoute(ctx, &v1.GetMachineRouteRequest{
MachineId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot get machine routes: %s\n",
status.Convert(err).Message(),
),
output,
)
return
}
routes = response.GetRoutes().GetAdvertisedRoutes()
} else {
routes, err = cmd.Flags().GetStringSlice("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
return
}
}
request := &v1.EnableMachineRoutesRequest{
MachineId: machineID,
Routes: routes,
}
response, err := client.EnableMachineRoutes(ctx, request)
response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
@@ -184,50 +172,127 @@ omit the route you do not want to enable.
}
if output != "" {
SuccessOutput(response.Routes, "", output)
SuccessOutput(response, "", output)
return
}
},
}
tableData := routesToPtables(response.Routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
var disableRouteCmd = &cobra.Command{
Use: "disable",
Short: "Set as disabled a given route",
Long: `This command will make as disabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
var deleteRouteCmd = &cobra.Command{
Use: "delete",
Short: "Delete a given route",
Long: `This command will delete a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
// routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes *v1.Routes) pterm.TableData {
tableData := pterm.TableData{{"Route", "Enabled"}}
func routesToPtables(routes []*v1.Route) pterm.TableData {
tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}
for _, route := range routes.GetAdvertisedRoutes() {
enabled := isStringInSlice(routes.EnabledRoutes, route)
for _, route := range routes {
var isPrimaryStr string
prefix, err := netip.ParsePrefix(route.Prefix)
if err != nil {
log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
continue
}
if prefix == headscale.ExitRouteV4 || prefix == headscale.ExitRouteV6 {
isPrimaryStr = "-"
} else {
isPrimaryStr = strconv.FormatBool(route.IsPrimary)
}
tableData = append(tableData,
[]string{
strconv.FormatUint(route.Id, Base10),
route.Machine.GivenName,
route.Prefix,
strconv.FormatBool(route.Advertised),
strconv.FormatBool(route.Enabled),
isPrimaryStr,
})
}
return tableData
}
func isStringInSlice(strs []string, s string) bool {
for _, s2 := range strs {
if s == s2 {
return true
}
}
return false
}
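Note that the new table prints "-" in the Primary column for exit routes, presumably because primary/failover status only makes sense for subnet routes. A small self-contained sketch of that check, assuming (as their names suggest) that headscale.ExitRouteV4 and headscale.ExitRouteV6 are the 0.0.0.0/0 and ::/0 prefixes:

package main

import (
	"fmt"
	"net/netip"
)

// Local stand-ins for headscale.ExitRouteV4 / headscale.ExitRouteV6 (assumed values).
var (
	exitRouteV4 = netip.MustParsePrefix("0.0.0.0/0")
	exitRouteV6 = netip.MustParsePrefix("::/0")
)

func isExitRoute(prefix string) bool {
	p, err := netip.ParsePrefix(prefix)
	if err != nil {
		return false
	}
	// netip.Prefix is comparable, so == works directly.
	return p == exitRouteV4 || p == exitRouteV6
}

func main() {
	fmt.Println(isExitRoute("0.0.0.0/0"))   // true
	fmt.Println(isExitRoute("10.0.0.0/24")) // false
}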

View File

@@ -13,26 +13,26 @@ import (
)
func init() {
rootCmd.AddCommand(namespaceCmd)
namespaceCmd.AddCommand(createNamespaceCmd)
namespaceCmd.AddCommand(listNamespacesCmd)
namespaceCmd.AddCommand(destroyNamespaceCmd)
namespaceCmd.AddCommand(renameNamespaceCmd)
rootCmd.AddCommand(userCmd)
userCmd.AddCommand(createUserCmd)
userCmd.AddCommand(listUsersCmd)
userCmd.AddCommand(destroyUserCmd)
userCmd.AddCommand(renameUserCmd)
}
const (
errMissingParameter = headscale.Error("missing parameters")
)
var namespaceCmd = &cobra.Command{
Use: "namespaces",
Short: "Manage the namespaces of Headscale",
Aliases: []string{"namespace", "ns", "user", "users"},
var userCmd = &cobra.Command{
Use: "users",
Short: "Manage the users of Headscale",
Aliases: []string{"user", "namespace", "namespaces", "ns"},
}
var createNamespaceCmd = &cobra.Command{
var createUserCmd = &cobra.Command{
Use: "create NAME",
Short: "Creates a new namespace",
Short: "Creates a new user",
Aliases: []string{"c", "new"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
@@ -44,7 +44,7 @@ var createNamespaceCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespaceName := args[0]
userName := args[0]
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
@@ -52,15 +52,15 @@ var createNamespaceCmd = &cobra.Command{
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
request := &v1.CreateNamespaceRequest{Name: namespaceName}
request := &v1.CreateUserRequest{Name: userName}
log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
response, err := client.CreateNamespace(ctx, request)
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
response, err := client.CreateUser(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot create namespace: %s",
"Cannot create user: %s",
status.Convert(err).Message(),
),
output,
@@ -69,13 +69,13 @@ var createNamespaceCmd = &cobra.Command{
return
}
SuccessOutput(response.Namespace, "Namespace created", output)
SuccessOutput(response.User, "User created", output)
},
}
var destroyNamespaceCmd = &cobra.Command{
var destroyUserCmd = &cobra.Command{
Use: "destroy NAME",
Short: "Destroys a namespace",
Short: "Destroys a user",
Aliases: []string{"delete"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
@@ -87,17 +87,17 @@ var destroyNamespaceCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespaceName := args[0]
userName := args[0]
request := &v1.GetNamespaceRequest{
Name: namespaceName,
request := &v1.GetUserRequest{
Name: userName,
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
_, err := client.GetNamespace(ctx, request)
_, err := client.GetUser(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -113,8 +113,8 @@ var destroyNamespaceCmd = &cobra.Command{
if !force {
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the namespace '%s' and any associated preauthkeys?",
namespaceName,
"Do you want to remove the user '%s' and any associated preauthkeys?",
userName,
),
}
err := survey.AskOne(prompt, &confirm)
@@ -124,14 +124,14 @@ var destroyNamespaceCmd = &cobra.Command{
}
if confirm || force {
request := &v1.DeleteNamespaceRequest{Name: namespaceName}
request := &v1.DeleteUserRequest{Name: userName}
response, err := client.DeleteNamespace(ctx, request)
response, err := client.DeleteUser(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot destroy namespace: %s",
"Cannot destroy user: %s",
status.Convert(err).Message(),
),
output,
@@ -139,16 +139,16 @@ var destroyNamespaceCmd = &cobra.Command{
return
}
SuccessOutput(response, "Namespace destroyed", output)
SuccessOutput(response, "User destroyed", output)
} else {
SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
}
},
}
var listNamespacesCmd = &cobra.Command{
var listUsersCmd = &cobra.Command{
Use: "list",
Short: "List all the namespaces",
Short: "List all the users",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -157,13 +157,13 @@ var listNamespacesCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.ListNamespacesRequest{}
request := &v1.ListUsersRequest{}
response, err := client.ListNamespaces(ctx, request)
response, err := client.ListUsers(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
output,
)
@@ -171,19 +171,19 @@ var listNamespacesCmd = &cobra.Command{
}
if output != "" {
SuccessOutput(response.Namespaces, "", output)
SuccessOutput(response.Users, "", output)
return
}
tableData := pterm.TableData{{"ID", "Name", "Created"}}
for _, namespace := range response.GetNamespaces() {
for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
namespace.GetId(),
namespace.GetName(),
namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
user.GetId(),
user.GetName(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
}
@@ -200,9 +200,9 @@ var listNamespacesCmd = &cobra.Command{
},
}
var renameNamespaceCmd = &cobra.Command{
var renameUserCmd = &cobra.Command{
Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a namespace",
Short: "Renames a user",
Aliases: []string{"mv"},
Args: func(cmd *cobra.Command, args []string) error {
expectedArguments := 2
@@ -219,17 +219,17 @@ var renameNamespaceCmd = &cobra.Command{
defer cancel()
defer conn.Close()
request := &v1.RenameNamespaceRequest{
request := &v1.RenameUserRequest{
OldName: args[0],
NewName: args[1],
}
response, err := client.RenameNamespace(ctx, request)
response, err := client.RenameUser(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot rename namespace: %s",
"Cannot rename user: %s",
status.Convert(err).Message(),
),
output,
@@ -238,6 +238,6 @@ var renameNamespaceCmd = &cobra.Command{
return
}
SuccessOutput(response.Namespace, "Namespace renamed", output)
SuccessOutput(response.User, "User renamed", output)
},
}

View File

@@ -14,7 +14,7 @@ import (
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
)
const (

View File

@@ -58,7 +58,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "./db.sqlite")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
@@ -101,7 +101,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "./db.sqlite")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")

View File

@@ -44,9 +44,7 @@ grpc_allow_insecure: false
# and Tailscale clients.
# The private key file will be autogenerated if it's missing.
#
# For production:
# /var/lib/headscale/private.key
private_key_path: ./private.key
private_key_path: /var/lib/headscale/private.key
# The Noise section includes specific configuration for the
# TS2021 Noise protocol
@@ -55,14 +53,16 @@ noise:
# traffic between headscale and Tailscale clients when
# using the new Noise-based protocol. It must be different
# from the legacy private key.
#
# For production:
# private_key_path: /var/lib/headscale/noise_private.key
private_key_path: ./noise_private.key
private_key_path: /var/lib/headscale/noise_private.key
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
# While this looks like it can take arbitrary values, it
# needs to be within IP ranges supported by the Tailscale
# client.
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
ip_prefixes:
- fd7a:115c:a1e0::/48
- 100.64.0.0/10
@@ -132,8 +132,7 @@ node_update_check_interval: 10s
db_type: sqlite3
# For production:
# db_path: /var/lib/headscale/db.sqlite
db_path: ./db.sqlite
db_path: /var/lib/headscale/db.sqlite
# # Postgres config
# If using a Unix socket to connect to Postgres, set the socket path in the 'host' field and leave 'port' blank.
@@ -167,8 +166,7 @@ tls_letsencrypt_hostname: ""
# Path to store certificates and metadata needed by
# letsencrypt
# For production:
# tls_letsencrypt_cache_dir: /var/lib/headscale/cache
tls_letsencrypt_cache_dir: ./cache
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
@@ -235,6 +233,17 @@ dns_config:
# Search domains to inject.
domains: []
# Extra DNS records
# so far only A-records are supported (on the tailscale side)
# See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
# extra_records:
# - name: "grafana.myvpn.example.com"
# type: "A"
# value: "100.64.0.3"
#
# # you can also put it in one line
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Only works if there is at least a nameserver defined.
magic_dns: true
@@ -242,13 +251,12 @@ dns_config:
# Defines the base domain to create the hostnames for MagicDNS.
# `base_domain` must be a FQDNs, without the trailing dot.
# The FQDN of the hosts will be
# `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
# `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
base_domain: example.com
# Unix socket used for the CLI to connect without authentication
# Note: for production you will want to set this to something like:
# unix_socket: /var/run/headscale.sock
unix_socket: ./headscale.sock
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"
#
# headscale supports experimental OpenID connect support,
@@ -260,26 +268,45 @@ unix_socket_permission: "0770"
# issuer: "https://your-oidc.issuer.com/path"
# client_id: "your-oidc-client-id"
# client_secret: "your-oidc-client-secret"
# # Alternatively, set `client_secret_path` to read the secret from the file.
# # It resolves environment variables, making integration to systemd's
# # `LoadCredential` straightforward:
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
# # client_secret and client_secret_path are mutually exclusive.
#
# Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
# # The amount of time from when a node is authenticated with OpenID until it
# # expires and needs to reauthenticate.
# # Setting the value to "0" will mean no expiry.
# expiry: 180d
#
# # Use the expiry from the token received from OpenID when the user logged
# # in; this will typically lead to a frequent need to reauthenticate and should
# # only be enabled if you know what you are doing.
# # Note: enabling this will cause `oidc.expiry` to be ignored.
# use_expiry_from_token: false
#
# # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
#
# scope: ["openid", "profile", "email", "custom"]
# extra_params:
# domain_hint: example.com
#
# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# authentication request will be rejected.
# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# # authentication request will be rejected.
#
# allowed_domains:
# - example.com
# # Note: Groups from keycloak have a leading '/'
# allowed_groups:
# - /headscale
# allowed_users:
# - alice@example.com
#
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
# If `strip_email_domain` is set to `false`, the domain part will NOT be removed, resulting in the following
# namespace: `first-name.last-name.example.com`
# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
# # If `strip_email_domain` is set to `false`, the domain part will NOT be removed, resulting in the following
# # user: `first-name.last-name.example.com`
#
# strip_email_domain: true

config.go
View File

@@ -6,10 +6,12 @@ import (
"io/fs"
"net/netip"
"net/url"
"os"
"strings"
"time"
"github.com/coreos/go-oidc/v3/oidc"
"github.com/prometheus/common/model"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
@@ -24,6 +26,13 @@ const (
JSONLogFormat = "json"
TextLogFormat = "text"
defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days
maxDuration time.Duration = 1<<63 - 1
)
var errOidcMutuallyExclusive = errors.New(
"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
)
// Config contains the initial Headscale configuration.
@@ -96,7 +105,10 @@ type OIDCConfig struct {
ExtraParams map[string]string
AllowedDomains []string
AllowedUsers []string
AllowedGroups []string
StripEmaildomain bool
Expiry time.Duration
UseExpiryFromToken bool
}
type DERPConfig struct {
@@ -171,9 +183,13 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("cli.timeout", "5s")
viper.SetDefault("cli.insecure", false)
viper.SetDefault("db_ssl", false)
viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
viper.SetDefault("oidc.strip_email_domain", true)
viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
viper.SetDefault("oidc.expiry", "180d")
viper.SetDefault("oidc.use_expiry_from_token", false)
viper.SetDefault("logtail.enabled", false)
viper.SetDefault("randomize_client_port", false)
@@ -405,39 +421,37 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
}
if viper.IsSet("dns_config.restricted_nameservers") {
if len(dnsConfig.Nameservers) > 0 {
dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
domains := []string{}
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
)
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
dnsConfig.Routes[domain] = restrictedResolvers
}
} else {
log.Warn().
Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
dnsConfig.Routes[domain] = restrictedResolvers
domains = append(domains, domain)
}
dnsConfig.Domains = domains
}
if viper.IsSet("dns_config.domains") {
domains := viper.GetStringSlice("dns_config.domains")
if len(dnsConfig.Nameservers) > 0 {
if len(dnsConfig.Resolvers) > 0 {
dnsConfig.Domains = domains
} else if domains != nil {
log.Warn().
@@ -445,6 +459,20 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
}
}
if viper.IsSet("dns_config.extra_records") {
var extraRecords []tailcfg.DNSRecord
err := viper.UnmarshalKey("dns_config.extra_records", &extraRecords)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse dns_config.extra_records")
}
dnsConfig.ExtraRecords = extraRecords
}
if viper.IsSet("dns_config.magic_dns") {
dnsConfig.Proxied = viper.GetBool("dns_config.magic_dns")
}
@@ -511,6 +539,19 @@ func GetHeadscaleConfig() (*Config, error) {
Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
}
oidcClientSecret := viper.GetString("oidc.client_secret")
oidcClientSecretPath := viper.GetString("oidc.client_secret_path")
if oidcClientSecretPath != "" && oidcClientSecret != "" {
return nil, errOidcMutuallyExclusive
}
if oidcClientSecretPath != "" {
secretBytes, err := os.ReadFile(os.ExpandEnv(oidcClientSecretPath))
if err != nil {
return nil, err
}
oidcClientSecret = string(secretBytes)
}
return &Config{
ServerURL: viper.GetString("server_url"),
Addr: viper.GetString("listen_addr"),
@@ -563,12 +604,29 @@ func GetHeadscaleConfig() (*Config, error) {
),
Issuer: viper.GetString("oidc.issuer"),
ClientID: viper.GetString("oidc.client_id"),
ClientSecret: viper.GetString("oidc.client_secret"),
ClientSecret: oidcClientSecret,
Scope: viper.GetStringSlice("oidc.scope"),
ExtraParams: viper.GetStringMapString("oidc.extra_params"),
AllowedDomains: viper.GetStringSlice("oidc.allowed_domains"),
AllowedUsers: viper.GetStringSlice("oidc.allowed_users"),
AllowedGroups: viper.GetStringSlice("oidc.allowed_groups"),
StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
Expiry: func() time.Duration {
// if set to 0, we assume no expiry
if value := viper.GetString("oidc.expiry"); value == "0" {
return maxDuration
} else {
expiry, err := model.ParseDuration(value)
if err != nil {
log.Warn().Msg("failed to parse oidc.expiry, defaulting back to 180 days")
return defaultOIDCExpiryTime
}
return time.Duration(expiry)
}
}(),
UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
},
LogTail: logConfig,
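The oidc.expiry value is parsed with model.ParseDuration from prometheus/common rather than the standard library, because time.ParseDuration has no day unit and would reject the 180d default. A minimal sketch of the difference:

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/common/model"
)

func main() {
	// The standard library rejects day units.
	if _, err := time.ParseDuration("180d"); err != nil {
		fmt.Println("time.ParseDuration:", err)
	}

	// prometheus/common understands d (and w, y), which is what oidc.expiry relies on.
	expiry, err := model.ParseDuration("180d")
	if err != nil {
		panic(err)
	}
	fmt.Println(time.Duration(expiry)) // 4320h0m0s
}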

db.go
View File

@@ -18,8 +18,10 @@ import (
)
const (
dbVersion = "1"
errValueNotFound = Error("not found")
dbVersion = "1"
errValueNotFound = Error("not found")
ErrCannotParsePrefix = Error("cannot parse prefix")
)
// KV is a key-value store in a psql table. For future use...
@@ -39,6 +41,16 @@ func (h *Headscale) initDB() error {
db.Exec(`create extension if not exists "uuid-ossp";`)
}
_ = db.Migrator().RenameTable("namespaces", "users")
err = db.AutoMigrate(&User{})
if err != nil {
return err
}
_ = db.Migrator().RenameColumn(&Machine{}, "namespace_id", "user_id")
_ = db.Migrator().RenameColumn(&PreAuthKey{}, "namespace_id", "user_id")
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
@@ -79,6 +91,70 @@ func (h *Headscale) initDB() error {
}
}
err = db.AutoMigrate(&Route{})
if err != nil {
return err
}
if db.Migrator().HasColumn(&Machine{}, "enabled_routes") {
log.Info().Msgf("Database has legacy enabled_routes column in machine, migrating...")
type MachineAux struct {
ID uint64
EnabledRoutes IPPrefixes
}
machinesAux := []MachineAux{}
err := db.Table("machines").Select("id, enabled_routes").Scan(&machinesAux).Error
if err != nil {
log.Fatal().Err(err).Msg("Error accessing db")
}
for _, machine := range machinesAux {
for _, prefix := range machine.EnabledRoutes {
if err != nil {
log.Error().
Err(err).
Str("enabled_route", prefix.String()).
Msg("Error parsing enabled_route")
continue
}
err = db.Preload("Machine").
Where("machine_id = ? AND prefix = ?", machine.ID, IPPrefix(prefix)).
First(&Route{}).
Error
if err == nil {
log.Info().
Str("enabled_route", prefix.String()).
Msg("Route already migrated to new table, skipping")
continue
}
route := Route{
MachineID: machine.ID,
Advertised: true,
Enabled: true,
Prefix: IPPrefix(prefix),
}
if err := h.db.Create(&route).Error; err != nil {
log.Error().Err(err).Msg("Error creating route")
} else {
log.Info().
Uint64("machine_id", route.MachineID).
Str("prefix", prefix.String()).
Msg("Route migrated")
}
}
}
err = db.Migrator().DropColumn(&Machine{}, "enabled_routes")
if err != nil {
log.Error().Err(err).Msg("Error dropping enabled_routes column")
}
}
err = db.AutoMigrate(&Machine{})
if err != nil {
return err
@@ -121,11 +197,6 @@ func (h *Headscale) initDB() error {
return err
}
err = db.AutoMigrate(&Namespace{})
if err != nil {
return err
}
err = db.AutoMigrate(&PreAuthKey{})
if err != nil {
return err
@@ -264,6 +335,30 @@ func (hi HostInfo) Value() (driver.Value, error) {
return string(bytes), err
}
type IPPrefix netip.Prefix
func (i *IPPrefix) Scan(destination interface{}) error {
switch value := destination.(type) {
case string:
prefix, err := netip.ParsePrefix(value)
if err != nil {
return err
}
*i = IPPrefix(prefix)
return nil
default:
return fmt.Errorf("%w: unexpected data type %T", ErrCannotParsePrefix, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (i IPPrefix) Value() (driver.Value, error) {
prefixStr := netip.Prefix(i).String()
return prefixStr, nil
}
type IPPrefixes []netip.Prefix
func (i *IPPrefixes) Scan(destination interface{}) error {

View File

@@ -10,7 +10,7 @@ import (
"time"
"github.com/rs/zerolog/log"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
"tailscale.com/tailcfg"
)

View File

@@ -157,14 +157,14 @@ func (h *Headscale) DERPHandler(
if !fastStart {
pubKey := h.privateKey.Public()
pubKeyStr := pubKey.UntypedHexString() //nolint
pubKeyStr, _ := pubKey.MarshalText() //nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+
"Connection: Upgrade\r\n"+
"Derp-Version: %v\r\n"+
"Derp-Public-Key: %s\r\n\r\n",
derp.ProtocolVersion,
pubKeyStr)
string(pubKeyStr))
}
h.DERPServer.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())

dns.go
View File

@@ -190,23 +190,23 @@ func getMapResponseDNSConfig(
) *tailcfg.DNSConfig {
var dnsConfig *tailcfg.DNSConfig = dnsConfigOrig.Clone()
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
// Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
// Only inject the Search Domain of the current user - shared nodes should use their full FQDN
dnsConfig.Domains = append(
dnsConfig.Domains,
fmt.Sprintf(
"%s.%s",
machine.Namespace.Name,
machine.User.Name,
baseDomain,
),
)
namespaceSet := mapset.NewSet[Namespace]()
namespaceSet.Add(machine.Namespace)
userSet := mapset.NewSet[User]()
userSet.Add(machine.User)
for _, p := range peers {
namespaceSet.Add(p.Namespace)
userSet.Add(p.User)
}
for _, namespace := range namespaceSet.ToSlice() {
dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
for _, user := range userSet.ToSlice() {
dnsRoute := fmt.Sprintf("%v.%v", user.Name, baseDomain)
dnsConfig.Routes[dnsRoute] = nil
}
} else {

View File

@@ -112,17 +112,17 @@ func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
}
func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
namespaceShared1, err := app.CreateNamespace("shared1")
userShared1, err := app.CreateUser("shared1")
c.Assert(err, check.IsNil)
namespaceShared2, err := app.CreateNamespace("shared2")
userShared2, err := app.CreateUser("shared2")
c.Assert(err, check.IsNil)
namespaceShared3, err := app.CreateNamespace("shared3")
userShared3, err := app.CreateUser("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
userShared1.Name,
false,
false,
nil,
@@ -131,7 +131,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
userShared2.Name,
false,
false,
nil,
@@ -140,7 +140,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
userShared3.Name,
false,
false,
nil,
@@ -149,7 +149,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
c.Assert(err, check.IsNil)
PreAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
userShared1.Name,
false,
false,
nil,
@@ -157,7 +157,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
@@ -166,15 +166,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Hostname: "test_get_shared_nodes_1",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
@@ -183,15 +183,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_2",
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
UserID: userShared2.ID,
User: *userShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
@@ -200,15 +200,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_3",
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
UserID: userShared3.ID,
User: *userShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
@@ -217,8 +217,8 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_4",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
AuthKeyID: uint(PreAuthKey2InShared1.ID),
@@ -245,31 +245,31 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
c.Assert(len(dnsConfig.Routes), check.Equals, 3)
domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
domainRouteShared1 := fmt.Sprintf("%s.%s", userShared1.Name, baseDomain)
_, ok := dnsConfig.Routes[domainRouteShared1]
c.Assert(ok, check.Equals, true)
domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
domainRouteShared2 := fmt.Sprintf("%s.%s", userShared2.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared2]
c.Assert(ok, check.Equals, true)
domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
domainRouteShared3 := fmt.Sprintf("%s.%s", userShared3.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared3]
c.Assert(ok, check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
namespaceShared1, err := app.CreateNamespace("shared1")
userShared1, err := app.CreateUser("shared1")
c.Assert(err, check.IsNil)
namespaceShared2, err := app.CreateNamespace("shared2")
userShared2, err := app.CreateUser("shared2")
c.Assert(err, check.IsNil)
namespaceShared3, err := app.CreateNamespace("shared3")
userShared3, err := app.CreateUser("shared3")
c.Assert(err, check.IsNil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
userShared1.Name,
false,
false,
nil,
@@ -278,7 +278,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
c.Assert(err, check.IsNil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
userShared2.Name,
false,
false,
nil,
@@ -287,7 +287,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
c.Assert(err, check.IsNil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
userShared3.Name,
false,
false,
nil,
@@ -296,7 +296,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
c.Assert(err, check.IsNil)
preAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
userShared1.Name,
false,
false,
nil,
@@ -304,7 +304,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
)
c.Assert(err, check.IsNil)
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
machineInShared1 := &Machine{
@@ -313,15 +313,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Hostname: "test_get_shared_nodes_1",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
app.db.Save(machineInShared1)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
c.Assert(err, check.IsNil)
machineInShared2 := &Machine{
@@ -330,15 +330,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_2",
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
UserID: userShared2.ID,
User: *userShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
app.db.Save(machineInShared2)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
c.Assert(err, check.IsNil)
machineInShared3 := &Machine{
@@ -347,15 +347,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_3",
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
UserID: userShared3.ID,
User: *userShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
app.db.Save(machineInShared3)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
c.Assert(err, check.IsNil)
machine2InShared1 := &Machine{
@@ -364,8 +364,8 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Hostname: "test_get_shared_nodes_4",
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
UserID: userShared1.ID,
User: *userShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
AuthKeyID: uint(preAuthKey2InShared1.ID),

View File

@@ -1,54 +0,0 @@
# headscale documentation
This page contains the official and community contributed documentation for `headscale`.
If you are having trouble with following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
## Official documentation
### How-to
- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)
### References
- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)
## Community documentation
Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
- [Running headscale in a container](running-headscale-container.md)
- [Running headscale on OpenBSD](running-headscale-openbsd.md)
- [Running headscale behind a reverse proxy](reverse-proxy.md)
## Misc
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups you must
use namespaces (which are the equivalent to user/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACL's the Namespace borders are no longer applied. All machines
whichever the Namespace have the ability to communicate with other hosts as
long as the ACL's permits this exchange.
The [ACLs](acls.md) document should help understand a fictional case of setting
up ACLs in a small company. All concepts presented in this document could be
applied outside of business oriented usage.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.

View File

@@ -1,4 +1,15 @@
# ACLs use case example
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, when defining groups you refer to users (which are the equivalent
of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When using ACLs, user boundaries are no longer applied. All machines,
whichever user they belong to, are able to communicate with other hosts as
long as the ACLs permit this exchange.
## ACLs use case example
Let's build an example use case for a small business (it may be the place where
ACLs are most useful).
@@ -29,19 +40,21 @@ servers.
## ACL setup
Note: Namespaces will be created automatically when users authenticate with the
Note: Users will be created automatically when users authenticate with the
Headscale server.
ACLs can be written either in [huJSON](https://github.com/tailscale/hujson)
or YAML. Check the [test ACLs](../tests/acls) for further information.
When registering the servers we will need to add the flag
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
registering the server should be allowed to do it. Since anyone can add tags to
a server they can register, the check of the tags is done on the headscale server
and only valid tags are applied. A tag is valid if the namespace that is
and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.
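As a sketch (the tag name is taken from the example policy below, and the login URL is a placeholder), registering one of the development app servers could look like this:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags=tag:dev-app-servers
```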
To use ACLs in headscale, you must edit your config.yaml file. In there you will find an `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
Here are the ACL's to implement the same permissions as above:
```json
@@ -164,8 +177,8 @@ Here are the ACL's to implement the same permissions as above:
"dst": ["tag:dev-app-servers:80,443"]
},
// We still have to allow internal namespaces communications since nothing guarantees that each user have
// their own namespaces.
// We still have to allow internal user communications since nothing guarantees that each person has
// their own user.
{ "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
{ "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
{ "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },

90
docs/dns-records.md Normal file
View File

@@ -0,0 +1,90 @@
# Setting custom DNS records
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
This documentation has the goal of showing how a user can set custom DNS records with `headscale`'s MagicDNS.
An example use case is to serve apps on the same host via a reverse proxy like NGINX, in this case a Prometheus monitoring stack. This allows accessing the service at "http://grafana.myvpn.example.com" instead of the hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:3000".
## Setup
### 1. Change the configuration
1. Change the `config.yaml` to contain the desired records like so:
```yaml
dns_config:
...
extra_records:
- name: "prometheus.myvpn.example.com"
type: "A"
value: "100.64.0.3"
- name: "grafana.myvpn.example.com"
type: "A"
value: "100.64.0.3"
...
```
2. Restart your headscale instance (see the restart example below).
Beware of the limitations listed later on!
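On a systemd-managed install the restart is typically the following (a sketch; adapt it to however you run headscale, e.g. `docker restart headscale` for the container setup):
```shell
sudo systemctl restart headscale
```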
### 2. Verify that the records are set
You can use a DNS querying tool of your choice on one of your hosts to verify that your newly set records are actually available in MagicDNS; here we use [`dig`](https://man.archlinux.org/man/dig.1.en):
```
$ dig grafana.myvpn.example.com
; <<>> DiG 9.18.10 <<>> grafana.myvpn.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44054
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;grafana.myvpn.example.com. IN A
;; ANSWER SECTION:
grafana.myvpn.example.com. 593 IN A 100.64.0.3
;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sat Dec 31 11:46:55 CET 2022
;; MSG SIZE rcvd: 66
```
### 3. Optional: Setup the reverse proxy
The motivating example here was to be able to access internal monitoring services on the same host without specifying a port:
```
server {
listen 80;
listen [::]:80;
server_name grafana.myvpn.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
## Limitations
[Not all types of records are supported](https://github.com/tailscale/tailscale/blob/6edf357b96b28ee1be659a70232c0135b2ffedfd/ipn/ipnlocal/local.go#L2989-L3007), especially no CNAME records.
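If you are unsure whether a record made it into MagicDNS, you can query the record type explicitly from one of your nodes (a sketch reusing the names from the example above):
```shell
# The A record from extra_records resolves:
dig +short A grafana.myvpn.example.com
# Unsupported types such as CNAME return no answer:
dig +short CNAME grafana.myvpn.example.com
```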

47
docs/exit-node.md Normal file
View File

@@ -0,0 +1,47 @@
# Exit Nodes
## On the node
Register the node and make it advertise itself as an exit node:
```console
$ sudo tailscale up --login-server https://my-server.com --advertise-exit-node
```
If the node is already registered, it can advertise exit capabilities like this:
```console
$ sudo tailscale set --advertise-exit-node
```
## On the control server
```console
$ # list nodes
$ headscale routes list
ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | false | -
4 | phobos | ::/0 | true | false | -
$ # enable routes for phobos
$ headscale routes enable -r 3
$ headscale routes enable -r 4
$ # Check node list again. The routes are now enabled.
$ headscale routes list
ID | Machine | Prefix | Advertised | Enabled | Primary
1 | | 0.0.0.0/0 | false | false | -
2 | | ::/0 | false | false | -
3 | phobos | 0.0.0.0/0 | true | true | -
4 | phobos | ::/0 | true | true | -
```
## On the client
The exit node can now be used with:
```console
$ sudo tailscale set --exit-node phobos
```
Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes/?q=exit#step-3-use-the-exit-node) for how to do it on your device.
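To double-check from the client which exit node is in use, `tailscale status` can help; the peer acting as the exit node is flagged in its output (a sketch; the exact output format varies between Tailscale versions):
```console
$ tailscale status
```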

View File

@@ -1,6 +1,6 @@
# Glossary
| Term | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------- |
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
| Namespace | A namespace is a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User |
| Term | Description |
| --------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
| Namespace | A namespace was a logical grouping of machines "owned" by the same entity; in Tailscale, this is typically a User (it is now called a **user**) |

30
docs/iOS-client.md Normal file
View File

@@ -0,0 +1,30 @@
# Connecting an iOS client
## Goal
This documentation has the goal of showing how a user can use the official iOS [Tailscale](https://tailscale.com) client with `headscale`.
## Installation
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
Ensure that the installed version is at least 1.38.1, as that is the first release to support alternate control servers.
## Configuring the headscale URL
!!! info "Apple devices"
An endpoint with information on how to connect your Apple devices
(currently macOS only) is available at `/apple` on your running instance.
Ensure that the tailscale app is logged out before proceeding.
Go to iOS Settings, scroll down past Game Center and TV provider to the Tailscale app and select it. The headscale URL can be entered into the _"ALTERNATE COORDINATION SERVER URL"_ box.
> **Note**
>
> If the app was previously logged into tailscale, toggle on the _Reset Keychain_ switch.
Restart the app by closing it from the iOS app switcher, open the app and select the regular _Sign in_ option (non-SSO), and it should open up to the headscale authentication page.
Enter your credentials and log in. Headscale should now be working on your iOS device.

12
docs/index.md Normal file
View File

@@ -0,0 +1,12 @@
---
hide:
- navigation
- toc
---
# headscale documentation
This site contains the official and community contributed documentation for `headscale`.
If you are having trouble following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.

View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>

File diff suppressed because one or more lines are too long

172
docs/oidc.md Normal file
View File

@@ -0,0 +1,172 @@
# Configuring Headscale to use OIDC authentication
In order to authenticate users through a centralized solution one must enable the OIDC integration.
Known limitations:
- No dynamic ACL support
- OIDC groups cannot be used in ACLs
## Basic configuration
In your `config.yaml`, customize this to your liking:
```yaml
oidc:
# Block further startup until the OIDC provider is healthy and available
only_start_if_oidc_is_available: true
# Specified by your OIDC provider
issuer: "https://your-oidc.issuer.com/path"
# Specified/generated by your OIDC provider
client_id: "your-oidc-client-id"
client_secret: "your-oidc-client-secret"
# alternatively, set `client_secret_path` to read the secret from the file.
# It resolves environment variables, making integration to systemd's
# `LoadCredential` straightforward:
#client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
# Customize the scopes used in the OIDC flow and add custom query
# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
scope: ["openid", "profile", "email", "custom"]
# Optional: Passed on to the browser login request used to tweak behaviour for the OIDC provider
extra_params:
domain_hint: example.com
# Optional: List allowed principal domains and/or users. If an authenticated user's domain is not in this list,
# the authentication request will be rejected.
allowed_domains:
- example.com
# Optional. Note that groups from Keycloak have a leading '/'.
allowed_groups:
- /headscale
# Optional.
allowed_users:
- alice@example.com
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
# If `strip_email_domain` is set to `false`, the domain part will NOT be removed, resulting in the following
# user: `first-name.last-name.example.com`
strip_email_domain: true
```
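If you go the `client_secret_path` route together with systemd's `LoadCredential`, a unit drop-in along these lines may help (a sketch; the unit name `headscale` and the secret path are assumptions about your setup):
```shell
# Open an override for the headscale unit...
sudo systemctl edit headscale
# ...and add the credential to the [Service] section of the drop-in:
#   [Service]
#   LoadCredential=oidc_client_secret:/etc/headscale/oidc_client_secret
# Restart so ${CREDENTIALS_DIRECTORY}/oidc_client_secret becomes available to headscale:
sudo systemctl restart headscale
```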
## Azure AD example
In order to integrate Headscale with Azure Active Directory, we'll need to provision an App Registration with the correct scopes and redirect URI. Here with Terraform:
```hcl
resource "azuread_application" "headscale" {
display_name = "Headscale"
sign_in_audience = "AzureADMyOrg"
fallback_public_client_enabled = false
required_resource_access {
// Microsoft Graph
resource_app_id = "00000003-0000-0000-c000-000000000000"
resource_access {
// scope: profile
id = "14dad69e-099b-42c9-810b-d002981feec1"
type = "Scope"
}
resource_access {
// scope: openid
id = "37f7f235-527c-4136-accd-4a02d197296e"
type = "Scope"
}
resource_access {
// scope: email
id = "64a6cdd6-aab1-4aaf-94b8-3cc8405e90d0"
type = "Scope"
}
}
web {
# Points at your running Headscale instance
redirect_uris = ["https://headscale.example.com/oidc/callback"]
implicit_grant {
access_token_issuance_enabled = false
id_token_issuance_enabled = true
}
}
group_membership_claims = ["SecurityGroup"]
optional_claims {
# Expose group memberships
id_token {
name = "groups"
}
}
}
resource "azuread_application_password" "headscale-application-secret" {
display_name = "Headscale Server"
application_object_id = azuread_application.headscale.object_id
}
resource "azuread_service_principal" "headscale" {
application_id = azuread_application.headscale.application_id
}
resource "azuread_service_principal_password" "headscale" {
service_principal_id = azuread_service_principal.headscale.id
end_date_relative = "44640h"
}
output "headscale_client_id" {
value = azuread_application.headscale.application_id
}
output "headscale_client_secret" {
value = azuread_application_password.headscale-application-secret.value
}
```
And in your Headscale `config.yaml`:
```yaml
oidc:
issuer: "https://login.microsoftonline.com/<tenant-UUID>/v2.0"
client_id: "<client-id-from-terraform>"
client_secret: "<client-secret-from-terraform>"
# Optional: add "groups"
scope: ["openid", "profile", "email"]
extra_params:
# Use your own domain, associated with Azure AD
domain_hint: example.com
# Optional: Force the Azure AD account picker
prompt: select_account
```
## Google OAuth Example
In order to integrate Headscale with Google, you'll need to have a [Google Cloud Console](https://console.cloud.google.com) account.
Google OAuth has a [verification process](https://support.google.com/cloud/answer/9110914?hl=en) if you need to have users authenticate who are outside of your domain. If you only need to authenticate users from your domain name (i.e. `@example.com`), you don't need to go through the verification process.
However, if you don't have a domain, or need to add users outside of your domain, you can manually add emails via Google Console.
### Steps
1. Go to [Google Console](https://console.cloud.google.com) and login or create an account if you don't have one.
2. Create a project (if you don't already have one).
3. On the left hand menu, go to `APIs and services` -> `Credentials`
4. Click `Create Credentials` -> `OAuth client ID`
5. Under `Application Type`, choose `Web Application`
6. For `Name`, enter whatever you like
7. Under `Authorised redirect URIs`, use `https://example.com/oidc/callback`, replacing example.com with your Headscale URL.
8. Click `Save` at the bottom of the form
9. Take note of the `Client ID` and `Client secret`; you can also download them for reference if you need to.
10. Edit your headscale config, under `oidc`, filling in your `client_id` and `client_secret`:
```yaml
oidc:
issuer: "https://accounts.google.com"
client_id: ""
client_secret: ""
scope: ["openid", "profile", "email"]
```
You can also use `allowed_domains` and `allowed_users` to restrict the users who can authenticate.

5
docs/packaging/README.md Normal file
View File

@@ -0,0 +1,5 @@
# Packaging
We use [nFPM](https://nfpm.goreleaser.com/) for building `.deb`, `.rpm` and `.apk` packages.
This folder contains files we need to package with these releases.

View File

@@ -0,0 +1,52 @@
[Unit]
After=syslog.target
After=network.target
Description=headscale coordination server for Tailscale
X-Restart-Triggers=/etc/headscale/config.yaml
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/bin/headscale serve
Restart=always
RestartSec=5
WorkingDirectory=/var/lib/headscale
ReadWritePaths=/var/lib/headscale /var/run
AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_CHOWN
CapabilityBoundingSet=CAP_CHOWN
LockPersonality=true
NoNewPrivileges=true
PrivateDevices=true
PrivateMounts=true
PrivateTmp=true
ProcSubset=pid
ProtectClock=true
ProtectControlGroups=true
ProtectHome=true
ProtectHome=yes
ProtectHostname=true
ProtectKernelLogs=true
ProtectKernelModules=true
ProtectKernelTunables=true
ProtectProc=invisible
ProtectSystem=strict
RemoveIPC=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true
RestrictRealtime=true
RestrictSUIDSGID=true
RuntimeDirectory=headscale
RuntimeDirectoryMode=0750
StateDirectory=headscale
StateDirectoryMode=0750
SystemCallArchitectures=native
SystemCallFilter=@chown
SystemCallFilter=@system-service
SystemCallFilter=~@privileged
UMask=0077
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,86 @@
#!/bin/sh
# Determine OS platform
# shellcheck source=/dev/null
. /etc/os-release
HEADSCALE_EXE="/usr/bin/headscale"
BSD_HIER=""
HEADSCALE_RUN_DIR="/var/run/headscale"
HEADSCALE_USER="headscale"
HEADSCALE_GROUP="headscale"
ensure_sudo() {
if [ "$(id -u)" = "0" ]; then
echo "Sudo permissions detected"
else
echo "No sudo permission detected, please run as sudo"
exit 1
fi
}
ensure_headscale_path() {
if [ ! -f "$HEADSCALE_EXE" ]; then
echo "headscale not in default path, exiting..."
exit 1
fi
printf "Found headscale %s\n" "$HEADSCALE_EXE"
}
create_headscale_user() {
printf "PostInstall: Adding headscale user %s\n" "$HEADSCALE_USER"
useradd -s /bin/sh -c "headscale default user" headscale
}
create_headscale_group() {
if command -V systemctl >/dev/null 2>&1; then
printf "PostInstall: Adding headscale group %s\n" "$HEADSCALE_GROUP"
groupadd "$HEADSCALE_GROUP"
printf "PostInstall: Adding headscale user %s to group %s\n" "$HEADSCALE_USER" "$HEADSCALE_GROUP"
usermod -a -G "$HEADSCALE_GROUP" "$HEADSCALE_USER"
fi
if [ "$ID" = "alpine" ]; then
printf "PostInstall: Adding headscale group %s\n" "$HEADSCALE_GROUP"
addgroup "$HEADSCALE_GROUP"
printf "PostInstall: Adding headscale user %s to group %s\n" "$HEADSCALE_USER" "$HEADSCALE_GROUP"
addgroup "$HEADSCALE_USER" "$HEADSCALE_GROUP"
fi
}
create_run_dir() {
printf "PostInstall: Creating headscale run directory \n"
mkdir -p "$HEADSCALE_RUN_DIR"
printf "PostInstall: Modifying group ownership of headscale run directory \n"
chown "$HEADSCALE_USER":"$HEADSCALE_GROUP" "$HEADSCALE_RUN_DIR"
}
summary() {
echo "----------------------------------------------------------------------"
echo " headscale package has been successfully installed."
echo ""
echo " Please follow the next steps to start the software:"
echo ""
echo " sudo systemctl enable headscale"
echo " sudo systemctl start headscale"
echo ""
echo " Configuration settings can be adjusted here:"
echo " ${BSD_HIER}/etc/headscale/config.yaml"
echo ""
echo "----------------------------------------------------------------------"
}
#
# Main body of the script
#
{
ensure_sudo
ensure_headscale_path
create_headscale_user
create_headscale_group
create_run_dir
summary
}

View File

@@ -0,0 +1,15 @@
#!/bin/sh
# Determine OS platform
# shellcheck source=/dev/null
. /etc/os-release
if command -V systemctl >/dev/null 2>&1; then
echo "Stop and disable headscale service"
systemctl stop headscale >/dev/null 2>&1 || true
systemctl disable headscale >/dev/null 2>&1 || true
echo "Running daemon-reload"
systemctl daemon-reload || true
fi
echo "Removing run directory"
rm -rf "/var/run/headscale.sock"

View File

@@ -43,7 +43,7 @@ headscale apikeys expire --prefix "<PREFIX>"
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
2. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headcale`
2. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`
3. Make `headscale` executable:

View File

@@ -1,5 +1,12 @@
# Running headscale behind a reverse proxy
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
Running headscale behind a reverse proxy is useful when running multiple applications on the same server, and you want to reuse the same external IP and port - usually tcp/443 for HTTPS.
### WebSockets
@@ -98,3 +105,34 @@ spec:
upgrade_configs:
- upgrade_type: tailscale-control-protocol
```
## Caddy
The following Caddyfile is all that is necessary to use Caddy as a reverse proxy for headscale, in combination with the `config.yaml` specifications above to disable headscale's built in TLS. Replace values as necessary - `<YOUR_SERVER_NAME>` should be the FQDN at which headscale will be served, and `<IP:PORT>` should be the IP address and port where headscale is running. In most cases, this will be `localhost:8080`.
```
<YOUR_SERVER_NAME> {
reverse_proxy <IP:PORT>
}
```
Caddy v2 will [automatically](https://caddyserver.com/docs/automatic-https) provision a certificate for your domain/subdomain, force HTTPS, and proxy WebSockets - no further configuration is necessary.
For a slightly more complex configuration which utilizes Docker containers to manage Caddy, Headscale, and Headscale-UI, [Guru Computing's guide](https://blog.gurucomputing.com.au/smart-vpns-with-headscale/) is an excellent reference.
## Apache
The following minimal Apache config will proxy traffic to the Headscale instance on `<IP:PORT>`. Note that `upgrade=any` is required as a parameter for `ProxyPass` so that WebSockets traffic whose `Upgrade` header value is not equal to `WebSocket` (i.e. Tailscale Control Protocol) is forwarded correctly. See the [Apache docs](https://httpd.apache.org/docs/2.4/mod/mod_proxy_wstunnel.html) for more information on this.
```
<VirtualHost *:443>
ServerName <YOUR_SERVER_NAME>
ProxyPreserveHost On
ProxyPass / http://<IP:PORT>/ upgrade=any
SSLEngine On
SSLCertificateFile <PATH_TO_CERT>
SSLCertificateKeyFile <PATH_CERT_KEY>
</VirtualHost>
```
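Whichever proxy you choose, a quick smoke test is to fetch one of headscale's built-in pages through it (a sketch; `/apple` is served by headscale itself, and `<YOUR_SERVER_NAME>` is the same placeholder as above). Note that this does not exercise the WebSocket upgrade used by clients:
```shell
curl https://<YOUR_SERVER_NAME>/apple
```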

View File

@@ -1,7 +1,11 @@
# Running headscale in a container
**Note:** the container documentation is maintained by the _community_ and there is no guarentee
it is up to date, or working.
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
@@ -24,7 +28,7 @@ cd ./headscale
touch ./config/db.sqlite
```
3. **(Strongly Recommended)** Download a copy of the [example configuration](../config-example.yaml) from the [headscale repository](https://github.com/juanfont/headscale/).
3. **(Strongly Recommended)** Download a copy of the [example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) from the headscale repository.
Using wget:
@@ -101,11 +105,11 @@ Verify `headscale` is available:
curl http://127.0.0.1:9090/metrics
```
6. Create a namespace ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
6. Create a user ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
docker exec headscale \
headscale namespaces create myfirstnamespace
headscale users create myfirstuser
```
### Register a machine (normal login)
@@ -120,7 +124,7 @@ To register a machine when running `headscale` in a container, take the headscal
```shell
docker exec headscale \
headscale --namespace myfirstnamespace nodes register --key <YOU_+MACHINE_KEY>
headscale --user myfirstuser nodes register --key <YOUR_MACHINE_KEY>
```
### Register machine using a pre authenticated key
@@ -129,7 +133,7 @@ Generate a key using the command line:
```shell
docker exec headscale \
headscale --namespace myfirstnamespace preauthkeys create --reusable --expiration 24h
headscale --user myfirstuser preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` during the `tailscale` command:

View File

@@ -0,0 +1,198 @@
# Running headscale on Linux
## Note: Outdated and "advanced"
This documentation is considered the "legacy"/advanced/manual version of the documentation, you most likely do not
want to use this documentation and rather look at the distro specific documentation (TODO LINK)[].
## Goal
This documentation has the goal of showing a user how-to set up and run `headscale` on Linux.
In addition to the "get up and running" section, there is an optional [SystemD section](#running-headscale-in-the-background-with-systemd)
describing how to make `headscale` run properly in a server environment.
## Configure and run `headscale`
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
```
2. Make `headscale` executable:
```shell
chmod +x /usr/local/bin/headscale
```
3. Prepare a directory to hold `headscale` configuration and the [SQLite](https://www.sqlite.org/) database:
```shell
# Directory for configuration
mkdir -p /etc/headscale
# Directory for Database, and other variable data (like certificates)
mkdir -p /var/lib/headscale
# or if you create a headscale user:
useradd \
--create-home \
--home-dir /var/lib/headscale/ \
--system \
--user-group \
--shell /usr/bin/nologin \
headscale
```
4. Create an empty SQLite database:
```shell
touch /var/lib/headscale/db.sqlite
```
5. Create a `headscale` configuration:
```shell
touch /etc/headscale/config.yaml
```
**(Strongly Recommended)** Download a copy of the [example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) from the headscale repository.
6. Start the headscale server:
```shell
headscale serve
```
This command will start `headscale` in the current terminal session.
---
To continue the tutorial, open a new terminal and let it run in the background.
Alternatively use terminal emulators like [tmux](https://github.com/tmux/tmux) or [screen](https://www.gnu.org/software/screen/).
To run `headscale` in the background, please follow the steps in the [SystemD section](#running-headscale-in-the-background-with-systemd) before continuing.
7. Verify `headscale` is running:
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
8. Create a user ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
headscale users create myfirstuser
```
### Register a machine (normal login)
On a client machine, execute the `tailscale` login command:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
```
Register the machine:
```shell
headscale --user myfirstuser nodes register --key <YOUR_MACHINE_KEY>
```
### Register machine using a pre authenticated key
Generate a key using the command line:
```shell
headscale --user myfirstuser preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` during the `tailscale` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Running `headscale` in the background with SystemD
:warning: **Deprecated**: This part is very outdated and you should use the [pre-packaged Headscale](./running-headscale-linux.md) for this.
This section demonstrates how to run `headscale` as a service in the background with [SystemD](https://www.freedesktop.org/wiki/Software/systemd/).
This should work on most modern Linux distributions.
1. Create a SystemD service configuration at `/etc/systemd/system/headscale.service` containing:
```systemd
[Unit]
Description=headscale controller
After=syslog.target
After=network.target
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/local/bin/headscale serve
Restart=always
RestartSec=5
# Optional security enhancements
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
WorkingDirectory=/var/lib/headscale
ReadWritePaths=/var/lib/headscale /var/run/headscale
AmbientCapabilities=CAP_NET_BIND_SERVICE
RuntimeDirectory=headscale
[Install]
WantedBy=multi-user.target
```
Note that when running as the headscale user, ensure that you either add your current user to the headscale group:
```shell
usermod -a -G headscale current_user
```
or run all headscale commands as the headscale user:
```shell
su - headscale
```
2. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with a path that is writable by the `headscale` user or group:
```yaml
unix_socket: /var/run/headscale/headscale.sock
```
3. Reload SystemD to load the new configuration file:
```shell
systemctl daemon-reload
```
4. Enable and start the new `headscale` service:
```shell
systemctl enable --now headscale
```
5. Verify the headscale service:
```shell
systemctl status headscale
```
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
`headscale` will now run in the background and start at boot.

View File

@@ -1,101 +1,82 @@
# Running headscale on Linux
## Requirements
- Ubuntu 20.04 or newer, Debian 11 or newer.
## Goal
This documentation has the goal of showing a user how-to set up and run `headscale` on Linux.
In additional to the "get up and running section", there is an optional [SystemD section](#running-headscale-in-the-background-with-systemd)
describing how to make `headscale` run properly in a server environment.
Get Headscale up and running.
## Configure and run `headscale`
This includes running Headscale with SystemD.
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
## Migrating from manual install
If you are migrating from the old manual install, the best thing would be to remove
the files installed by following [the guide in reverse](./running-headscale-linux-manual.md).
You should _not_ delete the database (`/var/lib/headscale/db.sqlite`) or the
configuration (`/etc/headscale/config.yaml`).
## Installation
1. Download the latest Headscale package for your platform (`.deb` for Ubuntu and Debian) from [Headscale's releases page]():
```shell
wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
wget --output-document=headscale.deb \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>.deb
```
2. Make `headscale` executable:
2. Install Headscale:
```shell
chmod +x /usr/local/bin/headscale
sudo dpkg --install headscale.deb
```
3. Prepare a directory to hold `headscale` configuration and the [SQLite](https://www.sqlite.org/) database:
3. Enable the Headscale service; this will start Headscale at boot:
```shell
# Directory for configuration
mkdir -p /etc/headscale
# Directory for Database, and other variable data (like certificates)
mkdir -p /var/lib/headscale
# or if you create a headscale user:
useradd \
--create-home \
--home-dir /var/lib/headscale/ \
--system \
--user-group \
--shell /usr/bin/nologin \
headscale
sudo systemctl enable headscale
```
4. Create an empty SQLite database:
4. Configure Headscale by editing the configuration file:
```shell
touch /var/lib/headscale/db.sqlite
nano /etc/headscale/config.yaml
```
5. Create a `headscale` configuration:
5. Start Headscale:
```shell
touch /etc/headscale/config.yaml
sudo systemctl start headscale
```
It is **strongly recommended** to copy and modify the [example configuration](../config-example.yaml)
from the [headscale repository](../)
6. Start the headscale server:
6. Check that Headscale is running as intended (a couple of extra checks are shown below this list):
```shell
headscale serve
systemctl status headscale
```
This command will start `headscale` in the current terminal session.
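A couple of quick checks after installation (a sketch; this assumes the systemd unit shipped with the package):
```shell
# Confirm the installed binary and version
headscale version
# Follow the service logs
journalctl --unit headscale --follow
```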
## Using Headscale
---
To continue the tutorial, open a new terminal and let it run in the background.
Alternatively use terminal emulators like [tmux](https://github.com/tmux/tmux) or [screen](https://www.gnu.org/software/screen/).
To run `headscale` in the background, please follow the steps in the [SystemD section](#running-headscale-in-the-background-with-systemd) before continuing.
7. Verify `headscale` is running:
Verify `headscale` is available:
### Create a user
```shell
curl http://127.0.0.1:9090/metrics
```
8. Create a namespace ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
headscale namespaces create myfirstnamespace
headscale users create myfirstuser
```
### Register a machine (normal login)
On a client machine, execute the `tailscale` login command:
On a client machine, run the `tailscale` login command:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
tailscale up --login-server <YOUR_HEADSCALE_URL>
```
Register the machine:
```shell
headscale --namespace myfirstnamespace nodes register --key <YOU_+MACHINE_KEY>
headscale --user myfirstuser nodes register --key <YOUR_MACHINE_KEY>
```
### Register machine using a pre authenticated key
@@ -103,89 +84,12 @@ headscale --namespace myfirstnamespace nodes register --key <YOU_+MACHINE_KEY>
Generate a key using the command line:
```shell
headscale --namespace myfirstnamespace preauthkeys create --reusable --expiration 24h
headscale --user myfirstuser preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` during the `tailscale` command:
This will return a pre-authenticated key that is used to
connect a node to `headscale` during the `tailscale` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Running `headscale` in the background with SystemD
This section demonstrates how to run `headscale` as a service in the background with [SystemD](https://www.freedesktop.org/wiki/Software/systemd/).
This should work on most modern Linux distributions.
1. Create a SystemD service configuration at `/etc/systemd/system/headscale.service` containing:
```systemd
[Unit]
Description=headscale controller
After=syslog.target
After=network.target
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/local/bin/headscale serve
Restart=always
RestartSec=5
# Optional security enhancements
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/headscale /var/run/headscale
AmbientCapabilities=CAP_NET_BIND_SERVICE
RuntimeDirectory=headscale
[Install]
WantedBy=multi-user.target
```
Note that when running as the headscale user ensure that, either you add your current user to the headscale group:
```shell
usermod -a -G headscale current_user
```
or run all headscale commands as the headscale user:
```shell
su - headscale
```
2. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with path that is writable by the `headscale` user or group:
```yaml
unix_socket: /var/run/headscale/headscale.sock
```
3. Reload SystemD to load the new configuration file:
```shell
systemctl daemon-reload
```
4. Enable and start the new `headscale` service:
```shell
systemctl enable --now headscale
```
5. Verify the headscale service:
```shell
systemctl status headscale
```
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
`headscale` will now run in the background and start at boot.

View File

@@ -1,5 +1,12 @@
# Running headscale on OpenBSD
!!! warning "Community documentation"
This page is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
## Goal
This documentation has the goal of showing a user how to install and run `headscale` on OpenBSD 7.1.
@@ -10,17 +17,14 @@ describing how to make `headscale` run properly in a server environment.
1. Install from ports (Not Recommend)
As of OpenBSD 7.1, there's a headscale in ports collection, however, it's severely outdated(v0.12.4).
As of OpenBSD 7.2, there's a headscale in the ports collection; however, it's severely outdated (v0.12.4).
You can install it via `pkg_add headscale`.
2. Install from source on OpenBSD 7.1
2. Install from source on OpenBSD 7.2
```shell
# Install prerequisites
# 1. go v1.19+: headscale newer than 0.17 needs go 1.19+ to compile
# 2. gmake: Makefile in the headscale repo is written in GNU make syntax
pkg_add -D snap go
pkg_add gmake
pkg_add go
git clone https://github.com/juanfont/headscale.git
@@ -33,7 +37,7 @@ latestTag=$(git describe --tags `git rev-list --tags --max-count=1`)
git checkout $latestTag
gmake build
go build -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$latestTag" github.com/juanfont/headscale
# make it executable
chmod a+x headscale
@@ -46,7 +50,7 @@ cp headscale /usr/local/sbin
```shell
# Install prerequisites
# 1. go v1.19+: headscale newer than 0.17 needs go 1.19+ to compile
# 1. go v1.20+: headscale newer than 0.21 needs go 1.20+ to compile
# 2. gmake: Makefile in the headscale repo is written in GNU make syntax
git clone https://github.com/juanfont/headscale.git
@@ -90,8 +94,7 @@ touch /var/lib/headscale/db.sqlite
touch /etc/headscale/config.yaml
```
It is **strongly recommended** to copy and modify the [example configuration](../config-example.yaml)
from the [headscale repository](../)
**(Strongly Recommended)** Download a copy of the [example configuration](https://github.com/juanfont/headscale/blob/main/config-example.yaml) from the headscale repository.
4. Start the headscale server:
@@ -116,10 +119,10 @@ Verify `headscale` is available:
curl http://127.0.0.1:9090/metrics
```
6. Create a namespace ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
6. Create a user ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
headscale namespaces create myfirstnamespace
headscale users create myfirstuser
```
### Register a machine (normal login)
@@ -133,7 +136,7 @@ tailscale up --login-server YOUR_HEADSCALE_URL
Register the machine:
```shell
headscale --namespace myfirstnamespace nodes register --key <YOU_+MACHINE_KEY>
headscale --user myfirstuser nodes register --key <YOUR_MACHINE_KEY>
```
### Register machine using a pre authenticated key
@@ -141,7 +144,7 @@ headscale --namespace myfirstnamespace nodes register --key <YOU_+MACHINE_KEY>
Generate a key using the command line:
```shell
headscale --namespace myfirstnamespace preauthkeys create --reusable --expiration 24h
headscale --user myfirstuser preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` during the `tailscale` command:

View File

@@ -11,8 +11,17 @@ To make the Windows client behave as expected and to run well with `headscale`,
- `HKLM:\SOFTWARE\Tailscale IPN\UnattendedMode` must be set to `always` as a `string` type, to allow Tailscale to run properly in the background
- `HKLM:\SOFTWARE\Tailscale IPN\LoginURL` must be set to `<YOUR HEADSCALE URL>` as a `string` type, to ensure Tailscale contacts the correct control server.
You can set these using the Windows Registry Editor:
![windows-registry](./images/windows-registry.png)
Or via the following Powershell commands (right click Powershell icon and select "Run as administrator"):
```
New-ItemProperty -Path 'HKLM:\Software\Tailscale IPN' -Name UnattendedMode -PropertyType String -Value always
New-ItemProperty -Path 'HKLM:\Software\Tailscale IPN' -Name LoginURL -PropertyType String -Value https://YOUR-HEADSCALE-URL
```
The Tailscale Windows client has been observed to reset its configuration on logout/reboot, and these two keys [resolve that issue](https://github.com/tailscale/tailscale/issues/2798).
For a guide on how to edit registry keys, [check out Computer Hope](https://www.computerhope.com/issues/ch001348.htm).

Some files were not shown because too many files have changed in this diff.