Compare commits


944 Commits

Author SHA1 Message Date
Kristoffer Dalby
150ae1846a Merge pull request #517 from juanfont/changelog-prep-0.15
Prepare CHANGELOG for v0.15.0
2022-03-20 14:18:01 +00:00
Juan Font
452286552c Update CHANGELOG.md to include future 0.16.0
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-03-20 15:07:22 +01:00
Juan Font Alonso
631cf58ff0 Added date for 0.15.0 in changelog 2022-03-20 13:36:25 +01:00
Juan Font
8a2c0e88f4 Merge pull request #513 from juanfont/unstable-integration-tests
Add Tailscale unstable channel and repo HEAD to integration tests
2022-03-20 13:35:23 +01:00
Juan Font Alonso
af6a47fdd3 Changelog updated 2022-03-20 12:36:30 +01:00
Juan Font
94d910557f Merge branch 'main' into unstable-integration-tests 2022-03-20 12:34:04 +01:00
Juan Font Alonso
a8a683d3cc Added default values in Dockerfile.tailscale 2022-03-20 12:33:41 +01:00
Juan Font Alonso
a1caa5b45c Minor improvements on logging 2022-03-20 12:31:18 +01:00
Juan Font Alonso
f42868f67f Docker requires lowercase for the container names 2022-03-20 12:30:56 +01:00
Juan Font Alonso
a6455653c0 Added missing package 2022-03-20 12:30:08 +01:00
Kristoffer Dalby
c8503075e0 Merge pull request #514 from aofei/main 2022-03-18 21:18:22 +00:00
Kristoffer Dalby
4068a7b00b Merge branch 'main' into main 2022-03-18 21:02:05 +00:00
Kristoffer Dalby
daae2fe549 Merge pull request #512 from restanrm/feat-add-debug-log 2022-03-18 21:01:16 +00:00
Kristoffer Dalby
739653fa71 Merge branch 'main' into feat-add-debug-log 2022-03-18 20:44:21 +00:00
Kristoffer Dalby
304109a6c5 Merge pull request #511 from restanrm/fix-machine-registration-expired 2022-03-18 20:44:05 +00:00
Kristoffer Dalby
c29af96a19 Merge branch 'main' into main 2022-03-18 20:42:44 +00:00
Kristoffer Dalby
d21e9d29d1 Merge branch 'main' into feat-add-debug-log 2022-03-18 19:41:32 +00:00
Kristoffer Dalby
b65bd5baa8 Merge branch 'main' into fix-machine-registration-expired 2022-03-18 19:41:26 +00:00
Juan Font Alonso
0165b89941 Fixed paths 2022-03-18 19:35:09 +01:00
Kristoffer Dalby
53b62f3f39 Merge pull request #499 from juanfont/mandatory-stun 2022-03-18 18:28:37 +00:00
Kristoffer Dalby
cd2914ab3b Merge branch 'main' into mandatory-stun 2022-03-18 17:44:12 +00:00
Kristoffer Dalby
e85b97143c Merge pull request #509 from kradalby/go118 2022-03-18 17:43:41 +00:00
Aofei Sheng
1eafe960b8 fix: possible panic in Headscale.scheduledDERPMapUpdateWorker
There is a possible nil pointer dereference panic in
`Headscale.scheduledDERPMapUpdateWorker`, for example when the embedded
DERP server is disabled.
2022-03-19 01:20:25 +08:00
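
A minimal sketch of the kind of guard such a fix implies; the types and field names below are hypothetical stand-ins, not the actual headscale code:

```go
package derpsketch

import "time"

// Hypothetical stand-ins for the real headscale types; the point of the
// sketch is only the nil guard.
type DERPRegion struct{ RegionID int }

type DERPServer struct {
	region *DERPRegion // nil when the embedded DERP server is disabled
}

type Headscale struct {
	derpServer *DERPServer
	derpMap    map[int]*DERPRegion
}

// scheduledDERPMapUpdateWorker periodically merges the embedded DERP region
// into the DERP map. Without the nil checks it would panic with a nil
// pointer dereference whenever the embedded DERP server was never set up.
func (h *Headscale) scheduledDERPMapUpdateWorker(cancel <-chan struct{}, every time.Duration) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()

	for {
		select {
		case <-cancel:
			return
		case <-ticker.C:
			if h.derpServer == nil || h.derpServer.region == nil {
				continue // embedded DERP disabled: nothing to merge
			}
			h.derpMap[h.derpServer.region.RegionID] = h.derpServer.region
		}
	}
}
```
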
Juan Font Alonso
749c92954c Add Tailscale unstable channel and repo HEAD to integration tests
In preparation for the implementation of the new TS2021 protocol (Tailscale control protocol v2), we are expanding the test infrastructure
2022-03-18 17:05:28 +01:00
Juan Font Alonso
db9ba17920 Added missing file 2022-03-18 13:10:35 +01:00
Juan Font Alonso
d5ce7d7523 Prettier 2022-03-18 13:09:57 +01:00
Juan Font Alonso
2e6687209b Make STUN server mandatory if DERP embedded is enabled 2022-03-18 12:58:00 +01:00
Adrien Raffin-Caboisse
2e04abf4bb feat(oidc): add debug log 2022-03-18 09:40:12 +01:00
Adrien Raffin-Caboisse
882c0c34c1 chore(changelog): update changelog 2022-03-18 09:34:18 +01:00
Adrien Raffin-Caboisse
61ebb713f2 fix(oidc): Reset expiry for reauthentication
The previous code reset the expiry time to an already-expired value, so the machine was never reauthenticated
2022-03-18 09:32:07 +01:00
Kristoffer Dalby
b781446e86 Upgrade to go 1.18 2022-03-17 17:43:11 +00:00
Kristoffer Dalby
1c9b1c0579 Merge pull request #507 from juanfont/update-contributors 2022-03-17 07:28:42 +00:00
github-actions[bot]
ade9552736 docs(README): update contributors 2022-03-17 06:38:00 +00:00
Kristoffer Dalby
68403cb76e Merge pull request #505 from y0ngb1n/fix-docs-metrics-endpoint 2022-03-17 06:37:27 +00:00
Yang Bin
537ecb8db0 docs: fixed /metrics endpoint 8080 → 9090, reference config-example.yaml 2022-03-17 09:25:42 +08:00
Juan Font Alonso
8f5875efe4 Reorg errors 2022-03-16 19:46:59 +01:00
Juan Font
98ac88d5ef Changed comment position
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-03-16 18:45:34 +01:00
Kristoffer Dalby
d13338a9fb Merge branch 'main' into mandatory-stun 2022-03-16 07:18:18 +00:00
Juan Font
1579ffb66a Merge pull request #500 from bravechamp/patch-1
Fix API access
2022-03-15 14:53:23 +01:00
bravechamp
0bfa5302a7 Fix API access
By allowing API keys to be validated
2022-03-15 16:05:56 +03:00
Juan Font Alonso
b8aad5451d Make STUN run by default when embedded DERP is enabled
This commit also allows setting an external STUN server while running the embedded DERP server (without the embedded STUN server)
2022-03-15 13:22:25 +01:00
Kristoffer Dalby
61440c42d3 Merge pull request #496 from juanfont/update-contributors
docs(README): update contributors
2022-03-10 19:58:24 +00:00
github-actions[bot]
18ee6274e1 docs(README): update contributors 2022-03-10 19:50:59 +00:00
Kristoffer Dalby
0abfbdc18a Merge pull request #495 from appbricks/appbricks/main-bug-fix
Regression bug fix when re-authenticating machine with auth-key
2022-03-10 19:50:23 +00:00
Mevan Samaratunga
082a852c5e fixed linting recommendation 2022-03-10 10:40:20 -05:00
Mevan Samaratunga
af081e9fd3 fixed lint errors 2022-03-10 10:22:21 -05:00
Mevan Samaratunga
8b5e8b7dfc Refresh expired machine on re-auth - closes #489 2022-03-10 08:59:28 -05:00
Kristoffer Dalby
62d7fae056 Merge pull request #311 from restanrm/docs-acl-modifications
Issues with current ACL implementation and solution proposal
2022-03-08 17:08:17 +00:00
Kristoffer Dalby
dd219d0ff6 Merge branch 'main' into docs-acl-modifications 2022-03-08 17:05:59 +00:00
Kristoffer Dalby
6087e1cf6f Merge pull request #488 from juanfont/update-contributors 2022-03-08 17:02:47 +00:00
github-actions[bot]
c47fb1ae54 docs(README): update contributors 2022-03-08 16:50:11 +00:00
Kristoffer Dalby
48cec3cd90 Merge pull request #486 from e-zk/main 2022-03-08 16:49:32 +00:00
Kristoffer Dalby
e54c508c10 Merge branch 'main' into main 2022-03-08 16:05:41 +00:00
Kristoffer Dalby
941e9d9b0f Merge pull request #388 from juanfont/embedded-derp 2022-03-08 16:05:30 +00:00
Juan Font Alonso
b803240dc1 Added new line for prettier 2022-03-08 12:21:08 +01:00
Juan Font Alonso
bdbf620ece Merge branch 'embedded-derp' of https://github.com/juanfont/headscale into embedded-derp 2022-03-08 12:16:43 +01:00
Juan Font
e5d22b8a70 Merge branch 'main' into embedded-derp 2022-03-08 12:16:34 +01:00
Juan Font Alonso
05c5e2280b Updated CHANGELOG and README 2022-03-08 12:15:05 +01:00
Juan Font Alonso
b41d89946a Merge branch 'embedded-derp' of https://github.com/juanfont/headscale into embedded-derp 2022-03-08 12:11:59 +01:00
Juan Font Alonso
cc0c88a63a Added small integration test for stun 2022-03-08 12:11:51 +01:00
e-zk
c06689dec1 fix: make register html/template consistent with other html
- makes the html/template for /register follow the same formatting
  as /apple and /windows
- adds a <title> element
- minor change for consistency's sake
2022-03-08 18:34:46 +10:00
Kristoffer Dalby
b85dd7abbd Merge pull request #484 from juanfont/prtemplate-fix
Fix checkboxes in PR template
2022-03-08 07:29:30 +00:00
Kristoffer Dalby
6aeaff43aa Fix checkboxes in PR template 2022-03-08 07:21:04 +00:00
Kristoffer Dalby
dd26cbd193 Merge branch 'main' into embedded-derp 2022-03-08 07:18:51 +00:00
Kristoffer Dalby
b0ae3240fd Merge pull request #387 from restanrm/fix-magic-dns-and-uppercase-letters
Fix magic dns and uppercase letters
2022-03-08 07:17:45 +00:00
Adrien Raffin-Caboisse
41efe98953 fix: apply fmt and fix missing name changes 2022-03-07 23:20:30 +01:00
Adrien Raffin-Caboisse
2b68c90778 chore: update changelog 2022-03-07 23:14:39 +01:00
Adrien Raffin-Caboisse
f19c048569 fix: change normalization function name 2022-03-07 22:55:54 +01:00
Adrien Raffin-Caboisse
6cc8bbc24f feat(api): add normalisation at machine register step 2022-03-07 22:46:29 +01:00
Juan Font Alonso
03452a8dca Prettied 2022-03-07 00:29:40 +01:00
Juan Font Alonso
15ed71315c Merge branch 'embedded-derp' of https://github.com/juanfont/headscale into embedded-derp 2022-03-06 23:47:31 +01:00
Juan Font Alonso
05df8e947a Added missing file 2022-03-06 23:47:14 +01:00
Juan Font Alonso
b3fa66dbd2 Check for DERP in test 2022-03-06 23:46:16 +01:00
Juan Font Alonso
a27b386123 Clarified expiration dates 2022-03-06 23:45:01 +01:00
Juan Font Alonso
580db9b58f Mention that STUN is UDP 2022-03-06 23:19:21 +01:00
Adrien Raffin-Caboisse
1114449601 change: update name of method to check and normalize Domain name 2022-03-06 20:46:17 +01:00
Juan Font
b47de07eea Update Dockerfile.tailscale
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-03-06 20:42:27 +01:00
Juan Font
e1fcf0da26 Added more version
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-03-06 20:40:55 +01:00
Juan Font
dcf3ea567c Merge branch 'main' into fix-magic-dns-and-uppercase-letters 2022-03-06 17:37:48 +01:00
Juan Font Alonso
de2ea83b3b Linting here and there 2022-03-06 17:35:54 +01:00
Juan Font Alonso
eb06054a7b Make DERP Region configurable 2022-03-06 17:25:21 +01:00
Juan Font Alonso
eb500155e8 Make STUN server configurable 2022-03-06 17:00:56 +01:00
Juan Font Alonso
dc909ba6d7 Improved logging on startup 2022-03-06 16:54:19 +01:00
Juan Font Alonso
70910c4595 Working /bootstrap-dns DERP helper 2022-03-06 01:23:35 +01:00
Juan Font Alonso
54c3e00a1f Merge local DERP server region with other configured DERP sources 2022-03-05 20:04:31 +01:00
Juan Font Alonso
e78c002f5a Fix minor issue 2022-03-05 19:48:30 +01:00
Juan Font Alonso
237f7f1027 Merge branch 'main' into embedded-derp 2022-03-05 19:42:29 +01:00
Juan Font Alonso
992efbd84a Added missing private TLS key 2022-03-05 19:35:15 +01:00
Juan Font Alonso
e9eb90fa76 Added integration tests for the embedded DERP server 2022-03-05 19:34:06 +01:00
Juan Font Alonso
88378c22fb Rename the file to derp_server.go for coherence 2022-03-05 19:31:50 +01:00
Juan Font Alonso
b742379627 Do not use the term embedded 2022-03-05 19:30:30 +01:00
Juan Font Alonso
df37d1a639 Do not offer the option to be DERP insecure
WebSockets, on which DERP is based, require a TLS certificate. At the same time,
if we use a certificate it must be valid... otherwise Tailscale won't connect (it does
not have an insecure option). So there is no option to expose an insecure mode here
2022-03-05 19:19:21 +01:00
Juan Font Alonso
758b1ba1cb Renamed configuration items of the DERP server 2022-03-05 16:22:02 +01:00
Kristoffer Dalby
435ee36d78 Merge pull request #394 from juanfont/renovateaction/dockerfiles 2022-03-05 00:41:22 +00:00
Renovate Bot
35efd8f95a chore(deps): update dependency docker.io/golang to v1.17.8 2022-03-05 00:09:36 +00:00
Juan Font Alonso
09d78c7a05 Even more stuff moved to common 2022-03-04 13:54:59 +01:00
Kristoffer Dalby
60655c5242 Merge pull request #393 from juanfont/update-contributors 2022-03-04 12:30:20 +00:00
Juan Font Alonso
22d2443281 Move more stuff to common 2022-03-04 13:26:45 +01:00
github-actions[bot]
a70669fca7 docs(README): update contributors 2022-03-04 11:04:12 +00:00
Kristoffer Dalby
0720473033 Merge pull request #392 from e-zk/windows-endpoint 2022-03-04 11:03:33 +00:00
Kristoffer Dalby
e799307e74 Merge branch 'main' into windows-endpoint 2022-03-04 10:47:52 +00:00
e-zk
575f33d183 docs: fix comments to comply with golangci-lint 2022-03-04 20:35:09 +10:00
Juan Font Alonso
607c1eb316 Be consistent with uppercase DERP 2022-03-04 11:31:41 +01:00
e-zk
d69dada8ff feat(windows): rename apple_mobileconfig.go => platform_config.go
rename apple_mobileconfig.go to platform_config.go since the file
includes configuration info for multiple platforms now.
2022-03-04 20:03:49 +10:00
e-zk
f9e0c13890 docs: update CHANGELOG 2022-03-04 19:53:57 +10:00
e-zk
12a50ac8ac feat(windows): add /windows endpoint for Windows configuration
- registry file /windows/tailscale.reg is generated, filling in the
  associated control server URL
- also includes CLI instructions
- fix /apple incorrect template: 'Url' is supposed to be '.URL'
2022-03-04 19:53:44 +10:00
e-zk
b342cf0240 feat(windows): cleanup /apple endpoint
- rename the gin function to AppleConfigMessage
- use <pre> + <code> for code blocks
- add headscale heading
- reword some sections
2022-03-04 19:53:29 +10:00
Kristoffer Dalby
e3ff87b7ef Merge pull request #389 from e-zk/main 2022-03-04 07:26:35 +00:00
zakaria
745696b310 docs: fix mistake in ACME challenge type comment 2022-03-04 12:11:43 +10:00
Juan Font Alonso
23cde8445f Merge branch 'main' into embedded-derp 2022-03-04 00:04:59 +01:00
Juan Font Alonso
9d43f589ae Added missing deps 2022-03-04 00:04:28 +01:00
Juan Font Alonso
897d480f4d Add an embedded DERP server to Headscale
This series of commits adds an embedded DERP server (and STUN) to Headscale,
thus making it completely self-contained and not dependent on other infrastructure.
2022-03-04 00:01:31 +01:00
Adrien Raffin-Caboisse
6f172a6e4c fix(acls): remove dead error code 2022-03-03 23:53:08 +01:00
Adrien Raffin-Caboisse
44a5372c53 fix(poll): Normalize hostname
This function is called often. The normalized hostname will be written to the database.
2022-03-03 23:52:25 +01:00
Kristoffer Dalby
f2ea6fb30f Merge pull request #384 from restanrm/fix-issue-with-empty-namespace-and-acl-evaluation 2022-03-03 08:43:37 +00:00
Adrien Raffin-Caboisse
4a4952899b feat(acls): add some logs and skip error
The logs look like the following:
```
2022-03-02T20:43:08Z DBG Expanding alias=app-test
2022-03-02T20:43:08Z DBG Expanding alias=kube-test
2022-03-02T20:43:08Z DBG Expanding alias=test
2022-03-02T20:43:08Z WRN No IPs found with the alias test
2022-03-02T20:43:08Z DBG Expanding alias=prod
2022-03-02T20:43:08Z WRN No IPs found with the alias prod
2022-03-02T20:43:08Z DBG Expanding alias=prod
2022-03-02T20:43:08Z WRN No IPs found with the alias prod
```
2022-03-02 21:54:43 +01:00
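
A sketch of the skip-and-warn behaviour this commit describes, assuming zerolog (which matches the quoted output format); the lookup map is a hypothetical stand-in for the real alias resolution:

```go
package aclsketch

import "github.com/rs/zerolog/log"

// expandAlias resolves an ACL alias (a tag, group, or plain host name) to
// the IPs it maps to. Instead of aborting the whole ACL build when an alias
// matches nothing, it logs a warning and returns an empty result so
// evaluation continues, matching the DBG/WRN lines quoted above.
func expandAlias(alias string, known map[string][]string) []string {
	log.Debug().Str("alias", alias).Msg("Expanding")

	ips := known[alias]
	if len(ips) == 0 {
		log.Warn().Msgf("No IPs found with the alias %v", alias)
		return nil
	}

	return ips
}
```
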
Kristoffer Dalby
b72a8aa7d1 Merge pull request #381 from juanfont/update-contributors 2022-03-02 14:18:09 +00:00
github-actions[bot]
e301d0d1df docs(README): update contributors 2022-03-02 13:44:26 +00:00
Kristoffer Dalby
75ca91b0f7 Merge pull request #380 from juanfont/update-contributors
docs(README): update contributors
2022-03-02 13:43:53 +00:00
github-actions[bot]
e208ccc982 docs(README): update contributors 2022-03-02 13:42:25 +00:00
Kristoffer Dalby
71a62697aa Merge pull request #379 from juanfont/kradalby-patch-1
Second contributor attempt
2022-03-02 13:41:50 +00:00
Kristoffer Dalby
f9c0597875 Second contributor attempt 2022-03-02 13:40:37 +00:00
Kristoffer Dalby
aa3eb5171a Merge pull request #344 from reynico/metrics-listen 2022-03-02 13:06:29 +00:00
Nico Rey
dcc46af8de Changelog: add breaking change 2022-03-02 09:22:29 -03:00
Kristoffer Dalby
b61500670c Merge branch 'main' into metrics-listen 2022-03-02 11:35:33 +00:00
Kristoffer Dalby
ccec534e19 Merge pull request #377 from juanfont/smarter-contribute-pipeline 2022-03-02 11:17:02 +00:00
Kristoffer Dalby
9b10457209 Merge branch 'main' into smarter-contribute-pipeline 2022-03-02 11:14:50 +00:00
Kristoffer Dalby
9a8f605cba Merge pull request #371 from kradalby/use-specific-database-typess 2022-03-02 11:14:04 +00:00
Kristoffer Dalby
1246267ead Merge branch 'main' into smarter-contribute-pipeline 2022-03-02 10:43:07 +00:00
Kristoffer Dalby
a0a56d43f8 Merge branch 'main' into use-specific-database-typess 2022-03-02 10:29:34 +00:00
Kristoffer Dalby
63d87110f6 Merge pull request #376 from e-zk/feat/command-aliases 2022-03-02 10:28:18 +00:00
Kristoffer Dalby
7c99d963e2 Merge branch 'main' into feat/command-aliases 2022-03-02 10:06:38 +00:00
zakaria
a614f158be docs: update changelog 2022-03-02 19:53:07 +10:00
Kristoffer Dalby
2b6a5173da Allow upstream delete continue on failure 2022-03-02 09:12:00 +00:00
Kristoffer Dalby
32ac690494 Update contributors.yml 2022-03-02 09:08:30 +00:00
Kristoffer Dalby
0835bffc3c Update changelog 2022-03-02 08:15:21 +00:00
Kristoffer Dalby
c80e364f02 Remove always nil error 2022-03-02 08:15:14 +00:00
Kristoffer Dalby
5b169010be Resolve merge conflict 2022-03-02 08:11:50 +00:00
Kristoffer Dalby
eeded85d9c Merge pull request #366 from kradalby/registration-simplification 2022-03-02 08:02:26 +00:00
Kristoffer Dalby
e4d81bbb16 Merge branch 'main' into registration-simplification 2022-03-02 07:31:02 +00:00
Kristoffer Dalby
1f8c7f427b Add comment 2022-03-02 07:29:56 +00:00
Kristoffer Dalby
ef422e6988 Protect against expiry nil 2022-03-02 07:29:56 +00:00
Kristoffer Dalby
ec4dc68524 Use correct machinekey format for oidc reg 2022-03-02 07:29:56 +00:00
Kristoffer Dalby
86ade72c19 Remove err check 2022-03-02 07:29:56 +00:00
Kristoffer Dalby
0c0653df8b Merge pull request #375 from restanrm/fix-limitations-in-source-acls-rules
Fix limitations in source acls rules
2022-03-02 07:02:29 +00:00
zakaria
12b3b5f8f1 feat(aliases): add aliases for preauthkeys command
- `preauthkey`, `authkey`, `pre` are aliases for `preauthkey` command
- `ls`, `show` are aliases for `list` subcommand
- `c`, `new` are aliases for `create` subcommand
- `revoke`, `exp`, `e` are aliases for `expire` subcommand
2022-03-02 15:42:12 +10:00
zakaria
052dbfe440 feat(aliases): add aliases for apikeys command
- `apikey`, `api` are aliases for `apikeys` command
- `ls`, `show` are aliases for `list` subcommand
- `c`, `new` are aliases for `create` subcommand
- `revoke`, `exp`, `e` are aliases for the `expire` subcommand
2022-03-02 15:32:35 +10:00
zakaria
5310f8692b feat(aliases): add aliases for namespaces command
- `namespace`, `ns`, `user`, `users` are aliases for `namespaces`
   command
- `c`, `new` are aliases for the `create` subcommand
- `delete` is an alias for the `destroy` subcommand
- `mv` is an alias for the `rename` subcommand
- `ls`, `show` are aliases for the `list` subcommand
2022-03-02 14:35:20 +10:00
zakaria
aff6b84250 feat(aliases): add 'gen' alias for 'generate' command 2022-03-02 14:29:33 +10:00
zakaria
21eee912a3 feat(aliases): add aliases for nodes command
- `node`, `machine`, `machines` are aliases for `nodes` command
- `ls`, `show` aliases for `list` subcommand
- `logout`, `exp`, `e` are aliases for `expire` subcommand
- `del` is an alias for `delete` subcommand
2022-03-02 14:28:03 +10:00
zakaria
dbb2af0238 feat(aliases): add aliases for route command
- `r` is alias for `route` command
- `ls`, or `show` is alias for `list` subcommand
2022-03-02 14:27:56 +10:00
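
The alias commits above map directly onto Cobra's built-in `Aliases` field; a minimal, self-contained sketch (command names and output are illustrative):

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	// With Aliases set, `nodes ls` behaves exactly like `nodes list`.
	nodesCmd := &cobra.Command{
		Use:     "nodes",
		Aliases: []string{"node", "machine", "machines"},
	}

	listCmd := &cobra.Command{
		Use:     "list",
		Aliases: []string{"ls", "show"},
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println("listing nodes...")
		},
	}

	nodesCmd.AddCommand(listCmd)

	rootCmd := &cobra.Command{Use: "headscale"}
	rootCmd.AddCommand(nodesCmd)

	if err := rootCmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```
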
Adrien Raffin-Caboisse
77fe0b01f7 docs: update changelog 2022-03-01 22:50:22 +01:00
Adrien Raffin-Caboisse
361b4f7f4f fix(machine): allow to use * in ACL sources 2022-03-01 22:48:21 +01:00
Kristoffer Dalby
dec4ee5f73 Merge pull request #373 from restanrm/feat-email-in-acls 2022-03-01 21:18:13 +00:00
Adrien Raffin-Caboisse
b2dca80e7a docs: update changelog 2022-03-01 21:16:33 +01:00
Adrien Raffin-Caboisse
a455a874ad feat(acls): normalize the group name 2022-03-01 21:10:52 +01:00
Kristoffer Dalby
49cd761bf6 Use new machine types in tests 2022-03-01 16:34:35 +00:00
Kristoffer Dalby
6477e6a583 Use new machine types 2022-03-01 16:34:24 +00:00
Kristoffer Dalby
8a95fe517a Use specific types for all fields on machine (no datatypes.json)
This commit removes the need for datatypes.JSON and makes the code a bit
cleaner by allowing us to use proper types throughout the code when it
comes to hostinfo and other datatypes on the machine object.

This allows us to remove a lot of unmarshal/marshal operations and remove
a lot of obsolete error checks.

The following commits will clean away a lot of untyped data and
unnecessary error checks.
2022-03-01 16:31:25 +00:00
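
One way to get a typed column without datatypes.JSON is the standard `driver.Valuer`/`sql.Scanner` pair; this sketch is illustrative and not necessarily the approach the commit took, and the field names are made up:

```go
package machinesketch

import (
	"database/sql/driver"
	"encoding/json"
	"errors"
)

// HostInfo is a hypothetical typed column. Implementing driver.Valuer and
// sql.Scanner keeps the (un)marshalling in one place, so the rest of the
// code works with a proper Go type and loses the repeated error checks.
type HostInfo struct {
	OS       string   `json:"os"`
	Hostname string   `json:"hostname"`
	Routes   []string `json:"routes"`
}

// Value serializes the struct when the ORM writes the column.
func (hi HostInfo) Value() (driver.Value, error) {
	return json.Marshal(hi)
}

// Scan deserializes the column back into the struct when reading.
func (hi *HostInfo) Scan(value interface{}) error {
	raw, ok := value.([]byte)
	if !ok {
		return errors.New("HostInfo: expected []byte from database driver")
	}
	return json.Unmarshal(raw, hi)
}

// Machine can then use the typed field directly.
type Machine struct {
	ID       uint64
	Name     string
	HostInfo HostInfo `gorm:"type:text"`
}
```
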
Kristoffer Dalby
a9d4fa89dc Merge branch 'main' into registration-simplification 2022-03-01 15:53:06 +01:00
Kristoffer Dalby
94c5474212 Merge pull request #369 from kradalby/update-dependencies
Update dependencies
2022-03-01 15:52:27 +01:00
Kristoffer Dalby
d34d617935 Merge branch 'main' into registration-simplification 2022-03-01 15:18:24 +01:00
Kristoffer Dalby
573008757d Merge branch 'main' into update-dependencies 2022-03-01 15:16:56 +01:00
Kristoffer Dalby
4c74043f72 Merge pull request #359 from kradalby/yaml-acls
Add YAML support to ACLs
2022-03-01 15:16:37 +01:00
Kristoffer Dalby
0551b34de5 Merge branch 'main' into update-dependencies 2022-03-01 14:51:57 +01:00
Kristoffer Dalby
105812421e Merge branch 'main' into yaml-acls 2022-03-01 14:49:37 +01:00
Kristoffer Dalby
4a9fd3a680 Merge pull request #368 from kradalby/apple-profile-fix
Fix apple profile issue being generated with escaped characters
2022-03-01 14:49:07 +01:00
Kristoffer Dalby
1cb39d914c Update dependencies 2022-03-01 07:35:17 +00:00
Kristoffer Dalby
5157f356cb Fix apple profile issue being generated with escaped characters 2022-03-01 07:30:35 +00:00
Kristoffer Dalby
7c63412df5 Remove todo 2022-02-28 23:02:41 +00:00
Kristoffer Dalby
82cb6b9ddc Cleanup some unreachable code 2022-02-28 23:00:41 +00:00
Kristoffer Dalby
379017602c Reformat and add db backup note 2022-02-28 22:50:35 +00:00
Kristoffer Dalby
8bef04d8df Remove sorted todo 2022-02-28 22:45:42 +00:00
Kristoffer Dalby
5e92ddad43 Remove redundant caches
This commit removes the two extra caches (oidc, requested time) and uses
the new central registration cache instead. The requested time is
unified into the main machine object, and the oidc key is simply added to
the same cache as a string, keyed by the state instead of the machine
key.
2022-02-28 22:42:30 +00:00
Kristoffer Dalby
e64bee778f Regenerate proto 2022-02-28 22:21:14 +00:00
Kristoffer Dalby
5e1b12948e Remove registered field from proto 2022-02-28 22:21:06 +00:00
Kristoffer Dalby
eea8e7ba6f Update changelog 2022-02-28 22:11:31 +00:00
Kristoffer Dalby
78251ce8ec Remove registrated field
This commit removes the field from the database and does a DB migration
**removing** all unregistered machines from headscale.

This means that from this version, all machines in the database are
considered registered.
2022-02-28 18:05:03 +00:00
Kristoffer Dalby
a8649d83c4 Remove all references to Machine.Registered from tests 2022-02-28 17:42:03 +00:00
Kristoffer Dalby
16b21e8158 Remove all references to Machine.Registered 2022-02-28 16:55:57 +00:00
Kristoffer Dalby
35616eb861 Fix oidc error where namespace isn't created #365 2022-02-28 16:41:28 +00:00
Kristoffer Dalby
e7bef56718 Remove reference to registered in integration test 2022-02-28 16:36:29 +00:00
Kristoffer Dalby
c6b87de959 Remove poorly aged test 2022-02-28 16:36:16 +00:00
Kristoffer Dalby
50053e616a Ignore complexity linter 2022-02-28 16:35:08 +00:00
Kristoffer Dalby
54cc3c067f Implement new machine register parameter 2022-02-28 16:34:50 +00:00
Kristoffer Dalby
402a76070f Reuse machine structure for parameters, named parameters 2022-02-28 16:34:28 +00:00
Nico Rey
9a61725e9f Metrics: Disable toggle. Set default port to 9090 2022-02-28 10:40:02 -03:00
Kristoffer Dalby
6126d6d9b5 Merge branch 'main' into metrics-listen 2022-02-28 14:24:25 +01:00
Kristoffer Dalby
469551bc5d Register new machines needing callback in memory
This commit stores temporary registration data in cache, instead of
memory allowing us to only have actually registered machines in the
database.
2022-02-28 08:06:39 +00:00
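
A minimal sketch of the idea: park not-yet-registered machines in an expiring in-memory cache keyed by machine key, and only write them to the database once registration completes. The real implementation may use a ready-made cache library; this stand-in uses a plain map and invented types.

```go
package registrationsketch

import (
	"sync"
	"time"
)

// Machine is a stand-in for headscale's machine model.
type Machine struct {
	MachineKey string
	Name       string
}

type pendingEntry struct {
	machine Machine
	expires time.Time
}

// registrationCache parks machines that still need to finish registration
// (for example via the web or OIDC callback), so only fully registered
// machines ever reach the database.
type registrationCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	pending map[string]pendingEntry
}

func newRegistrationCache(ttl time.Duration) *registrationCache {
	return &registrationCache{ttl: ttl, pending: map[string]pendingEntry{}}
}

// Put stores a machine awaiting its registration callback.
func (c *registrationCache) Put(m Machine) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.pending[m.MachineKey] = pendingEntry{machine: m, expires: time.Now().Add(c.ttl)}
}

// Take removes and returns a pending machine so the caller can persist it
// as a registered machine; expired entries are treated as missing.
func (c *registrationCache) Take(machineKey string) (Machine, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	entry, ok := c.pending[machineKey]
	delete(c.pending, machineKey)
	if !ok || time.Now().After(entry.expires) {
		return Machine{}, false
	}
	return entry.machine, true
}
```
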
Kristoffer Dalby
1caa6f5d69 Add todo for JSON datatype 2022-02-27 18:48:25 +01:00
Kristoffer Dalby
ecc26432fd Fix excessive replace 2022-02-27 18:48:12 +01:00
Kristoffer Dalby
caffbd8956 Update cli registration with new method 2022-02-27 18:42:43 +01:00
Kristoffer Dalby
fd1e4a1dcd Generalise registration for openid 2022-02-27 18:42:24 +01:00
Kristoffer Dalby
acb945841c Generalise registration for pre auth keys 2022-02-27 18:42:15 +01:00
Kristoffer Dalby
c58ce6f60c Generalise the registration method to DRY stuff up 2022-02-27 18:40:10 +01:00
Kristoffer Dalby
d6f6939c54 Update changelog 2022-02-27 09:08:29 +01:00
Kristoffer Dalby
e0b9a317f4 Add note to config example 2022-02-27 09:05:08 +01:00
Kristoffer Dalby
c159eb7541 Add basic test of yaml parsing 2022-02-27 09:04:59 +01:00
Kristoffer Dalby
8a3a0b6403 Add YAML support to ACLs 2022-02-27 09:04:48 +01:00
Kristoffer Dalby
67d6c8f946 Remove oversensitive tracing output 2022-02-27 09:04:27 +01:00
Nico Rey
06e6c29a5b metrics: make metrics endpoint toggleable 2022-02-25 18:36:03 -03:00
Nico Rey
a9122c3de3 prometheus: replace default port by a port between the recommended prometheus range 2022-02-25 18:21:20 -03:00
Kristoffer Dalby
b1bd17f316 Merge pull request #350 from restanrm/feat-oidc-login-as-namespace 2022-02-25 13:22:26 +01:00
Adrien Raffin-Caboisse
b39faa124a Merge remote-tracking branch 'origin/main' into feat-oidc-login-as-namespace 2022-02-25 11:28:17 +01:00
Kristoffer Dalby
8689a39c96 Merge pull request #357 from kradalby/make-namespace-to-users
Remove boundaries between Namespaces
2022-02-25 11:01:41 +01:00
Kristoffer Dalby
bae8ed3e70 Merge branch 'main' into make-namespace-to-users 2022-02-25 10:39:12 +01:00
Kristoffer Dalby
08c7076667 Merge pull request #346 from kradalby/integration-test-concurrent-join
Fix ip allocation bug, make integration tests faster
2022-02-25 10:37:35 +01:00
Kristoffer Dalby
91b50550ee Update readme and glossary to reflect features and goals 2022-02-25 10:34:35 +01:00
Kristoffer Dalby
2c7064462a Update changelog 2022-02-25 10:30:58 +01:00
Kristoffer Dalby
d9e7f37280 Uncomment previous tests and update them for no boundaries 2022-02-25 10:27:27 +01:00
Kristoffer Dalby
e03b3d558f Remove boundaries between namespaces 2022-02-25 10:26:34 +01:00
Kristoffer Dalby
2fd36dd254 Resolve merge 2022-02-25 09:08:15 +00:00
Kristoffer Dalby
381598663d Merge pull request #347 from kradalby/remove-shared
Remove the concept of "shared nodes"
2022-02-25 10:06:58 +01:00
Kristoffer Dalby
6d699d3c29 Update changelog 2022-02-25 08:44:16 +00:00
Kristoffer Dalby
ebe59a5a27 Fix utils tests, use ipset datastructure 2022-02-25 08:28:22 +00:00
Nico
d55c79e75b Merge branch 'main' into metrics-listen 2022-02-24 10:41:07 -03:00
Kristoffer Dalby
eda0a9f88a Lock allocation of IP address
The current logic is not safe, as it will allow an IP that isn't persisted to
the DB to be given out multiple times if machines join in quick
succession.

This adds a lock around the "get IP", machine registration and save-to-DB
steps so we ensure this isn't happening.

Currently this had to be done in three places, which is silly, and is outlined
in #294.
2022-02-24 13:18:18 +00:00
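
The fix amounts to making "pick a free IP" and "record who got it" a single critical section; a simplified, self-contained sketch with a mutex (the allocation and persistence details are stand-ins):

```go
package ipallocsketch

import (
	"fmt"
	"net/netip"
	"sync"
)

// ipAllocator hands out addresses; without the lock, two machines joining
// in quick succession could both be given the same not-yet-persisted IP.
type ipAllocator struct {
	mu   sync.Mutex
	used map[netip.Addr]bool
	next netip.Addr
}

func newIPAllocator(first netip.Addr) *ipAllocator {
	return &ipAllocator{used: map[netip.Addr]bool{}, next: first}
}

// Allocate reserves the next free address and records it atomically with
// respect to other callers.
func (a *ipAllocator) Allocate(machine string) (netip.Addr, error) {
	a.mu.Lock()
	defer a.mu.Unlock()

	for a.used[a.next] {
		a.next = a.next.Next()
	}
	if !a.next.IsValid() {
		return netip.Addr{}, fmt.Errorf("no addresses left for %s", machine)
	}

	ip := a.next
	a.used[ip] = true // stands in for "save the machine with this IP to the DB"
	return ip, nil
}
```
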
Adrien Raffin-Caboisse
47e8442d91 Update CHANGELOG.md
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-02-24 13:34:48 +01:00
Adrien Raffin-Caboisse
f9ce32fe1a Update CHANGELOG.md
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-02-24 13:34:36 +01:00
Kristoffer Dalby
189e883f91 Resolve merge 2022-02-24 11:41:54 +00:00
Kristoffer Dalby
aa506503e2 Merge branch 'main' into feat-oidc-login-as-namespace 2022-02-24 11:40:34 +00:00
Kristoffer Dalby
9c2c09fce7 Merge branch 'main' into remove-shared 2022-02-24 11:39:44 +00:00
Kristoffer Dalby
5596a0acef Merge pull request #297 from arch4ngel/configurable-mtls
Configurable mtls
2022-02-24 11:32:02 +00:00
Kristoffer Dalby
9687e6768d Remove retry from integration tests 2022-02-24 11:29:53 +00:00
Kristoffer Dalby
fb85c78e8a Fail integration tests fast 2022-02-24 11:28:34 +00:00
Kristoffer Dalby
d27f2bc538 Merge branch 'main' into metrics-listen 2022-02-24 11:16:57 +00:00
Kristoffer Dalby
8c33907655 Sort lint 2022-02-24 11:10:40 +00:00
Kristoffer Dalby
afb67b6e75 Merge branch 'main' into configurable-mtls 2022-02-24 11:09:05 +00:00
Kristoffer Dalby
69f220fe5c Merge branch 'main' into feat-oidc-login-as-namespace 2022-02-24 11:01:32 +00:00
Kristoffer Dalby
c46dfd761c Merge pull request #349 from kradalby/remove-cgo 2022-02-24 10:11:15 +00:00
Adrien Raffin-Caboisse
95453cba75 Merge branch 'main' into feat-oidc-login-as-namespace 2022-02-23 17:56:45 +01:00
Kristoffer Dalby
ed2175706c Merge branch 'remove-cgo' of github.com:kradalby/headscale into remove-cgo 2022-02-23 16:23:53 +00:00
Kristoffer Dalby
686e45cf27 Set all anti-cgo options and add comment 2022-02-23 16:15:20 +00:00
Adrien Raffin-Caboisse
ae6a20e4d9 fix: add valid test identified by linter 2022-02-23 14:28:25 +01:00
Adrien Raffin-Caboisse
046116656b chore: update formatting 2022-02-23 14:22:21 +01:00
Adrien Raffin-Caboisse
972bef1194 feat: add length error if hostname too long 2022-02-23 14:21:46 +01:00
Adrien Raffin-Caboisse
4f1f235a2e feat: add strip_email_domain to normalization of namespace 2022-02-23 14:03:07 +01:00
Adrien Raffin-Caboisse
7e4709c13f fix(namespace): remove name validation for destroy and get 2022-02-23 13:35:57 +01:00
Adrien Raffin-Caboisse
cef0a2b0b3 fix(namespaces_test): fix missing namespace name 2022-02-23 11:40:48 +01:00
Adrien Raffin-Caboisse
fcdbe7c510 fix(utils_test): fix namespace name 2022-02-23 11:38:20 +01:00
Adrien Raffin-Caboisse
995731a29c fix(namespace): checknamespace name before actions
I keep the check server-side because it's better from a security point of view.
2022-02-23 11:32:16 +01:00
Adrien Raffin-Caboisse
45727dbb21 feat(namespace): add check function for namespace 2022-02-23 11:32:14 +01:00
Kristoffer Dalby
f0a73632e0 Merge branch 'main' into remove-cgo 2022-02-23 09:01:38 +00:00
Kristoffer Dalby
823cc493f0 Merge branch 'main' into configurable-mtls 2022-02-23 07:29:31 +00:00
Juan Font
a86b33f1ff Merge pull request #345 from juanfont/update-contributors
docs(README): update contributors
2022-02-22 23:46:54 +01:00
Juan Font
28c2bbeb27 Merge branch 'main' into update-contributors 2022-02-22 23:35:34 +01:00
github-actions[bot]
d4761da27c docs(README): update contributors 2022-02-22 22:34:27 +00:00
Juan Font
b0c7ebeb7d Merge pull request #351 from pernila/patch-1
Added FreeBSD to the supported clients
2022-02-22 23:33:58 +01:00
Juan Font
5f375d69b5 Merge branch 'main' into update-contributors 2022-02-22 23:32:02 +01:00
Juan Font
9eb705a4fb Merge branch 'main' into patch-1 2022-02-22 23:31:29 +01:00
Juan Font
1b87396a8c Merge pull request #333 from ohdearaugustin/topic/renovatebot
Topic/renovatebot
2022-02-22 23:30:46 +01:00
Juan Font
bb14bcd4d2 Merge branch 'main' into topic/renovatebot 2022-02-22 23:29:08 +01:00
pernila
48c866b058 Added FreeBSD to the supported clients
Added FreeBSD to the supported clients
Now in ports: https://www.freshports.org/security/headscale/
2022-02-22 23:06:35 +02:00
Adrien Raffin-Caboisse
fe0b43eaaf chore: update changelog 2022-02-22 21:20:59 +01:00
Adrien Raffin-Caboisse
afd4a3706e chore: update formating 2022-02-22 21:05:39 +01:00
Adrien Raffin-Caboisse
717250adb3 feat: removing matchmap from headscale 2022-02-22 20:58:08 +01:00
Kristoffer Dalby
67f5c32b49 Only allow one connection to sqlite 2022-02-22 19:04:52 +00:00
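
With GORM this typically comes down to capping the underlying connection pool; a short sketch (how the SQLite database was opened, and which driver, is omitted on purpose):

```go
package dbsketch

import "gorm.io/gorm"

// limitSQLiteConnections caps the pool at a single open connection. SQLite
// supports only one writer at a time, so this avoids "database is locked"
// style errors under concurrent use.
func limitSQLiteConnections(db *gorm.DB) error {
	sqlDB, err := db.DB()
	if err != nil {
		return err
	}
	sqlDB.SetMaxOpenConns(1)
	return nil
}
```
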
Adrien Raffin-Caboisse
0191ea93ff feat(oidc): bind email to namespace 2022-02-22 19:59:15 +01:00
Adrien Raffin-Caboisse
92ffac625e feat(namespace): add normalization function for namespace 2022-02-22 19:59:12 +01:00
Kristoffer Dalby
bfbcea35a0 Remove dependency on CGO
This commit changes the SQLite dependency to one that does not depend on
CGO. It uses a C-to-Go translated SQLite library that is pure Go.
2022-02-22 16:51:54 +00:00
Kristoffer Dalby
638a84adb9 Merge branch 'main' into integration-test-concurrent-join 2022-02-22 16:49:32 +00:00
Kristoffer Dalby
ec58979ce0 Merge branch 'main' into remove-shared 2022-02-22 16:48:14 +00:00
Kristoffer Dalby
7e6e093f17 Merge branch 'integration-test-concurrent-join' of github.com:kradalby/headscale into integration-test-concurrent-join 2022-02-22 16:19:28 +00:00
Kristoffer Dalby
4962335860 Remove dependency on CGO
This commit changes the SQLite dependency to one that does not depend on
CGO. It uses a C-to-Go translated SQLite library that is pure Go.
2022-02-22 16:18:25 +00:00
Kristoffer Dalby
a37339fa54 Merge pull request #348 from restanrm/remove-comment
fix(machine): remove comment
2022-02-22 14:06:12 +00:00
Kristoffer Dalby
f7eeb979fb Add timeout 2022-02-22 13:46:59 +00:00
Adrien Raffin-Caboisse
f2f8d834e8 fix(machine): remove comment
After some more tests in Tailscale I couldn't replicate the behavior
described there.

When adding a rule allowing A to talk to B, the reverse connection was
instantly added to B to allow communication back.

The previous assumption was probably wrong.
2022-02-22 11:26:21 +01:00
Kristoffer Dalby
fe2f75d13d Allow integration test to retry 2022-02-22 07:40:56 +00:00
Kristoffer Dalby
52db6188df Merge branch 'main' into update-contributors 2022-02-21 23:38:56 +00:00
Kristoffer Dalby
8dca40535f Test if we can join headscale in parallell to speed up 2022-02-21 23:16:39 +00:00
Kristoffer Dalby
f4c302f1fb Uncomment tests that will fail in the transition period 2022-02-21 23:10:20 +00:00
Kristoffer Dalby
4ca8181dcb Remove sharing from integration tests 2022-02-21 23:04:10 +00:00
Kristoffer Dalby
24a8e198a1 Remove sharing references across the code 2022-02-21 23:01:35 +00:00
Kristoffer Dalby
9411ec47c3 Remove sharing class and tests 2022-02-21 22:53:30 +00:00
Kristoffer Dalby
1e8f4dbdff Drop shared node table 2022-02-21 22:52:55 +00:00
Kristoffer Dalby
9399754489 Remove protobuf share/unshare generated go 2022-02-21 22:48:27 +00:00
Kristoffer Dalby
9d1752acbc Remove protobuf share/unshare 2022-02-21 22:48:14 +00:00
Kristoffer Dalby
6da2a19d10 Remove grpc share/unshare functions 2022-02-21 22:45:04 +00:00
Kristoffer Dalby
9ceac5c0fc Remove CLI and tests for Shared node 2022-02-21 22:44:08 +00:00
Kristoffer Dalby
f562ad579a Merge branch 'main' into configurable-mtls 2022-02-21 21:44:49 +00:00
github-actions[bot]
bbadeb567a docs(README): update contributors 2022-02-21 21:41:48 +00:00
Kristoffer Dalby
69cdfbb56f Merge pull request #320 from restanrm/feat-improve-acls-usage
Improvements on the ACLs and bug fixing
2022-02-21 21:41:15 +00:00
Adrien Raffin-Caboisse
d971f0f0e6 fix(acls_test): fix comment in go code 2022-02-21 21:48:05 +01:00
Adrien Raffin-Caboisse
650108c7c7 chore(fmt): apply fmt 2022-02-21 21:46:40 +01:00
Adrien Raffin-Caboisse
baae266db0 Update acls_test.go
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-02-21 20:25:41 +01:00
Adrien Raffin-Caboisse
50af44bc2f fix: add error checking in acl and poll
If aclPolicy is not defined, in updateAclPolicy, return an error.
2022-02-21 20:06:31 +01:00
Nico Rey
e3bcc88880 Linter: make linter happy 2022-02-21 15:22:36 -03:00
Nico Rey
14e49885fb metrics/kustomize: update Kustomize examples 2022-02-21 12:51:25 -03:00
Nico Rey
fbc1843889 metrics/tests: update tests 2022-02-21 12:51:05 -03:00
Nico Rey
45d5ab30ff metrics/cfg: add a new entry for the Prometheus listen address 2022-02-21 12:50:44 -03:00
Nico Rey
d5fd7a5c00 metrics: add a new router and listener for Prometheus' metrics endpoint 2022-02-21 12:50:15 -03:00
Justin Angel
b5a59d4e7a updating changelog and docs 2022-02-21 10:20:11 -05:00
Adrien Raffin-Caboisse
211fe4034a chore(linter): ignore tt var as it's generated code (vscode) 2022-02-21 16:10:20 +01:00
Justin Angel
daa75da277 Linting and updating tests 2022-02-21 10:09:23 -05:00
Adrien Raffin-Caboisse
25550f8866 chore(format): run prettier on repo 2022-02-21 16:06:20 +01:00
Adrien Raffin-Caboisse
4bbe0051f6 chore(machines): apply lint 2022-02-21 10:02:59 +01:00
Adrien Raffin-Caboisse
5ab62378ae tests(machines): test all combinations of peer filtering 2022-02-21 09:58:19 +01:00
Adrien Raffin-Caboisse
f006860136 feat(machines): untie dependency with class for filter func
The dependency on the `headscale` struct makes tests harder to write.

This change allows us to easily add tests for this quite sensitive function.
2022-02-21 09:58:19 +01:00
Adrien Raffin-Caboisse
9c6ce02554 fix(machines): use ListAllMachines function
added a simple filter to remove the current node
2022-02-21 09:58:19 +01:00
Adrien Raffin-Caboisse
960412a335 fix(machines): simplify complex if check
This should fix the performance issue with the computation of the `dst` variable. It's also easier to read now.
2022-02-21 09:58:19 +01:00
Kristoffer Dalby
ecb3ee6bfa Merge branch 'main' into feat-improve-acls-usage 2022-02-21 08:51:21 +00:00
Adrien Raffin-Caboisse
5242025ab3 fix(machines): renaming following review comments 2022-02-20 23:50:08 +01:00
Adrien Raffin-Caboisse
b3d0fb7a93 fix(machine): revert modifications
Using h.ListAllMachines also listed the current machine in the result. It's unnecessary (I don't know if it's harmful).

Breaking the check up with `matchSourceAndDestinationWithRule` broke the tests. We have a special case with the '*' destination that isn't symmetrical.
I need to think of a better way to do this. It's too hard to read.
2022-02-20 23:47:04 +01:00
Adrien Raffin-Caboisse
5e167cc00a fix(tests): fix naming issues related to code review 2022-02-20 23:00:31 +01:00
Adrien Raffin-Caboisse
d00251c63e fix(acls,machines): apply code review suggestions 2022-02-20 21:26:20 +01:00
Adrien Raffin-Caboisse
4f9ece14c5 Apply suggestions from code review on changelog
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2022-02-20 20:47:12 +01:00
Kristoffer Dalby
7bf2a91dd0 Merge branch 'main' into configurable-mtls 2022-02-20 14:33:23 +00:00
Justin Angel
385dd9cc34 refactoring 2022-02-20 09:06:14 -05:00
Kristoffer Dalby
602291df61 Merge pull request #338 from juanfont/update-contributors 2022-02-19 23:13:08 +00:00
github-actions[bot]
5245f1accc docs(README): update contributors 2022-02-19 22:48:26 +00:00
Kristoffer Dalby
91babb5130 Merge pull request #336 from ohdearaugustin/topic/fix-contributors-action 2022-02-19 21:08:53 +00:00
ohdearaugustin
8798efd353 contributor: set specific version 2022-02-19 21:36:08 +01:00
Kristoffer Dalby
66a12004e7 Merge branch 'main' into topic/renovatebot 2022-02-19 19:34:30 +00:00
Kristoffer Dalby
74621e2750 Merge pull request #332 from e-zk/main
Fix spelling error
2022-02-19 19:34:24 +00:00
Kristoffer Dalby
74c3c6bb60 Merge branch 'main' into main 2022-02-19 19:32:34 +00:00
Kristoffer Dalby
84b98e716a Merge pull request #334 from ohdearaugustin/topic/renovatebot-codeowner
CODEOWNER: add renovate config ohdearaugustin
2022-02-19 19:32:21 +00:00
ohdearaugustin
e9f13b6031 CODEOWNER: add renovate config ohdearaugustin 2022-02-19 20:28:08 +01:00
ohdearaugustin
fe6d47030f renovatebot: configure 2022-02-19 20:15:53 +01:00
ohdearaugustin
a19550adbf prettier: renovatebot.yml 2022-02-19 19:49:12 +01:00
ohdearaugustin
3db88d27de github/workflows: init renovatebot 2022-02-19 19:49:12 +01:00
e-zk
a6b7bc5939 Fix spelling error 2022-02-20 03:14:51 +10:00
Kristoffer Dalby
397b6fc4bf Merge branch 'main' into docs-acl-modifications 2022-02-18 20:13:10 +00:00
Kristoffer Dalby
7d5e6d3f0f Merge pull request #330 from kradalby/codeowners
Add ohdearaugustin to CODEOWNERS for config and docs
2022-02-18 20:12:29 +00:00
Kristoffer Dalby
7a90c2fba1 Merge branch 'main' into codeowners 2022-02-18 20:11:33 +00:00
Kristoffer Dalby
5cf215a44b Merge pull request #325 from juanfont/kradalby-patch-4
Update changelog for 0.13.0
2022-02-18 20:11:03 +00:00
Kristoffer Dalby
7916fa8b45 Add ohdearaugustin to CODEOWNERS for config and docs 2022-02-18 19:57:03 +00:00
Kristoffer Dalby
5fbef07627 Update changelog for 0.13.0 2022-02-18 18:54:27 +00:00
Kristoffer Dalby
21df798f07 Merge branch 'main' into feat-improve-acls-usage 2022-02-18 17:19:19 +00:00
Kristoffer Dalby
67bb1fc9dd Merge pull request #324 from m-tanner-dev0/patch-1 2022-02-18 07:18:22 +00:00
Tanner
61bfa79be2 Update README.md
change flippant language
2022-02-17 17:55:40 -08:00
Adrien Raffin-Caboisse
f073d8f43c chore(lint): ignore linting on test_expandalias
This is a false positive due to the way the function is built.
Small test cases are all inside this function, making it big.
2022-02-17 09:32:55 +01:00
Adrien Raffin-Caboisse
5f642eef76 chore(lint): more lint fixing 2022-02-17 09:32:54 +01:00
Adrien Raffin-Caboisse
d8c4c3163b chore(fmt): apply make fmt command 2022-02-17 09:32:54 +01:00
Adrien Raffin-Caboisse
9cedbbafd4 chore(all): update some files for linter 2022-02-17 09:32:51 +01:00
Adrien Raffin-Caboisse
aceaba60f1 docs(changelog): bump changelog 2022-02-17 09:30:09 +01:00
Adrien Raffin-Caboisse
7b5ba9f781 docs(acl): add configuration example to explain acls 2022-02-17 09:30:09 +01:00
Adrien Raffin
de59946447 feat(acls): rewrite functions to be testable
Rewrite some functions to get rid of the dependency on the Headscale object. This allows us
to write succinct tests that are easier to review and implement.

The improved tests made it possible to implement the removal of tagged hosts
from the namespace, as specified here: https://tailscale.com/kb/1068/acl-tags/
2022-02-17 09:30:09 +01:00
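
The shape of the refactor: pass the policy and machines in as plain parameters instead of reaching through the Headscale object, so a unit test can call the function with a handful of literal values. A hedged sketch with simplified, invented models; the tag-owner semantics shown are only approximate:

```go
package aclownersketch

// Machine and ACLPolicy are simplified stand-ins for the real models.
type Machine struct {
	Namespace string
	Tags      []string
	IPs       []string
}

type ACLPolicy struct {
	TagOwners map[string][]string // tag -> namespaces allowed to own it
}

// expandTag takes everything it needs as parameters, which is what makes it
// trivially unit-testable: it returns the IPs of machines that carry the tag
// and whose namespace is allowed to own that tag.
func expandTag(policy ACLPolicy, machines []Machine, tag string) []string {
	ownerSet := map[string]bool{}
	for _, ns := range policy.TagOwners[tag] {
		ownerSet[ns] = true
	}

	var ips []string
	for _, m := range machines {
		if !ownerSet[m.Namespace] {
			continue
		}
		for _, t := range m.Tags {
			if t == tag {
				ips = append(ips, m.IPs...)
				break
			}
		}
	}
	return ips
}
```
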
Adrien Raffin
97eac3b938 feat(acl): update frequently the aclRules
This call should be made at nearly every modification of a server's resources, such as RequestTags.
When a server changes its tags we should rebuild the ACL rules.

When a server is added to headscale we should also update the ACL rules.
2022-02-17 09:30:08 +01:00
Adrien Raffin
fb45138fc1 feat(acls): check acl owners and add bunch of tests 2022-02-17 09:30:08 +01:00
Adrien Raffin
e9949b4c70 feat(acls): simplify updating rules 2022-02-17 09:30:08 +01:00
Adrien Raffin
e482dfeed4 feat(machine): add ACLFilter if ACL's are enabled.
This commit changes the default behaviour and removes the notion of namespace boundaries between hosts. All namespaces are now filtered only by the ACLs. This behaviour is closer to a tailnet.
2022-02-17 09:30:05 +01:00
Jamie Greeff
9b7d657cbe Return all peers instead of peers in same namespace 2022-02-17 09:27:59 +01:00
Adrien Raffin-Caboisse
55d746d3f5 docs(acls-proposal): wording comment
A hidden assumption implied in this document was that each person should have their own namespace.
Hidden information in a specification isn't good.
Thanks @kradalby for pointing it out.
2022-02-16 09:16:25 +01:00
Kristoffer Dalby
73497382b7 Merge pull request #306 from kradalby/apiwork
Introduce API keys and enable remote control API
2022-02-15 22:23:32 +00:00
Adrien Raffin-Caboisse
c364c2a382 chore(acl-proposals): apply prettier 2022-02-15 09:53:22 +01:00
Adrien Raffin-Caboisse
e540679dbd docs(acl-proposals): integrate comments 2022-02-15 09:52:05 +01:00
Adrien Raffin-Caboisse
86b329d8bf chore(docs): create proposals directory 2022-02-15 09:27:33 +01:00
Kristoffer Dalby
b2b2954545 Merge branch 'main' into apiwork 2022-02-14 22:29:20 +00:00
Kristoffer Dalby
a3360b082f Merge pull request #321 from ohdearaugustin/topic/specific-go-version 2022-02-14 22:03:17 +00:00
Kristoffer Dalby
b721502147 Merge branch 'main' into topic/specific-go-version 2022-02-14 20:51:33 +00:00
Kristoffer Dalby
1869bff4ba Merge pull request #316 from kradalby/kv-worker-cleanup 2022-02-14 20:51:00 +00:00
ohdearaugustin
0b9dd19ec7 Dockerfiles: update go version to 1.17.7 2022-02-14 21:32:20 +01:00
ohdearaugustin
b2889bc355 github/workflows: set specific go version 2022-02-14 21:31:49 +01:00
Kristoffer Dalby
28c824acaf Merge branch 'main' into apiwork 2022-02-14 16:17:34 +00:00
Kristoffer Dalby
57f1da6dca Merge branch 'main' into kv-worker-cleanup 2022-02-14 11:35:15 +00:00
Kristoffer Dalby
c9640b2f3e Merge pull request #317 from kradalby/sponsor 2022-02-14 11:34:56 +00:00
Kristoffer Dalby
546b1e8a05 Merge branch 'main' into kv-worker-cleanup 2022-02-14 10:17:03 +00:00
Kristoffer Dalby
3b54a68f5c Merge branch 'main' into sponsor 2022-02-14 10:16:58 +00:00
Kristoffer Dalby
1b1aac18d2 Merge pull request #315 from kradalby/windows-client-docs2 2022-02-14 10:16:25 +00:00
Kristoffer Dalby
f30ee3d2df Add note about support in readme 2022-02-13 11:07:45 +00:00
Kristoffer Dalby
9f80349471 Add sponsorship button
This commit adds a sponsor/funding section to headscale.

@juanfont and I have discussed this, and this arrangement is agreed upon;
hopefully it can bring us to a place in the future where even more
features and prioritization can be put into the project.
2022-02-13 11:02:31 +00:00
Kristoffer Dalby
14b23544e4 Add note about running grpc behind a proxy and combining ports 2022-02-13 09:48:33 +00:00
Kristoffer Dalby
4e54796384 Allow gRPC server to run insecure 2022-02-13 09:08:46 +00:00
Kristoffer Dalby
c3b68adfed Fix lint 2022-02-13 08:46:35 +00:00
Kristoffer Dalby
0018a78d5a Add insecure option
Add an option to not _validate_ whether the certificate served by headscale is
trusted.
2022-02-13 08:41:49 +00:00
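
For a TLS-wrapped gRPC client, an "insecure" toggle like this usually maps to skipping certificate verification while keeping encryption; a small sketch under that assumption (flag plumbing omitted):

```go
package clientsketch

import (
	"crypto/tls"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// dial connects over TLS; when skipVerify is set, the server certificate is
// not validated. Only sensible for self-signed test setups.
func dial(address string, skipVerify bool) (*grpc.ClientConn, error) {
	tlsConfig := &tls.Config{
		InsecureSkipVerify: skipVerify,
	}
	return grpc.Dial(address, grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)))
}
```
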
Kristoffer Dalby
50f0270543 Merge branch 'main' into windows-client-docs2 2022-02-12 22:35:23 +00:00
Kristoffer Dalby
bb80b679bc Remove RequestMapUpdates function 2022-02-12 21:04:00 +00:00
Kristoffer Dalby
6fa0903a8e Update changelog 2022-02-12 20:50:17 +00:00
Kristoffer Dalby
2bc8051ae5 Remove kv-namespace-worker
This commit removes the namespace kv worker and related code. Now that
we talk to the server over gRPC, and not directly to the DB, we should
not need this anymore.
2022-02-12 20:46:05 +00:00
Kristoffer Dalby
4841e16386 Add remote control doc 2022-02-12 20:39:42 +00:00
Kristoffer Dalby
d79ccfc05a Add comment on why grpc is on its own port, replace deprecated 2022-02-12 19:50:12 +00:00
Kristoffer Dalby
ead8b68a03 Fix lint 2022-02-12 19:42:55 +00:00
Kristoffer Dalby
3bb4c28c9a Merge branch 'main' into apiwork 2022-02-12 19:39:30 +00:00
Kristoffer Dalby
2fbcc38f8f Emph trusted cert 2022-02-12 19:36:43 +00:00
Kristoffer Dalby
315ff9daf0 Remove insecure, only allow valid certs 2022-02-12 19:35:55 +00:00
Kristoffer Dalby
4078e75b50 Correct log message 2022-02-12 19:30:25 +00:00
Kristoffer Dalby
58bfea4e64 Update examples and docs 2022-02-12 19:08:59 +00:00
Kristoffer Dalby
e18078d7f8 Rename j 2022-02-12 19:08:41 +00:00
Kristoffer Dalby
c73b57e7dc Use undeprecated method for insecure 2022-02-12 19:08:33 +00:00
Kristoffer Dalby
531298fa59 Fix import 2022-02-12 17:13:51 +00:00
Kristoffer Dalby
30a2ccd975 Add tls certs as creds for grpc 2022-02-12 17:05:30 +00:00
Kristoffer Dalby
59e48993f2 Change the http listener 2022-02-12 16:33:18 +00:00
Kristoffer Dalby
bfc6f6e0eb Split grpc and http 2022-02-12 16:15:26 +00:00
Kristoffer Dalby
811d3d510c Add grpc_listen_addr config option 2022-02-12 16:14:33 +00:00
Kristoffer Dalby
2aba37d2ef Try to support plaintext http2 after termination 2022-02-12 14:42:23 +00:00
Kristoffer Dalby
8853ccd5b4 Terminate tls immediatly, mux after 2022-02-12 13:25:27 +00:00
Kristoffer Dalby
c794f32f58 Merge pull request #312 from hdhoang/patch-1 2022-02-12 12:27:57 +00:00
Kristoffer Dalby
dd8bae8c61 Add link from the docs readme 2022-02-11 18:39:41 +00:00
Kristoffer Dalby
1b47ddd583 Improve the windows client docs as per discord recommendations 2022-02-11 18:36:53 +00:00
Hoàng Đức Hiếu
20991d6883 Merge branch 'main' into patch-1 2022-02-11 17:56:46 +07:00
Kristoffer Dalby
96f09e3f30 Merge pull request #313 from kradalby/windows-client-docs 2022-02-11 09:34:11 +00:00
Kristoffer Dalby
8f40696f35 Merge branch 'main' into windows-client-docs 2022-02-11 09:32:49 +00:00
Kristoffer Dalby
c1845477ef Merge pull request #314 from kradalby/tailscale-204 2022-02-11 09:32:24 +00:00
Kristoffer Dalby
1d40de3095 Update changelog 2022-02-11 08:45:02 +00:00
Kristoffer Dalby
2357fb6f80 Upgrade all dependencies 2022-02-11 08:43:31 +00:00
Kristoffer Dalby
ba8afdb7be Upgrade to tailscale 1.20.4 2022-02-11 08:39:00 +00:00
Kristoffer Dalby
d9aaa0bdfc Add docs on how to set up Windows clients 2022-02-11 08:26:22 +00:00
Hoàng Đức Hiếu
66ff34c2dd apply changelog 2022-02-11 13:49:09 +07:00
Hoàng Đức Hiếu
150652e939 poll: fix swapped machine<->namespace labels 2022-02-11 13:46:36 +07:00
Adrien Raffin-Caboisse
7bdd7748e4 fix(acl): add missing internal namespace communications 2022-02-10 12:03:03 +01:00
Adrien Raffin-Caboisse
0426212348 docs(acls): add example use case 2022-02-10 10:42:26 +01:00
Adrien Raffin-Caboisse
85cf443ac6 docs(acls): Issues with ACL and proposition 2022-02-08 16:59:35 +01:00
Justin Angel
1b2fff4337 Merge branch 'main' into configurable-mtls 2022-02-02 11:54:49 -05:00
Kristoffer Dalby
8c79165b0d Merge pull request #305 from lachy-2849/main 2022-02-02 08:09:06 +00:00
lachy-2849
7b607b3fe8 Forgot to run Prettier 2022-02-01 19:32:13 -05:00
lachy-2849
41fbe47cdf Note when running as another user in systemd
Headscale commands fail when run as the current user instead of the user defined in the systemd unit file. This note provides two methods for correctly running the headscale commands.
2022-02-01 14:23:18 -05:00
Justin Angel
af25aa75d9 Merge branch 'configurable-mtls' of github.com:arch4ngel/headscale into configurable-mtls 2022-01-31 10:27:57 -05:00
Justin Angel
da5250ea32 linting again 2022-01-31 10:27:43 -05:00
Kristoffer Dalby
168b1bd579 Merge branch 'main' into configurable-mtls 2022-01-31 12:28:00 +00:00
Justin Angel
9de5c7f8b8 updating default 2022-01-31 07:22:17 -05:00
Justin Angel
52db80ab0d Merge branch 'configurable-mtls' of github.com:arch4ngel/headscale into configurable-mtls 2022-01-31 07:19:14 -05:00
Justin Angel
0c3fd16113 refining and adding tests 2022-01-31 07:18:50 -05:00
Kristoffer Dalby
e05c5e0b93 Merge pull request #303 from kradalby/migrate_ipaddresses 2022-01-31 08:57:22 +00:00
Justin Angel
310e7b15c7 making alternatives constants 2022-01-30 10:46:57 -05:00
Kristoffer Dalby
9e3318ca27 Simplify postgres uuid-ossp stirng 2022-01-30 14:53:40 +00:00
Kristoffer Dalby
e9adfcd678 Migrate ip_address field to ip_addresses
The automatic migration did not pick up this change, and it breaks
current setups as it only adds a new field which is empty.

This means that clients which don't request a new IP will not have any IP
at all.
2022-01-30 14:18:24 +00:00
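
A sketch of the manual migration step the message describes: copy the old single-column value into the new column for rows that would otherwise be left empty. Table and column names are taken from the commit message; the SQL itself is illustrative, not the actual migration:

```go
package migrationsketch

import "gorm.io/gorm"

// migrateIPAddressField backfills the new ip_addresses column from the old
// ip_address column, so existing machines keep their address even if they
// never request a new IP.
func migrateIPAddressField(db *gorm.DB) error {
	return db.Exec(
		"UPDATE machines SET ip_addresses = ip_address WHERE ip_addresses IS NULL OR ip_addresses = ''",
	).Error
}
```
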
Justin Angel
d44b2a7c01 adding default for tls_client_auth_mode 2022-01-30 07:26:28 -05:00
Kristoffer Dalby
5b5ecd52e1 Merge pull request #208 from enoperm/ipv6
Support for IPv6 prefixes in namespaces
2022-01-30 10:46:20 +00:00
Kristoffer Dalby
eddd62eee0 Merge branch 'main' into ipv6 2022-01-30 10:09:52 +00:00
Kristoffer Dalby
38c27f6bf8 Merge pull request #300 from stensonb/patch-2
Typo
2022-01-30 10:09:35 +00:00
Kristoffer Dalby
90fb9aa4ed Merge branch 'main' into ipv6 2022-01-30 10:08:44 +00:00
Kristoffer Dalby
3af1253a65 Merge branch 'main' into patch-2 2022-01-30 10:08:28 +00:00
Kristoffer Dalby
eb1ce64b7c Merge pull request #301 from kradalby/goreleaser
Set goreleaser to only care about Go 1.17
2022-01-30 10:08:04 +00:00
Kristoffer Dalby
2c9ed63021 Merge branch 'main' into goreleaser 2022-01-30 10:07:35 +00:00
Kristoffer Dalby
4c779d306b Merge pull request #302 from kradalby/build-avoidance
Set up build avoidance
2022-01-30 10:07:30 +00:00
Kristoffer Dalby
0862f60ff0 Put depth in the correct place 2022-01-30 09:54:26 +00:00
Kristoffer Dalby
991175f2aa Add depth and avoidance for build 2022-01-30 09:49:15 +00:00
Kristoffer Dalby
1815040d98 Set up build avoidance
This commit configures the CI to run specific parts of the pipeline only when
relevant changes have been made.

This should help us not have to deal with the integration tests when we
do doc/admin changes.
2022-01-30 09:43:48 +00:00
Kristoffer Dalby
71ab4c9b2c Fix type according to config schema 2022-01-30 08:59:25 +00:00
Kristoffer Dalby
e0c22a414b Remove wrong comment 2022-01-30 08:56:28 +00:00
Kristoffer Dalby
4e63bba4fe Only compat go 1.17 in go mod tidy 2022-01-30 08:55:41 +00:00
Kristoffer Dalby
445c04baf7 Fix lint 2022-01-30 08:35:10 +00:00
Kristoffer Dalby
ad4e3a89e0 Format changelog 2022-01-30 08:25:49 +00:00
Kristoffer Dalby
6f6018bad5 Merge branch 'main' into ipv6 2022-01-30 08:21:11 +00:00
Bryan Stenson
ccd41b9a13 Typo 2022-01-29 22:34:51 -08:00
Juan Font
d8ce440309 Merge pull request #299 from kradalby/0124prep
Tag 0.12.4 in CHANGELOG
2022-01-30 01:13:10 +01:00
Kristoffer Dalby
0609c97459 Merge branch 'main' into configurable-mtls 2022-01-29 20:15:58 +00:00
Kristoffer Dalby
2f576b2fb1 Tag 0.12.4 in CHANGELOG 2022-01-29 20:04:56 +00:00
Kristoffer Dalby
853a5288f1 Merge pull request #292 from kradalby/socket-permission
Make Unix socket permissions configurable
2022-01-29 19:55:11 +00:00
Kristoffer Dalby
cd0df1e46f Merge branch 'main' into socket-permission 2022-01-29 19:30:49 +00:00
Juan Font
b195c87418 Merge pull request #290 from kradalby/generate-privkey
Add generate private-key command
2022-01-29 20:24:52 +01:00
Justin Angel
c98a559b4d linting/formatting 2022-01-29 14:15:33 -05:00
Justin Angel
5935b13b67 refining 2022-01-29 13:35:08 -05:00
Justin Angel
9e619fc020 Making client authentication mode configurable 2022-01-29 12:59:31 -05:00
Csaba Sarkadi
45bcf39894 fixup! fixup! cmd/headscale/cli/utils: merge ip_prefix with ip_prefixes in config 2022-01-29 16:52:27 +01:00
Csaba Sarkadi
0a1db89d33 fixup! cmd/headscale/cli/utils: merge ip_prefix with ip_prefixes in config 2022-01-29 16:27:36 +01:00
Kristoffer Dalby
dbfb9e16e0 Merge branch 'main' into socket-permission 2022-01-29 15:26:19 +00:00
Kristoffer Dalby
8aa2606853 Merge branch 'main' into generate-privkey 2022-01-29 15:26:15 +00:00
Kristoffer Dalby
a238a8b33a Merge pull request #291 from kradalby/tailscale-203
Upgrade to latest tailscale
2022-01-29 15:26:06 +00:00
Csaba Sarkadi
74f26d3685 fixup! CHANGELOG: document breaking configuration change regarding multiple prefixes 2022-01-29 16:08:02 +01:00
Csaba Sarkadi
e66f8b0eeb cmd/headscale/cli/utils: merge ip_prefix with ip_prefixes in config 2022-01-29 16:04:15 +01:00
Kristoffer Dalby
e7b69dbf91 Merge branch 'main' into generate-privkey 2022-01-29 14:35:24 +00:00
Kristoffer Dalby
13f23d2e7e Merge branch 'main' into socket-permission 2022-01-29 14:34:36 +00:00
Csaba Sarkadi
7a86321252 CHANGELOG: document breaking configuration change regarding multiple prefixes 2022-01-29 15:33:54 +01:00
Kristoffer Dalby
7aace7eb6b Update CHANGELOG.md 2022-01-29 14:33:12 +00:00
Kristoffer Dalby
7a6be36f46 Merge branch 'main' into tailscale-203 2022-01-29 14:32:04 +00:00
Kristoffer Dalby
bb27c80bad Update CHANGELOG.md 2022-01-29 14:31:42 +00:00
Csaba Sarkadi
c0c3b7d511 Merge remote-tracking branch 'origin/main' into ipv6 2022-01-29 15:27:49 +01:00
Csaba Sarkadi
6220836050 utils: extract GetIPPrefixEndpoints from anonymous function 2022-01-29 15:26:28 +01:00
Kristoffer Dalby
b122d06f12 Merge pull request #278 from enoperm/pollnetmap-update-only 2022-01-29 13:46:08 +00:00
Kristoffer Dalby
6f9ed958ca Merge branch 'main' into pollnetmap-update-only 2022-01-29 13:12:09 +00:00
Kristoffer Dalby
39ce59fcb1 Merge branch 'main' into generate-privkey 2022-01-29 13:11:26 +00:00
Kristoffer Dalby
052fccdc98 Merge pull request #289 from juanfont/whitespace 2022-01-29 13:09:16 +00:00
Csaba Sarkadi
17411b65f3 fixup! fixup! update CHANGELOG
prettier changes
2022-01-29 13:48:14 +01:00
Csaba Sarkadi
bf7ee78324 config-example: add configuration for a dual-stack tailnet 2022-01-28 22:13:45 +01:00
Csaba Sarkadi
fbe5054a67 fixup! update CHANGELOG 2022-01-28 22:00:13 +01:00
Csaba Sarkadi
761147ea3b update CHANGELOG 2022-01-28 21:59:08 +01:00
Csaba Sarkadi
25ccf5ef18 PollNetMapStream: do not create any rows during long-poll operation 2022-01-28 21:59:08 +01:00
Kristoffer Dalby
b4f8961e44 Make Unix socket permissions configurable 2022-01-28 18:58:22 +00:00
Kristoffer Dalby
726ccc8c1f Upgrade to latest tailscale 2022-01-28 18:15:41 +00:00
Kristoffer Dalby
126e694f26 Add generate private-key command
This commit adds a command to generate a private key for headscale.

Mostly useful for systems where you drive the deployment from another
machine and use a secret management system.
2022-01-28 18:08:52 +00:00
Kristoffer Dalby
ab45cd37f8 Only golint new problems 2022-01-28 17:40:39 +00:00
Kristoffer Dalby
f59071ff1c Trim whitespace from privateKey before parsing 2022-01-28 17:23:01 +00:00
Kristoffer Dalby
537cd35cb2 Try to add the grpc cert correctly 2022-01-25 22:22:15 +00:00
Kristoffer Dalby
56b6528e3b Run prettier 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
bae7ba46de Update changelog 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
fa197cc183 Add docs for remote access 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
00c69ce50c Enable remote gRPC and HTTP API
This commit enables the existing gRPC and HTTP API from remote locations
as long as the user can provide a valid API key. This allows users to
control their headscale with the CLI from a workstation. 🎉
2022-01-25 22:11:15 +00:00
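
A sketch of how a remote CLI can present an API key on each gRPC call by attaching it as outgoing metadata; the header name used here is an assumption, not necessarily the one headscale uses:

```go
package remotesketch

import (
	"context"

	"google.golang.org/grpc/metadata"
)

// withAPIKey attaches the API key to the outgoing gRPC context so the server
// can validate it before serving the RPC.
func withAPIKey(ctx context.Context, apiKey string) context.Context {
	// "authorization" as a bearer header is an illustrative assumption.
	return metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer "+apiKey)
}
```
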
Kristoffer Dalby
a6e22387fd Formatting of machine.go 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
a730f007d8 Formatting of DNS files 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
3393363a67 Add safe random hash generators 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
8218ef96ef Formatting of integration tests 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
e8e573de62 Add apikeys command integration test 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
05db1b7109 Formatting and improving logs for config loading 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
6e14fdf0d3 More reusable stuff in cli 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
1fd57a3375 Add apikeys command to create, list and expire 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
b4259fcd79 Add helper function for colouring expiries 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
f9137f3bb0 Create helper functions around gRPC interface 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
b1a9b1ada1 Generate code from proto 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
b8e9024845 Add proto model for api key 2022-01-25 22:11:15 +00:00
Kristoffer Dalby
70d82ea184 Add migration for new data model 2022-01-25 22:11:05 +00:00
Kristoffer Dalby
9dc20580c7 Add api key data model and helpers
This commit introduces a new data model for holding API keys for the
API. The keys are stored in the database as a prefix and a hash; bcrypt
with cost 10 is used to produce the hash, so the key is "one way
safe".

API keys have expiry logic similar to pre-auth keys.

A key cannot be retrieved after it has been created, only verified.
2022-01-25 22:11:05 +00:00
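A minimal Go sketch of the prefix-plus-bcrypt scheme described in the commit above (illustrative only, not headscale's actual code; the key format, lengths and the generateAPIKey helper are assumptions): the random prefix is stored in clear text for lookup, while the secret half is persisted only as a bcrypt hash, so a key can be verified but never read back.

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

// generateAPIKey returns the full key (shown to the user once), plus the
// prefix and bcrypt hash that would be persisted in the database.
func generateAPIKey() (full, prefix string, hash []byte, err error) {
	buf := make([]byte, 24)
	if _, err = rand.Read(buf); err != nil {
		return "", "", nil, err
	}
	prefix = hex.EncodeToString(buf[:4])
	secret := hex.EncodeToString(buf[4:])
	// Cost 10, matching the "10 passes" mentioned in the commit message.
	hash, err = bcrypt.GenerateFromPassword([]byte(secret), 10)
	if err != nil {
		return "", "", nil, err
	}
	return prefix + "." + secret, prefix, hash, nil
}

func main() {
	full, prefix, hash, err := generateAPIKey()
	if err != nil {
		panic(err)
	}
	// Only prefix and hash would be stored; the full key is displayed once.
	fmt.Println(full, prefix, len(hash))
}

Verifying a presented key would then mean looking up the stored row by its prefix and calling bcrypt.CompareHashAndPassword against the secret half.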
Kristoffer Dalby
4d60aeae18 Merge pull request #282 from ryanfowler/main 2022-01-23 21:23:50 +00:00
Ryan Fowler
67d1dd984f Fix missing return in PollNetMapHandler 2022-01-21 21:48:58 -08:00
Juan Font
b02f8dd45d Merge pull request #274 from majst01/reduce-binary-size
Strip binary, update to go-1.17.6
2022-01-20 11:38:52 +01:00
Kristoffer Dalby
3837f1714a Merge branch 'main' into reduce-binary-size 2022-01-17 08:47:19 +00:00
Kristoffer Dalby
ed5498ef86 Merge pull request #276 from jimt/patch-1 2022-01-17 08:46:42 +00:00
Csaba Sarkadi
e2f8c69e2e integration-test: use tailscale ip to test dual-stack MagicDNS 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
beb3e9abc2 integration-test: taildrop test refactor 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
78039f4cea integration-test: use TUN devices, enable IPv6 addresses on local interfaces in containers 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
ed39b91f71 Dockerfiles: specify origin registry explicitly 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
8f632e9062 machine: isOutdated: handle machines without LastSuccessfulUpdate set 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
a32175f791 PollNetMapHandler: refactor with chan lifetimes in mind
* Resolves an issue where sends were sometimes attempted on a closed channel,
  by ensuring the channels remain open for the entire goroutine (a sketch of
  the pattern follows below).
* May be of help with regard to issue #203
2022-01-16 14:18:22 +01:00
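A generic illustration of that channel-ownership pattern (not headscale's actual handler; fetchUpdate and pollStream are placeholder names): the goroutine that sends on the channel is also the only one that closes it, so a send on a closed channel cannot occur.

package main

import (
	"context"
	"fmt"
	"time"
)

// fetchUpdate is a stand-in for producing one network-map update.
func fetchUpdate() []byte { return []byte("netmap") }

// pollStream returns a channel that is written to and closed by exactly one
// goroutine, keeping the channel open for that goroutine's entire lifetime.
func pollStream(ctx context.Context) <-chan []byte {
	updates := make(chan []byte)
	go func() {
		defer close(updates) // only the single sender ever closes it
		for {
			select {
			case <-ctx.Done():
				return
			case updates <- fetchUpdate():
				time.Sleep(100 * time.Millisecond)
			}
		}
	}()
	return updates
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
	defer cancel()
	for update := range pollStream(ctx) {
		fmt.Println(string(update))
	}
}

When the context is cancelled the producer returns, the deferred close runs, and the consumer's range loop ends cleanly.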
Csaba Sarkadi
d35fb8bba0 integration-test: add IPv6 prefix to configuration 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
115d0cbe85 dns: IPv6 roots generation 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
1a6e5d8770 Add support for multiple IP prefixes 2022-01-16 14:18:22 +01:00
Csaba Sarkadi
3a3aecb774 Regenerate files based on ProtoBuf schema. 2022-01-16 14:17:51 +01:00
Csaba Sarkadi
8b40343277 Add multiple IP prefixes support to ProtoBuf schema 2022-01-16 14:17:27 +01:00
Csaba Sarkadi
7ec8346179 Do not assume IPv4 during Tailscale node construction 2022-01-15 16:06:34 +01:00
Csaba Sarkadi
46cdce00af Do not assume IPv4 during address generation 2022-01-15 16:06:34 +01:00
Jim Tittsler
86f3f26a18 Fix typos 2022-01-15 16:44:36 +09:00
Stefan Majer
febbb6006f use most recent go 1.17 for tests 2022-01-14 12:59:53 +01:00
Stefan Majer
1d68509463 Strip binary, update to go-1.17.6 2022-01-14 09:19:16 +01:00
Kristoffer Dalby
b6d0c4f2aa Merge pull request #273 from juanfont/tailscale-1-20
Add new tailscale version to integration test
2022-01-13 22:21:27 +00:00
Kristoffer Dalby
2c057c2d89 Update integration_test.go 2022-01-13 19:16:12 +00:00
Juan Font
cf7effda1b Merge pull request #272 from juanfont/cl-0.12.3
Prepare CHANGELOG for 0.12.3
2022-01-13 17:02:37 +01:00
Juan Font Alonso
19effe7034 Updated changelog for 0.12.3 2022-01-13 12:42:56 +01:00
Juan Font
41053482b3 Merge pull request #271 from juanfont/minor-security-fixes
Minor security updates in go.mod
2022-01-12 22:13:28 +01:00
Juan Font
e463283a58 Merge branch 'main' into minor-security-fixes 2022-01-12 17:10:15 +01:00
Juan Font
99814b468b Merge pull request #270 from artemklevtsov/docker-alpine
Add docker alpine image
2022-01-12 17:10:05 +01:00
Juan Font Alonso
163ecb2a6b go.sum updated 2022-01-12 15:22:16 +01:00
Juan Font Alonso
4660b265d9 Minor security updates in go.mod 2022-01-12 15:17:55 +01:00
Artem Klevtsov
1b6bad0b63 Add docker alpine image 2022-01-12 19:29:59 +07:00
Juan Font
26623d794b Merge pull request #268 from juanfont/prepare-0.12.2
Prepare CHANGELOG for 0.12.2
2022-01-11 16:49:51 +01:00
Juan Font Alonso
e9cc60e49c Updated changelog for 0.12.2 2022-01-11 15:45:13 +01:00
Juan Font
be2a28dd61 Merge pull request #267 from piec/patch-1
Fix example config link
2022-01-11 15:39:55 +01:00
Pierre Carru
cec236ce24 Fix example config link
Everything is in the title :p
2022-01-10 14:44:11 +01:00
Kristoffer Dalby
45d331da99 Merge pull request #263 from JJGadgets/patch-1 2022-01-06 10:40:04 +00:00
JJGadgets
897fa558b0 prettier: Docker docs 2022-01-06 17:25:07 +08:00
JJGadgets
d971cf1295 Improve Docker docs
- Fix URLs referring to files in this repository
- Better explain that we are creating the headscale directory and running the commands on the host Docker node
- Place instructions to download the example config file, to use as the config file, in the recommended steps.
2022-01-06 14:10:28 +08:00
Kristoffer Dalby
42bed58329 Merge pull request #262 from kradalby/improve-docs
Rewrite main documentation
2022-01-04 16:26:27 +00:00
Kristoffer Dalby
d9f52efe70 Fix typo 2022-01-02 23:17:48 +00:00
Kristoffer Dalby
25b5eb8d7f Update tests to align with new config example 2022-01-02 23:17:42 +00:00
Kristoffer Dalby
81c60939c9 Add vertical line for breathing room 2022-01-02 19:55:38 +00:00
Kristoffer Dalby
4edc96d14d Make strongly strong 2022-01-02 19:54:37 +00:00
Kristoffer Dalby
6b7c74133d Use markdown numbering so github gets it 2022-01-02 19:53:49 +00:00
Kristoffer Dalby
8da029bd14 Add missing links 2022-01-02 19:48:57 +00:00
Kristoffer Dalby
1d01103b67 Link to example config from docs 2022-01-02 19:43:06 +00:00
Kristoffer Dalby
5df100539c Remove outdated configuration page in favour of config-example 2022-01-02 19:42:35 +00:00
Kristoffer Dalby
11c86acbe3 update case in readme 2022-01-02 19:39:51 +00:00
Kristoffer Dalby
86f36f9a43 Lowercase markdown docs 2022-01-02 19:39:03 +00:00
Kristoffer Dalby
271cb71754 Add more explanation and less redundancy with docs 2022-01-02 19:38:04 +00:00
Kristoffer Dalby
80d196cbfd Remove DNS file, it will be merged into example configuration 2022-01-02 19:37:48 +00:00
Kristoffer Dalby
3ce3ccb559 Reorder tls docs 2022-01-02 19:36:50 +00:00
Kristoffer Dalby
a11c6fd8b9 Fix formatting error in container doc 2022-01-02 20:08:18 +01:00
Kristoffer Dalby
a75c5a4cff Add readme to examples 2022-01-02 20:08:04 +01:00
Kristoffer Dalby
8d504c35bf Move kubernetes to kustomize, since that's what it is 2022-01-02 20:06:32 +01:00
Kristoffer Dalby
8a07a63b1c Write disclaimer in kubernetes example 2022-01-02 20:06:21 +01:00
Kristoffer Dalby
74fd5de43d Move kubernetes example under docs 2022-01-02 20:04:35 +01:00
Kristoffer Dalby
f9e6722635 Rewrite main documentation
This commit starts restructuring the documentation and updating it to be
compliant with 0.12.x+ releases.

The main change is that the documentation has been rewritten from the
ground up, and hopefully simplified.

The documentation has been split into official documentation for
running headscale as a binary under Linux with systemd, and "community"
provided documentation for Docker.

This should make the two documents a lot easier to read and follow than
the mishmash document we had.
2022-01-02 19:11:36 +01:00
Juan Font
0bd4250a53 Merge pull request #261 from juanfont/kradalby-patch-2
Add note about outdated docs until we fix them
2021-12-29 10:53:15 +01:00
Kristoffer Dalby
4b44aa2180 Add note about outdated docs until we fix them 2021-12-28 10:16:20 +01:00
Kristoffer Dalby
f78984f2ef Merge pull request #258 from ohdearaugustin/fix-docker-release
Fix docker release
2021-12-27 22:09:16 +01:00
ohdearaugustin
3de311b7f4 Fix docker release 2021-12-27 21:24:57 +01:00
Juan Font
5192841016 Merge pull request #256 from juanfont/prepare-0.12.1-cl
Prepare CHANGELOG for 0.12.1
2021-12-24 23:40:33 +01:00
Juan Font
07384fd2bb Leave the TBD 2021-12-24 16:46:04 +01:00
Juan Font
a795e7c0c9 Minor correction on the purpose of Headscale 2021-12-24 16:40:18 +01:00
Juan Font
ebfbd4a37d Update changelog for 0.12.1 2021-12-24 16:39:22 +01:00
Juan Font
fb933b7d41 Merge pull request #255 from Wakeful-Cloud/main
Template Fixes
2021-12-24 16:12:33 +01:00
wakeful-cloud
1c7cb98042 Template Fixes 2021-12-22 19:43:53 -07:00
Kristoffer Dalby
fb634cdfc2 Merge pull request #242 from kradalby/changelog 2021-12-07 14:16:34 +00:00
Kristoffer Dalby
f60f62792a Merge branch 'main' into changelog 2021-12-07 13:21:17 +00:00
Kristoffer Dalby
418fde2731 Merge pull request #243 from dragetd/feature/github_templates 2021-12-07 13:20:54 +00:00
Kristoffer Dalby
53108207be Merge branch 'main' into feature/github_templates 2021-12-07 11:07:10 +00:00
Kristoffer Dalby
3fb3db6f20 Merge pull request #248 from negbie/main 2021-12-07 11:07:03 +00:00
Eugen Biegler
5a504fa711 Better error description
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-12-07 11:44:09 +01:00
Eugen Biegler
b4cce22415 Better error description
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-12-07 11:44:00 +01:00
Kristoffer Dalby
54c2306637 Merge branch 'main' into main 2021-12-07 10:39:08 +00:00
Juan Font
bc8f5f484d Merge branch 'main' into feature/github_templates 2021-12-07 11:21:50 +01:00
Juan Font
d00780b574 Merge pull request #250 from juanfont/use-1.17-for-release
Use go1.17 in goreleaser as required by go mod
2021-12-07 11:21:37 +01:00
Eugen
686384ebb7 Merge branch 'main' of https://github.com/negbie/headscale into main 2021-12-07 08:49:08 +01:00
Eugen
3a85c4d367 Better error description 2021-12-07 08:46:55 +01:00
Juan Font
1b007d2208 Use go1.17 in goreleaser as required by go mod 2021-12-06 23:19:08 +01:00
Kristoffer Dalby
5a7f669505 Update .github/ISSUE_TEMPLATE/config.yml 2021-12-05 09:37:11 +00:00
Michael Ko. Gajda
0c13d9da15 Fix format with prettier 2021-12-04 18:51:09 +01:00
Kristoffer Dalby
58ec26ee89 Merge branch 'main' into changelog 2021-12-04 12:16:01 +00:00
Kristoffer Dalby
969bcf17c4 Merge branch 'main' into feature/github_templates 2021-12-04 10:55:43 +00:00
Kristoffer Dalby
04d81a0e5c Merge branch 'main' into main 2021-12-02 08:31:48 +00:00
Eugen
a6e99525ac Add log_level to config, more ACL debug log 2021-12-01 20:02:00 +01:00
Eugen
7e95b3501d Ignore derp.yaml, don't panic in Serve() 2021-12-01 19:32:47 +01:00
Kristoffer Dalby
ab52acba4b Merge pull request #247 from negbie/main 2021-12-01 16:18:46 +00:00
Eugen
07a437c707 Add private_key_path to example config 2021-12-01 14:34:08 +01:00
Kristoffer Dalby
d56fb1aaa1 Merge pull request #245 from Bpazy/patch-1 2021-12-01 11:53:25 +00:00
ZiYuan
c046ffbceb fix typo
Remove redundant quotation: `ephemeral_node_inactivity_timeout": "30m"` => `ephemeral_node_inactivity_timeout: "30m"`
2021-12-01 14:50:08 +08:00
Kristoffer Dalby
3435d95c80 Clarify and formatting 2021-11-30 09:17:21 +00:00
Kristoffer Dalby
acaab7a3de Add OpenID Connect to changelog 2021-11-30 09:16:09 +00:00
Juan Font
74ba452025 Merge branch 'main' into feature/github_templates 2021-11-29 21:28:43 +01:00
Juan Font
500be2de58 Merge branch 'main' into changelog 2021-11-29 21:25:31 +01:00
Juan Font
5bc0398aaf Merge pull request #241 from juanfont/add-goreleaser-prerelease
Enable marking releases as prerelease
2021-11-29 21:24:01 +01:00
Michael Ko. Gajda
78eba97bf9 Add GitHub templates 2021-11-29 20:53:04 +01:00
Kristoffer Dalby
6350d528a7 Change changelog format 2021-11-29 19:45:31 +00:00
Kristoffer Dalby
42eb6b9e01 format 2021-11-29 17:34:41 +00:00
Kristoffer Dalby
2e2fb68715 Remove unreleased 2021-11-29 17:32:05 +00:00
Kristoffer Dalby
6fc6355d66 Add initial CHANGELOG 2021-11-29 17:31:19 +00:00
Michael Ko. Gajda
48fc93bbdc Add simple overview README for docs 2021-11-29 14:36:47 +01:00
Juan Font
3e941ef959 Enable marking releases as prerelease 2021-11-28 22:00:22 +00:00
Kristoffer Dalby
1dc008133c Merge pull request #229 from juanfont/kradalby-patch-2 2021-11-28 21:46:37 +00:00
Kristoffer Dalby
2d2ae62176 Merge branch 'kradalby-patch-2' of github.com:juanfont/headscale into kradalby-patch-2 2021-11-28 11:00:43 +00:00
Kristoffer Dalby
8e9a94613c Remove outdated integration test private key 2021-11-28 11:00:22 +00:00
Kristoffer Dalby
8932133ae7 Merge branch 'main' into kradalby-patch-2 2021-11-28 09:28:32 +00:00
Kristoffer Dalby
5b8587037d Remove non-existing field from oidc test 2021-11-28 09:25:27 +00:00
Kristoffer Dalby
e167be6d64 Remove generate private key step from docs 2021-11-28 09:18:24 +00:00
Kristoffer Dalby
34f4109fbd Add back privatekey, but automatically generate it if it does not exist 2021-11-28 09:17:18 +00:00
Kristoffer Dalby
fa813bc0d7 Merge pull request #239 from cure/debug-make-sure-machine-key-is-correct-length
The `create-node` subcommand under `debug` needs a 64 character key.
2021-11-28 08:38:54 +00:00
Kristoffer Dalby
32006f3a20 Use go 1.17 2021-11-28 08:26:36 +00:00
Kristoffer Dalby
ff8c961dbb Make sure comparison of nodekey is in the same format 2021-11-28 08:23:45 +00:00
Kristoffer Dalby
e9d5214d1c Disable tests which are broken due to split version 2021-11-27 21:04:19 +00:00
Kristoffer Dalby
6295b0bd84 Go mod tidy 2021-11-27 20:34:46 +00:00
Kristoffer Dalby
550f4016dc Merge branch 'kradalby-patch-2' of github.com:juanfont/headscale into kradalby-patch-2 2021-11-27 20:32:08 +00:00
Kristoffer Dalby
2ae882d801 Update go version 2021-11-27 20:31:33 +00:00
Kristoffer Dalby
ef81845deb Merge branch 'main' into kradalby-patch-2 2021-11-27 20:30:27 +00:00
Kristoffer Dalby
d96b681c83 Fix node cli integration test 2021-11-27 20:25:37 +00:00
Kristoffer Dalby
59aeaa8476 Ensure we always have the key prefix when needed 2021-11-27 20:25:12 +00:00
Ward Vandewege
cb2ea300ad Fix linter errors. 2021-11-27 13:59:39 -05:00
Kristoffer Dalby
c38f00fab8 Unmarshal keys in the non-deprecated way 2021-11-26 23:50:42 +00:00
Kristoffer Dalby
0012c76170 Make it easier to run cli integration tests 2021-11-26 23:34:11 +00:00
Kristoffer Dalby
cfd53bc4aa Factor wgkey to types/key
This commit converts all the uses of wgkey to the new key interfaces.

It now has specific machine, node and discovery keys, and we should now
use them correctly.

Please note the new logic which strips a key prefix (in utils.go) that
is now standard inside tailscale.

In theory we could put it in the database, but to preserve backwards
compatibility and not spend a lot of resources on accounting for both,
we just strip them.
2021-11-26 23:30:42 +00:00
Kristoffer Dalby
07418140a2 Remove config loading of private key path 2021-11-26 23:29:41 +00:00
Kristoffer Dalby
c63c259d31 Switch wgkey for types/key
We don't seem to need the WireGuard key anymore; we generate a key on
startup based on the new library and the users fetch it from /key.

Clean up app.go and update docs
2021-11-26 23:28:06 +00:00
Kristoffer Dalby
50b47adaa3 Upgrade tailscale to 1.18 2021-11-26 23:27:09 +00:00
Ward Vandewege
b6ae60cc44 The create-node subcommand under debug needs a 64 character key. 2021-11-26 14:49:51 -05:00
Ward Vandewege
d944aa6e79 Merge pull request #237 from cure/preauthkeys-fix-default-expiration
Fix default preauthkey expiration
2021-11-26 11:09:43 -05:00
Kristoffer Dalby
06f05d6cc2 Merge branch 'main' into preauthkeys-fix-default-expiration 2021-11-26 15:46:00 +00:00
Kristoffer Dalby
0819c6515a Merge pull request #238 from juanfont/kradalby-patch-3 2021-11-26 15:45:38 +00:00
Ward Vandewege
c7f3e0632b When creating a preauthkey, the default expiration was passed through as
a nil value, instead of the default value (1h). This resulted in the
preauthkey being created with an expiration time of '0001-01-01 00:00:00',
which meant the key would not work, because it was already expired.

This commit applies the default expiration time (1h) when a preauthkey
is created without a specific expiration. It also updates an integration
test to make sure this bug does not reoccur.
2021-11-26 10:04:26 -05:00
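A minimal sketch of that fix under assumed names (resolveExpiration and defaultPreAuthKeyExpiry are illustrative, not headscale's identifiers): when no expiration is supplied, fall back to one hour from now instead of persisting the zero time, which would make the key expired on creation.

package main

import (
	"fmt"
	"time"
)

const defaultPreAuthKeyExpiry = time.Hour

// resolveExpiration applies the one-hour default when the caller did not
// request a specific expiration.
func resolveExpiration(requested *time.Time) time.Time {
	if requested == nil || requested.IsZero() {
		return time.Now().Add(defaultPreAuthKeyExpiry)
	}
	return *requested
}

func main() {
	fmt.Println(resolveExpiration(nil)) // roughly one hour from now
}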
Kristoffer Dalby
58fd6c4ba5 Revert postgres constant value
changes "postgresql" to "postgres"
2021-11-26 07:13:00 +00:00
Kristoffer Dalby
aab4a6043a Merge branch 'main' into kradalby-patch-2 2021-11-25 08:38:59 +00:00
Kristoffer Dalby
a52a4d45c0 Merge pull request #236 from restanrm/fix-derp-example-config
fix(derp-example): change regionid in node
2021-11-25 08:37:45 +00:00
Juan Font
45bc3f7a09 Merge branch 'main' into fix-derp-example-config 2021-11-24 18:57:31 +01:00
Kristoffer Dalby
5620858549 Merge pull request #227 from kradalby/expired-issue 2021-11-24 17:49:33 +00:00
Adrien Raffin-Caboisse
f2e273b8a2 fix(derp-example): change regionid in nodes
Using a wrong regionid value led to a non-working custom DERP server. No checks are performed for this kind of error, making it difficult to find.
2021-11-24 15:54:22 +01:00
Kristoffer Dalby
cec1e86b58 Add missing request argument 2021-11-24 12:16:56 +00:00
Kristoffer Dalby
dcbf289470 Rename idKey to machineKey to keep consistency 2021-11-24 12:15:55 +00:00
Kristoffer Dalby
fdd64d98c8 Add missing if to handle expired preauthkey machines 2021-11-24 12:15:32 +00:00
Kristoffer Dalby
9968992be0 Fix prettier 2021-11-24 10:47:20 +00:00
Kristoffer Dalby
f50f9ac894 Merge branch 'expired-issue' of github.com:kradalby/headscale into expired-issue 2021-11-24 10:13:49 +00:00
Kristoffer Dalby
2eca344f0e Fix gocritic 2021-11-24 10:13:41 +00:00
Kristoffer Dalby
349264830b Use .1 2021-11-23 11:27:44 +00:00
Kristoffer Dalby
0b5c29022b Merge branch 'main' into expired-issue 2021-11-22 20:13:33 +00:00
Kristoffer Dalby
1f1c45a2c0 Fix cli_test 2021-11-22 19:59:44 +00:00
Kristoffer Dalby
68dc2a70db Update neighbours if node is expired or refreshed
In addition, only pass the map of registered and not expired nodes to
clients.
2021-11-22 19:51:16 +00:00
Kristoffer Dalby
caf1b1cabc Fix typo 2021-11-22 19:35:24 +00:00
Kristoffer Dalby
021c464148 Add cache for requested expiry times
This commit adds a central cache to keep track of clients who have
requested an expiry time, where we need to keep hold of it until the
second request comes in.
2021-11-22 19:32:52 +00:00
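One way such a short-lived cache can look, sketched here with the go-cache library (the key format, value and TTLs are assumptions, not headscale's actual values): the first request stores the expiry the client asked for, and the follow-up request retrieves and applies it.

package main

import (
	"fmt"
	"time"

	"github.com/patrickmn/go-cache"
)

func main() {
	// Entries default to expiring after 5 minutes; stale entries are purged every 10.
	requestedExpiry := cache.New(5*time.Minute, 10*time.Minute)

	// First request from the client: remember what it asked for.
	requestedExpiry.Set("machine-key-abc", time.Now().Add(8*time.Hour), cache.DefaultExpiration)

	// Follow-up request: look the value up and apply it to the machine.
	if v, found := requestedExpiry.Get("machine-key-abc"); found {
		fmt.Println("client previously requested expiry:", v.(time.Time))
	}
}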
Kristoffer Dalby
e600ead3e9 Make sure nodes can reauthenticate
This commit fixes an issue where nodes were not able to reauthenticate.
2021-11-22 19:32:11 +00:00
Kristoffer Dalby
200c10e48c Add missing return in oidc.go 2021-11-22 17:22:47 +00:00
Kristoffer Dalby
e8faff4fe2 Use uint64 straight instead of converting 2021-11-22 17:22:22 +00:00
Kristoffer Dalby
5cbd4513a4 Simplify register function if 2021-11-22 17:21:56 +00:00
Kristoffer Dalby
a477c808c7 Merge pull request #230 from lion24/patch-1 2021-11-22 09:47:37 +00:00
Kristoffer Dalby
74044f62f4 Remove another potential error leak 2021-11-21 21:54:19 +00:00
Kristoffer Dalby
fcd4d94927 Clean up logging and error handling in oidc
We should never expose errors via the web; it gives attackers a lot of info
(insert OWASP guide).

Also handle an error path that didn't separate the gorm not-found case from
other errors.
2021-11-21 21:51:39 +00:00
Kristoffer Dalby
fac33e46e1 Add long description for expire 2021-11-21 21:35:36 +00:00
Kristoffer Dalby
b152e53b13 Use correct type for nodes command 2021-11-21 21:34:03 +00:00
Kristoffer Dalby
1687e3b03f Removed unused parameter 2021-11-21 21:29:27 +00:00
Kristoffer Dalby
c2393685f1 Remove expiry update in expiry; we don't want to extend it just because they _try_ to connect 2021-11-21 21:14:40 +00:00
Kristoffer Dalby
fd5f42c2e6 Move handle expired machine to the end of registration 2021-11-21 21:14:13 +00:00
Kristoffer Dalby
bda2d9c3b0 Remove unused param 2021-11-21 14:00:48 +00:00
Kristoffer Dalby
c4ecc4db91 Simplify control flow in RegistrationHandler
This commit tries to dismantle the complicated "if and or" in the
RegistrationHandler by factoring out the "is registered" check into a root
if.

This, together with some new comments, should hopefully make it a bit
easier to follow what is happening in all the different cases that need
to be handled when a Node contacts the registration endpoint.
2021-11-21 13:59:24 +00:00
Kristoffer Dalby
8ccc51ae57 Remove special case for authkey
We no longer have weird expire behaviour, so we don't need this case
2021-11-21 13:45:19 +00:00
Kristoffer Dalby
a2b9f3bede Add expire (logout) machine command 2021-11-21 13:40:44 +00:00
Kristoffer Dalby
bd1d1b1a3b Implement ExpireMachine rpc 2021-11-21 13:40:19 +00:00
Kristoffer Dalby
f1c05f8010 Add ExpireMachine spec to rpc 2021-11-21 13:40:04 +00:00
Kristoffer Dalby
f85a77edb5 Remove println statement 2021-11-21 09:48:59 +00:00
Kristoffer Dalby
1c7aff5dd9 Add expired column to machine list command 2021-11-21 09:44:38 +00:00
lion24
e91f72fe4c Running.md: fix missing backslash (\)
* Otherwise this would cause the command to abort after the first statement of the docker command ;)
2021-11-20 23:31:49 +01:00
Kristoffer Dalby
5a2cae5081 Add new Tailscale version to integration tests 2021-11-19 09:16:11 +00:00
Kristoffer Dalby
6a9dd2029e Remove expiry logic, this needs to be redone 2021-11-19 09:02:49 +00:00
Kristoffer Dalby
9aac1fb255 Remove expiry logic, this needs to be redone 2021-11-19 09:02:29 +00:00
Kristoffer Dalby
106b1e7e8d Create constants for other reg methods 2021-11-18 17:51:54 +00:00
Kristoffer Dalby
836986aa59 Merge pull request #225 from fdelucchijr/patch-1 2021-11-18 10:13:07 +00:00
Kristoffer Dalby
58d1255357 Remove unneeded returns 2021-11-18 08:51:33 +00:00
Kristoffer Dalby
981f712660 Remove unused param 2021-11-18 08:51:21 +00:00
Kristoffer Dalby
50dcb8bb75 Use valid handler for registered authkey machines 2021-11-18 08:50:53 +00:00
Kristoffer Dalby
a8a8f01429 Make "authKey" a constant 2021-11-18 08:49:55 +00:00
Kristoffer Dalby
35c3fe9608 Move registration workflow into functions 2021-11-17 22:39:41 +00:00
Fernando De Lucchi
f74b9f5fe2 Styling and prettier 2021-11-17 16:25:37 -05:00
Fernando De Lucchi
de42fe3b04 Merge branch 'main' into patch-1 2021-11-16 19:10:42 -05:00
Kristoffer Dalby
49f835d8cf Merge pull request #214 from ItalyPaleAle/docker-distroless 2021-11-16 07:34:08 +00:00
ItalyPaleAle
7bc2f41b33 Run prettier 💄 2021-11-15 15:37:40 -08:00
Alessandro (Ale) Segala
a10388b709 Merge branch 'main' into docker-distroless 2021-11-15 15:19:06 -08:00
Kristoffer Dalby
bd7b5e97cb Merge branch 'main' into patch-1 2021-11-15 23:00:45 +00:00
Kristoffer Dalby
4b525a3967 Merge pull request #223 from kradalby/golanglint 2021-11-15 22:36:02 +00:00
Kristoffer Dalby
d6739386a0 Get rid of dynamic errors 2021-11-15 19:18:14 +00:00
Kristoffer Dalby
25b790d025 Add and fix forcetypeassert 2021-11-15 18:42:44 +00:00
Kristoffer Dalby
db8be91d8b Add and fix forbidigo 2021-11-15 18:36:02 +00:00
Kristoffer Dalby
c4d4c9c4e4 Add and fix gosec 2021-11-15 18:31:52 +00:00
Kristoffer Dalby
715542ac1c Add and fix stylecheck (golint replacement) 2021-11-15 17:24:24 +00:00
Kristoffer Dalby
0c005a6b01 Add and fix errname 2021-11-15 16:33:16 +00:00
Kristoffer Dalby
0c45f8d252 Add and fix errorlint 2021-11-15 16:26:41 +00:00
Kristoffer Dalby
2dde1242cf Fix formatting 2021-11-15 16:16:35 +00:00
Kristoffer Dalby
78cfba0a31 Add exceptions to var name length 2021-11-15 16:16:16 +00:00
Kristoffer Dalby
8ae682b412 Fix var name length in tests 2021-11-15 16:16:04 +00:00
Kristoffer Dalby
333be80f9c Fix rest of var name in main code 2021-11-15 16:15:50 +00:00
Alessandro (Ale) Segala
2efefca737 Merge pull request #1 from ohdearaugustin/docker-workflows
Docker workflows
2021-11-14 13:33:13 -08:00
ohdearaugustin
c6bc9fffe9 workflows/release: fix docker debug tags 2021-11-14 22:24:27 +01:00
ohdearaugustin
8454c1b52c workflows/release: add docker debug 2021-11-14 21:15:33 +01:00
Kristoffer Dalby
471c0b4993 Initial work eliminating one/two letter variables 2021-11-14 20:32:03 +01:00
Kristoffer Dalby
53ed749f45 Start work on making gocritic pass 2021-11-14 18:44:37 +01:00
Kristoffer Dalby
ba084b9987 Lint fix integration tests 2021-11-14 18:35:49 +01:00
Kristoffer Dalby
85f28a3f4a Remove all instances of undefined numbers (gomnd) 2021-11-14 18:31:51 +01:00
Kristoffer Dalby
796072a5a4 Add and fix ifshort 2021-11-14 18:09:22 +01:00
Kristoffer Dalby
9390348a65 Add and fix goconst 2021-11-14 18:06:25 +01:00
Kristoffer Dalby
c9c16c7fb8 Remove unused params or returns 2021-11-14 18:03:21 +01:00
Kristoffer Dalby
19cd7a4eac Add and fix exhaustive 2021-11-14 17:52:55 +01:00
Kristoffer Dalby
0315f55fcd Add and fix nilnil 2021-11-14 17:51:34 +01:00
Kristoffer Dalby
668e958d3e Add and fix unconvert 2021-11-14 17:49:54 +01:00
Kristoffer Dalby
4ace54c4e1 Move wsl, might not be feasible 2021-11-14 16:49:54 +01:00
Kristoffer Dalby
89eb13c6cb Add and fix nlreturn (new line return) 2021-11-14 16:46:09 +01:00
Kristoffer Dalby
d0ef850035 Add and fix noctx linter 2021-11-14 16:37:43 +01:00
Kristoffer Dalby
2f8e9f272c Merge branch 'main' into docker-distroless 2021-11-14 14:35:44 +01:00
Fernando De Lucchi
1af4a3b958 Merge branch 'main' into patch-1 2021-11-14 04:16:00 -05:00
Kristoffer Dalby
1969802c6b Fix golanglint 2021-11-14 08:32:58 +00:00
Kristoffer Dalby
052883aa55 Fix merge conflict 2021-11-14 08:30:48 +00:00
Kristoffer Dalby
d2918edc14 Merge pull request #224 from cure/namespace-deletion-fixes
Improvements for namespace deletion
2021-11-14 09:27:58 +01:00
Kristoffer Dalby
f3da299457 Format readme 2021-11-14 08:09:33 +00:00
Kristoffer Dalby
e8726b1e22 Add readme note about codestyle 2021-11-14 08:08:03 +00:00
Fernando De Lucchi
b897a26f42 arm64 docker image build in release process 2021-11-13 21:08:59 -05:00
Alessandro (Ale) Segala
5ec7158b5d Merge branch 'main' into docker-distroless 2021-11-13 14:16:53 -08:00
Alessandro (Ale) Segala
7d77acd88e Docs for debug container 2021-11-13 22:16:37 +00:00
Alessandro (Ale) Segala
c0f16603c5 Copy headscale binary in /bin in the container
This way, we don't need to alter the PATH
2021-11-13 22:10:58 +00:00
Ward Vandewege
34dba0ade8 Fix missing error check. 2021-11-13 15:24:32 -05:00
Ward Vandewege
acf7e462ad Improvements for namespace deletion: add a confirmation prompt, and make
sure to also delete any associated preauthkeys.
2021-11-13 14:01:05 -05:00
Kristoffer Dalby
f94b0b54d8 Remove lint install, update go 2021-11-13 09:39:20 +00:00
Kristoffer Dalby
806f0d3e6c Format lint 2021-11-13 09:20:59 +00:00
Kristoffer Dalby
b653572272 Make format should format, not lint 2021-11-13 09:20:51 +00:00
Kristoffer Dalby
fa0922d5bb define proto dir for buf 2021-11-13 09:18:00 +00:00
Kristoffer Dalby
95b9f03fb3 update buf setup 2021-11-13 09:13:17 +00:00
Kristoffer Dalby
24e0c944b1 Align with updated golangci-lint 2021-11-13 09:11:03 +00:00
Kristoffer Dalby
148437f716 Setup more linters and goals for golangci 2021-11-13 08:53:34 +00:00
Kristoffer Dalby
3ddd9962ce Add format make entry 2021-11-13 08:39:20 +00:00
Kristoffer Dalby
2634215f12 golangci-lint --fix 2021-11-13 08:39:04 +00:00
Kristoffer Dalby
dae34ca8c5 Proto format 2021-11-13 08:36:56 +00:00
Kristoffer Dalby
03b7ec62ca Go format with shorter lines 2021-11-13 08:36:45 +00:00
Kristoffer Dalby
edfcdc466c Update lint ci file with prettier and proto 2021-11-13 08:13:38 +00:00
Kristoffer Dalby
6b3114ad6f Run prettier 2021-11-13 08:11:55 +00:00
Kristoffer Dalby
ba65092926 Merge pull request #212 from kradalby/cli-grpc
Rework the CLI to use gRPC
2021-11-12 14:39:39 +00:00
Alessandro (Ale) Segala
f44138c944 Added debug container 2021-11-12 02:20:46 +00:00
Alessandro (Ale) Segala
c290ce4b91 Revert "Fixed integration tests"
This reverts commit 67953bfe2f.
2021-11-09 16:24:10 +00:00
Alessandro (Ale) Segala
3b34c7b89a Removed / from docker commands in docs
Essentially reverts 6076656373
2021-11-09 16:23:36 +00:00
Alessandro (Ale) Segala
83e72ec57d Allow running headscale without leading / 2021-11-09 16:20:58 +00:00
Kristoffer Dalby
49893305b4 Only turn on response log in grpc in trace mode 2021-11-08 22:06:25 +00:00
Kristoffer Dalby
0803c407a9 Fix Reusable typo, add tests for Augustines scenario 2021-11-08 20:49:03 +00:00
Kristoffer Dalby
6371135459 Try to address issue raised by cure 2021-11-08 20:48:20 +00:00
Kristoffer Dalby
43af11c46a Fix typo in generated code 2021-11-08 20:47:40 +00:00
Kristoffer Dalby
b210858dc5 Remove unused dep 2021-11-08 18:28:06 +00:00
Kristoffer Dalby
e1f45f9d07 Remove unused dep 2021-11-08 18:27:57 +00:00
Kristoffer Dalby
dce6b8d72e Add test case and fix nil pointer in preauthkeys command without expiration 2021-11-08 08:02:01 +00:00
Alessandro (Ale) Segala
67953bfe2f Fixed integration tests 2021-11-07 19:09:51 +00:00
Alessandro (Ale) Segala
6076656373 Updated docs 2021-11-07 18:57:37 +00:00
Kristoffer Dalby
9a26fa7989 Ensure logging is off for integration test commands 2021-11-07 10:40:05 +00:00
Kristoffer Dalby
d47b83f80b Unwrap grpc errors to make nicer user facing errors 2021-11-07 10:15:32 +00:00
Kristoffer Dalby
b11acad1c9 Fix typo 2021-11-07 09:57:39 +00:00
Kristoffer Dalby
b15efb5201 Ensure unix socket is removed before we startup 2021-11-07 09:55:32 +00:00
Kristoffer Dalby
2dfd42f80c Attempt to DRY up CLI client, add proper config
This commit is trying to DRY up the initiation of the gRPC client in
each command:

It renames the function to CLI instead of GRPC as it actually sets up a
CLI client, not a generic gRPC client.

It also moves the configuration of address, timeout (which is now
consistent) and API to use Viper, allowing users to set it via env vars
and the configuration file.
2021-11-07 09:41:14 +00:00
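A hedged sketch of reading the CLI client settings through Viper with env-var overrides; the key names (cli.address, cli.timeout) and the HEADSCALE env prefix are assumptions for illustration, not necessarily headscale's real configuration keys.

package main

import (
	"fmt"
	"strings"
	"time"

	"github.com/spf13/viper"
)

func main() {
	viper.SetEnvPrefix("headscale")                        // e.g. HEADSCALE_CLI_ADDRESS
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_")) // maps cli.address -> CLI_ADDRESS
	viper.AutomaticEnv()
	viper.SetDefault("cli.timeout", 5*time.Second)

	address := viper.GetString("cli.address")
	timeout := viper.GetDuration("cli.timeout")
	fmt.Printf("dialing %v with timeout %v\n", address, timeout)
}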
Kristoffer Dalby
ce3f79a3bf Add yaml to output help 2021-11-07 08:58:45 +00:00
Kristoffer Dalby
a249d3fe39 Fix color for current namespace in nodes command 2021-11-07 08:58:03 +00:00
Alessandro (Ale) Segala
a6d487de00 Using debian11-based distroless image 2021-11-06 23:19:56 +00:00
Alessandro (Ale) Segala
3720da6386 Using distroless base image for Docker 2021-11-06 23:18:13 +00:00
Kristoffer Dalby
26718e8308 Revert gorm upgrade 2021-11-06 20:23:04 +00:00
Kristoffer Dalby
f5a196088a Merge branch 'main' into cli-grpc 2021-11-06 20:12:19 +00:00
Kristoffer Dalby
74f0d08f50 Merge pull request #199 from rcursaru/patch-1
update Running.md
2021-11-06 20:05:27 +00:00
Kristoffer Dalby
046681f4ef Merge branch 'main' into patch-1 2021-11-06 19:46:06 +00:00
Kristoffer Dalby
29531a5e90 Merge branch 'main' into cli-grpc 2021-11-06 19:29:00 +00:00
Ward Vandewege
137a9d6333 Merge pull request #213 from aberoham/patch-1
Typo in golang URL
2021-11-06 14:23:17 -04:00
Abraham Ingersoll
8115f50d03 Typo in golang URL 2021-11-06 07:43:41 +00:00
Kristoffer Dalby
b75e8ae2bd Merge branch 'main' into patch-1 2021-11-05 18:27:55 +00:00
Kristoffer Dalby
3ad2350c79 Fix new version of hujson 2021-11-05 07:24:00 +00:00
Kristoffer Dalby
204f99dd51 Add CLI integration tests
This PR adds a new part to the integration test suite which spins up a
new headscale and runs through a scenario of test cases for each
command.

The intent is to check that all commands work as intended and produce
the expected output.

I think they have been pretty well covered, but would appreciate
additional test cases if I have missed some.

Please note: headscale is set up and torn down for _each_ "test
function" in this file; this means that it's more suitable for specific
cases.
2021-11-04 22:45:15 +00:00
Kristoffer Dalby
8df41b069f Formatting 2021-11-04 22:45:08 +00:00
Kristoffer Dalby
be4256b1d0 Convert routes command to use gRPC 2021-11-04 22:44:59 +00:00
Kristoffer Dalby
77a973878c Convert preauthkeys command to use gRPC 2021-11-04 22:44:49 +00:00
Kristoffer Dalby
7b0d2dfb4a Convert nodes command to use gRPC 2021-11-04 22:44:35 +00:00
Kristoffer Dalby
79871d2463 Make namespace command use gRPC
This commit is a first in a series of commits migrating the command
interfaces to use the new gRPC client.

As a part of this commit, they have been streamlined and each command
_should_ be a bit more similar and use consistent output.

By using the new output function, we now make sure it's always JSON
(errors and everything) if the user asks for JSON.
2021-11-04 22:42:21 +00:00
Kristoffer Dalby
dce82f4323 Use new json wrapper for version command 2021-11-04 22:41:55 +00:00
Kristoffer Dalby
9e9049307e Simplify loglevel parser, turn off logs when machine output is set 2021-11-04 22:32:13 +00:00
Kristoffer Dalby
cd34a5d6f3 Expand json output to support yaml, make more generic 2021-11-04 22:31:47 +00:00
Kristoffer Dalby
319237910b Resolve new dependencies 2021-11-04 22:28:35 +00:00
Kristoffer Dalby
3eed356d70 Implement rpc calls with new helper functions, implementing the proto spec 2021-11-04 22:19:27 +00:00
Kristoffer Dalby
706ff59d70 Clean pointer list in app.go, add grpc logging and simplify naming 2021-11-04 22:18:55 +00:00
Kristoffer Dalby
c2eb3f4d36 Use long command in example and remove pointerlist 2021-11-04 22:18:06 +00:00
Kristoffer Dalby
9acc3e0e73 Add a set of ip prefix convert helpers 2021-11-04 22:17:44 +00:00
Kristoffer Dalby
94dbaa6822 Clean up the return of "pointer list"
This commit is getting rid of a bunch of returned list pointers.
2021-11-04 22:16:56 +00:00
Kristoffer Dalby
5526ccc696 Namespaces are no longer a pointer 2021-11-04 22:15:46 +00:00
Kristoffer Dalby
95690e614e Simplify and streamline namespace functions for new cli/rpc/api 2021-11-04 22:15:17 +00:00
Kristoffer Dalby
77f5f8bd1c Simplify and streamline preauth commands for new cli/rpc/api 2021-11-04 22:14:39 +00:00
Kristoffer Dalby
787814ea89 Consolidate machine related lookups
This commit moves the routes lookup functions to be subcommands of
Machine, making them a lot simpler and more specific/composable.

It also moves the register command from cli.go into machine, so we can
clear out the extra file.

Finally a toProto function has been added to convert between the machine
database model and the proto/rpc model.
2021-11-04 22:11:38 +00:00
Kristoffer Dalby
67adea5cab Move common integration test commands into common file 2021-11-04 22:10:57 +00:00
Kristoffer Dalby
4226da3d6b Add "debug" command
This commit adds a debug command tree, intended to host commands used
for debugging and testing.

It adds a create node/machine command which will be used later to create
machines that can be used to test the registration command.
2021-11-04 22:08:45 +00:00
Kristoffer Dalby
5270361989 Add generated files from protobuf 2021-11-04 22:07:59 +00:00
Kristoffer Dalby
a6aa6a4f7b Add proto rpc interface for cli
This commit adds proto rpc definitions for the communication needed for
the CLI interface.
This will allow us to move the rest of the CLI interface over to gRPC
and in the future allow remote access
2021-11-04 22:02:10 +00:00
Kristoffer Dalby
1c530be66c Merge pull request #206 from kradalby/initial-api-cli-work 2021-11-04 14:09:06 +00:00
Kristoffer Dalby
7c774bc547 Remove flag that can't be trapped 2021-11-02 21:49:19 +00:00
Kristoffer Dalby
9954a3c599 Add handling for closing the socket 2021-11-02 21:46:15 +00:00
Kristoffer Dalby
b91c115ade Remove "auth skip" for socket traffic 2021-10-31 19:57:42 +00:00
Kristoffer Dalby
53df9afc2a Fix step naming error 2021-10-31 19:54:38 +00:00
Kristoffer Dalby
8db45a4e75 Set up a separate, non-TLS, no-auth socket for gRPC 2021-10-31 19:52:34 +00:00
Kristoffer Dalby
1c9b1ea91a Add todo 2021-10-31 16:34:20 +00:00
Kristoffer Dalby
12f2a7cee0 Move context per cure's suggestion 2021-10-31 16:26:51 +00:00
Kristoffer Dalby
3f30bf1e33 Ensure we set up TLS for http 2021-10-31 16:19:38 +00:00
Kristoffer Dalby
f968b0abdf Merge branch 'main' into initial-api-cli-work 2021-10-31 12:17:47 +00:00
Kristoffer Dalby
16ccbf4cdb Merge pull request #207 from juanfont/update-contributors 2021-10-31 12:17:31 +00:00
Kristoffer Dalby
d803fe6123 Merge branch 'main' into update-contributors 2021-10-31 09:58:17 +00:00
Kristoffer Dalby
ca15a53fad Add timeout to integration test for execCommand to fail faster 2021-10-31 09:58:01 +00:00
Kristoffer Dalby
264e5964f6 Resolve merge conflict 2021-10-31 09:40:43 +00:00
github-actions[bot]
223c611820 docs(README): update contributors 2021-10-31 09:34:07 +00:00
Kristoffer Dalby
fbdfa55629 Merge pull request #126 from unreality/main
Initial work on OIDC (SSO) integration
2021-10-31 09:33:35 +00:00
unreality
73d22cdf54 Merge pull request #2 from kradalby/oidc-1
Fix conflict, prepare for merge
2021-10-31 07:04:04 +08:00
Kristoffer Dalby
bac81176b2 Remove lint from generated testcode 2021-10-30 15:39:05 +00:00
Kristoffer Dalby
cd2914dbc9 Make note about oidc being experimental 2021-10-30 15:35:58 +00:00
Kristoffer Dalby
cbf3f5d640 Resolve merge conflict 2021-10-30 15:33:01 +00:00
Kristoffer Dalby
018e42acad Merge branch 'main' into initial-api-cli-work 2021-10-30 15:31:34 +01:00
Kristoffer Dalby
482a31b66b Setup swagger and swagger UI properly 2021-10-30 14:29:53 +00:00
Kristoffer Dalby
2b340e8fa4 Rename protofile 2021-10-30 14:29:41 +00:00
Kristoffer Dalby
434fac52b7 Fix lint error 2021-10-30 14:29:03 +00:00
Kristoffer Dalby
6aacada852 Switch from gRPC localhost to socket
This commit changes the way the CLI and grpc-gateway communicate with the
gRPC backend to use a unix socket instead of localhost. Unauthenticated
access now goes over the socket, while the network interface will require
an API key (in the future).
2021-10-30 14:08:16 +00:00
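Dialing gRPC over a unix socket rather than TCP typically looks like the sketch below; the socket path and the use of insecure transport credentials here are assumptions for illustration.

package main

import (
	"context"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "/var/run/headscale.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Treat the "address" as a unix socket path instead of a TCP endpoint.
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", addr)
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// conn would then back the generated gRPC client used by the CLI.
}

The grpc-gateway side can reuse the same dialer, so the HTTP API keeps talking to the backend over the socket as well.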
Ward Vandewege
7301d7eb67 Merge pull request #200 from cure/cli-improvements
Cli improvements -- nodes subcommand
2021-10-29 17:36:54 -04:00
Ward Vandewege
b2d2d5653e Merge branch 'main' into cli-improvements 2021-10-29 17:20:05 -04:00
Kristoffer Dalby
72fd2a2780 Fix lint error 2021-10-29 17:36:11 +00:00
Kristoffer Dalby
9ef031f0f8 Port create, delete and list of namespace to grpc 2021-10-29 17:16:54 +00:00
Kristoffer Dalby
81b8610dff Add helper function to setup grpc client for cli 2021-10-29 17:15:52 +00:00
Kristoffer Dalby
eefd82a574 Move config loading out of the headscale app setup 2021-10-29 17:09:06 +00:00
Kristoffer Dalby
002b5c1dad Add grpc token auth struct 2021-10-29 17:08:21 +00:00
Kristoffer Dalby
68dab0fe7b Move localhost check to utils 2021-10-29 17:04:58 +00:00
Kristoffer Dalby
6d10be8fff Change order of print/nil check in integration test 2021-10-29 16:49:44 +00:00
Kristoffer Dalby
a23d82e33a Setup API and prepare for API keys
This commit sets up the API and gRPC endpoints and adds authentication
to them. Currently there is no actual authentication implemented but it
has been prepared for API keys.

In addition, there is an allowance put in place for gRPC traffic over
localhost. This has two purposes:

1. grpc-gateway, which is the base of the API, connects to the gRPC
   service over localhost.
2. We do not want to break the current "on server" behaviour, which allows
   users to use the CLI on the server without any fuss
2021-10-29 16:45:06 +00:00
Kristoffer Dalby
c7fa9b6e4a Setup create, delete and list namespace over grpc 2021-10-29 16:44:32 +00:00
Kristoffer Dalby
07bbeafa3b Fix lint errors, add initial namespace rpc 2021-10-29 16:43:10 +00:00
Kristoffer Dalby
06700c1dc4 Setup proto linting 2021-10-29 16:42:56 +00:00
Raal Goff
2d252da221 suggested documentation and comments 2021-10-29 21:35:07 +08:00
Kristoffer Dalby
2c071a8a2d Merge pull request #204 from kradalby/api-playground 2021-10-28 20:17:20 +01:00
Ward Vandewege
f9187bdfc4 Switch to named arguments for all nodes subcommands. Update docs
accordingly. Fix integration test failure.
2021-10-28 09:31:15 -04:00
Ward Vandewege
25c67cf2aa Update integration_test.go
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-10-28 08:40:30 -04:00
Ward Vandewege
b00a2729e3 Update cmd/headscale/cli/nodes.go
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-10-28 08:39:42 -04:00
Ward Vandewege
6c01b86e4c Update cmd/headscale/cli/nodes.go
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-10-28 08:39:27 -04:00
Ward Vandewege
d086cf4691 Move the namespace argument back to a flag for the share and unshare
commands.
2021-10-27 17:51:42 -04:00
Kristoffer Dalby
5054ed41ac Make ci lint fix if it can 2021-10-27 07:10:32 +00:00
Kristoffer Dalby
e91174e83f Add gen explicitly to skip list 2021-10-27 07:08:24 +00:00
Kristoffer Dalby
c9bd25d05c Remove golint from github actions 2021-10-27 07:07:44 +00:00
Kristoffer Dalby
f779372154 Add golangcilint config 2021-10-27 07:07:19 +00:00
Kristoffer Dalby
acd9ebbdf8 Let lint ignore grpcv1.go as it is a placeholder 2021-10-27 07:06:39 +00:00
Kristoffer Dalby
6369cea10e Remove golint, its deprecated
This commit removes `golint`, as it is deprecated:
https://github.com/golang/lint

and golangci-lint has overlapping features.
2021-10-27 06:58:16 +00:00
Kristoffer Dalby
2d92719095 Don't try to generate code on every make build 2021-10-27 06:48:30 +00:00
Kristoffer Dalby
d4265779ef Check in generated code
This does not have to be reviewed; here is some reasoning:

Go (and go mod) is designed for having code available, and we need to
check in the generated code to make sure it is "go gettable". If we don't,
we give ourselves a headache trying to set up all the CI, tests, etc. to
install and generate the code before it runs.

Because the code isn't there, the plugins needed to generate the code
fail to install...

I didn't find any good documentation for this, but there is this GitHub
comment:
https://github.com/golang/go/issues/34514#issuecomment-535406759
2021-10-27 06:44:04 +00:00
Kristoffer Dalby
8f2ef6a57d Prepare for checking in generated code 2021-10-27 06:40:39 +00:00
Kristoffer Dalby
6e764942a2 Add grpc step to dockerfile 2021-10-26 21:35:18 +00:00
Kristoffer Dalby
11d987549f Ignore generated files for docker 2021-10-26 21:34:51 +00:00
Kristoffer Dalby
b8c89cd63c Add readme and makefile entry about code generation 2021-10-26 20:53:10 +00:00
Kristoffer Dalby
2f045b20fb Refactor tls and wire up grpc, grpc gateway/api
This commit moves the TLS configuration into a separate function.

It also wires up the gRPC interface and prepares handing the API
endpoints over to the gRPC gateway.
2021-10-26 20:42:56 +00:00
Kristoffer Dalby
caa4d33cbd Add an initial grpcv1 service (implementing the proto generated service) 2021-10-26 20:42:20 +00:00
Kristoffer Dalby
a9da7c8fd9 Update go.mod 2021-10-26 20:41:35 +00:00
Kristoffer Dalby
b096a2e7e5 Create an initial gRPC service
This commit adds protobuf files and the tooling for generating APIs
and datatypes.
2021-10-26 20:37:37 +00:00
Ward Vandewege
f9ece0087d Make the cli help a little more explicit for the nodes subcommand. 2021-10-26 08:50:25 -04:00
Kristoffer Dalby
c76d3b53d9 Merge branch 'main' into cli-improvements 2021-10-25 22:11:02 +01:00
Kristoffer Dalby
e8277595f5 Merge pull request #202 from juanfont/kradalby-patch-1
Add note about main containing unreleased  changes
2021-10-25 20:14:36 +01:00
Kristoffer Dalby
4d3b638a3d Add note about main containing unreleased changes
#201
2021-10-25 19:38:11 +01:00
Ward Vandewege
1d9954d8e9 Fix integration test. 2021-10-24 20:11:47 -04:00
Ward Vandewege
dd7557850e cli changes for the nodes subcommand:
* when listing nodes, a namespace is now optional; when it is not
  provided, all nodes are shown
* when deleting and sharing a node, remove the `namespace` flag; it was
  superfluous and unused
* when unsharing a node, specify the namespace as an argument, not a
  flag, making the UX the same as for sharing.

Also refactor the share/unshare code to reuse the shared bits.
2021-10-24 17:50:28 -04:00
Ward Vandewege
c8e1afb14b When attempting to unshare a node from the primary namespace, return
errorMachineNotShared, not errorSameNamespace. Add test for same.
2021-10-24 17:50:21 -04:00
Kristoffer Dalby
6d162eeff9 Merge pull request #197 from kradalby/config-simplification 2021-10-24 22:27:18 +01:00
Kristoffer Dalby
746d4037da Fix config and tests 2021-10-24 21:30:51 +01:00
Kristoffer Dalby
1237e02f7c Merge branch 'config-simplification' of github.com:kradalby/headscale into config-simplification 2021-10-24 21:21:08 +01:00
Kristoffer Dalby
7da3d4ba50 Resolve merge conflict 2021-10-24 21:21:01 +01:00
Remus CURSARU
c22b93734e update Running.md
clarify setup steps; use a single directory to hold all configuration for docker container
2021-10-24 14:09:07 +02:00
Kristoffer Dalby
8853315dcc Update config-example.yaml
Co-authored-by: Juan Font <juanfontalonso@gmail.com>
2021-10-23 10:40:15 +01:00
Kristoffer Dalby
5aaffaaecb Merge pull request #196 from kradalby/derp-improvements
Add ability to fetch DERP from url and file
2021-10-23 09:20:27 +01:00
Kristoffer Dalby
389a8d47a3 Merge branch 'main' into derp-improvements 2021-10-22 23:58:48 +01:00
Kristoffer Dalby
a355769416 Update derp-example.yaml
Co-authored-by: Juan Font <juanfontalonso@gmail.com>
2021-10-22 23:58:27 +01:00
Juan Font
1a8c9216d6 Merge pull request #194 from juanfont/update-contributors
docs(README): update contributors
2021-10-23 00:11:22 +02:00
Juan Font
81316ef644 Merge branch 'main' into update-contributors 2021-10-22 21:28:27 +02:00
Kristoffer Dalby
4d4d0de356 Start adding comments to config 2021-10-22 18:27:11 +01:00
Kristoffer Dalby
b85adbc40a Remove the need for multiple config files
This commit removes the almost 100% redundant tests (two fields were
checked differently) and makes a single example configuration for users.
2021-10-22 18:14:29 +01:00
Kristoffer Dalby
aefbd66317 Remove derpmap volume from integration tests 2021-10-22 16:57:51 +00:00
Kristoffer Dalby
d875cca69d move integration to yaml, add new derp configuration 2021-10-22 16:57:01 +00:00
Kristoffer Dalby
0e902fe949 Add certificates to docker image so we can get derpmap from tailscale 2021-10-22 16:56:23 +00:00
Kristoffer Dalby
582eb57a09 Use the new derp map 2021-10-22 16:56:00 +00:00
Kristoffer Dalby
177f1eca06 Add helper functions for building derp maps from urls and file 2021-10-22 16:55:35 +00:00
Kristoffer Dalby
57f46ded83 Split derp into its own config struct 2021-10-22 16:55:14 +00:00
Kristoffer Dalby
aa245c2d06 Remove derp.yaml, add selfhosted example
This PR will promote fetching the derpmap directly from tailscale, so we
will remove our example, as it might easily get outdated.

Add a derp-example that shows how a user can also add their own derp
server.
2021-10-22 16:52:39 +00:00
Kristoffer Dalby
e836db1ead Add config.yaml to gitignore 2021-10-22 16:51:19 +00:00
github-actions[bot]
5420347d24 docs(README): update contributors 2021-10-22 06:58:20 +00:00
Kristoffer Dalby
9e2637d65f Merge pull request #192 from derelm/patch-2 2021-10-22 07:57:48 +01:00
Juan Font
c6046597ed Merge branch 'main' into update-contributors 2021-10-22 00:01:18 +02:00
Juan Font
a46c8fe914 Merge branch 'main' into patch-2 2021-10-21 23:56:10 +02:00
Juan Font
f822816cdb Merge pull request #193 from juanfont/fix-again-contributors
Another fix for the contributors section in README
2021-10-21 23:55:41 +02:00
Juan Font Alonso
f3bf9b4bbb Contributors again fixed 2021-10-21 23:54:20 +02:00
Juan Font
9f02899261 Merge branch 'main' into patch-2 2021-10-21 23:41:52 +02:00
github-actions[bot]
75f3e1fb03 docs(README): update contributors 2021-10-21 21:38:02 +00:00
Juan Font
9fbfa7c1f5 Merge pull request #191 from juanfont/fix-contributors
Fix contributors
2021-10-21 23:32:43 +02:00
Juan Font Alonso
d5aef85bf2 Fix contributors 2021-10-21 23:21:38 +02:00
derelm
88b32e4b18 fix typo 2021-10-21 23:07:35 +02:00
Juan Font Alonso
e425e3ffd3 Fix contributors 2021-10-21 22:53:30 +02:00
Juan Font
355483fd86 Merge pull request #184 from juanfont/doc-reorg-v1
Move documentation away from README and use YAML everywhere
2021-10-21 22:38:59 +02:00
Juan Font Alonso
672d8474b9 Prettier on the yamls 2021-10-21 21:18:50 +02:00
Juan Font Alonso
73e4d38670 Merge branch 'doc-reorg-v1' of https://github.com/juanfont/headscale into doc-reorg-v1 2021-10-21 21:01:57 +02:00
Juan Font Alonso
561c15bbe8 Prettier 2021-10-21 21:01:52 +02:00
Juan Font Alonso
b93aa723cb Run contributors on merge to master 2021-10-21 20:58:30 +02:00
Juan Font Alonso
636943c715 Improved docker cmd 2021-10-21 20:57:18 +02:00
Juan Font
0a6a67da85 Merge branch 'main' into doc-reorg-v1 2021-10-21 20:55:48 +02:00
Juan Font Alonso
e9ffd366dd Improvements here and there 2021-10-21 20:54:41 +02:00
Juan Font Alonso
4be0b3f556 Mention disable check updates in the doc 2021-10-21 20:54:29 +02:00
Juan Font Alonso
a0bfad6d6e Headscale is not capitalized 2021-10-21 20:48:29 +02:00
Juan Font Alonso
bb1f17f5af Added glossary 2021-10-21 20:46:19 +02:00
Juan Font
95bc2ee241 Merge pull request #190 from juanfont/fix-arm64
Fix arm64 (now for good)
2021-10-21 20:40:17 +02:00
Juan Font Alonso
16a90e799c Contributors should be working 2021-10-21 20:36:26 +02:00
Juan Font Alonso
4c2f84b211 Add contributors Action 2021-10-21 20:33:58 +02:00
Juan Font Alonso
b799635fbb Merge branch 'fix-arm64' of https://github.com/juanfont/headscale into fix-arm64 2021-10-21 19:56:51 +02:00
Juan Font Alonso
bc145952d4 Finally fix arm64 build 2021-10-21 19:56:36 +02:00
Kristoffer Dalby
2c5701917d Merge branch 'main' into doc-reorg-v1 2021-10-21 18:46:29 +01:00
Juan Font
ed7b840fea Merge pull request #188 from juanfont/fix-arm64
Fixed ARM64 compiler name
2021-10-21 19:10:14 +02:00
Kristoffer Dalby
23372e29cd Merge branch 'main' into fix-arm64 2021-10-21 17:03:46 +01:00
Juan Font Alonso
fb569b0483 Fixed ARM64 compiler name 2021-10-21 17:47:10 +02:00
Juan Font Alonso
8f5a1dce3e Merge branch 'doc-reorg-v1' of https://github.com/juanfont/headscale into doc-reorg-v1 2021-10-20 23:34:27 +02:00
Juan Font Alonso
6b0f5da113 Separate config examples for sqlite and postgres for the time being 2021-10-20 23:27:59 +02:00
Juan Font
d1ebcb59f1 Merge branch 'main' into doc-reorg-v1 2021-10-20 00:19:21 +02:00
Juan Font Alonso
31344128a0 Switch json for yaml in README 2021-10-20 00:17:47 +02:00
Juan Font Alonso
86ecc2a234 Switch to YAML config 2021-10-20 00:17:08 +02:00
Juan Font Alonso
d1e8ac7ba5 Moved TLS config to another file 2021-10-20 00:07:05 +02:00
Juan Font Alonso
efe208fef5 Merge branch 'doc-reorg-v1' of https://github.com/juanfont/headscale into doc-reorg-v1 2021-10-19 23:54:32 +02:00
Juan Font Alonso
7b40e99aec Added notes on SQLite 2021-10-19 23:45:20 +02:00
Juan Font
06706aab9a Update docs/Running.md
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-10-19 23:41:08 +02:00
Juan Font
0318af5a33 Apply suggestions from code review
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-10-19 23:22:56 +02:00
Juan Font Alonso
995dcfc6ae Reference doc/ 2021-10-19 21:02:45 +02:00
Juan Font Alonso
2236cc8bf7 Improve wording here and there 2021-10-19 21:01:36 +02:00
Juan Font Alonso
7bb354117b Move README documentation to doc/ 2021-10-19 20:59:33 +02:00
Kristoffer Dalby
dbe193ad17 Fix up leftovers from kradalby PR 2021-10-19 18:25:59 +01:00
Kristoffer Dalby
e7424222db Merge branch 'main' into main 2021-10-19 17:47:09 +01:00
Kristoffer Dalby
da14750396 Merge branch 'main' into main 2021-10-19 15:26:18 +01:00
unreality
8fe72dcb74 Merge pull request #1 from kradalby/namespace-mappings
Implement namespace mappings
2021-10-19 09:33:54 +08:00
Kristoffer Dalby
677bd9b657 Implement namespace matching 2021-10-18 19:27:52 +00:00
Kristoffer Dalby
a347d276bd Fix broken machine test 2021-10-18 19:26:43 +00:00
Kristoffer Dalby
710616f118 Merge branch 'main' into main 2021-10-17 13:26:37 +01:00
Raal Goff
d0cd5af419 fix incorrect merge 2021-10-16 22:34:11 +08:00
unreality
afbfc1d370 Merge branch 'main' into main 2021-10-16 22:31:37 +08:00
Raal Goff
0603e29c46 add login details to RegisterResponse so GUI clients show login display name 2021-10-15 23:09:55 +08:00
Raal Goff
8843188b84 add notes to README.md about OIDC 2021-10-10 22:52:30 +08:00
Raal Goff
74e6c1479e updates from code review 2021-10-10 17:22:42 +08:00
Kristoffer Dalby
2997f4d251 Merge branch 'main' into main 2021-10-08 22:21:41 +01:00
Raal Goff
e407d423d4 updates from code review 2021-10-08 17:43:52 +08:00
unreality
35795c79c3 Handle trailing slash on uris
Co-authored-by: Kristoffer Dalby <kradalby@kradalby.no>
2021-10-08 15:26:31 +08:00
Raal Goff
c487591437 use go-oidc instead of verifying and extracting tokens ourselves, rename oidc_endpoint to oidc_issuer to be more in line with the spec 2021-10-06 17:19:15 +08:00
Kristoffer Dalby
0393ab524c Merge branch 'main' into main 2021-09-28 11:20:31 +01:00
Kristoffer Dalby
cc054d71fe Merge branch 'main' into main 2021-09-26 21:35:26 +01:00
Kristoffer Dalby
8248b71153 Merge branch 'main' into main 2021-09-26 21:15:00 +01:00
Raal Goff
b22a9781a2 fix linter errors, error out if jwt does not contain a key id 2021-09-26 21:12:36 +08:00
Raal Goff
e7a2501fe8 initial work on OIDC (SSO) integration 2021-09-26 16:53:05 +08:00
161 changed files with 24296 additions and 4950 deletions


@@ -3,6 +3,7 @@
// development
integration_test.go
integration_test/
!integration_test/etc_embedded_derp/tls/server.crt
Dockerfile*
docker-compose*
@@ -15,3 +16,4 @@ README.md
LICENSE
.vscode
*.sock

.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,10 @@
* @juanfont @kradalby
*.md @ohdearaugustin
*.yml @ohdearaugustin
*.yaml @ohdearaugustin
Dockerfile* @ohdearaugustin
.goreleaser.yaml @ohdearaugustin
/docs/ @ohdearaugustin
/.github/workflows/ @ohdearaugustin
/.github/renovate.json @ohdearaugustin

.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,2 @@
ko_fi: kradalby
github: [kradalby]

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,28 @@
---
name: "Bug report"
about: "Create a bug report to help us improve"
title: ""
labels: ["bug"]
assignees: ""
---
**Bug description**
<!-- A clear and concise description of what the bug is. Describe the expected behavior
and how it is currently different. If you are unsure if it is a bug, consider discussing
it on our Discord server first. -->
**To Reproduce**
<!-- Steps to reproduce the behavior. -->
**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->

.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,11 @@
# Issues must have some content
blank_issues_enabled: false
# Contact links
contact_links:
- name: "headscale usage documentation"
url: "https://github.com/juanfont/headscale/blob/main/docs"
about: "Find documentation about how to configure and run headscale."
- name: "headscale Discord community"
url: "https://discord.com/invite/XcQxk2VHjx"
about: "Please ask and answer questions about usage of headscale here."


@@ -0,0 +1,15 @@
---
name: "Feature request"
about: "Suggest an idea for headscale"
title: ""
labels: ["enhancement"]
assignees: ""
---
**Feature request**
<!-- A clear and precise description of what new or changed feature you want. -->
<!-- Please include the reason, why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->

.github/ISSUE_TEMPLATE/other_issue.md vendored Normal file

@@ -0,0 +1,28 @@
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---
<!-- If you have a question, please consider using our Discord for asking questions -->
**Issue description**
<!-- Please add your issue description. -->
**To Reproduce**
<!-- Steps to reproduce the behavior. -->
**Context info**
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->

.github/pull_request_template.md vendored Normal file

@@ -0,0 +1,10 @@
<!-- Please tick if the following things apply. You… -->
- [ ] read the [CONTRIBUTING guidelines](README.md#user-content-contributing)
- [ ] raised a GitHub issue or discussed it on the projects chat beforehand
- [ ] added unit tests
- [ ] added integration tests
- [ ] updated documentation if needed
- [ ] updated CHANGELOG.md
<!-- If applicable, please reference the issue using `Fixes #XXX` and add tests to cover your new code. -->

.github/renovate.json vendored Normal file

@@ -0,0 +1,38 @@
{
  "baseBranches": ["main"],
  "username": "renovate-release",
  "gitAuthor": "Renovate Bot <bot@renovateapp.com>",
  "branchPrefix": "renovateaction/",
  "onboarding": false,
  "extends": ["config:base", ":rebaseStalePrs"],
  "ignorePresets": [":prHourlyLimit2"],
  "enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
  "includeForks": true,
  "repositories": ["juanfont/headscale"],
  "platform": "github",
  "packageRules": [
    {
      "matchDatasources": ["go"],
      "groupName": "Go modules",
      "groupSlug": "gomod",
      "separateMajorMinor": false
    },
    {
      "matchDatasources": ["docker"],
      "groupName": "Dockerfiles",
      "groupSlug": "dockerfiles"
    }
  ],
  "regexManagers": [
    {
      "fileMatch": [
        ".github/workflows/.*.yml$"
      ],
      "matchStrings": [
        "\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
      ],
      "datasourceTemplate": "golang-version",
      "depNameTemplate": "actions/go-version"
    }
  ]
}


@@ -14,23 +14,38 @@ jobs:
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
files: |
go.*
**/*.go
integration_test/
config-example.yaml
- name: Setup Go
if: steps.changed-files.outputs.any_changed == 'true'
uses: actions/setup-go@v2
with:
go-version: "1.16.3"
go-version: "1.18.0"
- name: Install dependencies
if: steps.changed-files.outputs.any_changed == 'true'
run: |
go version
go install golang.org/x/lint/golint@latest
sudo apt update
sudo apt install -y make
- name: Run lint
- name: Run build
if: steps.changed-files.outputs.any_changed == 'true'
run: make build
- uses: actions/upload-artifact@v2
if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux
path: headscale
path: headscale

35
.github/workflows/contributors.yml vendored Normal file
View File

@@ -0,0 +1,35 @@
name: Contributors
on:
push:
branches:
- main
workflow_dispatch:
jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Delete upstream contributor branch
# Allow continue on failure to account for when the
# upstream branch is deleted or does not exist.
continue-on-error: true
run: git push origin --delete update-contributors
- name: Create up-to-date contributors branch
run: git checkout -B update-contributors
- name: Push empty contributors branch
run: git push origin update-contributors
- name: Switch back to main
run: git checkout main
- uses: BobAnkh/add-contributors@v0.2.2
with:
CONTRIBUTOR: "## Contributors"
COLUMN_PER_ROW: "6"
ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
IMG_WIDTH: "100"
FONT_SIZE: "14"
PATH: "/README.md"
COMMIT_MESSAGE: "docs(README): update contributors"
AVATAR_SHAPE: "round"
BRANCH: "update-contributors"
PULL_REQUEST: "main"

View File

@@ -1,39 +1,74 @@
---
name: CI
on: [push, pull_request]
jobs:
# The "build" workflow
lint:
# The type of runner that the job will run on
golangci-lint:
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
# Install and run golangci-lint as a separate step, it's much faster this
# way because this action has caching. It'll get run again in `make lint`
# below, but it's still much faster in the end than installing
# golangci-lint manually in the `Run lint` step.
- uses: golangci/golangci-lint-action@v2
with:
args: --timeout 5m
fetch-depth: 2
# Setup Go
- name: Setup Go
uses: actions/setup-go@v2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
go-version: "1.16.3" # The Go version to download (if necessary) and use.
files: |
go.*
**/*.go
integration_test/
config-example.yaml
# Install all the dependencies
- name: Install dependencies
run: |
go version
go install golang.org/x/lint/golint@latest
sudo apt update
sudo apt install -y make
- name: golangci-lint
if: steps.changed-files.outputs.any_changed == 'true'
uses: golangci/golangci-lint-action@v2
with:
version: latest
- name: Run lint
run: make lint
# Only block PRs on new problems.
# If this is not enabled, we will end up having PRs
# blocked because new linters have appeared and other
# parts of the code are affected.
only-new-issues: true
prettier-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
files: |
**/*.md
**/*.yml
**/*.yaml
**/*.ts
**/*.js
**/*.sass
**/*.css
**/*.scss
**/*.html
- name: Prettify code
if: steps.changed-files.outputs.any_changed == 'true'
uses: creyD/prettier_action@v4.0
with:
prettier_options: >-
--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
only_changed: false
dry: true
proto-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: bufbuild/buf-setup-action@v0.7.0
- uses: bufbuild/buf-lint-action@v1
with:
input: "proto"

View File

@@ -4,25 +4,27 @@ name: release
on:
push:
tags:
- "*" # triggers only if push new tag version
- "*" # triggers only if push new tag version
workflow_dispatch:
jobs:
goreleaser:
runs-on: ubuntu-18.04 # due to CGO we need to use an older version
runs-on: ubuntu-18.04 # due to CGO we need to use an older version
steps:
-
name: Checkout
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
-
name: Set up Go
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.16
-
name: Run GoReleaser
go-version: 1.18.0
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y gcc-aarch64-linux-gnu
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v2
with:
distribution: goreleaser
@@ -34,13 +36,24 @@ jobs:
docker-release:
runs-on: ubuntu-latest
steps:
-
name: Checkout
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
-
name: Docker meta
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
@@ -52,22 +65,20 @@ jobs:
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=raw,value=latest
type=sha
-
name: Login to DockerHub
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
-
name: Login to GHCR
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
-
name: Build and push
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
@@ -75,3 +86,138 @@ jobs:
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
latest=false
tags: |
type=semver,pattern={{version}}-debug
type=semver,pattern={{major}}.{{minor}}-debug
type=semver,pattern={{major}}-debug
type=raw,value=latest-debug
type=sha,suffix=-debug
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug
docker-alpine-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-alpine
key: ${{ runner.os }}-buildx-alpine-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-alpine-
- name: Docker meta
id: meta-alpine
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
latest=false
tags: |
type=semver,pattern={{version}}-alpine
type=semver,pattern={{major}}.{{minor}}-alpine
type=semver,pattern={{major}}-alpine
type=raw,value=latest-alpine
type=sha,suffix=-alpine
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.alpine
tags: ${{ steps.meta-alpine.outputs.tags }}
labels: ${{ steps.meta-alpine.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-alpine
cache-to: type=local,dest=/tmp/.buildx-cache-alpine-new
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-alpine
mv /tmp/.buildx-cache-alpine-new /tmp/.buildx-cache-alpine

27
.github/workflows/renovatebot.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
---
name: Renovate
on:
schedule:
- cron: "* * 5,20 * *" # Every 5th and 20th of the month
workflow_dispatch:
jobs:
renovate:
runs-on: ubuntu-latest
steps:
- name: Get token
id: get_token
uses: machine-learning-apps/actions-app-token@master
with:
APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}
- name: Checkout
uses: actions/checkout@v2.0.0
- name: Self-hosted Renovate
uses: renovatebot/github-action@v31.81.3
with:
configurationFile: .github/renovate.json
token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
# env:
# LOG_LEVEL: "debug"

View File

@@ -3,21 +3,30 @@ name: CI
on: [pull_request]
jobs:
# The "build" workflow
integration-test:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
files: |
go.*
**/*.go
integration_test/
config-example.yaml
# Setup Go
- name: Setup Go
if: steps.changed-files.outputs.any_changed == 'true'
uses: actions/setup-go@v2
with:
go-version: "1.16.3"
go-version: "1.18.0"
- name: Run Integration tests
run: go test -tags integration -timeout 30m
if: steps.changed-files.outputs.any_changed == 'true'
run: make test_integration

View File

@@ -3,31 +3,41 @@ name: CI
on: [push, pull_request]
jobs:
# The "build" workflow
test:
# The type of runner that the job will run on
runs-on: ubuntu-latest
# Steps represent a sequence of tasks that will be executed as part of the job
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
with:
files: |
go.*
**/*.go
integration_test/
config-example.yaml
# Setup Go
- name: Setup Go
if: steps.changed-files.outputs.any_changed == 'true'
uses: actions/setup-go@v2
with:
go-version: "1.16.3" # The Go version to download (if necessary) and use.
go-version: "1.18.0"
# Install all the dependencies
- name: Install dependencies
if: steps.changed-files.outputs.any_changed == 'true'
run: |
go version
sudo apt update
sudo apt install -y make
- name: Run tests
if: steps.changed-files.outputs.any_changed == 'true'
run: make test
- name: Run build
if: steps.changed-files.outputs.any_changed == 'true'
run: make

3
.gitignore vendored
View File

@@ -16,6 +16,9 @@
/headscale
config.json
config.yaml
derp.yaml
*.hujson
*.key
/db.sqlite
*.sqlite3

58
.golangci.yaml Normal file
View File

@@ -0,0 +1,58 @@
---
run:
timeout: 10m
issues:
skip-dirs:
- gen
linters:
enable-all: true
disable:
- exhaustivestruct
- revive
- lll
- interfacer
- scopelint
- maligned
- golint
- gofmt
- gochecknoglobals
- gochecknoinits
- gocognit
- funlen
- exhaustivestruct
- tagliatelle
- godox
- ireturn
# We should strive to enable these:
- wrapcheck
- dupl
- makezero
- maintidx
# We might want to enable this, but it might be a lot of work
- cyclop
- nestif
- wsl # might be incompatible with gofumpt
- testpackage
- paralleltest
linters-settings:
varnamelen:
ignore-type-assert-ok: true
ignore-map-index-ok: true
ignore-names:
- err
- db
- id
- ip
- ok
- c
- tt
gocritic:
disabled-checks:
- appendAssign
# TODO(kradalby): Remove this
- ifElseChain

View File

@@ -1,21 +1,21 @@
# This is an example .goreleaser.yml file with some sane defaults.
# Make sure to check the documentation at http://goreleaser.com
---
before:
hooks:
- go mod tidy
- go mod tidy -compat=1.17
release:
prerelease: auto
builds:
- id: darwin-amd64
main: ./cmd/headscale/headscale.go
mod_timestamp: '{{ .CommitTimestamp }}'
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- darwin
goarch:
- amd64
env:
- PKG_CONFIG_SYSROOT_DIR=/sysroot/macos/amd64
- PKG_CONFIG_PATH=/sysroot/macos/amd64/usr/local/lib/pkgconfig
- CC=o64-clang
- CXX=o64-clang++
flags:
- -mod=readonly
ldflags:
@@ -23,48 +23,41 @@ builds:
- id: linux-armhf
main: ./cmd/headscale/headscale.go
mod_timestamp: '{{ .CommitTimestamp }}'
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- arm
goarm:
- 7
env:
- CC=arm-linux-gnueabihf-gcc
- CXX=arm-linux-gnueabihf-g++
- CGO_FLAGS=--sysroot=/sysroot/linux/armhf
- CGO_LDFLAGS=--sysroot=/sysroot/linux/armhf
- PKG_CONFIG_SYSROOT_DIR=/sysroot/linux/armhf
- PKG_CONFIG_PATH=/sysroot/linux/armhf/opt/vc/lib/pkgconfig:/sysroot/linux/armhf/usr/lib/arm-linux-gnueabihf/pkgconfig:/sysroot/linux/armhf/usr/lib/pkgconfig:/sysroot/linux/armhf/usr/local/lib/pkgconfig
- "7"
flags:
- -mod=readonly
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
- id: linux-amd64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=1
- CGO_ENABLED=0
goos:
- linux
goarch:
- amd64
main: ./cmd/headscale/headscale.go
mod_timestamp: '{{ .CommitTimestamp }}'
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
- id: linux-arm64
mod_timestamp: "{{ .CommitTimestamp }}"
env:
- CGO_ENABLED=0
goos:
- linux
goarch:
- arm64
env:
- CGO_ENABLED=1
- CC=aarch64-linux-gnu-gcc-9
main: ./cmd/headscale/headscale.go
mod_timestamp: '{{ .CommitTimestamp }}'
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
@@ -79,12 +72,12 @@ archives:
format: binary
checksum:
name_template: 'checksums.txt'
name_template: "checksums.txt"
snapshot:
name_template: "{{ .Tag }}-next"
changelog:
sort: asc
filters:
exclude:
- '^docs:'
- '^test:'
- "^docs:"
- "^test:"

136
CHANGELOG.md Normal file
View File

@@ -0,0 +1,136 @@
# CHANGELOG
## 0.16.0 (2022-xx-xx)
## 0.15.0 (2022-03-20)
**Note:** Take a backup of your database before upgrading.
### BREAKING
- Boundaries between Namespaces have been removed and all nodes can communicate by default [#357](https://github.com/juanfont/headscale/pull/357)
- To limit access between nodes, use [ACLs](./docs/acls.md).
- `/metrics` is now a configurable host:port endpoint: [#344](https://github.com/juanfont/headscale/pull/344). You must update your `config.yaml` file to include:
```yaml
metrics_listen_addr: 127.0.0.1:9090
```
### Features
- Add support for writing ACL files with YAML [#359](https://github.com/juanfont/headscale/pull/359)
- Users can now use emails in ACL groups [#372](https://github.com/juanfont/headscale/issues/372)
- Add shorthand aliases for commands and subcommands [#376](https://github.com/juanfont/headscale/pull/376)
- Add `/windows` endpoint for Windows configuration instructions + registry file download [#392](https://github.com/juanfont/headscale/pull/392)
- Added embedded DERP (and STUN) server into Headscale [#388](https://github.com/juanfont/headscale/pull/388)
### Changes
- Fix a bug where the same IP could be assigned to multiple hosts if they joined in quick succession [#346](https://github.com/juanfont/headscale/pull/346)
- Simplify the code behind registration of machines [#366](https://github.com/juanfont/headscale/pull/366)
- Nodes are now only written to the database if they are registered successfully
- Fix a limitation in the ACLs that prevented users from writing rules with `*` as source [#374](https://github.com/juanfont/headscale/issues/374)
- Reduce the overhead of marshal/unmarshal for Hostinfo, routes and endpoints by using specific types in Machine [#371](https://github.com/juanfont/headscale/pull/371)
- Apply a normalization function to hostnames (FQDN) when hosts register and when information is retrieved [#363](https://github.com/juanfont/headscale/issues/363)
- Fix a bug that prevented the use of `tailscale logout` with OIDC [#508](https://github.com/juanfont/headscale/issues/508)
- Added Tailscale repo HEAD and unstable releases channel to the integration tests targets [#513](https://github.com/juanfont/headscale/pull/513)
## 0.14.0 (2022-02-24)
### UPCOMING BREAKING
From the **next** version (`0.15.0`), all machines will be able to communicate regardless of
whether they are in the same namespace. This means that the behaviour currently limited to ACLs
will become the default. From version `0.15.0`, all limitation of communication must be done
with ACLs.
This is a part of aligning `headscale`'s behaviour with Tailscale's upstream behaviour.
### BREAKING
- ACLs have been rewritten to align with the behaviour the Tailscale Control Panel provides. **NOTE:** This is only active if you use ACLs
- Namespaces are now treated as Users
- All machines can communicate with all machines by default
- Tags should now work correctly and adding a host to Headscale should now reload the rules.
- The documentation has a [fictional example](docs/acls.md) that should cover some use cases of the ACL features
### Features
- Add support for configurable mTLS [docs](docs/tls.md#configuring-mutual-tls-authentication-mtls) [#297](https://github.com/juanfont/headscale/pull/297)
### Changes
- Remove dependency on CGO (switch from CGO SQLite to pure Go) [#346](https://github.com/juanfont/headscale/pull/346)
## 0.13.0 (2022-02-18)
### Features
- Add IPv6 support to the prefix assigned to namespaces
- Add API Key support
- Enable remote control of `headscale` via CLI [docs](docs/remote-cli.md)
- Enable HTTP API (beta, subject to change)
- OpenID Connect users will be mapped per namespace
- Each user will get its own namespace, created if it does not exist
- `oidc.domain_map` option has been removed
- `strip_email_domain` option has been added (see [config-example.yaml](./config-example.yaml))
### Changes
- `ip_prefix` is now superseded by `ip_prefixes` in the configuration [#208](https://github.com/juanfont/headscale/pull/208)
- Upgrade `tailscale` (1.20.4) and other dependencies to latest [#314](https://github.com/juanfont/headscale/pull/314)
- fix swapped machine<->namespace labels in `/metrics` [#312](https://github.com/juanfont/headscale/pull/312)
- remove key-value based update mechanism for namespace changes [#316](https://github.com/juanfont/headscale/pull/316)
## 0.12.4 (2022-01-29)
### Changes
- Make gRPC Unix Socket permissions configurable [#292](https://github.com/juanfont/headscale/pull/292)
- Trim whitespace before reading Private Key from file [#289](https://github.com/juanfont/headscale/pull/289)
- Add new command to generate a private key for `headscale` [#290](https://github.com/juanfont/headscale/pull/290)
- Fixed issue where hosts deleted from control server may be written back to the database, as long as they are connected to the control server [#278](https://github.com/juanfont/headscale/pull/278)
## 0.12.3 (2022-01-13)
### Changes
- Added Alpine container [#270](https://github.com/juanfont/headscale/pull/270)
- Minor updates in dependencies [#271](https://github.com/juanfont/headscale/pull/271)
## 0.12.2 (2022-01-11)
Happy New Year!
### Changes
- Fix Docker release [#258](https://github.com/juanfont/headscale/pull/258)
- Rewrite main docs [#262](https://github.com/juanfont/headscale/pull/262)
- Improve Docker docs [#263](https://github.com/juanfont/headscale/pull/263)
## 0.12.1 (2021-12-24)
(We are skipping 0.12.0 to correct a mishap done weeks ago with the version tagging)
### BREAKING
- Upgrade to Tailscale 1.18 [#229](https://github.com/juanfont/headscale/pull/229)
- This change requires a new format for the private key; private keys are now generated automatically:
1. Delete your current key
2. Restart `headscale`, a new key will be generated.
3. Restart all Tailscale clients to fetch the new key
### Changes
- Unify configuration example [#197](https://github.com/juanfont/headscale/pull/197)
- Add stricter linting and formatting [#223](https://github.com/juanfont/headscale/pull/223)
### Features
- Add gRPC and HTTP API (HTTP API is currently disabled) [#204](https://github.com/juanfont/headscale/pull/204)
- Use gRPC between the CLI and the server [#206](https://github.com/juanfont/headscale/pull/206), [#212](https://github.com/juanfont/headscale/pull/212)
- Beta OpenID Connect support [#126](https://github.com/juanfont/headscale/pull/126), [#227](https://github.com/juanfont/headscale/pull/227)
## 0.11.0 (2021-10-25)
### BREAKING
- Make headscale fetch DERP map from URL and file [#196](https://github.com/juanfont/headscale/pull/196)

View File

@@ -1,18 +1,21 @@
FROM golang:1.17.1-bullseye AS build
# Builder image
FROM docker.io/golang:1.18.0-bullseye AS build
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
WORKDIR /go/src/headscale
RUN go mod download
COPY . /go/src/headscale
COPY . .
RUN go install -a -ldflags="-extldflags=-static" -tags netgo,sqlite_omit_load_extension ./cmd/headscale
RUN CGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale
FROM ubuntu:20.04
# Production image
FROM gcr.io/distroless/base-debian11
COPY --from=build /go/bin/headscale /usr/local/bin/headscale
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp

23
Dockerfile.alpine Normal file
View File

@@ -0,0 +1,23 @@
# Builder image
FROM docker.io/golang:1.18.0-alpine AS build
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN apk add gcc musl-dev
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale
# Production image
FROM docker.io/alpine:latest
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
EXPOSE 8080/tcp
CMD ["headscale"]

23
Dockerfile.debug Normal file
View File

@@ -0,0 +1,23 @@
# Builder image
FROM docker.io/golang:1.18.0-bullseye AS build
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale
RUN test -e /go/bin/headscale
# Debug image
FROM gcr.io/distroless/base-debian11:debug
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp
CMD ["headscale"]

View File

@@ -1,11 +1,17 @@
FROM ubuntu:latest
ARG TAILSCALE_VERSION
ARG TAILSCALE_VERSION=*
ARG TAILSCALE_CHANNEL=stable
RUN apt-get update \
&& apt-get install -y gnupg curl \
&& curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.gpg | apt-key add - \
&& curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
&& curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
&& curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
&& apt-get update \
&& apt-get install -y tailscale=${TAILSCALE_VERSION} \
&& apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
&& rm -rf /var/lib/apt/lists/*
ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt
RUN update-ca-certificates

21
Dockerfile.tailscale-HEAD Normal file
View File

@@ -0,0 +1,21 @@
FROM golang:latest
RUN apt-get update \
&& apt-get install -y ca-certificates dnsutils git iptables \
&& rm -rf /var/lib/apt/lists/*
RUN git clone https://github.com/tailscale/tailscale.git
WORKDIR tailscale
RUN sh build_dist.sh tailscale.com/cmd/tailscale
RUN sh build_dist.sh tailscale.com/cmd/tailscaled
RUN cp tailscale /usr/local/bin/
RUN cp tailscaled /usr/local/bin/
ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt
RUN update-ca-certificates

View File

@@ -1,8 +1,16 @@
# Calculate version
version = $(shell ./scripts/version-at-commit.sh)
rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
# GO_SOURCES = $(wildcard *.go)
# PROTO_SOURCES = $(wildcard **/*.proto)
GO_SOURCES = $(call rwildcard,,*.go)
PROTO_SOURCES = $(call rwildcard,,*.proto)
build:
go build -ldflags "-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$(version)" cmd/headscale/headscale.go
CGO_ENABLED=0 go build -ldflags "-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$(version)" cmd/headscale/headscale.go
dev: lint test build
@@ -10,7 +18,13 @@ test:
@go test -coverprofile=coverage.out ./...
test_integration:
go test -tags integration -timeout 30m ./...
go test -failfast -tags integration -timeout 30m -count=1 ./...
test_integration_cli:
go test -tags integration -v integration_cli_test.go integration_common_test.go
test_integration_derp:
go test -tags integration -v integration_embedded_derp_test.go integration_common_test.go
coverprofile_func:
go tool cover -func=coverage.out
@@ -19,9 +33,26 @@ coverprofile_html:
go tool cover -html=coverage.out
lint:
golint
golangci-lint run --timeout 5m
golangci-lint run --fix --timeout 10m
fmt:
prettier --write '**/**.{ts,js,md,yaml,yml,sass,css,scss,html}'
golines --max-len=88 --base-formatter=gofumpt -w $(GO_SOURCES)
clang-format -style="{BasedOnStyle: Google, IndentWidth: 4, AlignConsecutiveDeclarations: true, AlignConsecutiveAssignments: true, ColumnLimit: 0}" -i $(PROTO_SOURCES)
proto-lint:
cd proto/ && buf lint
compress: build
upx --brute headscale
generate:
rm -rf gen
buf generate proto
install-protobuf-plugins:
go install \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
google.golang.org/protobuf/cmd/protoc-gen-go \
google.golang.org/grpc/cmd/protoc-gen-go-grpc

736
README.md
View File

@@ -2,35 +2,68 @@
![ci](https://github.com/juanfont/headscale/actions/workflows/test.yml/badge.svg)
An open source, self-hosted implementation of the Tailscale coordination server.
An open source, self-hosted implementation of the Tailscale control server.
Join our [Discord](https://discord.gg/XcQxk2VHjx) server for a chat.
## Overview
**Note:** Always select the same GitHub tag as the released version you use
to ensure you have the correct example configuration and documentation.
The `main` branch might contain unreleased changes.
Tailscale is [a modern VPN](https://tailscale.com/) built on top of [Wireguard](https://www.wireguard.com/). It [works like an overlay network](https://tailscale.com/blog/how-tailscale-works/) between the computers of your networks - using all kinds of [NAT traversal sorcery](https://tailscale.com/blog/how-nat-traversal-works/).
## What is Tailscale
Everything in Tailscale is Open Source, except the GUI clients for proprietary OS (Windows and macOS/iOS), and the 'coordination/control server'.
Tailscale is [a modern VPN](https://tailscale.com/) built on top of
[Wireguard](https://www.wireguard.com/).
It [works like an overlay network](https://tailscale.com/blog/how-tailscale-works/)
between the computers of your networks - using
[NAT traversal](https://tailscale.com/blog/how-nat-traversal-works/).
The control server works as an exchange point of Wireguard public keys for the nodes in the Tailscale network. It also assigns the IP addresses of the clients, creates the boundaries between each user, enables sharing machines between users, and exposes the advertised routes of your nodes.
Everything in Tailscale is Open Source, except the GUI clients for proprietary OS
(Windows and macOS/iOS), and the control server.
Headscale implements this coordination server.
The control server works as an exchange point of Wireguard public keys for the
nodes in the Tailscale network. It assigns the IP addresses of the clients,
creates the boundaries between each user, enables sharing machines between users,
and exposes the advertised routes of your nodes.
## Status
A [Tailscale network (tailnet)](https://tailscale.com/kb/1136/tailnet/) is a private
network which Tailscale assigns to a user, whether that is a private individual or an
organisation.
- [x] Base functionality (nodes can communicate with each other)
- [x] Node registration through the web flow
- [x] Network changes are relayed to the nodes
- [x] Namespace support (~equivalent to multi-user in Tailscale.com)
- [x] Routing (advertise & accept, including exit nodes)
- [x] Node registration via pre-auth keys (including reusable keys, and ephemeral node support)
- [x] JSON-formatted output
- [x] ACLs
- [x] Taildrop (File Sharing)
- [x] Support for alternative IP ranges in the tailnets (default Tailscale's 100.64.0.0/10)
- [x] DNS (passing DNS servers to nodes)
- [x] Share nodes between ~~users~~ namespaces
- [x] MagicDNS (see `docs/`)
## Design goal
`headscale` aims to implement a self-hosted, open source alternative to the Tailscale
control server. `headscale` has a narrower scope and an instance of `headscale`
implements a _single_ Tailnet, which is typically what a single organisation or
home/personal setup would use.
`headscale` uses terms that map to Tailscale's control server; consult the
[glossary](./docs/glossary.md) for explanations.
## Support
If you like `headscale` and find it useful, there are sponsorship and donation
buttons available in the repo.
If you would like to sponsor features, bugs or prioritisation, reach out to
one of the maintainers.
## Features
- Full "base" support of Tailscale's features
- Configurable DNS
- [Split DNS](https://tailscale.com/kb/1054/dns/#using-dns-settings-in-the-admin-console)
- Node registration
- Single-Sign-On (via Open ID Connect)
- Pre authenticated key
- Taildrop (File Sharing)
- [Access control lists](https://tailscale.com/kb/1018/acls/)
- [MagicDNS](https://tailscale.com/kb/1081/magicdns)
- Support for multiple IP ranges in the tailnet
- Dual stack (IPv4 and IPv6)
- Routing advertising (including exit nodes)
- Ephemeral nodes
- Embedded [DERP server](https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp)
## Client OS support
@@ -38,236 +71,455 @@ Headscale implements this coordination server.
| ------- | ----------------------------------------------------------------------------------------------------------------- |
| Linux | Yes |
| OpenBSD | Yes |
| FreeBSD | Yes |
| macOS | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes |
| Windows | Yes [docs](./docs/windows-client.md) |
| Android | [You need to compile the client yourself](https://github.com/juanfont/headscale/issues/58#issuecomment-885255270) |
| iOS | Not yet |
## Roadmap 🤷
## Running headscale
Suggestions/PRs welcomed!
## Running it
1. Download the Headscale binary from https://github.com/juanfont/headscale/releases and place it somewhere in your PATH, or use the docker container
```shell
docker pull headscale/headscale:x.x.x
```
<!--
or
```shell
docker pull ghcr.io/juanfont/headscale:x.x.x
``` -->
2. (Optional, you can also use SQLite) Get yourself a PostgreSQL DB running
```shell
docker run --name headscale -e POSTGRES_DB=headscale -e \
POSTGRES_USER=foo -e POSTGRES_PASSWORD=bar -p 5432:5432 -d postgres
```
3. Set some stuff up (headscale Wireguard keys & the config.json file)
```shell
wg genkey > private.key
wg pubkey < private.key > public.key # not needed
# Postgres
cp config.json.postgres.example config.json
# or
# SQLite
cp config.json.sqlite.example config.json
```
4. Create a namespace (a namespace is a 'tailnet', a group of Tailscale nodes that can talk to each other)
```shell
headscale namespaces create myfirstnamespace
```
or docker:
the db.sqlite mount is only needed if you use sqlite
```shell
touch db.sqlite
docker run -v $(pwd)/private.key:/private.key -v $(pwd)/config.json:/config.json -v $(pwd)/derp.yaml:/derp.yaml -v $(pwd)/db.sqlite:/db.sqlite -p 127.0.0.1:8080:8080 headscale/headscale:x.x.x headscale namespaces create myfirstnamespace
```
or if your server is already running in docker:
```shell
docker exec <container_name> headscale namespaces create myfirstnamespace
```
5. Run the server
```shell
headscale serve
```
or docker:
the db.sqlite mount is only needed if you use sqlite
```shell
docker run -v $(pwd)/private.key:/private.key -v $(pwd)/config.json:/config.json -v $(pwd)/derp.yaml:/derp.yaml -v $(pwd)/db.sqlite:/db.sqlite -p 127.0.0.1:8080:8080 headscale/headscale:x.x.x headscale serve
```
6. If you used tailscale.com before in your nodes, make sure you clear the tailscaled data folder
```shell
systemctl stop tailscaled
rm -fr /var/lib/tailscale
systemctl start tailscaled
```
7. Add your first machine
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
```
8. Navigate to the URL you will get with `tailscale up`, where you'll find your machine key.
9. In the server, register your machine to a namespace with the CLI
```shell
headscale -n myfirstnamespace nodes register YOURMACHINEKEY
```
or docker:
```shell
docker run -v $(pwd)/private.key:/private.key -v $(pwd)/config.json:/config.json -v $(pwd)/derp.yaml:/derp.yaml headscale/headscale:x.x.x headscale -n myfirstnamespace nodes register YOURMACHINEKEY
```
or if your server is already running in docker:
```shell
docker exec <container_name> headscale -n myfirstnamespace nodes register YOURMACHINEKEY
```
Alternatively, you can use Auth Keys to register your machines:
1. Create an authkey
```shell
headscale -n myfirstnamespace preauthkeys create --reusable --expiration 24h
```
or docker:
```shell
docker run -v $(pwd)/private.key:/private.key -v $(pwd)/config.json:/config.json -v$(pwd)/derp.yaml:/derp.yaml -v $(pwd)/db.sqlite:/db.sqlite headscale/headscale:x.x.x headscale -n myfirstnamespace preauthkeys create --reusable --expiration 24h
```
or if your server is already running in docker:
```shell
docker exec <container_name> headscale -n myfirstnamespace preauthkeys create --reusable --expiration 24h
```
2. Use the authkey from your machine to register it
```shell
tailscale up --login-server YOUR_HEADSCALE_URL --authkey YOURAUTHKEY
```
If you create an authkey with the `--ephemeral` flag, that key will create ephemeral nodes. This implies that `--reusable` is true.
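For example, building on the commands above, an ephemeral pre-auth key can be created like this:
```shell
headscale -n myfirstnamespace preauthkeys create --ephemeral --expiration 24h
```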
Please bear in mind that all the commands from headscale support adding `-o json` or `-o json-line` to get a nicely JSON-formatted output.
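For instance, the registration command from step 9 can emit JSON instead of its normal output:
```shell
headscale -n myfirstnamespace nodes register YOURMACHINEKEY -o json
```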
## Configuration reference
Headscale's configuration file is named `config.json` or `config.yaml`. Headscale will look for it in `/etc/headscale`, `~/.headscale` and finally the directory from where the Headscale binary is executed.
```
"server_url": "http://192.168.1.12:8080",
"listen_addr": "0.0.0.0:8080",
"ip_prefix": "100.64.0.0/10"
```
`server_url` is the external URL via which Headscale is reachable. `listen_addr` is the IP address and port the Headscale program should listen on. `ip_prefix` is the IP prefix (range) from which IP addresses for nodes will be allocated (default 100.64.0.0/10; other examples: 192.168.4.0/24, 10.0.0.0/8).
```
"log_level": "debug"
```
`log_level` can be used to set the Log level for Headscale, it defaults to `debug`, and the available levels are: `trace`, `debug`, `info`, `warn` and `error`.
```
"private_key_path": "private.key",
```
`private_key_path` is the path to the Wireguard private key. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from.
```
"derp_map_path": "derp.yaml",
```
`derp_map_path` is the path to the [DERP](https://pkg.go.dev/tailscale.com/derp) map file. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from.
```
"ephemeral_node_inactivity_timeout": "30m",
```
`ephemeral_node_inactivity_timeout` is the timeout after which inactive ephemeral node records will be deleted from the database. The default is 30 minutes. This value must be higher than 65 seconds (the keepalive timeout for the HTTP long poll is 60 seconds, plus a few seconds to avoid race conditions).
```
"db_host": "localhost",
"db_port": 5432,
"db_name": "headscale",
"db_user": "foo",
"db_pass": "bar",
```
The fields starting with `db_` are used for the PostgreSQL connection information.
### Running the service via TLS (optional)
```
"tls_cert_path": ""
"tls_key_path": ""
```
Headscale can be configured to expose its web service via TLS. To configure the certificate and key file manually, set the `tls_cert_path` and `tls_key_path` configuration parameters. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from.
```
"tls_letsencrypt_hostname": "",
"tls_letsencrypt_listen": ":http",
"tls_letsencrypt_cache_dir": ".cache",
"tls_letsencrypt_challenge_type": "HTTP-01",
```
To get a certificate automatically via [Let's Encrypt](https://letsencrypt.org/), set `tls_letsencrypt_hostname` to the desired certificate hostname. This name must resolve to the IP address(es) Headscale is reachable on (i.e., it must correspond to the `server_url` configuration parameter). The certificate and Let's Encrypt account credentials will be stored in the directory configured in `tls_letsencrypt_cache_dir`. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from. The certificate will automatically be renewed as needed.
#### Challenge type HTTP-01
The default challenge type `HTTP-01` requires that Headscale is reachable on port 80 for the Let's Encrypt automated validation, in addition to whatever port is configured in `listen_addr`. By default, Headscale listens on port 80 on all local IPs for Let's Encrypt automated validation.
If you need to change the ip and/or port used by Headscale for the Let's Encrypt validation process, set `tls_letsencrypt_listen` to the appropriate value. This can be handy if you are running Headscale as a non-root user (or can't run `setcap`). Keep in mind, however, that Let's Encrypt will _only_ connect to port 80 for the validation callback, so if you change `tls_letsencrypt_listen` you will also need to configure something else (e.g. a firewall rule) to forward the traffic from port 80 to the ip:port combination specified in `tls_letsencrypt_listen`.
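As an illustration only (not taken from the headscale docs), such a forward could be a simple NAT redirect, here assuming `tls_letsencrypt_listen` were set to `:9000`:
```shell
# Hypothetical example: redirect Let's Encrypt HTTP-01 callbacks arriving on
# port 80 to port 9000, where tls_letsencrypt_listen is assumed to be ":9000"
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 9000
```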
#### Challenge type TLS-ALPN-01
Alternatively, `tls_letsencrypt_challenge_type` can be set to `TLS-ALPN-01`. In this configuration, Headscale listens on the ip:port combination defined in `listen_addr`. Let's Encrypt will _only_ connect to port 443 for the validation callback, so if `listen_addr` is not set to port 443, something else (e.g. a firewall rule) will be required to forward the traffic from port 443 to the ip:port combination specified in `listen_addr`.
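Purely as a sketch, the equivalent redirect for this challenge type, assuming `listen_addr` is left at the `0.0.0.0:8080` shown earlier:
```shell
# Hypothetical example: forward the Let's Encrypt TLS-ALPN-01 callback on port 443
# to the port configured in listen_addr (assumed to be 8080 here)
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8080
```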
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups you must
use namespaces (which are the equivalent of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.
Please have a look at the documentation under [`docs/`](docs/).
## Disclaimer
1. We have nothing to do with Tailscale, or Tailscale Inc.
2. The purpose of writing this was to learn how Tailscale works.
2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.
## More on Tailscale
## Contributing
- https://tailscale.com/blog/how-tailscale-works/
- https://tailscale.com/blog/tailscale-key-management/
- https://tailscale.com/blog/an-unlikely-database-migration/
To contribute to headscale you will need the latest version of [Go](https://golang.org)
and [Buf](https://buf.build) (a Protobuf generator).
PRs and suggestions are welcome.
### Code style
To ensure we have some consistency with a growing number of contributions,
this project has adopted linting and style/formatting rules:
The **Go** code is linted with [`golangci-lint`](https://golangci-lint.run) and
formatted with [`golines`](https://github.com/segmentio/golines) (width 88) and
[`gofumpt`](https://github.com/mvdan/gofumpt).
Please configure your editor to run the tools while developing and make sure to
run `make lint` and `make fmt` before committing any code.
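In practice that boils down to running the two Makefile targets shown earlier in this change before each commit:
```shell
# Format Go (golines + gofumpt), proto and Markdown/YAML sources, then run the linters
make fmt
make lint
```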
The **Proto** code is linted with [`buf`](https://docs.buf.build/lint/overview) and
formatted with [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html).
The **rest** (Markdown, YAML, etc) is formatted with [`prettier`](https://prettier.io).
Check out the `.golangci.yaml` and `Makefile` to see the specific configuration.
### Install development tools
- Go
- Buf
- Protobuf tools:
```shell
make install-protobuf-plugins
```
### Testing and building
Some parts of the project require the generation of Go code from Protobuf
(if changes are made in `proto/`) and it must be (re-)generated with:
```shell
make generate
```
**Note**: Please check in changes from `gen/` in a separate commit to make it easier to review.
To run the tests:
```shell
make test
```
To build the program:
```shell
make build
```
## Contributors
<table>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/kradalby>
<img src=https://avatars.githubusercontent.com/u/98431?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kristoffer Dalby/>
<br />
<sub style="font-size:14px"><b>Kristoffer Dalby</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/juanfont>
<img src=https://avatars.githubusercontent.com/u/181059?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Juan Font/>
<br />
<sub style="font-size:14px"><b>Juan Font</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/restanrm>
<img src=https://avatars.githubusercontent.com/u/4344371?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Adrien Raffin-Caboisse/>
<br />
<sub style="font-size:14px"><b>Adrien Raffin-Caboisse</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cure>
<img src=https://avatars.githubusercontent.com/u/149135?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ward Vandewege/>
<br />
<sub style="font-size:14px"><b>Ward Vandewege</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ohdearaugustin>
<img src=https://avatars.githubusercontent.com/u/14001491?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ohdearaugustin/>
<br />
<sub style="font-size:14px"><b>ohdearaugustin</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/e-zk>
<img src=https://avatars.githubusercontent.com/u/58356365?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=e-zk/>
<br />
<sub style="font-size:14px"><b>e-zk</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/arch4ngel>
<img src=https://avatars.githubusercontent.com/u/11574161?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Justin Angel/>
<br />
<sub style="font-size:14px"><b>Justin Angel</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ItalyPaleAle>
<img src=https://avatars.githubusercontent.com/u/43508?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Alessandro (Ale) Segala/>
<br />
<sub style="font-size:14px"><b>Alessandro (Ale) Segala</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/unreality>
<img src=https://avatars.githubusercontent.com/u/352522?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=unreality/>
<br />
<sub style="font-size:14px"><b>unreality</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/reynico>
<img src=https://avatars.githubusercontent.com/u/715768?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Nico/>
<br />
<sub style="font-size:14px"><b>Nico</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/negbie>
<img src=https://avatars.githubusercontent.com/u/20154956?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Eugen Biegler/>
<br />
<sub style="font-size:14px"><b>Eugen Biegler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/qbit>
<img src=https://avatars.githubusercontent.com/u/68368?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aaron Bieber/>
<br />
<sub style="font-size:14px"><b>Aaron Bieber</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fdelucchijr>
<img src=https://avatars.githubusercontent.com/u/69133647?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Fernando De Lucchi/>
<br />
<sub style="font-size:14px"><b>Fernando De Lucchi</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/hdhoang>
<img src=https://avatars.githubusercontent.com/u/12537?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Hoàng Đức Hiếu/>
<br />
<sub style="font-size:14px"><b>Hoàng Đức Hiếu</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/mevansam>
<img src=https://avatars.githubusercontent.com/u/403630?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mevan Samaratunga/>
<br />
<sub style="font-size:14px"><b>Mevan Samaratunga</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/dragetd>
<img src=https://avatars.githubusercontent.com/u/3639577?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael G./>
<br />
<sub style="font-size:14px"><b>Michael G.</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ptman>
<img src=https://avatars.githubusercontent.com/u/24669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Paul Tötterman/>
<br />
<sub style="font-size:14px"><b>Paul Tötterman</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/cmars>
<img src=https://avatars.githubusercontent.com/u/23741?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Casey Marshall/>
<br />
<sub style="font-size:14px"><b>Casey Marshall</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/SilverBut>
<img src=https://avatars.githubusercontent.com/u/6560655?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Silver Bullet/>
<br />
<sub style="font-size:14px"><b>Silver Bullet</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/majst01>
<img src=https://avatars.githubusercontent.com/u/410110?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan Majer/>
<br />
<sub style="font-size:14px"><b>Stefan Majer</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lachy2849>
<img src=https://avatars.githubusercontent.com/u/98844035?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lachy2849/>
<br />
<sub style="font-size:14px"><b>lachy2849</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/t56k>
<img src=https://avatars.githubusercontent.com/u/12165422?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=thomas/>
<br />
<sub style="font-size:14px"><b>thomas</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/aberoham>
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
<br />
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/artemklevtsov>
<img src=https://avatars.githubusercontent.com/u/603798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Artem Klevtsov/>
<br />
<sub style="font-size:14px"><b>Artem Klevtsov</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/awoimbee>
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
<br />
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/stensonb>
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
<br />
<sub style="font-size:14px"><b>Bryan Stenson</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/fkr>
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
<br />
<sub style="font-size:14px"><b>Felix Kronlage-Dammers</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/felixonmars>
<img src=https://avatars.githubusercontent.com/u/1006477?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Yan/>
<br />
<sub style="font-size:14px"><b>Felix Yan</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/JJGadgets>
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
<br />
<sub style="font-size:14px"><b>JJGadgets</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/madjam002>
<img src=https://avatars.githubusercontent.com/u/679137?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jamie Greeff/>
<br />
<sub style="font-size:14px"><b>Jamie Greeff</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/jimt>
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
<br />
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/piec>
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
<br />
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/rcursaru>
<img src=https://avatars.githubusercontent.com/u/16259641?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=rcursaru/>
<br />
<sub style="font-size:14px"><b>rcursaru</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/renovate-bot>
<img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=WhiteSource Renovate/>
<br />
<sub style="font-size:14px"><b>WhiteSource Renovate</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ryanfowler>
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
<br />
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/shaananc>
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
<br />
<sub style="font-size:14px"><b>Shaanan Cohney</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/m-tanner-dev0>
<img src=https://avatars.githubusercontent.com/u/97977342?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tanner/>
<br />
<sub style="font-size:14px"><b>Tanner</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Teteros>
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
<br />
<sub style="font-size:14px"><b>Teteros</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/gitter-badger>
<img src=https://avatars.githubusercontent.com/u/8518239?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=The Gitter Badger/>
<br />
<sub style="font-size:14px"><b>The Gitter Badger</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/tianon>
<img src=https://avatars.githubusercontent.com/u/161631?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tianon Gravi/>
<br />
<sub style="font-size:14px"><b>Tianon Gravi</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/woudsma>
<img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/>
<br />
<sub style="font-size:14px"><b>Tjerk Woudsma</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/y0ngb1n>
<img src=https://avatars.githubusercontent.com/u/25719408?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yang Bin/>
<br />
<sub style="font-size:14px"><b>Yang Bin</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/zekker6>
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
<br />
<sub style="font-size:14px"><b>Zakhar Bessarab</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Bpazy>
<img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ZiYuan/>
<br />
<sub style="font-size:14px"><b>ZiYuan</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/bravechamp>
<img src=https://avatars.githubusercontent.com/u/48980452?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=bravechamp/>
<br />
<sub style="font-size:14px"><b>bravechamp</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/derelm>
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
<br />
<sub style="font-size:14px"><b>derelm</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/ignoramous>
<img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/>
<br />
<sub style="font-size:14px"><b>ignoramous</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/lion24>
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lion24/>
<br />
<sub style="font-size:14px"><b>lion24</b></sub>
</a>
</td>
</tr>
<tr>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/pernila>
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
<br />
<sub style="font-size:14px"><b>pernila</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/Wakeful-Cloud>
<img src=https://avatars.githubusercontent.com/u/38930607?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Wakeful-Cloud/>
<br />
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
</a>
</td>
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
<a href=https://github.com/xpzouying>
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
<br />
<sub style="font-size:14px"><b>zy</b></sub>
</a>
</td>
</tr>
</table>

431
acls.go
View File

@@ -5,26 +5,43 @@ import (
"fmt"
"io"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/rs/zerolog/log"
"github.com/tailscale/hujson"
"gopkg.in/yaml.v3"
"inet.af/netaddr"
"tailscale.com/tailcfg"
)
const errorEmptyPolicy = Error("empty policy")
const errorInvalidAction = Error("invalid action")
const errorInvalidUserSection = Error("invalid user section")
const errorInvalidGroup = Error("invalid group")
const errorInvalidTag = Error("invalid tag")
const errorInvalidNamespace = Error("invalid namespace")
const errorInvalidPortFormat = Error("invalid port format")
const (
errEmptyPolicy = Error("empty policy")
errInvalidAction = Error("invalid action")
errInvalidGroup = Error("invalid group")
errInvalidTag = Error("invalid tag")
errInvalidPortFormat = Error("invalid port format")
)
// LoadACLPolicy loads the ACL policy from the specified path, and generates the ACL rules
const (
Base8 = 8
Base10 = 10
BitSize16 = 16
BitSize32 = 32
BitSize64 = 64
portRangeBegin = 0
portRangeEnd = 65535
expectedTokenItems = 2
)
// LoadACLPolicy loads the ACL policy from the specified path, and generates the ACL rules.
func (h *Headscale) LoadACLPolicy(path string) error {
log.Debug().
Str("func", "LoadACLPolicy").
Str("path", path).
Msg("Loading ACL policy from path")
policyFile, err := os.Open(path)
if err != nil {
return err
@@ -32,58 +49,100 @@ func (h *Headscale) LoadACLPolicy(path string) error {
defer policyFile.Close()
var policy ACLPolicy
b, err := io.ReadAll(policyFile)
policyBytes, err := io.ReadAll(policyFile)
if err != nil {
return err
}
err = hujson.Unmarshal(b, &policy)
if err != nil {
return err
switch filepath.Ext(path) {
case ".yml", ".yaml":
log.Debug().
Str("path", path).
Bytes("file", policyBytes).
Msg("Loading ACLs from YAML")
err := yaml.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
log.Trace().
Interface("policy", policy).
Msg("Loaded policy from YAML")
default:
ast, err := hujson.Parse(policyBytes)
if err != nil {
return err
}
ast.Standardize()
policyBytes = ast.Pack()
err = json.Unmarshal(policyBytes, &policy)
if err != nil {
return err
}
}
if policy.IsZero() {
return errorEmptyPolicy
return errEmptyPolicy
}
h.aclPolicy = &policy
return h.UpdateACLRules()
}
func (h *Headscale) UpdateACLRules() error {
rules, err := h.generateACLRules()
if err != nil {
return err
}
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
h.aclRules = rules
return nil
}
func (h *Headscale) generateACLRules() (*[]tailcfg.FilterRule, error) {
func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
rules := []tailcfg.FilterRule{}
for i, a := range h.aclPolicy.ACLs {
if a.Action != "accept" {
return nil, errorInvalidAction
}
if h.aclPolicy == nil {
return nil, errEmptyPolicy
}
r := tailcfg.FilterRule{}
machines, err := h.ListMachines()
if err != nil {
return nil, err
}
for index, acl := range h.aclPolicy.ACLs {
if acl.Action != "accept" {
return nil, errInvalidAction
}
srcIPs := []string{}
for j, u := range a.Users {
srcs, err := h.generateACLPolicySrcIP(u)
for innerIndex, user := range acl.Users {
srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, user)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, User %d", i, j)
Msgf("Error parsing ACL %d, User %d", index, innerIndex)
return nil, err
}
srcIPs = append(srcIPs, *srcs...)
srcIPs = append(srcIPs, srcs...)
}
r.SrcIPs = srcIPs
destPorts := []tailcfg.NetPortRange{}
for j, d := range a.Ports {
dests, err := h.generateACLPolicyDestPorts(d)
for innerIndex, ports := range acl.Ports {
dests, err := h.generateACLPolicyDestPorts(machines, *h.aclPolicy, ports)
if err != nil {
log.Error().
Msgf("Error parsing ACL %d, Port %d", i, j)
Msgf("Error parsing ACL %d, Port %d", index, innerIndex)
return nil, err
}
destPorts = append(destPorts, *dests...)
destPorts = append(destPorts, dests...)
}
rules = append(rules, tailcfg.FilterRule{
@@ -92,17 +151,25 @@ func (h *Headscale) generateACLRules() (*[]tailcfg.FilterRule, error) {
})
}
return &rules, nil
return rules, nil
}
func (h *Headscale) generateACLPolicySrcIP(u string) (*[]string, error) {
return h.expandAlias(u)
func (h *Headscale) generateACLPolicySrcIP(
machines []Machine,
aclPolicy ACLPolicy,
u string,
) ([]string, error) {
return expandAlias(machines, aclPolicy, u, h.cfg.OIDC.StripEmaildomain)
}
func (h *Headscale) generateACLPolicyDestPorts(d string) (*[]tailcfg.NetPortRange, error) {
func (h *Headscale) generateACLPolicyDestPorts(
machines []Machine,
aclPolicy ACLPolicy,
d string,
) ([]tailcfg.NetPortRange, error) {
tokens := strings.Split(d, ":")
if len(tokens) < 2 || len(tokens) > 3 {
return nil, errorInvalidPortFormat
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
return nil, errInvalidPortFormat
}
var alias string
@@ -112,23 +179,28 @@ func (h *Headscale) generateACLPolicyDestPorts(d string) (*[]tailcfg.NetPortRang
// tag:montreal-webserver:80,443
// tag:api-server:443
// example-host-1:*
if len(tokens) == 2 {
if len(tokens) == expectedTokenItems {
alias = tokens[0]
} else {
alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
}
expanded, err := h.expandAlias(alias)
expanded, err := expandAlias(
machines,
aclPolicy,
alias,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
return nil, err
}
ports, err := h.expandPorts(tokens[len(tokens)-1])
ports, err := expandPorts(tokens[len(tokens)-1])
if err != nil {
return nil, err
}
dests := []tailcfg.NetPortRange{}
for _, d := range *expanded {
for _, d := range expanded {
for _, p := range *ports {
pr := tailcfg.NetPortRange{
IP: d,
@@ -137,120 +209,160 @@ func (h *Headscale) generateACLPolicyDestPorts(d string) (*[]tailcfg.NetPortRang
dests = append(dests, pr)
}
}
return &dests, nil
return dests, nil
}
func (h *Headscale) expandAlias(s string) (*[]string, error) {
if s == "*" {
return &[]string{"*"}, nil
// expandAlias has an input of either
// - a namespace
// - a group
// - a tag
// and transforms these into IP addresses.
func expandAlias(
machines []Machine,
aclPolicy ACLPolicy,
alias string,
stripEmailDomain bool,
) ([]string, error) {
ips := []string{}
if alias == "*" {
return []string{"*"}, nil
}
if strings.HasPrefix(s, "group:") {
if _, ok := h.aclPolicy.Groups[s]; !ok {
return nil, errorInvalidGroup
log.Debug().
Str("alias", alias).
Msg("Expanding")
if strings.HasPrefix(alias, "group:") {
namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
ips := []string{}
for _, n := range h.aclPolicy.Groups[s] {
nodes, err := h.ListMachinesInNamespace(n)
if err != nil {
return nil, errorInvalidNamespace
}
for _, node := range *nodes {
ips = append(ips, node.IPAddress)
for _, n := range namespaces {
nodes := filterMachinesByNamespace(machines, n)
for _, node := range nodes {
ips = append(ips, node.IPAddresses.ToStringSlice()...)
}
}
return &ips, nil
return ips, nil
}
if strings.HasPrefix(s, "tag:") {
if _, ok := h.aclPolicy.TagOwners[s]; !ok {
return nil, errorInvalidTag
if strings.HasPrefix(alias, "tag:") {
owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
if err != nil {
return ips, err
}
// This will have HORRIBLE performance.
// We need to change the data model to better store tags
machines := []Machine{}
if err := h.db.Where("registered").Find(&machines).Error; err != nil {
return nil, err
}
ips := []string{}
for _, m := range machines {
hostinfo := tailcfg.Hostinfo{}
if len(m.HostInfo) != 0 {
hi, err := m.HostInfo.MarshalJSON()
if err != nil {
return nil, err
}
err = json.Unmarshal(hi, &hostinfo)
if err != nil {
return nil, err
}
// FIXME: Check TagOwners allows this
for _, t := range hostinfo.RequestTags {
if s[4:] == t {
ips = append(ips, m.IPAddress)
break
for _, namespace := range owners {
machines := filterMachinesByNamespace(machines, namespace)
for _, machine := range machines {
hi := machine.GetHostInfo()
for _, t := range hi.RequestTags {
if alias == t {
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
}
}
}
}
return &ips, nil
return ips, nil
}
n, err := h.GetNamespace(s)
// if alias is a namespace
nodes := filterMachinesByNamespace(machines, alias)
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias)
for _, n := range nodes {
ips = append(ips, n.IPAddresses.ToStringSlice()...)
}
if len(ips) > 0 {
return ips, nil
}
// if alias is a host
if h, ok := aclPolicy.Hosts[alias]; ok {
return []string{h.String()}, nil
}
// if alias is an IP
ip, err := netaddr.ParseIP(alias)
if err == nil {
nodes, err := h.ListMachinesInNamespace(n.Name)
if err != nil {
return nil, err
}
ips := []string{}
for _, n := range *nodes {
ips = append(ips, n.IPAddress)
}
return &ips, nil
return []string{ip.String()}, nil
}
if h, ok := h.aclPolicy.Hosts[s]; ok {
return &[]string{h.String()}, nil
}
ip, err := netaddr.ParseIP(s)
// if alias is a CIDR
cidr, err := netaddr.ParseIPPrefix(alias)
if err == nil {
return &[]string{ip.String()}, nil
return []string{cidr.String()}, nil
}
cidr, err := netaddr.ParseIPPrefix(s)
if err == nil {
return &[]string{cidr.String()}, nil
}
log.Warn().Msgf("No IPs found with the alias %v", alias)
return nil, errorInvalidUserSection
return ips, nil
}
func (h *Headscale) expandPorts(s string) (*[]tailcfg.PortRange, error) {
if s == "*" {
return &[]tailcfg.PortRange{{First: 0, Last: 65535}}, nil
// excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
// that are correctly tagged, since they should not be listed as being in the namespace.
// We assume in this function that we only have nodes from one namespace.
func excludeCorrectlyTaggedNodes(
aclPolicy ACLPolicy,
nodes []Machine,
namespace string,
) []Machine {
out := []Machine{}
tags := []string{}
for tag, ns := range aclPolicy.TagOwners {
if containsString(ns, namespace) {
tags = append(tags, tag)
}
}
// for each machine, if its tag is in the tags list, don't append it.
for _, machine := range nodes {
hi := machine.GetHostInfo()
found := false
for _, t := range hi.RequestTags {
if containsString(tags, t) {
found = true
break
}
}
if !found {
out = append(out, machine)
}
}
return out
}
func expandPorts(portsStr string) (*[]tailcfg.PortRange, error) {
if portsStr == "*" {
return &[]tailcfg.PortRange{
{First: portRangeBegin, Last: portRangeEnd},
}, nil
}
ports := []tailcfg.PortRange{}
for _, p := range strings.Split(s, ",") {
rang := strings.Split(p, "-")
if len(rang) == 1 {
pi, err := strconv.ParseUint(rang[0], 10, 16)
for _, portStr := range strings.Split(portsStr, ",") {
rang := strings.Split(portStr, "-")
switch len(rang) {
case 1:
port, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
ports = append(ports, tailcfg.PortRange{
First: uint16(pi),
Last: uint16(pi),
First: uint16(port),
Last: uint16(port),
})
} else if len(rang) == 2 {
start, err := strconv.ParseUint(rang[0], 10, 16)
case expectedTokenItems:
start, err := strconv.ParseUint(rang[0], Base10, BitSize16)
if err != nil {
return nil, err
}
last, err := strconv.ParseUint(rang[1], 10, 16)
last, err := strconv.ParseUint(rang[1], Base10, BitSize16)
if err != nil {
return nil, err
}
@@ -258,9 +370,90 @@ func (h *Headscale) expandPorts(s string) (*[]tailcfg.PortRange, error) {
First: uint16(start),
Last: uint16(last),
})
} else {
return nil, errorInvalidPortFormat
default:
return nil, errInvalidPortFormat
}
}
return &ports, nil
}
func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
out := []Machine{}
for _, machine := range machines {
if machine.Namespace.Name == namespace {
out = append(out, machine)
}
}
return out
}
// expandTagOwners will return a list of namespaces. An owner can be either a namespace or a group;
// a group cannot be composed of groups.
func expandTagOwners(
aclPolicy ACLPolicy,
tag string,
stripEmailDomain bool,
) ([]string, error) {
var owners []string
ows, ok := aclPolicy.TagOwners[tag]
if !ok {
return []string{}, fmt.Errorf(
"%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
errInvalidTag,
tag,
)
}
for _, owner := range ows {
if strings.HasPrefix(owner, "group:") {
gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
if err != nil {
return []string{}, err
}
owners = append(owners, gs...)
} else {
owners = append(owners, owner)
}
}
return owners, nil
}
// expandGroup will return the list of namespaces inside the group
// after some validation.
func expandGroup(
aclPolicy ACLPolicy,
group string,
stripEmailDomain bool,
) ([]string, error) {
outGroups := []string{}
aclGroups, ok := aclPolicy.Groups[group]
if !ok {
return []string{}, fmt.Errorf(
"group %v isn't registered. %w",
group,
errInvalidGroup,
)
}
for _, group := range aclGroups {
if strings.HasPrefix(group, "group:") {
return []string{}, fmt.Errorf(
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
errInvalidGroup,
)
}
grp, err := NormalizeToFQDNRules(group, stripEmailDomain)
if err != nil {
return []string{}, fmt.Errorf(
"failed to normalize group %q, err: %w",
group,
errInvalidGroup,
)
}
outGroups = append(outGroups, grp)
}
return outGroups, nil
}
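For illustration only (the namespace, group, host and tag names below are made up), a minimal policy file that the new YAML branch of LoadACLPolicy above could accept, mirroring the Groups/Hosts/TagOwners/ACLs structure, might look like this:

Groups:
  "group:admins":
    - alice
Hosts:
  example-host-1: 100.64.0.1/32
TagOwners:
  "tag:webserver":
    - "group:admins"
ACLs:
  - Action: accept
    Users:
      - "group:admins"
    Ports:
      - "example-host-1:80,443"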

File diff suppressed because it is too large

View File

@@ -1,70 +1,101 @@
package headscale
import (
"encoding/json"
"strings"
"github.com/tailscale/hujson"
"gopkg.in/yaml.v3"
"inet.af/netaddr"
)
// ACLPolicy represents a Tailscale ACL Policy
// ACLPolicy represents a Tailscale ACL Policy.
type ACLPolicy struct {
Groups Groups `json:"Groups"`
Hosts Hosts `json:"Hosts"`
TagOwners TagOwners `json:"TagOwners"`
ACLs []ACL `json:"ACLs"`
Tests []ACLTest `json:"Tests"`
Groups Groups `json:"Groups" yaml:"Groups"`
Hosts Hosts `json:"Hosts" yaml:"Hosts"`
TagOwners TagOwners `json:"TagOwners" yaml:"TagOwners"`
ACLs []ACL `json:"ACLs" yaml:"ACLs"`
Tests []ACLTest `json:"Tests" yaml:"Tests"`
}
// ACL is a basic rule for the ACL Policy
// ACL is a basic rule for the ACL Policy.
type ACL struct {
Action string `json:"Action"`
Users []string `json:"Users"`
Ports []string `json:"Ports"`
Action string `json:"Action" yaml:"Action"`
Users []string `json:"Users" yaml:"Users"`
Ports []string `json:"Ports" yaml:"Ports"`
}
// Groups references a series of aliases in the ACL rules
// Groups references a series of aliases in the ACL rules.
type Groups map[string][]string
// Hosts are aliases for IP addresses or subnets
// Hosts are aliases for IP addresses or subnets.
type Hosts map[string]netaddr.IPPrefix
// TagOwners specify what users (namespaces?) are allowed to use certain tags
// TagOwners specify what users (namespaces?) are allowed to use certain tags.
type TagOwners map[string][]string
// ACLTest is not implemented, but should be used to check if a certain rule is allowed
// ACLTest is not implemented, but should be used to check if a certain rule is allowed.
type ACLTest struct {
User string `json:"User"`
Allow []string `json:"Allow"`
Deny []string `json:"Deny,omitempty"`
User string `json:"User" yaml:"User"`
Allow []string `json:"Allow" yaml:"Allow"`
Deny []string `json:"Deny,omitempty" yaml:"Deny,omitempty"`
}
// UnmarshalJSON allows parsing the Hosts directly into netaddr objects
func (h *Hosts) UnmarshalJSON(data []byte) error {
hosts := Hosts{}
hs := make(map[string]string)
err := hujson.Unmarshal(data, &hs)
// UnmarshalJSON allows parsing the Hosts directly into netaddr objects.
func (hosts *Hosts) UnmarshalJSON(data []byte) error {
newHosts := Hosts{}
hostIPPrefixMap := make(map[string]string)
ast, err := hujson.Parse(data)
if err != nil {
return err
}
for k, v := range hs {
if !strings.Contains(v, "/") {
v = v + "/32"
ast.Standardize()
data = ast.Pack()
err = json.Unmarshal(data, &hostIPPrefixMap)
if err != nil {
return err
}
for host, prefixStr := range hostIPPrefixMap {
if !strings.Contains(prefixStr, "/") {
prefixStr += "/32"
}
prefix, err := netaddr.ParseIPPrefix(v)
prefix, err := netaddr.ParseIPPrefix(prefixStr)
if err != nil {
return err
}
hosts[k] = prefix
newHosts[host] = prefix
}
*h = hosts
*hosts = newHosts
return nil
}
// IsZero is perhaps a bit naive here
func (p ACLPolicy) IsZero() bool {
if len(p.Groups) == 0 && len(p.Hosts) == 0 && len(p.ACLs) == 0 {
// UnmarshalYAML allows parsing the Hosts directly into netaddr objects.
func (hosts *Hosts) UnmarshalYAML(data []byte) error {
newHosts := Hosts{}
hostIPPrefixMap := make(map[string]string)
err := yaml.Unmarshal(data, &hostIPPrefixMap)
if err != nil {
return err
}
for host, prefixStr := range hostIPPrefixMap {
prefix, err := netaddr.ParseIPPrefix(prefixStr)
if err != nil {
return err
}
newHosts[host] = prefix
}
*hosts = newHosts
return nil
}
// IsZero is perhaps a bit naive here.
func (policy ACLPolicy) IsZero() bool {
if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
return true
}
return false
}
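As a side note on the two Hosts unmarshallers above: the HuJSON/JSON path standardizes the input first and appends "/32" to any value that does not contain a "/", so a bare IP such as 100.64.0.1 is stored as 100.64.0.1/32, while the YAML path passes the value straight to netaddr.ParseIPPrefix and therefore needs the prefix length written out explicitly.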

723
api.go
View File

@@ -1,267 +1,274 @@
package headscale
import (
"bytes"
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"html/template"
"io"
"net/http"
"strings"
"time"
"github.com/rs/zerolog/log"
"github.com/gin-gonic/gin"
"github.com/klauspost/compress/zstd"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
"tailscale.com/tailcfg"
"tailscale.com/types/wgkey"
"tailscale.com/types/key"
)
const (
reservedResponseHeaderSize = 4
RegisterMethodAuthKey = "authkey"
RegisterMethodOIDC = "oidc"
RegisterMethodCLI = "cli"
ErrRegisterMethodCLIDoesNotSupportExpire = Error(
"machines registered with CLI does not support expire",
)
)
// KeyHandler provides the Headscale pub key
// Listens in /key
func (h *Headscale) KeyHandler(c *gin.Context) {
c.Data(200, "text/plain; charset=utf-8", []byte(h.publicKey.HexString()))
// Listens in /key.
func (h *Headscale) KeyHandler(ctx *gin.Context) {
ctx.Data(
http.StatusOK,
"text/plain; charset=utf-8",
[]byte(MachinePublicKeyStripPrefix(h.privateKey.Public())),
)
}
type registerWebAPITemplateConfig struct {
Key string
}
var registerWebAPITemplate = template.Must(
template.New("registerweb").Parse(`
<html>
<head>
<title>Registration - Headscale</title>
</head>
<body>
<h1>headscale</h1>
<h2>Machine registration</h2>
<p>
Run the command below in the headscale server to add this machine to your network:
</p>
<pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
</body>
</html>
`))
// RegisterWebAPI shows a simple message in the browser to point to the CLI
// Listens in /register
func (h *Headscale) RegisterWebAPI(c *gin.Context) {
mKeyStr := c.Query("key")
if mKeyStr == "" {
c.String(http.StatusBadRequest, "Wrong params")
// Listens in /register.
func (h *Headscale) RegisterWebAPI(ctx *gin.Context) {
machineKeyStr := ctx.Query("key")
if machineKeyStr == "" {
ctx.String(http.StatusBadRequest, "Wrong params")
return
}
c.Data(http.StatusOK, "text/html; charset=utf-8", []byte(fmt.Sprintf(`
<html>
<body>
<h1>headscale</h1>
<p>
Run the command below in the headscale server to add this machine to your network:
</p>
var content bytes.Buffer
if err := registerWebAPITemplate.Execute(&content, registerWebAPITemplateConfig{
Key: machineKeyStr,
}); err != nil {
log.Error().
Str("func", "RegisterWebAPI").
Err(err).
Msg("Could not render register web API template")
ctx.Data(
http.StatusInternalServerError,
"text/html; charset=utf-8",
[]byte("Could not render register web API template"),
)
}
<p>
<code>
<b>headscale -n NAMESPACE nodes register %s</b>
</code>
</p>
</body>
</html>
`, mKeyStr)))
ctx.Data(http.StatusOK, "text/html; charset=utf-8", content.Bytes())
}
// RegistrationHandler handles the actual registration process of a machine
// Endpoint /machine/:id
func (h *Headscale) RegistrationHandler(c *gin.Context) {
body, _ := io.ReadAll(c.Request.Body)
mKeyStr := c.Param("id")
mKey, err := wgkey.ParseHex(mKeyStr)
// Endpoint /machine/:id.
func (h *Headscale) RegistrationHandler(ctx *gin.Context) {
body, _ := io.ReadAll(ctx.Request.Body)
machineKeyStr := ctx.Param("id")
var machineKey key.MachinePublic
err := machineKey.UnmarshalText([]byte(MachinePublicKeyEnsurePrefix(machineKeyStr)))
if err != nil {
log.Error().
Str("handler", "Registration").
Caller().
Err(err).
Msg("Cannot parse machine key")
machineRegistrations.WithLabelValues("unkown", "web", "error", "unknown").Inc()
c.String(http.StatusInternalServerError, "Sad!")
machineRegistrations.WithLabelValues("unknown", "web", "error", "unknown").Inc()
ctx.String(http.StatusInternalServerError, "Sad!")
return
}
req := tailcfg.RegisterRequest{}
err = decode(body, &req, &mKey, h.privateKey)
err = decode(body, &req, &machineKey, h.privateKey)
if err != nil {
log.Error().
Str("handler", "Registration").
Caller().
Err(err).
Msg("Cannot decode message")
machineRegistrations.WithLabelValues("unkown", "web", "error", "unknown").Inc()
c.String(http.StatusInternalServerError, "Very sad!")
machineRegistrations.WithLabelValues("unknown", "web", "error", "unknown").Inc()
ctx.String(http.StatusInternalServerError, "Very sad!")
return
}
now := time.Now().UTC()
var m Machine
if result := h.db.Preload("Namespace").First(&m, "machine_key = ?", mKey.HexString()); errors.Is(result.Error, gorm.ErrRecordNotFound) {
machine, err := h.GetMachineByMachineKey(machineKey)
if errors.Is(err, gorm.ErrRecordNotFound) {
log.Info().Str("machine", req.Hostinfo.Hostname).Msg("New machine")
m = Machine{
Expiry: &req.Expiry,
MachineKey: mKey.HexString(),
Name: req.Hostinfo.Hostname,
NodeKey: wgkey.Key(req.NodeKey).HexString(),
LastSuccessfulUpdate: &now,
}
if err := h.db.Create(&m).Error; err != nil {
log.Error().
Str("handler", "Registration").
Err(err).
Msg("Could not create row")
machineRegistrations.WithLabelValues("unkown", "web", "error", m.Namespace.Name).Inc()
machineKeyStr := MachinePublicKeyStripPrefix(machineKey)
// If the machine has AuthKey set, handle registration via PreAuthKeys
if req.Auth.AuthKey != "" {
h.handleAuthKey(ctx, machineKey, req)
return
}
}
hname, err := NormalizeToFQDNRules(
req.Hostinfo.Hostname,
h.cfg.OIDC.StripEmaildomain,
)
if err != nil {
log.Error().
Caller().
Str("func", "RegistrationHandler").
Str("hostinfo.name", req.Hostinfo.Hostname).
Err(err)
return
}
// The machine did not have a key to authenticate, which means
// that we rely on a method that calls back somehow (OpenID or CLI)
// We create the machine and then keep it around until a callback
// happens
newMachine := Machine{
MachineKey: machineKeyStr,
Name: hname,
NodeKey: NodePublicKeyStripPrefix(req.NodeKey),
LastSeen: &now,
Expiry: &time.Time{},
}
if !req.Expiry.IsZero() {
log.Trace().
Caller().
Str("machine", req.Hostinfo.Hostname).
Time("expiry", req.Expiry).
Msg("Non-zero expiry time requested")
newMachine.Expiry = &req.Expiry
}
h.registrationCache.Set(
machineKeyStr,
newMachine,
registerCacheExpiration,
)
h.handleMachineRegistrationNew(ctx, machineKey, req)
if !m.Registered && req.Auth.AuthKey != "" {
h.handleAuthKey(c, h.db, mKey, req, m)
return
}
resp := tailcfg.RegisterResponse{}
// The machine is already registered, so we need to pass through reauth or key update.
if machine != nil {
// If the NodeKey stored in headscale is the same as the key presented in a registration
// request, then we have a node that is either:
// - Trying to log out (sending an expiry in the past)
// - A valid, registered machine, looking for the node map
// - Expired machine wanting to reauthenticate
if machine.NodeKey == NodePublicKeyStripPrefix(req.NodeKey) {
// The client sends an Expiry in the past if the client is requesting to expire the key (aka logout)
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L648
if !req.Expiry.IsZero() && req.Expiry.UTC().Before(now) {
h.handleMachineLogOut(ctx, machineKey, *machine)
// We have the updated key!
if m.NodeKey == wgkey.Key(req.NodeKey).HexString() {
if m.Registered {
log.Debug().
Str("handler", "Registration").
Str("machine", m.Name).
Msg("Client is registered and we have the current NodeKey. All clear to /map")
resp.AuthURL = ""
resp.MachineAuthorized = true
resp.User = *m.Namespace.toUser()
respBody, err := encode(resp, &mKey, h.privateKey)
if err != nil {
log.Error().
Str("handler", "Registration").
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("update", "web", "error", m.Namespace.Name).Inc()
c.String(http.StatusInternalServerError, "")
return
}
machineRegistrations.WithLabelValues("update", "web", "success", m.Namespace.Name).Inc()
c.Data(200, "application/json; charset=utf-8", respBody)
// If the machine is not expired and is registered, we have already accepted this machine;
// let it proceed with a valid registration
if !machine.isExpired() {
h.handleMachineValidRegistration(ctx, machineKey, *machine)
return
}
}
// The NodeKey we have matches OldNodeKey, which means this is a refresh after a key expiration
if machine.NodeKey == NodePublicKeyStripPrefix(req.OldNodeKey) &&
!machine.isExpired() {
h.handleMachineRefreshKey(ctx, machineKey, req, *machine)
return
}
log.Debug().
Str("handler", "Registration").
Str("machine", m.Name).
Msg("Not registered and not NodeKey rotation. Sending a authurl to register")
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
h.cfg.ServerURL, mKey.HexString())
respBody, err := encode(resp, &mKey, h.privateKey)
if err != nil {
log.Error().
Str("handler", "Registration").
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("new", "web", "error", m.Namespace.Name).Inc()
c.String(http.StatusInternalServerError, "")
return
}
machineRegistrations.WithLabelValues("new", "web", "success", m.Namespace.Name).Inc()
c.Data(200, "application/json; charset=utf-8", respBody)
// The machine has expired
h.handleMachineExpired(ctx, machineKey, req, *machine)
return
}
// The NodeKey we have matches OldNodeKey, which means this is a refresh after a key expiration
if m.NodeKey == wgkey.Key(req.OldNodeKey).HexString() {
log.Debug().
Str("handler", "Registration").
Str("machine", m.Name).
Msg("We have the OldNodeKey in the database. This is a key refresh")
m.NodeKey = wgkey.Key(req.NodeKey).HexString()
h.db.Save(&m)
resp.AuthURL = ""
resp.User = *m.Namespace.toUser()
respBody, err := encode(resp, &mKey, h.privateKey)
if err != nil {
log.Error().
Str("handler", "Registration").
Err(err).
Msg("Cannot encode message")
c.String(http.StatusInternalServerError, "Extremely sad!")
return
}
c.Data(200, "application/json; charset=utf-8", respBody)
return
}
// We arrive here after a client is restarted without finalizing the authentication flow or
// when headscale is stopped in the middle of the auth process.
if m.Registered {
log.Debug().
Str("handler", "Registration").
Str("machine", m.Name).
Msg("The node is sending us a new NodeKey, but machine is registered. All clear for /map")
resp.AuthURL = ""
resp.MachineAuthorized = true
resp.User = *m.Namespace.toUser()
respBody, err := encode(resp, &mKey, h.privateKey)
if err != nil {
log.Error().
Str("handler", "Registration").
Err(err).
Msg("Cannot encode message")
c.String(http.StatusInternalServerError, "")
return
}
c.Data(200, "application/json; charset=utf-8", respBody)
return
}
log.Debug().
Str("handler", "Registration").
Str("machine", m.Name).
Msg("The node is sending us a new NodeKey, sending auth url")
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
h.cfg.ServerURL, mKey.HexString())
respBody, err := encode(resp, &mKey, h.privateKey)
if err != nil {
log.Error().
Str("handler", "Registration").
Err(err).
Msg("Cannot encode message")
c.String(http.StatusInternalServerError, "")
return
}
c.Data(200, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) getMapResponse(mKey wgkey.Key, req tailcfg.MapRequest, m *Machine) ([]byte, error) {
func (h *Headscale) getMapResponse(
machineKey key.MachinePublic,
req tailcfg.MapRequest,
machine *Machine,
) ([]byte, error) {
log.Trace().
Str("func", "getMapResponse").
Str("machine", req.Hostinfo.Hostname).
Msg("Creating Map response")
node, err := m.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Cannot convert to node")
return nil, err
}
peers, err := h.getPeers(m)
peers, err := h.getValidPeers(machine)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Cannot fetch peers")
return nil, err
}
profiles := getMapResponseUserProfiles(*m, peers)
profiles := getMapResponseUserProfiles(*machine, peers)
nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Failed to convert peers to Tailscale nodes")
return nil, err
}
dnsConfig, err := getMapResponseDNSConfig(h.cfg.DNSConfig, h.cfg.BaseDomain, *m, peers)
if err != nil {
log.Error().
Str("func", "getMapResponse").
Err(err).
Msg("Failed generate the DNSConfig")
return nil, err
}
dnsConfig := getMapResponseDNSConfig(
h.cfg.DNSConfig,
h.cfg.BaseDomain,
*machine,
peers,
)
resp := tailcfg.MapResponse{
KeepAlive: false,
@@ -269,8 +276,8 @@ func (h *Headscale) getMapResponse(mKey wgkey.Key, req tailcfg.MapRequest, m *Ma
Peers: nodePeers,
DNSConfig: dnsConfig,
Domain: h.cfg.BaseDomain,
PacketFilter: *h.aclRules,
DERPMap: h.cfg.DerpMap,
PacketFilter: h.aclRules,
DERPMap: h.DERPMap,
UserProfiles: profiles,
}
@@ -282,135 +289,373 @@ func (h *Headscale) getMapResponse(mKey wgkey.Key, req tailcfg.MapRequest, m *Ma
var respBody []byte
if req.Compress == "zstd" {
src, _ := json.Marshal(resp)
src, err := json.Marshal(resp)
if err != nil {
log.Error().
Caller().
Str("func", "getMapResponse").
Err(err).
Msg("Failed to marshal response for the client")
return nil, err
}
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody, err = encodeMsg(srcCompressed, &mKey, h.privateKey)
if err != nil {
return nil, err
}
respBody = h.privateKey.SealTo(machineKey, srcCompressed)
} else {
respBody, err = encode(resp, &mKey, h.privateKey)
respBody, err = encode(resp, &machineKey, h.privateKey)
if err != nil {
return nil, err
}
}
// declare the incoming size on the first 4 bytes
data := make([]byte, 4)
data := make([]byte, reservedResponseHeaderSize)
binary.LittleEndian.PutUint32(data, uint32(len(respBody)))
data = append(data, respBody...)
return data, nil
}
func (h *Headscale) getMapKeepAliveResponse(mKey wgkey.Key, req tailcfg.MapRequest, m *Machine) ([]byte, error) {
resp := tailcfg.MapResponse{
func (h *Headscale) getMapKeepAliveResponse(
machineKey key.MachinePublic,
mapRequest tailcfg.MapRequest,
) ([]byte, error) {
mapResponse := tailcfg.MapResponse{
KeepAlive: true,
}
var respBody []byte
var err error
if req.Compress == "zstd" {
src, _ := json.Marshal(resp)
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody, err = encodeMsg(srcCompressed, &mKey, h.privateKey)
if mapRequest.Compress == "zstd" {
src, err := json.Marshal(mapResponse)
if err != nil {
log.Error().
Caller().
Str("func", "getMapKeepAliveResponse").
Err(err).
Msg("Failed to marshal keepalive response for the client")
return nil, err
}
encoder, _ := zstd.NewWriter(nil)
srcCompressed := encoder.EncodeAll(src, nil)
respBody = h.privateKey.SealTo(machineKey, srcCompressed)
} else {
respBody, err = encode(resp, &mKey, h.privateKey)
respBody, err = encode(mapResponse, &machineKey, h.privateKey)
if err != nil {
return nil, err
}
}
data := make([]byte, 4)
data := make([]byte, reservedResponseHeaderSize)
binary.LittleEndian.PutUint32(data, uint32(len(respBody)))
data = append(data, respBody...)
return data, nil
}
func (h *Headscale) handleAuthKey(c *gin.Context, db *gorm.DB, idKey wgkey.Key, req tailcfg.RegisterRequest, m Machine) {
log.Debug().
Str("func", "handleAuthKey").
Str("machine", req.Hostinfo.Hostname).
Msgf("Processing auth key for %s", req.Hostinfo.Hostname)
func (h *Headscale) handleMachineLogOut(
ctx *gin.Context,
machineKey key.MachinePublic,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
pak, err := h.checkKeyValidity(req.Auth.AuthKey)
log.Info().
Str("machine", machine.Name).
Msg("Client requested logout")
h.ExpireMachine(&machine)
resp.AuthURL = ""
resp.MachineAuthorized = false
resp.User = *machine.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "")
return
}
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineValidRegistration(
ctx *gin.Context,
machineKey key.MachinePublic,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
// The machine registration is valid, respond with redirect to /map
log.Debug().
Str("machine", machine.Name).
Msg("Client is registered and we have the current NodeKey. All clear to /map")
resp.AuthURL = ""
resp.MachineAuthorized = true
resp.User = *machine.Namespace.toUser()
resp.Login = *machine.Namespace.toLogin()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("update", "web", "error", machine.Namespace.Name).
Inc()
ctx.String(http.StatusInternalServerError, "")
return
}
machineRegistrations.WithLabelValues("update", "web", "success", machine.Namespace.Name).
Inc()
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineExpired(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
// The client has registered before, but has expired
log.Debug().
Str("machine", machine.Name).
Msg("Machine registration has expired. Sending a authurl to register")
if registerRequest.Auth.AuthKey != "" {
h.handleAuthKey(ctx, machineKey, registerRequest)
return
}
if h.cfg.OIDC.Issuer != "" {
resp.AuthURL = fmt.Sprintf("%s/oidc/register/%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), machineKey.String())
} else {
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), machineKey.String())
}
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("reauth", "web", "error", machine.Namespace.Name).
Inc()
ctx.String(http.StatusInternalServerError, "")
return
}
machineRegistrations.WithLabelValues("reauth", "web", "success", machine.Namespace.Name).
Inc()
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineRefreshKey(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
machine Machine,
) {
resp := tailcfg.RegisterResponse{}
log.Debug().
Str("machine", machine.Name).
Msg("We have the OldNodeKey in the database. This is a key refresh")
machine.NodeKey = NodePublicKeyStripPrefix(registerRequest.NodeKey)
h.db.Save(&machine)
resp.AuthURL = ""
resp.User = *machine.Namespace.toUser()
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "Extremely sad!")
return
}
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
func (h *Headscale) handleMachineRegistrationNew(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
) {
resp := tailcfg.RegisterResponse{}
// The machine registration is new, redirect the client to the registration URL
log.Debug().
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("The node is sending us a new NodeKey, sending auth url")
if h.cfg.OIDC.Issuer != "" {
resp.AuthURL = fmt.Sprintf(
"%s/oidc/register/%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"),
machineKey.String(),
)
} else {
resp.AuthURL = fmt.Sprintf("%s/register?key=%s",
strings.TrimSuffix(h.cfg.ServerURL, "/"), MachinePublicKeyStripPrefix(machineKey))
}
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Cannot encode message")
ctx.String(http.StatusInternalServerError, "")
return
}
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
}
// TODO: check if any locks are needed around IP allocation.
func (h *Headscale) handleAuthKey(
ctx *gin.Context,
machineKey key.MachinePublic,
registerRequest tailcfg.RegisterRequest,
) {
machineKeyStr := MachinePublicKeyStripPrefix(machineKey)
log.Debug().
Str("func", "handleAuthKey").
Str("machine", registerRequest.Hostinfo.Hostname).
Msgf("Processing auth key for %s", registerRequest.Hostinfo.Hostname)
resp := tailcfg.RegisterResponse{}
pak, err := h.checkKeyValidity(registerRequest.Auth.AuthKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Failed authentication via AuthKey")
resp.MachineAuthorized = false
respBody, err := encode(resp, &idKey, h.privateKey)
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Cannot encode message")
c.String(http.StatusInternalServerError, "")
machineRegistrations.WithLabelValues("new", "authkey", "error", m.Namespace.Name).Inc()
ctx.String(http.StatusInternalServerError, "")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
return
}
c.Data(401, "application/json; charset=utf-8", respBody)
ctx.Data(http.StatusUnauthorized, "application/json; charset=utf-8", respBody)
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("Failed authentication via AuthKey")
machineRegistrations.WithLabelValues("new", "authkey", "error", m.Namespace.Name).Inc()
if pak != nil {
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
} else {
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error").Inc()
}
return
}
log.Debug().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Msg("Authentication key was valid, proceeding to acquire an IP address")
ip, err := h.getAvailableIP()
if err != nil {
log.Error().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Msg("Failed to find an available IP")
machineRegistrations.WithLabelValues("new", "authkey", "error", m.Namespace.Name).Inc()
return
Str("machine", registerRequest.Hostinfo.Hostname).
Msg("Authentication key was valid, proceeding to acquire IP addresses")
nodeKey := NodePublicKeyStripPrefix(registerRequest.NodeKey)
// retrieve machine information if it exists
// The error is not important, because if it does not
// exist, then this is a new machine and we will move
// on to registration.
machine, _ := h.GetMachineByMachineKey(machineKey)
if machine != nil {
log.Trace().
Caller().
Str("machine", machine.Name).
Msg("machine already registered, refreshing with new auth key")
machine.NodeKey = nodeKey
machine.AuthKeyID = uint(pak.ID)
h.RefreshMachine(machine, registerRequest.Expiry)
} else {
now := time.Now().UTC()
machineToRegister := Machine{
Name: registerRequest.Hostinfo.Hostname,
NamespaceID: pak.Namespace.ID,
MachineKey: machineKeyStr,
RegisterMethod: RegisterMethodAuthKey,
Expiry: &registerRequest.Expiry,
NodeKey: nodeKey,
LastSeen: &now,
AuthKeyID: uint(pak.ID),
}
machine, err = h.RegisterMachine(
machineToRegister,
)
if err != nil {
log.Error().
Caller().
Err(err).
Msg("could not register machine")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
ctx.String(
http.StatusInternalServerError,
"could not register machine",
)
return
}
}
log.Info().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Str("ip", ip.String()).
Msgf("Assigning %s to %s", ip, m.Name)
m.AuthKeyID = uint(pak.ID)
m.IPAddress = ip.String()
m.NamespaceID = pak.NamespaceID
m.NodeKey = wgkey.Key(req.NodeKey).HexString() // we update it just in case
m.Registered = true
m.RegisterMethod = "authKey"
db.Save(&m)
pak.Used = true
db.Save(&pak)
h.UsePreAuthKey(pak)
resp.MachineAuthorized = true
resp.User = *pak.Namespace.toUser()
respBody, err := encode(resp, &idKey, h.privateKey)
respBody, err := encode(resp, &machineKey, h.privateKey)
if err != nil {
log.Error().
Caller().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Str("machine", registerRequest.Hostinfo.Hostname).
Err(err).
Msg("Cannot encode message")
machineRegistrations.WithLabelValues("new", "authkey", "error", m.Namespace.Name).Inc()
c.String(http.StatusInternalServerError, "Extremely sad!")
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "error", pak.Namespace.Name).
Inc()
ctx.String(http.StatusInternalServerError, "Extremely sad!")
return
}
machineRegistrations.WithLabelValues("new", "authkey", "success", m.Namespace.Name).Inc()
c.Data(200, "application/json; charset=utf-8", respBody)
machineRegistrations.WithLabelValues("new", RegisterMethodAuthKey, "success", pak.Namespace.Name).
Inc()
ctx.Data(http.StatusOK, "application/json; charset=utf-8", respBody)
log.Info().
Str("func", "handleAuthKey").
Str("machine", m.Name).
Str("ip", ip.String()).
Str("machine", registerRequest.Hostinfo.Hostname).
Str("ips", strings.Join(machine.IPAddresses.ToStringSlice(), ", ")).
Msg("Successfully authenticated via AuthKey")
}
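To summarize the refactored registration flow above, as far as it can be read from this diff: a request that carries an auth key is short-circuited to handleAuthKey; an unknown machine is otherwise normalized, kept in the registration cache and handed to handleMachineRegistrationNew, which replies with an auth URL (the OIDC endpoint when an issuer is configured, /register otherwise). For a machine that already exists, a matching NodeKey means either a logout (expiry in the past), a still-valid registration (straight to /map), or an expired machine that has to reauthenticate, while a matching OldNodeKey on a non-expired machine is handled as a key refresh.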

164
api_key.go Normal file
View File

@@ -0,0 +1,164 @@
package headscale
import (
"fmt"
"strings"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"golang.org/x/crypto/bcrypt"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
apiPrefixLength = 7
apiKeyLength = 32
apiKeyParts = 2
errAPIKeyFailedToParse = Error("Failed to parse ApiKey")
)
// APIKey describes the datamodel for API keys used to remotely authenticate with
// headscale.
type APIKey struct {
ID uint64 `gorm:"primary_key"`
Prefix string `gorm:"uniqueIndex"`
Hash []byte
CreatedAt *time.Time
Expiration *time.Time
LastSeen *time.Time
}
// CreateAPIKey creates a new ApiKey and returns it.
func (h *Headscale) CreateAPIKey(
expiration *time.Time,
) (string, *APIKey, error) {
prefix, err := GenerateRandomStringURLSafe(apiPrefixLength)
if err != nil {
return "", nil, err
}
toBeHashed, err := GenerateRandomStringURLSafe(apiKeyLength)
if err != nil {
return "", nil, err
}
// Key to return to user, this will only be visible _once_
keyStr := prefix + "." + toBeHashed
hash, err := bcrypt.GenerateFromPassword([]byte(toBeHashed), bcrypt.DefaultCost)
if err != nil {
return "", nil, err
}
key := APIKey{
Prefix: prefix,
Hash: hash,
Expiration: expiration,
}
h.db.Save(&key)
return keyStr, &key, nil
}
// ListAPIKeys returns the list of ApiKeys.
func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
keys := []APIKey{}
if err := h.db.Find(&keys).Error; err != nil {
return nil, err
}
return keys, nil
}
// GetAPIKey returns an ApiKey for a given key.
func (h *Headscale) GetAPIKey(prefix string) (*APIKey, error) {
key := APIKey{}
if result := h.db.First(&key, "prefix = ?", prefix); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// GetAPIKeyByID returns an ApiKey for a given ID.
func (h *Headscale) GetAPIKeyByID(id uint64) (*APIKey, error) {
key := APIKey{}
if result := h.db.Find(&APIKey{ID: id}).First(&key); result.Error != nil {
return nil, result.Error
}
return &key, nil
}
// DestroyAPIKey destroys an ApiKey. Returns an error if the ApiKey
// does not exist.
func (h *Headscale) DestroyAPIKey(key APIKey) error {
if result := h.db.Unscoped().Delete(key); result.Error != nil {
return result.Error
}
return nil
}
// ExpireAPIKey marks an ApiKey as expired.
func (h *Headscale) ExpireAPIKey(key *APIKey) error {
if err := h.db.Model(&key).Update("Expiration", time.Now()).Error; err != nil {
return err
}
return nil
}
func (h *Headscale) ValidateAPIKey(keyStr string) (bool, error) {
prefix, hash, err := splitAPIKey(keyStr)
if err != nil {
return false, fmt.Errorf("failed to validate api key: %w", err)
}
key, err := h.GetAPIKey(prefix)
if err != nil {
return false, fmt.Errorf("failed to validate api key: %w", err)
}
if key.Expiration.Before(time.Now()) {
return false, nil
}
if err := bcrypt.CompareHashAndPassword(key.Hash, []byte(hash)); err != nil {
return false, err
}
return true, nil
}
func splitAPIKey(key string) (string, string, error) {
parts := strings.Split(key, ".")
if len(parts) != apiKeyParts {
return "", "", errAPIKeyFailedToParse
}
return parts[0], parts[1], nil
}
func (key *APIKey) toProto() *v1.ApiKey {
protoKey := v1.ApiKey{
Id: key.ID,
Prefix: key.Prefix,
}
if key.Expiration != nil {
protoKey.Expiration = timestamppb.New(*key.Expiration)
}
if key.CreatedAt != nil {
protoKey.CreatedAt = timestamppb.New(*key.CreatedAt)
}
if key.LastSeen != nil {
protoKey.LastSeen = timestamppb.New(*key.LastSeen)
}
return &protoKey
}
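A note on the key format introduced here: CreateAPIKey is the only point where the full key is visible; it is returned as a random prefix and a longer random secret joined by a dot, and only the prefix plus a bcrypt hash of the secret are persisted. ValidateAPIKey then splits the presented key on the dot (expecting exactly two parts), looks the record up by prefix, rejects it if the Expiration has passed, and finally compares the secret against the stored hash with bcrypt.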

89
api_key_test.go Normal file
View File

@@ -0,0 +1,89 @@
package headscale
import (
"time"
"gopkg.in/check.v1"
)
func (*Suite) TestCreateAPIKey(c *check.C) {
apiKeyStr, apiKey, err := app.CreateAPIKey(nil)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
// Did we get a valid key?
c.Assert(apiKey.Prefix, check.NotNil)
c.Assert(apiKey.Hash, check.NotNil)
c.Assert(apiKeyStr, check.Not(check.Equals), "")
_, err = app.ListAPIKeys()
c.Assert(err, check.IsNil)
keys, err := app.ListAPIKeys()
c.Assert(err, check.IsNil)
c.Assert(len(keys), check.Equals, 1)
}
func (*Suite) TestAPIKeyDoesNotExist(c *check.C) {
key, err := app.GetAPIKey("does-not-exist")
c.Assert(err, check.NotNil)
c.Assert(key, check.IsNil)
}
func (*Suite) TestValidateAPIKeyOk(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
}
func (*Suite) TestValidateAPIKeyNotOk(c *check.C) {
nowMinus2 := time.Now().Add(time.Duration(-2) * time.Hour)
apiKeyStr, apiKey, err := app.CreateAPIKey(&nowMinus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, false)
now := time.Now()
apiKeyStrNow, apiKey, err := app.CreateAPIKey(&now)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
validNow, err := app.ValidateAPIKey(apiKeyStrNow)
c.Assert(err, check.IsNil)
c.Assert(validNow, check.Equals, false)
validSilly, err := app.ValidateAPIKey("nota.validkey")
c.Assert(err, check.NotNil)
c.Assert(validSilly, check.Equals, false)
validWithErr, err := app.ValidateAPIKey("produceerrorkey")
c.Assert(err, check.NotNil)
c.Assert(validWithErr, check.Equals, false)
}
func (*Suite) TestExpireAPIKey(c *check.C) {
nowPlus2 := time.Now().Add(2 * time.Hour)
apiKeyStr, apiKey, err := app.CreateAPIKey(&nowPlus2)
c.Assert(err, check.IsNil)
c.Assert(apiKey, check.NotNil)
valid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(valid, check.Equals, true)
err = app.ExpireAPIKey(apiKey)
c.Assert(err, check.IsNil)
c.Assert(apiKey.Expiration, check.NotNil)
notValid, err := app.ValidateAPIKey(apiKeyStr)
c.Assert(err, check.IsNil)
c.Assert(notValid, check.Equals, false)
}

764
app.go
View File

@@ -1,38 +1,90 @@
package headscale
import (
"context"
"crypto/tls"
"errors"
"fmt"
"io"
"io/fs"
"net"
"net/http"
"net/url"
"os"
"os/signal"
"sort"
"strings"
"sync"
"syscall"
"time"
"github.com/rs/zerolog/log"
"github.com/coreos/go-oidc/v3/oidc"
"github.com/gin-gonic/gin"
grpc_middleware "github.com/grpc-ecosystem/go-grpc-middleware"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/patrickmn/go-cache"
zerolog "github.com/philip-bui/grpc-zerolog"
zl "github.com/rs/zerolog"
"github.com/rs/zerolog/log"
ginprometheus "github.com/zsais/go-gin-prometheus"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
"golang.org/x/oauth2"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"google.golang.org/grpc/metadata"
"google.golang.org/grpc/peer"
"google.golang.org/grpc/reflection"
"google.golang.org/grpc/status"
"gorm.io/gorm"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
"tailscale.com/types/wgkey"
"tailscale.com/types/key"
)
// Config contains the initial Headscale configuration
const (
errSTUNAddressNotSet = Error("STUN address not set")
errUnsupportedDatabase = Error("unsupported DB")
errUnsupportedLetsEncryptChallengeType = Error(
"unknown value for Lets Encrypt challenge type",
)
)
const (
AuthPrefix = "Bearer "
Postgres = "postgres"
Sqlite = "sqlite3"
updateInterval = 5000
HTTPReadTimeout = 30 * time.Second
privateKeyFileMode = 0o600
registerCacheExpiration = time.Minute * 15
registerCacheCleanup = time.Minute * 20
DisabledClientAuth = "disabled"
RelaxedClientAuth = "relaxed"
EnforcedClientAuth = "enforced"
)
// Config contains the initial Headscale configuration.
type Config struct {
ServerURL string
Addr string
PrivateKeyPath string
DerpMap *tailcfg.DERPMap
MetricsAddr string
GRPCAddr string
GRPCAllowInsecure bool
EphemeralNodeInactivityTimeout time.Duration
IPPrefix netaddr.IPPrefix
IPPrefixes []netaddr.IPPrefix
PrivateKeyPath string
BaseDomain string
DERP DERPConfig
DBtype string
DBpath string
DBhost string
@@ -46,93 +98,174 @@ type Config struct {
TLSLetsEncryptCacheDir string
TLSLetsEncryptChallengeType string
TLSCertPath string
TLSKeyPath string
TLSCertPath string
TLSKeyPath string
TLSClientAuthMode tls.ClientAuthType
ACMEURL string
ACMEEmail string
DNSConfig *tailcfg.DNSConfig
UnixSocket string
UnixSocketPermission fs.FileMode
OIDC OIDCConfig
CLI CLIConfig
}
// Headscale represents the base app of the service
type OIDCConfig struct {
Issuer string
ClientID string
ClientSecret string
StripEmaildomain bool
}
type DERPConfig struct {
ServerEnabled bool
ServerRegionID int
ServerRegionCode string
ServerRegionName string
STUNAddr string
URLs []url.URL
Paths []string
AutoUpdate bool
UpdateFrequency time.Duration
}
type CLIConfig struct {
Address string
APIKey string
Timeout time.Duration
Insecure bool
}
// Headscale represents the base app of the service.
type Headscale struct {
cfg Config
db *gorm.DB
dbString string
dbType string
dbDebug bool
publicKey *wgkey.Key
privateKey *wgkey.Private
privateKey *key.MachinePrivate
DERPMap *tailcfg.DERPMap
DERPServer *DERPServer
aclPolicy *ACLPolicy
aclRules *[]tailcfg.FilterRule
aclRules []tailcfg.FilterRule
lastStateChange sync.Map
oidcProvider *oidc.Provider
oauth2Config *oauth2.Config
registrationCache *cache.Cache
ipAllocationMutex sync.Mutex
}
// Look up the TLS constant relative to user-supplied TLS client
// authentication mode. If an unknown mode is supplied, the default
// value, tls.RequireAnyClientCert, is returned. The returned boolean
// indicates if the supplied mode was valid.
func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
switch mode {
case DisabledClientAuth:
// Client cert is _not_ required.
return tls.NoClientCert, true
case RelaxedClientAuth:
// Client cert required, but _not verified_.
return tls.RequireAnyClientCert, true
case EnforcedClientAuth:
// Client cert is _required and verified_.
return tls.RequireAndVerifyClientCert, true
default:
// Return the default when an unknown value is supplied.
return tls.RequireAnyClientCert, false
}
}
// NewHeadscale returns the Headscale app
func NewHeadscale(cfg Config) (*Headscale, error) {
content, err := os.ReadFile(cfg.PrivateKeyPath)
privKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
if err != nil {
return nil, err
return nil, fmt.Errorf("failed to read or create private key: %w", err)
}
privKey, err := wgkey.ParsePrivate(string(content))
if err != nil {
return nil, err
}
pubKey := privKey.Public()
var dbString string
switch cfg.DBtype {
case "postgres":
dbString = fmt.Sprintf("host=%s port=%d dbname=%s user=%s password=%s sslmode=disable", cfg.DBhost,
cfg.DBport, cfg.DBname, cfg.DBuser, cfg.DBpass)
case "sqlite3":
case Postgres:
dbString = fmt.Sprintf(
"host=%s port=%d dbname=%s user=%s password=%s sslmode=disable",
cfg.DBhost,
cfg.DBport,
cfg.DBname,
cfg.DBuser,
cfg.DBpass,
)
case Sqlite:
dbString = cfg.DBpath
default:
return nil, errors.New("unsupported DB")
return nil, errUnsupportedDatabase
}
h := Headscale{
cfg: cfg,
dbType: cfg.DBtype,
dbString: dbString,
privateKey: privKey,
publicKey: &pubKey,
aclRules: &tailcfg.FilterAllowAll, // default allowall
registrationCache := cache.New(
registerCacheExpiration,
registerCacheCleanup,
)
app := Headscale{
cfg: cfg,
dbType: cfg.DBtype,
dbString: dbString,
privateKey: privKey,
aclRules: tailcfg.FilterAllowAll, // default allowall
registrationCache: registrationCache,
}
err = h.initDB()
err = app.initDB()
if err != nil {
return nil, err
}
if h.cfg.DNSConfig != nil && h.cfg.DNSConfig.Proxied { // if MagicDNS
magicDNSDomains, err := generateMagicDNSRootDomains(h.cfg.IPPrefix, h.cfg.BaseDomain)
if cfg.OIDC.Issuer != "" {
err = app.initOIDC()
if err != nil {
return nil, err
}
}
if app.cfg.DNSConfig != nil && app.cfg.DNSConfig.Proxied { // if MagicDNS
magicDNSDomains := generateMagicDNSRootDomains(app.cfg.IPPrefixes)
// we might have routes already from Split DNS
if h.cfg.DNSConfig.Routes == nil {
h.cfg.DNSConfig.Routes = make(map[string][]dnstype.Resolver)
if app.cfg.DNSConfig.Routes == nil {
app.cfg.DNSConfig.Routes = make(map[string][]dnstype.Resolver)
}
for _, d := range magicDNSDomains {
h.cfg.DNSConfig.Routes[d.WithoutTrailingDot()] = nil
app.cfg.DNSConfig.Routes[d.WithoutTrailingDot()] = nil
}
}
return &h, nil
if cfg.DERP.ServerEnabled {
embeddedDERPServer, err := app.NewDERPServer()
if err != nil {
return nil, err
}
app.DERPServer = embeddedDERPServer
}
return &app, nil
}
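A rough, hypothetical sketch of how the refactored constructor and Serve are meant to be driven; every literal below is a placeholder, fields not set keep their zero values, and the inet.af/netaddr import path is assumed to match what the test file in this diff uses (real deployments build Config from the YAML configuration file instead):
package main

import (
	"log"

	"github.com/juanfont/headscale"
	"inet.af/netaddr"
)

func main() {
	// Minimal sqlite-backed configuration, only good enough for a local experiment.
	cfg := headscale.Config{
		ServerURL:      "http://127.0.0.1:8080",
		Addr:           "127.0.0.1:8080",
		MetricsAddr:    "127.0.0.1:9090",
		PrivateKeyPath: "./private.key",
		DBtype:         "sqlite3",
		DBpath:         "./headscale.db",
		UnixSocket:     "./headscale.sock",
		IPPrefixes: []netaddr.IPPrefix{
			netaddr.MustParseIPPrefix("100.64.0.0/10"),
		},
	}

	app, err := headscale.NewHeadscale(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := app.Serve(); err != nil {
		log.Fatal(err)
	}
}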
// Redirect to our TLS url
// Redirect to our TLS url.
func (h *Headscale) redirect(w http.ResponseWriter, req *http.Request) {
target := h.cfg.ServerURL + req.URL.RequestURI()
http.Redirect(w, req, target, http.StatusFound)
}
// expireEphemeralNodes deletes ephemeral machine records that have not been
// seen for longer than h.cfg.EphemeralNodeInactivityTimeout
// seen for longer than h.cfg.EphemeralNodeInactivityTimeout.
func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
@@ -144,64 +277,394 @@ func (h *Headscale) expireEphemeralNodesWorker() {
namespaces, err := h.ListNamespaces()
if err != nil {
log.Error().Err(err).Msg("Error listing namespaces")
return
}
for _, ns := range *namespaces {
machines, err := h.ListMachinesInNamespace(ns.Name)
for _, namespace := range namespaces {
machines, err := h.ListMachinesInNamespace(namespace.Name)
if err != nil {
log.Error().Err(err).Str("namespace", ns.Name).Msg("Error listing machines in namespace")
log.Error().
Err(err).
Str("namespace", namespace.Name).
Msg("Error listing machines in namespace")
return
}
for _, m := range *machines {
if m.AuthKey != nil && m.LastSeen != nil && m.AuthKey.Ephemeral && time.Now().After(m.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
log.Info().Str("machine", m.Name).Msg("Ephemeral client removed from database")
err = h.db.Unscoped().Delete(m).Error
for _, machine := range machines {
if machine.AuthKey != nil && machine.LastSeen != nil &&
machine.AuthKey.Ephemeral &&
time.Now().
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
log.Info().
Str("machine", machine.Name).
Msg("Ephemeral client removed from database")
err = h.db.Unscoped().Delete(machine).Error
if err != nil {
log.Error().Err(err).Str("machine", m.Name).Msg("🤮 Cannot delete ephemeral machine from the database")
log.Error().
Err(err).
Str("machine", machine.Name).
Msg("🤮 Cannot delete ephemeral machine from the database")
}
}
}
h.setLastStateChangeToNow(ns.Name)
h.setLastStateChangeToNow(namespace.Name)
}
}
// WatchForKVUpdates checks the KV DB table for requests to perform tailnet upgrades
// This is a way for the CLI to communicate with the headscale server
func (h *Headscale) watchForKVUpdates(milliSeconds int64) {
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
for range ticker.C {
h.watchForKVUpdatesWorker()
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
req interface{},
info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler) (interface{}, error) {
// Check if the request is coming from the on-server client.
// This is not secure, but it keeps compatibility
// with the "legacy" database-based client.
// It is also needed for grpc-gateway to be able to connect to
// the server.
client, _ := peer.FromContext(ctx)
log.Trace().
Caller().
Str("client_address", client.Addr.String()).
Msg("Client is trying to authenticate")
meta, ok := metadata.FromIncomingContext(ctx)
if !ok {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg("Retrieving metadata is failed")
return ctx, status.Errorf(
codes.InvalidArgument,
"Retrieving metadata is failed",
)
}
authHeader, ok := meta["authorization"]
if !ok {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg("Authorization token is not supplied")
return ctx, status.Errorf(
codes.Unauthenticated,
"Authorization token is not supplied",
)
}
token := authHeader[0]
if !strings.HasPrefix(token, AuthPrefix) {
log.Error().
Caller().
Str("client_address", client.Addr.String()).
Msg(`missing "Bearer " prefix in "Authorization" header`)
return ctx, status.Error(
codes.Unauthenticated,
`missing "Bearer " prefix in "Authorization" header`,
)
}
valid, err := h.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix))
if err != nil {
log.Error().
Caller().
Err(err).
Str("client_address", client.Addr.String()).
Msg("failed to validate token")
return ctx, status.Error(codes.Internal, "failed to validate token")
}
if !valid {
log.Info().
Str("client_address", client.Addr.String()).
Msg("invalid token")
return ctx, status.Error(codes.Unauthenticated, "invalid token")
}
return handler(ctx, req)
}
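To make the expected metadata concrete, here is a hedged sketch of a remote gRPC client attaching the API key this interceptor checks. It assumes the server allows insecure gRPC (swap in TLS credentials otherwise), and NewHeadscaleServiceClient is the constructor name protoc-gen-go-grpc conventionally generates for the service already used by the CLI in this diff:
package main

import (
	"context"
	"log"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/metadata"
)

func main() {
	// Placeholder address and API key.
	conn, err := grpc.Dial(
		"127.0.0.1:50443",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The interceptor reads the "authorization" metadata entry and requires
	// the "Bearer " prefix before validating the key.
	ctx := metadata.AppendToOutgoingContext(
		context.Background(),
		"authorization", "Bearer YOUR_API_KEY",
	)

	client := v1.NewHeadscaleServiceClient(conn)
	if _, err := client.ListNamespaces(ctx, &v1.ListNamespacesRequest{}); err != nil {
		log.Fatal(err)
	}
}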
func (h *Headscale) watchForKVUpdatesWorker() {
h.checkForNamespacesPendingUpdates()
// more functions will come here in the future
func (h *Headscale) httpAuthenticationMiddleware(ctx *gin.Context) {
log.Trace().
Caller().
Str("client_address", ctx.ClientIP()).
Msg("HTTP authentication invoked")
authHeader := ctx.GetHeader("authorization")
if !strings.HasPrefix(authHeader, AuthPrefix) {
log.Error().
Caller().
Str("client_address", ctx.ClientIP()).
Msg(`missing "Bearer " prefix in "Authorization" header`)
ctx.AbortWithStatus(http.StatusUnauthorized)
return
}
valid, err := h.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix))
if err != nil {
log.Error().
Caller().
Err(err).
Str("client_address", ctx.ClientIP()).
Msg("failed to validate token")
ctx.AbortWithStatus(http.StatusInternalServerError)
return
}
if !valid {
log.Info().
Str("client_address", ctx.ClientIP()).
Msg("invalid token")
ctx.AbortWithStatus(http.StatusUnauthorized)
return
}
ctx.Next()
}
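The /api group set up in the router below expects the same header over HTTP; a hedged sketch of a plain net/http client (the server URL, key and the exact grpc-gateway resource path are placeholders, only the /api/v1 prefix comes from the router):
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder URL and resource path.
	req, err := http.NewRequest(
		http.MethodGet,
		"http://127.0.0.1:8080/api/v1/namespace",
		nil,
	)
	if err != nil {
		log.Fatal(err)
	}
	// The middleware only strips the "Bearer " prefix and hands the rest
	// of the header value to ValidateAPIKey.
	req.Header.Set("Authorization", "Bearer YOUR_API_KEY")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}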
// Serve launches a GIN server with the Headscale API
// ensureUnixSocketIsAbsent checks whether a stale file already exists at headscale's
// configured unix socket path and removes it if it does.
func (h *Headscale) ensureUnixSocketIsAbsent() error {
// File does not exist, all fine
if _, err := os.Stat(h.cfg.UnixSocket); errors.Is(err, os.ErrNotExist) {
return nil
}
return os.Remove(h.cfg.UnixSocket)
}
func (h *Headscale) createPrometheusRouter() *gin.Engine {
promRouter := gin.Default()
prometheus := ginprometheus.NewPrometheus("gin")
prometheus.Use(promRouter)
return promRouter
}
func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *gin.Engine {
router := gin.Default()
router.GET(
"/health",
func(c *gin.Context) { c.JSON(http.StatusOK, gin.H{"healthy": "ok"}) },
)
router.GET("/key", h.KeyHandler)
router.GET("/register", h.RegisterWebAPI)
router.POST("/machine/:id/map", h.PollNetMapHandler)
router.POST("/machine/:id", h.RegistrationHandler)
router.GET("/oidc/register/:mkey", h.RegisterOIDC)
router.GET("/oidc/callback", h.OIDCCallback)
router.GET("/apple", h.AppleConfigMessage)
router.GET("/apple/:platform", h.ApplePlatformConfig)
router.GET("/windows", h.WindowsConfigMessage)
router.GET("/windows/tailscale.reg", h.WindowsRegConfig)
router.GET("/swagger", SwaggerUI)
router.GET("/swagger/v1/openapiv2.json", SwaggerAPIv1)
if h.cfg.DERP.ServerEnabled {
router.Any("/derp", h.DERPHandler)
router.Any("/derp/probe", h.DERPProbeHandler)
router.Any("/bootstrap-dns", h.DERPBootstrapDNSHandler)
}
api := router.Group("/api")
api.Use(h.httpAuthenticationMiddleware)
{
api.Any("/v1/*any", gin.WrapF(grpcMux.ServeHTTP))
}
router.NoRoute(stdoutHandler)
return router
}
// Serve launches a GIN server with the Headscale API.
func (h *Headscale) Serve() error {
r := gin.Default()
p := ginprometheus.NewPrometheus("gin")
p.Use(r)
r.GET("/health", func(c *gin.Context) { c.JSON(200, gin.H{"healthy": "ok"}) })
r.GET("/key", h.KeyHandler)
r.GET("/register", h.RegisterWebAPI)
r.POST("/machine/:id/map", h.PollNetMapHandler)
r.POST("/machine/:id", h.RegistrationHandler)
r.GET("/apple", h.AppleMobileConfig)
r.GET("/apple/:platform", h.ApplePlatformConfig)
var err error
go h.watchForKVUpdates(5000)
go h.expireEphemeralNodes(5000)
// Fetch an initial DERP Map before we start serving
h.DERPMap = GetDERPMap(h.cfg.DERP)
s := &http.Server{
if h.cfg.DERP.ServerEnabled {
// When embedded DERP is enabled we always need a STUN server
if h.cfg.DERP.STUNAddr == "" {
return errSTUNAddressNotSet
}
h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
go h.ServeSTUN()
}
if h.cfg.DERP.AutoUpdate {
derpMapCancelChannel := make(chan struct{})
defer func() { derpMapCancelChannel <- struct{}{} }()
go h.scheduledDERPMapUpdateWorker(derpMapCancelChannel)
}
go h.expireEphemeralNodes(updateInterval)
if zl.GlobalLevel() == zl.TraceLevel {
zerolog.RespLog = true
} else {
zerolog.RespLog = false
}
// Prepare group for running listeners
errorGroup := new(errgroup.Group)
ctx := context.Background()
ctx, cancel := context.WithCancel(ctx)
defer cancel()
//
//
// Set up LOCAL listeners
//
err = h.ensureUnixSocketIsAbsent()
if err != nil {
return fmt.Errorf("unable to remove old socket file: %w", err)
}
socketListener, err := net.Listen("unix", h.cfg.UnixSocket)
if err != nil {
return fmt.Errorf("failed to set up gRPC socket: %w", err)
}
// Change socket permissions
if err := os.Chmod(h.cfg.UnixSocket, h.cfg.UnixSocketPermission); err != nil {
return fmt.Errorf("failed change permission of gRPC socket: %w", err)
}
// Handle common process-killing signals so we can gracefully shut down:
sigc := make(chan os.Signal, 1)
signal.Notify(sigc, os.Interrupt, syscall.SIGTERM)
go func(c chan os.Signal) {
// Wait for a SIGINT or SIGTERM:
sig := <-c
log.Printf("Caught signal %s: shutting down.", sig)
// Stop listening (and unlink the socket if unix type):
socketListener.Close()
// And we're done:
os.Exit(0)
}(sigc)
grpcGatewayMux := runtime.NewServeMux()
// Make the grpc-gateway connect to gRPC over the socket
grpcGatewayConn, err := grpc.Dial(
h.cfg.UnixSocket,
[]grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(GrpcSocketDialer),
}...,
)
if err != nil {
return err
}
// Connect to the gRPC server over localhost to skip
// the authentication.
err = v1.RegisterHeadscaleServiceHandler(ctx, grpcGatewayMux, grpcGatewayConn)
if err != nil {
return err
}
// Start the local gRPC server without TLS and without authentication
grpcSocket := grpc.NewServer(zerolog.UnaryInterceptor())
v1.RegisterHeadscaleServiceServer(grpcSocket, newHeadscaleV1APIServer(h))
reflection.Register(grpcSocket)
errorGroup.Go(func() error { return grpcSocket.Serve(socketListener) })
//
//
// Set up REMOTE listeners
//
tlsConfig, err := h.getTLSSettings()
if err != nil {
log.Error().Err(err).Msg("Failed to set up TLS configuration")
return err
}
//
//
// gRPC setup
//
// We are sadly not able to run gRPC and HTTPS (2.0) on the same
// port because the connection mux does not support matching them
// since they are so similar. There are multiple issues open and we
// can revisit this if it changes:
// https://github.com/soheilhy/cmux/issues/68
// https://github.com/soheilhy/cmux/issues/91
if tlsConfig != nil || h.cfg.GRPCAllowInsecure {
log.Info().Msgf("Enabling remote gRPC at %s", h.cfg.GRPCAddr)
grpcOptions := []grpc.ServerOption{
grpc.UnaryInterceptor(
grpc_middleware.ChainUnaryServer(
h.grpcAuthenticationInterceptor,
zerolog.NewUnaryServerInterceptor(),
),
),
}
if tlsConfig != nil {
grpcOptions = append(grpcOptions,
grpc.Creds(credentials.NewTLS(tlsConfig)),
)
} else {
log.Warn().Msg("gRPC is running without security")
}
grpcServer := grpc.NewServer(grpcOptions...)
v1.RegisterHeadscaleServiceServer(grpcServer, newHeadscaleV1APIServer(h))
reflection.Register(grpcServer)
grpcListener, err := net.Listen("tcp", h.cfg.GRPCAddr)
if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err)
}
errorGroup.Go(func() error { return grpcServer.Serve(grpcListener) })
log.Info().
Msgf("listening and serving gRPC on: %s", h.cfg.GRPCAddr)
}
//
//
// HTTP setup
//
router := h.createRouter(grpcGatewayMux)
httpServer := &http.Server{
Addr: h.cfg.Addr,
Handler: r,
ReadTimeout: 30 * time.Second,
Handler: router,
ReadTimeout: HTTPReadTimeout,
// Go does not handle timeouts in HTTP very well, and there is
// no good way to handle streaming timeouts, therefore we need to
// keep this at unlimited and be careful to clean up connections
@@ -209,12 +672,55 @@ func (h *Headscale) Serve() error {
WriteTimeout: 0,
}
var httpListener net.Listener
if tlsConfig != nil {
httpServer.TLSConfig = tlsConfig
httpListener, err = tls.Listen("tcp", h.cfg.Addr, tlsConfig)
} else {
httpListener, err = net.Listen("tcp", h.cfg.Addr)
}
if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err)
}
errorGroup.Go(func() error { return httpServer.Serve(httpListener) })
log.Info().
Msgf("listening and serving HTTP on: %s", h.cfg.Addr)
promRouter := h.createPrometheusRouter()
promHTTPServer := &http.Server{
Addr: h.cfg.MetricsAddr,
Handler: promRouter,
ReadTimeout: HTTPReadTimeout,
WriteTimeout: 0,
}
var promHTTPListener net.Listener
promHTTPListener, err = net.Listen("tcp", h.cfg.MetricsAddr)
if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err)
}
errorGroup.Go(func() error { return promHTTPServer.Serve(promHTTPListener) })
log.Info().
Msgf("listening and serving metrics on: %s", h.cfg.MetricsAddr)
return errorGroup.Wait()
}
func (h *Headscale) getTLSSettings() (*tls.Config, error) {
var err error
if h.cfg.TLSLetsEncryptHostname != "" {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
log.Warn().
Msg("Listening with TLS but ServerURL does not start with https://")
}
m := autocert.Manager{
certManager := autocert.Manager{
Prompt: autocert.AcceptTOS,
HostPolicy: autocert.HostWhitelist(h.cfg.TLSLetsEncryptHostname),
Cache: autocert.DirCache(h.cfg.TLSLetsEncryptCacheDir),
@@ -224,38 +730,55 @@ func (h *Headscale) Serve() error {
Email: h.cfg.ACMEEmail,
}
s.TLSConfig = m.TLSConfig()
if h.cfg.TLSLetsEncryptChallengeType == "TLS-ALPN-01" {
switch h.cfg.TLSLetsEncryptChallengeType {
case "TLS-ALPN-01":
// Configuration via autocert with TLS-ALPN-01 (https://tools.ietf.org/html/rfc8737)
// The RFC requires that the validation is done on port 443; in other words, headscale
// must be reachable on port 443.
err = s.ListenAndServeTLS("", "")
} else if h.cfg.TLSLetsEncryptChallengeType == "HTTP-01" {
return certManager.TLSConfig(), nil
case "HTTP-01":
// Configuration via autocert with HTTP-01. This requires listening on
// port 80 for the certificate validation in addition to the headscale
// service, which can be configured to run on any other port.
go func() {
log.Fatal().
Err(http.ListenAndServe(h.cfg.TLSLetsEncryptListen, m.HTTPHandler(http.HandlerFunc(h.redirect)))).
Caller().
Err(http.ListenAndServe(h.cfg.TLSLetsEncryptListen, certManager.HTTPHandler(http.HandlerFunc(h.redirect)))).
Msg("failed to set up a HTTP server")
}()
err = s.ListenAndServeTLS("", "")
} else {
return errors.New("unknown value for TLSLetsEncryptChallengeType")
return certManager.TLSConfig(), nil
default:
return nil, errUnsupportedLetsEncryptChallengeType
}
} else if h.cfg.TLSCertPath == "" {
if !strings.HasPrefix(h.cfg.ServerURL, "http://") {
log.Warn().Msg("Listening without TLS but ServerURL does not start with http://")
}
err = s.ListenAndServe()
return nil, err
} else {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
}
err = s.ListenAndServeTLS(h.cfg.TLSCertPath, h.cfg.TLSKeyPath)
log.Info().Msg(fmt.Sprintf(
"Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
h.cfg.TLSClientAuthMode))
tlsConfig := &tls.Config{
ClientAuth: h.cfg.TLSClientAuthMode,
NextProtos: []string{"http/1.1"},
Certificates: make([]tls.Certificate, 1),
MinVersion: tls.VersionTLS12,
}
tlsConfig.Certificates[0], err = tls.LoadX509KeyPair(h.cfg.TLSCertPath, h.cfg.TLSKeyPath)
return tlsConfig, err
}
return err
}
func (h *Headscale) setLastStateChangeToNow(namespace string) {
@@ -273,7 +796,6 @@ func (h *Headscale) getLastStateChange(namespaces ...string) time.Time {
times = append(times, lastChange)
}
}
sort.Slice(times, func(i, j int) bool {
@@ -284,8 +806,62 @@ func (h *Headscale) getLastStateChange(namespaces ...string) time.Time {
if len(times) == 0 {
return time.Now().UTC()
} else {
return times[0]
}
}
func stdoutHandler(ctx *gin.Context) {
body, _ := io.ReadAll(ctx.Request.Body)
log.Trace().
Interface("header", ctx.Request.Header).
Interface("proto", ctx.Request.Proto).
Interface("url", ctx.Request.URL).
Bytes("body", body).
Msg("Request did not match")
}
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
privateKey, err := os.ReadFile(path)
if errors.Is(err, os.ErrNotExist) {
log.Info().Str("path", path).Msg("No private key file at path, creating...")
machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText()
if err != nil {
return nil, fmt.Errorf(
"failed to convert private key to string for saving: %w",
err,
)
}
err = os.WriteFile(path, machineKeyStr, privateKeyFileMode)
if err != nil {
return nil, fmt.Errorf(
"failed to save private key to disk: %w",
err,
)
}
return &machineKey, nil
} else if err != nil {
return nil, fmt.Errorf("failed to read private key file: %w", err)
}
trimmedPrivateKey := strings.TrimSpace(string(privateKey))
privateKeyEnsurePrefix := PrivateKeyEnsurePrefix(trimmedPrivateKey)
var machineKey key.MachinePrivate
if err = machineKey.UnmarshalText([]byte(privateKeyEnsurePrefix)); err != nil {
log.Info().
Str("path", path).
Msg("This might be due to a legacy (headscale pre-0.12) private key. " +
"If the key is in WireGuard format, delete the key and restart headscale. " +
"A new key will automatically be generated. All Tailscale clients will have to be restarted")
return nil, fmt.Errorf("failed to parse private key: %w", err)
}
return &machineKey, nil
}
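For context, a small standalone sketch of the key round trip this helper performs; key.NewMachine, MarshalText and UnmarshalText are the same tailscale.com/types/key calls used above, so this only illustrates the serialization, not headscale's file handling:
package main

import (
	"fmt"
	"log"

	"tailscale.com/types/key"
)

func main() {
	// Generate a fresh machine key, as readOrCreatePrivateKey does when no
	// key file exists yet.
	machineKey := key.NewMachine()

	// MarshalText produces the prefixed text form that gets written to disk.
	text, err := machineKey.MarshalText()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("on-disk form: %s\n", text)

	// Reading it back mirrors the UnmarshalText call used when the file exists.
	var parsed key.MachinePrivate
	if err := parsed.UnmarshalText(text); err != nil {
		log.Fatal(err)
	}
	fmt.Println("derived public key:", parsed.Public().String())
}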


@@ -17,8 +17,10 @@ var _ = check.Suite(&Suite{})
type Suite struct{}
var tmpDir string
var h Headscale
var (
tmpDir string
app Headscale
)
func (s *Suite) SetUpTest(c *check.C) {
s.ResetDB(c)
@@ -38,21 +40,40 @@ func (s *Suite) ResetDB(c *check.C) {
c.Fatal(err)
}
cfg := Config{
IPPrefix: netaddr.MustParseIPPrefix("10.27.0.0/23"),
IPPrefixes: []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("10.27.0.0/23"),
},
}
h = Headscale{
app = Headscale{
cfg: cfg,
dbType: "sqlite3",
dbString: tmpDir + "/headscale_test.db",
}
err = h.initDB()
err = app.initDB()
if err != nil {
c.Fatal(err)
}
db, err := h.openDB()
db, err := app.openDB()
if err != nil {
c.Fatal(err)
}
h.db = db
app.db = db
}
// Ensure an invalid auth mode
// is reported as not valid.
func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
_, isValid := LookupTLSClientAuthMode("invalid")
c.Assert(isValid, check.Equals, false)
}
// Ensure that all known client auth modes are accepted as valid.
func (s *Suite) TestAuthModes(c *check.C) {
modes := []string{"disabled", "relaxed", "enforced"}
for _, v := range modes {
_, isValid := LookupTLSClientAuthMode(v)
c.Assert(isValid, check.Equals, true)
}
}


@@ -1,226 +0,0 @@
package headscale
import (
"bytes"
"net/http"
"text/template"
"github.com/rs/zerolog/log"
"github.com/gin-gonic/gin"
"github.com/gofrs/uuid"
)
// AppleMobileConfig shows a simple message in the browser to point to the CLI.
// Served at /apple
func (h *Headscale) AppleMobileConfig(c *gin.Context) {
t := template.Must(template.New("apple").Parse(`
<html>
<body>
<h1>Apple configuration profiles</h1>
<p>
This page provides <a href="https://support.apple.com/guide/mdm/mdm-overview-mdmbf9e668/web">configuration profiles</a> for the official Tailscale clients for <a href="https://apps.apple.com/us/app/tailscale/id1470499037?ls=1">iOS</a> and <a href="https://apps.apple.com/ca/app/tailscale/id1475387142?mt=12">macOS</a>.
</p>
<p>
The profiles will configure Tailscale.app to use {{.Url}} as its control server.
</p>
<h3>Caution</h3>
<p>You should always inspect the profile before installing it:</p>
<!--
<p><code>curl {{.Url}}/apple/ios</code></p>
-->
<p><code>curl {{.Url}}/apple/macos</code></p>
<h2>Profiles</h2>
<!--
<h3>iOS</h3>
<p>
<a href="/apple/ios" download="headscale_ios.mobileconfig">iOS profile</a>
</p>
-->
<h3>macOS</h3>
<p>Headscale can be set to the default server by installing a Headscale configuration profile:</p>
<p>
<a href="/apple/macos" download="headscale_macos.mobileconfig">macOS profile</a>
</p>
<ol>
<li>Download the profile, then open it. When it has been opened, there should be a notification that a profile can be installed</li>
<li>Open System Preferences and go to "Profiles"</li>
<li>Find and install the Headscale profile</li>
<li>Restart Tailscale.app and log in</li>
</ol>
<p>Or</p>
<p>Use your terminal to configure the default setting for Tailscale by issuing:</p>
<code>defaults write io.tailscale.ipn.macos ControlURL {{.Url}}</code>
<p>Restart Tailscale.app and log in.</p>
</body>
</html>`))
config := map[string]interface{}{
"Url": h.cfg.ServerURL,
}
var payload bytes.Buffer
if err := t.Execute(&payload, config); err != nil {
log.Error().
Str("handler", "AppleMobileConfig").
Err(err).
Msg("Could not render Apple index template")
c.Data(http.StatusInternalServerError, "text/html; charset=utf-8", []byte("Could not render Apple index template"))
return
}
c.Data(http.StatusOK, "text/html; charset=utf-8", payload.Bytes())
}
func (h *Headscale) ApplePlatformConfig(c *gin.Context) {
platform := c.Param("platform")
id, err := uuid.NewV4()
if err != nil {
log.Error().
Str("handler", "ApplePlatformConfig").
Err(err).
Msg("Failed not create UUID")
c.Data(http.StatusInternalServerError, "text/html; charset=utf-8", []byte("Failed to create UUID"))
return
}
contentId, err := uuid.NewV4()
if err != nil {
log.Error().
Str("handler", "ApplePlatformConfig").
Err(err).
Msg("Failed not create UUID")
c.Data(http.StatusInternalServerError, "text/html; charset=utf-8", []byte("Failed to create UUID"))
return
}
platformConfig := AppleMobilePlatformConfig{
UUID: contentId,
Url: h.cfg.ServerURL,
}
var payload bytes.Buffer
switch platform {
case "macos":
if err := macosTemplate.Execute(&payload, platformConfig); err != nil {
log.Error().
Str("handler", "ApplePlatformConfig").
Err(err).
Msg("Could not render Apple macOS template")
c.Data(http.StatusInternalServerError, "text/html; charset=utf-8", []byte("Could not render Apple macOS template"))
return
}
case "ios":
if err := iosTemplate.Execute(&payload, platformConfig); err != nil {
log.Error().
Str("handler", "ApplePlatformConfig").
Err(err).
Msg("Could not render Apple iOS template")
c.Data(http.StatusInternalServerError, "text/html; charset=utf-8", []byte("Could not render Apple iOS template"))
return
}
default:
c.Data(http.StatusOK, "text/html; charset=utf-8", []byte("Invalid platform, only ios and macos are supported"))
return
}
config := AppleMobileConfig{
UUID: id,
Url: h.cfg.ServerURL,
Payload: payload.String(),
}
var content bytes.Buffer
if err := commonTemplate.Execute(&content, config); err != nil {
log.Error().
Str("handler", "ApplePlatformConfig").
Err(err).
Msg("Could not render Apple platform template")
c.Data(http.StatusInternalServerError, "text/html; charset=utf-8", []byte("Could not render Apple platform template"))
return
}
c.Data(http.StatusOK, "application/x-apple-aspen-config; charset=utf-8", content.Bytes())
}
type AppleMobileConfig struct {
UUID uuid.UUID
Url string
Payload string
}
type AppleMobilePlatformConfig struct {
UUID uuid.UUID
Url string
}
var commonTemplate = template.Must(template.New("mobileconfig").Parse(`<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>PayloadUUID</key>
<string>{{.UUID}}</string>
<key>PayloadDisplayName</key>
<string>Headscale</string>
<key>PayloadDescription</key>
<string>Configure Tailscale login server to: {{.Url}}</string>
<key>PayloadIdentifier</key>
<string>com.github.juanfont.headscale</string>
<key>PayloadRemovalDisallowed</key>
<false/>
<key>PayloadType</key>
<string>Configuration</string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadContent</key>
<array>
{{.Payload}}
</array>
</dict>
</plist>`))
var iosTemplate = template.Must(template.New("iosTemplate").Parse(`
<dict>
<key>PayloadType</key>
<string>io.tailscale.ipn.ios</string>
<key>PayloadUUID</key>
<string>{{.UUID}}</string>
<key>PayloadIdentifier</key>
<string>com.github.juanfont.headscale</string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadEnabled</key>
<true/>
<key>ControlURL</key>
<string>{{.Url}}</string>
</dict>
`))
var macosTemplate = template.Must(template.New("macosTemplate").Parse(`
<dict>
<key>PayloadType</key>
<string>io.tailscale.ipn.macos</string>
<key>PayloadUUID</key>
<string>{{.UUID}}</string>
<key>PayloadIdentifier</key>
<string>com.github.juanfont.headscale</string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadEnabled</key>
<true/>
<key>ControlURL</key>
<string>{{.Url}}</string>
</dict>
`))

buf.gen.yaml Normal file

@@ -0,0 +1,21 @@
version: v1
plugins:
- name: go
out: gen/go
opt:
- paths=source_relative
- name: go-grpc
out: gen/go
opt:
- paths=source_relative
- name: grpc-gateway
out: gen/go
opt:
- paths=source_relative
- generate_unbound_methods=true
# - name: gorm
# out: gen/go
# opt:
# - paths=source_relative,enums=string,gateway=true
- name: openapiv2
out: gen/openapiv2

cli.go

@@ -1,40 +0,0 @@
package headscale
import (
"errors"
"gorm.io/gorm"
"tailscale.com/types/wgkey"
)
// RegisterMachine is executed from the CLI to register a new Machine using its MachineKey
func (h *Headscale) RegisterMachine(key string, namespace string) (*Machine, error) {
ns, err := h.GetNamespace(namespace)
if err != nil {
return nil, err
}
mKey, err := wgkey.ParseHex(key)
if err != nil {
return nil, err
}
m := Machine{}
if result := h.db.First(&m, "machine_key = ?", mKey.HexString()); errors.Is(result.Error, gorm.ErrRecordNotFound) {
return nil, errors.New("Machine not found")
}
if m.isAlreadyRegistered() {
return nil, errors.New("Machine already registered")
}
ip, err := h.getAvailableIP()
if err != nil {
return nil, err
}
m.IPAddress = ip.String()
m.NamespaceID = ns.ID
m.Registered = true
m.RegisterMethod = "cli"
h.db.Save(&m)
return &m, nil
}


@@ -1,31 +0,0 @@
package headscale
import (
"gopkg.in/check.v1"
)
func (s *Suite) TestRegisterMachine(c *check.C) {
n, err := h.CreateNamespace("test")
c.Assert(err, check.IsNil)
m := Machine{
ID: 0,
MachineKey: "8ce002a935f8c394e55e78fbbb410576575ff8ec5cfa2e627e4b807f1be15b0e",
NodeKey: "bar",
DiscoKey: "faa",
Name: "testmachine",
NamespaceID: n.ID,
IPAddress: "10.0.0.1",
}
h.db.Save(&m)
_, err = h.GetMachine("test", "testmachine")
c.Assert(err, check.IsNil)
m2, err := h.RegisterMachine("8ce002a935f8c394e55e78fbbb410576575ff8ec5cfa2e627e4b807f1be15b0e", n.Name)
c.Assert(err, check.IsNil)
c.Assert(m2.Registered, check.Equals, true)
_, err = m2.GetHostInfo()
c.Assert(err, check.IsNil)
}


@@ -0,0 +1,186 @@
package cli
import (
"fmt"
"strconv"
"time"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
// 90 days.
DefaultAPIKeyExpiry = 90 * 24 * time.Hour
)
func init() {
rootCmd.AddCommand(apiKeysCmd)
apiKeysCmd.AddCommand(listAPIKeys)
createAPIKeyCmd.Flags().
DurationP("expiration", "e", DefaultAPIKeyExpiry, "Human-readable expiration of the key (30m, 24h, 365d...)")
apiKeysCmd.AddCommand(createAPIKeyCmd)
expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
err := expireAPIKeyCmd.MarkFlagRequired("prefix")
if err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(expireAPIKeyCmd)
}
var apiKeysCmd = &cobra.Command{
Use: "apikeys",
Short: "Handle the Api keys in Headscale",
Aliases: []string{"apikey", "api"},
}
var listAPIKeys = &cobra.Command{
Use: "list",
Short: "List the Api keys for headscale",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ListApiKeysRequest{}
response, err := client.ListApiKeys(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
return
}
if output != "" {
SuccessOutput(response.ApiKeys, "", output)
return
}
tableData := pterm.TableData{
{"ID", "Prefix", "Expiration", "Created"},
}
for _, key := range response.ApiKeys {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.Expiration.AsTime())
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), headscale.Base10),
key.GetPrefix(),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
})
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var createAPIKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new Api key",
Long: `
Creates a new Api key. The Api key is only visible on creation
and cannot be retrieved again.
If you lose a key, create a new one and revoke (expire) the old one.`,
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
log.Trace().
Msg("Preparing to create ApiKey")
request := &v1.CreateApiKeyRequest{}
duration, _ := cmd.Flags().GetDuration("expiration")
expiration := time.Now().UTC().Add(duration)
log.Trace().Dur("expiration", duration).Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.CreateApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response.ApiKey, response.ApiKey, output)
},
}
var expireAPIKeyCmd = &cobra.Command{
Use: "expire",
Short: "Expire an ApiKey",
Aliases: []string{"revoke", "exp", "e"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
prefix, err := cmd.Flags().GetString("prefix")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ExpireApiKeyRequest{
Prefix: prefix,
}
response, err := client.ExpireApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot expire Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response, "Key expired", output)
},
}

cmd/headscale/cli/debug.go Normal file

@@ -0,0 +1,132 @@
package cli
import (
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
const (
keyLength = 64
errPreAuthKeyTooShort = Error("key too short, must be 64 hexadecimal characters")
)
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
type Error string
func (e Error) Error() string { return string(e) }
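A tiny hedged illustration of the constant-error pattern the comment above links to: because Error's underlying type is string, sentinel values can be declared as constants and compared directly or with errors.Is (the validateKey helper is made up for this example):
package main

import (
	"errors"
	"fmt"
)

// Error is a string-backed error type, so its values can be untyped constants.
type Error string

func (e Error) Error() string { return string(e) }

const errKeyTooShort = Error("key too short, must be 64 hexadecimal characters")

func validateKey(machineKey string) error {
	if len(machineKey) != 64 {
		return errKeyTooShort
	}
	return nil
}

func main() {
	err := validateKey("abc")
	// Constant errors compare by value, so errors.Is (or ==) works without
	// wrapping or sentinel variables.
	fmt.Println(errors.Is(err, errKeyTooShort)) // true
}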
func init() {
rootCmd.AddCommand(debugCmd)
createNodeCmd.Flags().StringP("name", "", "", "Name")
err := createNodeCmd.MarkFlagRequired("name")
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err = createNodeCmd.MarkFlagRequired("namespace")
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().StringP("key", "k", "", "Key")
err = createNodeCmd.MarkFlagRequired("key")
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to advertise")
debugCmd.AddCommand(createNodeCmd)
}
var debugCmd = &cobra.Command{
Use: "debug",
Short: "debug and testing commands",
Long: "debug contains extra commands used for debugging and testing headscale",
}
var createNodeCmd = &cobra.Command{
Use: "create-node",
Short: "Create a node (machine) that can be registered with `nodes register <>` command",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
name, err := cmd.Flags().GetString("name")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting node from flag: %s", err),
output,
)
return
}
machineKey, err := cmd.Flags().GetString("key")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting key from flag: %s", err),
output,
)
return
}
if len(machineKey) != keyLength {
err = errPreAuthKeyTooShort
ErrorOutput(
err,
fmt.Sprintf("Error: %s", err),
output,
)
return
}
routes, err := cmd.Flags().GetStringSlice("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
return
}
request := &v1.DebugCreateMachineRequest{
Key: machineKey,
Name: name,
Namespace: namespace,
Routes: routes,
}
response, err := client.DebugCreateMachine(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create machine: %s", status.Convert(err).Message()),
output,
)
return
}
SuccessOutput(response.Machine, "Machine created", output)
},
}


@@ -0,0 +1,42 @@
package cli
import (
"fmt"
"github.com/spf13/cobra"
"tailscale.com/types/key"
)
func init() {
rootCmd.AddCommand(generateCmd)
generateCmd.AddCommand(generatePrivateKeyCmd)
}
var generateCmd = &cobra.Command{
Use: "generate",
Short: "Generate commands",
Aliases: []string{"gen"},
}
var generatePrivateKeyCmd = &cobra.Command{
Use: "private-key",
Short: "Generate a private key for the headscale server",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine key from flag: %s", err),
output,
)
}
SuccessOutput(map[string]string{
"private_key": string(machineKeyStr),
},
string(machineKeyStr), output)
},
}


@@ -2,12 +2,14 @@ package cli
import (
"fmt"
"log"
"strconv"
"strings"
survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
func init() {
@@ -18,121 +20,224 @@ func init() {
namespaceCmd.AddCommand(renameNamespaceCmd)
}
const (
errMissingParameter = headscale.Error("missing parameters")
)
var namespaceCmd = &cobra.Command{
Use: "namespaces",
Short: "Manage the namespaces of Headscale",
Use: "namespaces",
Short: "Manage the namespaces of Headscale",
Aliases: []string{"namespace", "ns", "user", "users"},
}
var createNamespaceCmd = &cobra.Command{
Use: "create NAME",
Short: "Creates a new namespace",
Use: "create NAME",
Short: "Creates a new namespace",
Aliases: []string{"c", "new"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("Missing parameters")
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
o, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
output, _ := cmd.Flags().GetString("output")
namespaceName := args[0]
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
request := &v1.CreateNamespaceRequest{Name: namespaceName}
log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
response, err := client.CreateNamespace(ctx, request)
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
namespace, err := h.CreateNamespace(args[0])
if strings.HasPrefix(o, "json") {
JsonOutput(namespace, err, o)
ErrorOutput(
err,
fmt.Sprintf(
"Cannot create namespace: %s",
status.Convert(err).Message(),
),
output,
)
return
}
if err != nil {
fmt.Printf("Error creating namespace: %s\n", err)
return
}
fmt.Printf("Namespace created\n")
SuccessOutput(response.Namespace, "Namespace created", output)
},
}
var destroyNamespaceCmd = &cobra.Command{
Use: "destroy NAME",
Short: "Destroys a namespace",
Use: "destroy NAME",
Short: "Destroys a namespace",
Aliases: []string{"delete"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("Missing parameters")
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
o, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
output, _ := cmd.Flags().GetString("output")
namespaceName := args[0]
request := &v1.GetNamespaceRequest{
Name: namespaceName,
}
err = h.DestroyNamespace(args[0])
if strings.HasPrefix(o, "json") {
JsonOutput(map[string]string{"Result": "Namespace destroyed"}, err, o)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
_, err := client.GetNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error: %s", status.Convert(err).Message()),
output,
)
return
}
if err != nil {
fmt.Printf("Error destroying namespace: %s\n", err)
return
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the namespace '%s' and any associated preauthkeys?",
namespaceName,
),
}
err := survey.AskOne(prompt, &confirm)
if err != nil {
return
}
}
if confirm || force {
request := &v1.DeleteNamespaceRequest{Name: namespaceName}
response, err := client.DeleteNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot destroy namespace: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(response, "Namespace destroyed", output)
} else {
SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
}
fmt.Printf("Namespace destroyed\n")
},
}
var listNamespacesCmd = &cobra.Command{
Use: "list",
Short: "List all the namespaces",
Use: "list",
Short: "List all the namespaces",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
o, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ListNamespacesRequest{}
response, err := client.ListNamespaces(ctx, request)
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
namespaces, err := h.ListNamespaces()
if strings.HasPrefix(o, "json") {
JsonOutput(namespaces, err, o)
return
}
if err != nil {
fmt.Println(err)
ErrorOutput(
err,
fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
output,
)
return
}
d := pterm.TableData{{"ID", "Name", "Created"}}
for _, n := range *namespaces {
d = append(d, []string{strconv.FormatUint(uint64(n.ID), 10), n.Name, n.CreatedAt.Format("2006-01-02 15:04:05")})
if output != "" {
SuccessOutput(response.Namespaces, "", output)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(d).Render()
tableData := pterm.TableData{{"ID", "Name", "Created"}}
for _, namespace := range response.GetNamespaces() {
tableData = append(
tableData,
[]string{
namespace.GetId(),
namespace.GetName(),
namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
log.Fatal(err)
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var renameNamespaceCmd = &cobra.Command{
Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a namespace",
Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a namespace",
Aliases: []string{"mv"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
return fmt.Errorf("Missing parameters")
expectedArguments := 2
if len(args) < expectedArguments {
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
o, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.RenameNamespaceRequest{
OldName: args[0],
NewName: args[1],
}
err = h.RenameNamespace(args[0], args[1])
if strings.HasPrefix(o, "json") {
JsonOutput(map[string]string{"Result": "Namespace renamed"}, err, o)
response, err := client.RenameNamespace(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot rename namespace: %s",
status.Convert(err).Message(),
),
output,
)
return
}
if err != nil {
fmt.Printf("Error renaming namespace: %s\n", err)
return
}
fmt.Printf("Namespace renamed\n")
SuccessOutput(response.Namespace, "Namespace renamed", output)
},
}


@@ -9,146 +9,256 @@ import (
survey "github.com/AlecAivazis/survey/v2"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"tailscale.com/tailcfg"
"tailscale.com/types/wgkey"
"google.golang.org/grpc/status"
"tailscale.com/types/key"
)
func init() {
rootCmd.AddCommand(nodeCmd)
nodeCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := nodeCmd.MarkPersistentFlagRequired("namespace")
listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
nodeCmd.AddCommand(listNodesCmd)
registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err := registerNodeCmd.MarkFlagRequired("namespace")
if err != nil {
log.Fatalf(err.Error())
}
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
err = registerNodeCmd.MarkFlagRequired("key")
if err != nil {
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(listNodesCmd)
nodeCmd.AddCommand(registerNodeCmd)
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = expireNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(expireNodeCmd)
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = deleteNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(deleteNodeCmd)
nodeCmd.AddCommand(shareMachineCmd)
nodeCmd.AddCommand(unshareMachineCmd)
}
var nodeCmd = &cobra.Command{
Use: "nodes",
Short: "Manage the nodes of Headscale",
Use: "nodes",
Short: "Manage the nodes of Headscale",
Aliases: []string{"node", "machine", "machines"},
}
var registerNodeCmd = &cobra.Command{
Use: "register machineID",
Use: "register",
Short: "Registers a machine to your network",
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("missing parameters")
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
o, _ := cmd.Flags().GetString("output")
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
m, err := h.RegisterMachine(args[0], n)
if strings.HasPrefix(o, "json") {
JsonOutput(m, err, o)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
machineKey, err := cmd.Flags().GetString("key")
if err != nil {
fmt.Printf("Cannot register machine: %s\n", err)
ErrorOutput(
err,
fmt.Sprintf("Error getting machine key from flag: %s", err),
output,
)
return
}
fmt.Printf("Machine registered\n")
request := &v1.RegisterMachineRequest{
Key: machineKey,
Namespace: namespace,
}
response, err := client.RegisterMachine(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(response.Machine, "Machine registered", output)
},
}
var listNodesCmd = &cobra.Command{
Use: "list",
Short: "List the nodes in a given namespace",
Use: "list",
Short: "List nodes",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
o, _ := cmd.Flags().GetString("output")
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
namespace, err := h.GetNamespace(n)
if err != nil {
log.Fatalf("Error fetching namespace: %s", err)
}
machines, err := h.ListMachinesInNamespace(n)
if err != nil {
log.Fatalf("Error fetching machines: %s", err)
}
sharedMachines, err := h.ListSharedMachinesInNamespace(n)
if err != nil {
log.Fatalf("Error fetching shared machines: %s", err)
}
allMachines := append(*machines, *sharedMachines...)
if strings.HasPrefix(o, "json") {
JsonOutput(allMachines, err, o)
return
}
if err != nil {
log.Fatalf("Error getting nodes: %s", err)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ListMachinesRequest{
Namespace: namespace,
}
d, err := nodesToPtables(*namespace, allMachines)
response, err := client.ListMachines(ctx, request)
if err != nil {
log.Fatalf("Error converting to table: %s", err)
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(d).Render()
if output != "" {
SuccessOutput(response.Machines, "", output)
return
}
tableData, err := nodesToPtables(namespace, response.Machines)
if err != nil {
log.Fatal(err)
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var deleteNodeCmd = &cobra.Command{
Use: "delete ID",
Short: "Delete a node",
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("missing parameters")
}
return nil
},
var expireNodeCmd = &cobra.Command{
Use: "expire",
Short: "Expire (log out) a machine in your network",
Long: "Expiring a node will keep the node in the database and force it to reauthenticate.",
Aliases: []string{"logout", "exp", "e"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
log.Fatalf("Error initializing: %s", err)
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
id, err := strconv.Atoi(args[0])
if err != nil {
log.Fatalf("Error converting ID to integer: %s", err)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ExpireMachineRequest{
MachineId: identifier,
}
m, err := h.GetMachineByID(uint64(id))
response, err := client.ExpireMachine(ctx, request)
if err != nil {
log.Fatalf("Error getting node: %s", err)
ErrorOutput(
err,
fmt.Sprintf(
"Cannot expire machine: %s\n",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(response.Machine, "Machine expired", output)
},
}
var deleteNodeCmd = &cobra.Command{
Use: "delete",
Short: "Delete a node",
Aliases: []string{"del"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
getRequest := &v1.GetMachineRequest{
MachineId: identifier,
}
getResponse, err := client.GetMachine(ctx, getRequest)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Error getting node node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
deleteRequest := &v1.DeleteMachineRequest{
MachineId: identifier,
}
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
prompt := &survey.Confirm{
Message: fmt.Sprintf("Do you want to remove the node %s?", m.Name),
Message: fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetMachine().Name,
),
}
err = survey.AskOne(prompt, &confirm)
if err != nil {
@@ -157,162 +267,117 @@ var deleteNodeCmd = &cobra.Command{
}
if confirm || force {
err = h.DeleteMachine(m)
if strings.HasPrefix(output, "json") {
JsonOutput(map[string]string{"Result": "Node deleted"}, err, output)
response, err := client.DeleteMachine(ctx, deleteRequest)
if output != "" {
SuccessOutput(response, "", output)
return
}
if err != nil {
log.Fatalf("Error deleting node: %s", err)
}
fmt.Printf("Node deleted\n")
} else {
if strings.HasPrefix(output, "json") {
JsonOutput(map[string]string{"Result": "Node not deleted"}, err, output)
ErrorOutput(
err,
fmt.Sprintf(
"Error deleting node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
fmt.Printf("Node not deleted\n")
SuccessOutput(
map[string]string{"Result": "Node deleted"},
"Node deleted",
output,
)
} else {
SuccessOutput(map[string]string{"Result": "Node not deleted"}, "Node not deleted", output)
}
},
}
var shareMachineCmd = &cobra.Command{
Use: "share ID namespace",
Short: "Shares a node from the current namespace to the specified one",
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 2 {
return fmt.Errorf("missing parameters")
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
output, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
_, err = h.GetNamespace(namespace)
if err != nil {
log.Fatalf("Error fetching origin namespace: %s", err)
}
destinationNamespace, err := h.GetNamespace(args[1])
if err != nil {
log.Fatalf("Error fetching destination namespace: %s", err)
}
id, err := strconv.Atoi(args[0])
if err != nil {
log.Fatalf("Error converting ID to integer: %s", err)
}
machine, err := h.GetMachineByID(uint64(id))
if err != nil {
log.Fatalf("Error getting node: %s", err)
}
err = h.AddSharedMachineToNamespace(machine, destinationNamespace)
if strings.HasPrefix(output, "json") {
JsonOutput(map[string]string{"Result": "Node shared"}, err, output)
return
}
if err != nil {
fmt.Printf("Error sharing node: %s\n", err)
return
}
fmt.Println("Node shared!")
},
}
var unshareMachineCmd = &cobra.Command{
Use: "unshare ID",
Short: "Unshares a node from the specified namespace",
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("missing parameters")
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
output, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
n, err := h.GetNamespace(namespace)
if err != nil {
log.Fatalf("Error fetching namespace: %s", err)
}
id, err := strconv.Atoi(args[0])
if err != nil {
log.Fatalf("Error converting ID to integer: %s", err)
}
machine, err := h.GetMachineByID(uint64(id))
if err != nil {
log.Fatalf("Error getting node: %s", err)
}
err = h.RemoveSharedMachineFromNamespace(machine, n)
if strings.HasPrefix(output, "json") {
JsonOutput(map[string]string{"Result": "Node unshared"}, err, output)
return
}
if err != nil {
fmt.Printf("Error unsharing node: %s\n", err)
return
}
fmt.Println("Node unshared!")
},
}
func nodesToPtables(currentNamespace headscale.Namespace, machines []headscale.Machine) (pterm.TableData, error) {
d := pterm.TableData{{"ID", "Name", "NodeKey", "Namespace", "IP address", "Ephemeral", "Last seen", "Online"}}
func nodesToPtables(
currentNamespace string,
machines []*v1.Machine,
) (pterm.TableData, error) {
tableData := pterm.TableData{
{
"ID",
"Name",
"NodeKey",
"Namespace",
"IP addresses",
"Ephemeral",
"Last seen",
"Online",
"Expired",
},
}
for _, machine := range machines {
var ephemeral bool
if machine.AuthKey != nil && machine.AuthKey.Ephemeral {
if machine.PreAuthKey != nil && machine.PreAuthKey.Ephemeral {
ephemeral = true
}
var lastSeen time.Time
var lastSeenTime string
if machine.LastSeen != nil {
lastSeen = *machine.LastSeen
lastSeen = machine.LastSeen.AsTime()
lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
}
nKey, err := wgkey.ParseHex(machine.NodeKey)
var expiry time.Time
if machine.Expiry != nil {
expiry = machine.Expiry.AsTime()
}
var nodeKey key.NodePublic
err := nodeKey.UnmarshalText(
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
)
if err != nil {
return nil, err
}
nodeKey := tailcfg.NodeKey(nKey)
var online string
if lastSeen.After(time.Now().Add(-5 * time.Minute)) { // TODO: Find a better way to reliably show if online
online = pterm.LightGreen("true")
if lastSeen.After(
time.Now().Add(-5 * time.Minute),
) { // TODO: Find a better way to reliably show if online
online = pterm.LightGreen("online")
} else {
online = pterm.LightRed("false")
online = pterm.LightRed("offline")
}
var expired string
if expiry.IsZero() || expiry.After(time.Now()) {
expired = pterm.LightGreen("no")
} else {
expired = pterm.LightRed("yes")
}
var namespace string
if currentNamespace.ID == machine.NamespaceID {
if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
namespace = pterm.LightMagenta(machine.Namespace.Name)
} else {
// Shared into this namespace
namespace = pterm.LightYellow(machine.Namespace.Name)
}
d = append(d, []string{strconv.FormatUint(machine.ID, 10), machine.Name, nodeKey.ShortString(), namespace, machine.IPAddress, strconv.FormatBool(ephemeral), lastSeenTime, online})
tableData = append(
tableData,
[]string{
strconv.FormatUint(machine.Id, headscale.Base10),
machine.Name,
nodeKey.ShortString(),
namespace,
strings.Join(machine.IpAddresses, ", "),
strconv.FormatBool(ephemeral),
lastSeenTime,
online,
expired,
},
)
}
return d, nil
return tableData, nil
}


@@ -2,14 +2,18 @@ package cli
import (
"fmt"
"log"
"strconv"
"strings"
"time"
"github.com/hako/durafmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
DefaultPreAuthKeyExpiry = 1 * time.Hour
)
func init() {
@@ -17,158 +21,199 @@ func init() {
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
if err != nil {
log.Fatalf(err.Error())
log.Fatal().Err(err).Msg("")
}
preauthkeysCmd.AddCommand(listPreAuthKeys)
preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
createPreAuthKeyCmd.PersistentFlags().Bool("reusable", false, "Make the preauthkey reusable")
createPreAuthKeyCmd.PersistentFlags().Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
createPreAuthKeyCmd.Flags().StringP("expiration", "e", "", "Human-readable expiration of the key (30m, 24h, 365d...)")
createPreAuthKeyCmd.PersistentFlags().
Bool("reusable", false, "Make the preauthkey reusable")
createPreAuthKeyCmd.PersistentFlags().
Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
createPreAuthKeyCmd.Flags().
DurationP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (30m, 24h, 365d...)")
}
var preauthkeysCmd = &cobra.Command{
Use: "preauthkeys",
Short: "Handle the preauthkeys in Headscale",
Use: "preauthkeys",
Short: "Handle the preauthkeys in Headscale",
Aliases: []string{"preauthkey", "authkey", "pre"},
}
var listPreAuthKeys = &cobra.Command{
Use: "list",
Short: "List the preauthkeys for this namespace",
Use: "list",
Short: "List the preauthkeys for this namespace",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
o, _ := cmd.Flags().GetString("output")
output, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
keys, err := h.GetPreAuthKeys(n)
if strings.HasPrefix(o, "json") {
JsonOutput(keys, err, o)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ListPreAuthKeysRequest{
Namespace: namespace,
}
response, err := client.ListPreAuthKeys(ctx, request)
if err != nil {
fmt.Printf("Error getting the list of keys: %s\n", err)
ErrorOutput(
err,
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
return
}
d := pterm.TableData{{"ID", "Key", "Reusable", "Ephemeral", "Used", "Expiration", "Created"}}
for _, k := range *keys {
if output != "" {
SuccessOutput(response.PreAuthKeys, "", output)
return
}
tableData := pterm.TableData{
{"ID", "Key", "Reusable", "Ephemeral", "Used", "Expiration", "Created"},
}
for _, key := range response.PreAuthKeys {
expiration := "-"
if k.Expiration != nil {
expiration = k.Expiration.Format("2006-01-02 15:04:05")
if key.GetExpiration() != nil {
expiration = ColourTime(key.Expiration.AsTime())
}
var reusable string
if k.Ephemeral {
if key.GetEphemeral() {
reusable = "N/A"
} else {
reusable = fmt.Sprintf("%v", k.Reusable)
reusable = fmt.Sprintf("%v", key.GetReusable())
}
d = append(d, []string{
strconv.FormatUint(k.ID, 10),
k.Key,
tableData = append(tableData, []string{
key.GetId(),
key.GetKey(),
reusable,
strconv.FormatBool(k.Ephemeral),
fmt.Sprintf("%v", k.Used),
strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()),
expiration,
k.CreatedAt.Format("2006-01-02 15:04:05"),
key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
})
}
err = pterm.DefaultTable.WithHasHeader().WithData(d).Render()
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
log.Fatal(err)
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var createPreAuthKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new preauthkey in the specified namespace",
Use: "create",
Short: "Creates a new preauthkey in the specified namespace",
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
o, _ := cmd.Flags().GetString("output")
output, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error initializing: %s", err)
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
return
}
reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
e, _ := cmd.Flags().GetString("expiration")
var expiration *time.Time
if e != "" {
duration, err := durafmt.ParseStringShort(e)
if err != nil {
log.Fatalf("Error parsing expiration: %s", err)
}
exp := time.Now().UTC().Add(duration.Duration())
expiration = &exp
log.Trace().
Bool("reusable", reusable).
Bool("ephemeral", ephemeral).
Str("namespace", namespace).
Msg("Preparing to create preauthkey")
request := &v1.CreatePreAuthKeyRequest{
Namespace: namespace,
Reusable: reusable,
Ephemeral: ephemeral,
}
k, err := h.CreatePreAuthKey(n, reusable, ephemeral, expiration)
if strings.HasPrefix(o, "json") {
JsonOutput(k, err, o)
return
}
duration, _ := cmd.Flags().GetDuration("expiration")
expiration := time.Now().UTC().Add(duration)
log.Trace().Dur("expiration", duration).Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.CreatePreAuthKey(ctx, request)
if err != nil {
fmt.Println(err)
ErrorOutput(
err,
fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
output,
)
return
}
fmt.Printf("%s\n", k.Key)
SuccessOutput(response.PreAuthKey, response.PreAuthKey.Key, output)
},
}
var expirePreAuthKeyCmd = &cobra.Command{
Use: "expire KEY",
Short: "Expire a preauthkey",
Use: "expire KEY",
Short: "Expire a preauthkey",
Aliases: []string{"revoke", "exp", "e"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("missing parameters")
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
o, _ := cmd.Flags().GetString("output")
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
k, err := h.GetPreAuthKey(n, args[0])
if err != nil {
if strings.HasPrefix(o, "json") {
JsonOutput(k, err, o)
return
}
log.Fatalf("Error getting the key: %s", err)
}
err = h.MarkExpirePreAuthKey(k)
if strings.HasPrefix(o, "json") {
JsonOutput(k, err, o)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ExpirePreAuthKeyRequest{
Namespace: namespace,
Key: args[0],
}
response, err := client.ExpirePreAuthKey(ctx, request)
if err != nil {
fmt.Println(err)
ErrorOutput(
err,
fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
output,
)
return
}
fmt.Println("Expired")
SuccessOutput(response, "Key expired", output)
},
}
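As a rough usage sketch (namespace name and key are hypothetical, assuming the usual headscale preauthkeys entry point): `headscale preauthkeys create --namespace default --reusable --expiration 24h` now parses the expiration as a Go duration, defaulting to DefaultPreAuthKeyExpiry (1h), and `headscale preauthkeys expire --namespace default <KEY>` goes through the gRPC ExpirePreAuthKey call instead of touching the database directly.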


@@ -0,0 +1,19 @@
package cli
import (
"time"
"github.com/pterm/pterm"
)
func ColourTime(date time.Time) string {
dateStr := date.Format("2006-01-02 15:04:05")
if date.After(time.Now()) {
dateStr = pterm.LightGreen(dateStr)
} else {
dateStr = pterm.LightRed(dateStr)
}
return dateStr
}


@@ -8,8 +8,10 @@ import (
)
func init() {
rootCmd.PersistentFlags().StringP("output", "o", "", "Output format. Empty for human-readable, 'json' or 'json-line'")
rootCmd.PersistentFlags().Bool("force", false, "Disable prompts and forces the execution")
rootCmd.PersistentFlags().
StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
rootCmd.PersistentFlags().
Bool("force", false, "Disable prompts and forces the execution")
}
var rootCmd = &cobra.Command{


@@ -3,145 +3,207 @@ package cli
import (
"fmt"
"log"
"strings"
"strconv"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
func init() {
rootCmd.AddCommand(routesCmd)
routesCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := routesCmd.MarkPersistentFlagRequired("namespace")
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err := listRoutesCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(listRoutesCmd)
enableRouteCmd.Flags().
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = enableRouteCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
enableRouteCmd.Flags().BoolP("all", "a", false, "Enable all routes advertised by the node")
routesCmd.AddCommand(listRoutesCmd)
routesCmd.AddCommand(enableRouteCmd)
nodeCmd.AddCommand(routesCmd)
}
var routesCmd = &cobra.Command{
Use: "routes",
Short: "Manage the routes of Headscale",
Use: "routes",
Short: "Manage the routes of Headscale",
Aliases: []string{"r", "route"},
}
var listRoutesCmd = &cobra.Command{
Use: "list NODE",
Short: "List the routes exposed by this node",
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return fmt.Errorf("Missing parameters")
}
return nil
},
Use: "list",
Short: "List routes advertised and enabled by a given node",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
o, _ := cmd.Flags().GetString("output")
output, _ := cmd.Flags().GetString("output")
h, err := getHeadscaleApp()
machineID, err := cmd.Flags().GetUint64("identifier")
if err != nil {
log.Fatalf("Error initializing: %s", err)
}
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
availableRoutes, err := h.GetAdvertisedNodeRoutes(n, args[0])
if err != nil {
fmt.Println(err)
return
}
if strings.HasPrefix(o, "json") {
// TODO: Add enable/disabled information to this interface
JsonOutput(availableRoutes, err, o)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.GetMachineRouteRequest{
MachineId: machineID,
}
response, err := client.GetMachineRoute(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
d := h.RoutesToPtables(n, args[0], *availableRoutes)
if output != "" {
SuccessOutput(response.Routes, "", output)
err = pterm.DefaultTable.WithHasHeader().WithData(d).Render()
return
}
tableData := routesToPtables(response.Routes)
if err != nil {
log.Fatal(err)
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var enableRouteCmd = &cobra.Command{
Use: "enable node-name route",
Short: "Allows exposing a route declared by this node to the rest of the nodes",
Args: func(cmd *cobra.Command, args []string) error {
all, err := cmd.Flags().GetBool("all")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
}
if all {
if len(args) < 1 {
return fmt.Errorf("Missing parameters")
}
return nil
} else {
if len(args) < 2 {
return fmt.Errorf("Missing parameters")
}
return nil
}
},
Use: "enable",
Short: "Set the enabled routes for a given node",
Long: `This command will take a list of routes that will _replace_
the current set of routes on a given node.
If you would like to disable a route, simply run the command again, but
omit the route you do not want to enable.
`,
Run: func(cmd *cobra.Command, args []string) {
n, err := cmd.Flags().GetString("namespace")
output, _ := cmd.Flags().GetString("output")
machineID, err := cmd.Flags().GetUint64("identifier")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
o, _ := cmd.Flags().GetString("output")
all, err := cmd.Flags().GetBool("all")
routes, err := cmd.Flags().GetStringSlice("route")
if err != nil {
log.Fatalf("Error getting namespace: %s", err)
ErrorOutput(
err,
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
return
}
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.EnableMachineRoutesRequest{
MachineId: machineID,
Routes: routes,
}
if all {
availableRoutes, err := h.GetAdvertisedNodeRoutes(n, args[0])
if err != nil {
fmt.Println(err)
return
}
response, err := client.EnableMachineRoutes(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
output,
)
for _, availableRoute := range *availableRoutes {
err = h.EnableNodeRoute(n, args[0], availableRoute.String())
if err != nil {
fmt.Println(err)
return
}
return
}
if strings.HasPrefix(o, "json") {
JsonOutput(availableRoute, err, o)
} else {
fmt.Printf("Enabled route %s\n", availableRoute)
}
}
} else {
err = h.EnableNodeRoute(n, args[0], args[1])
if output != "" {
SuccessOutput(response.Routes, "", output)
if strings.HasPrefix(o, "json") {
JsonOutput(args[1], err, o)
return
}
return
}
if err != nil {
fmt.Println(err)
return
}
fmt.Printf("Enabled route %s\n", args[1])
tableData := routesToPtables(response.Routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
// routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes *v1.Routes) pterm.TableData {
tableData := pterm.TableData{{"Route", "Enabled"}}
for _, route := range routes.GetAdvertisedRoutes() {
enabled := isStringInSlice(routes.EnabledRoutes, route)
tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
}
return tableData
}
func isStringInSlice(strs []string, s string) bool {
for _, s2 := range strs {
if s == s2 {
return true
}
}
return false
}
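A small sketch of what the new table helper produces, with made-up routes in the context of this file (the enabled set is simply whatever was last passed to the enable command):
exampleRoutes := &v1.Routes{
	AdvertisedRoutes: []string{"10.0.0.0/24", "0.0.0.0/0"},
	EnabledRoutes:    []string{"10.0.0.0/24"},
}
_ = routesToPtables(exampleRoutes)
// Route        Enabled
// 10.0.0.0/24  true
// 0.0.0.0/0    false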


@@ -1,8 +1,7 @@
package cli
import (
"log"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
)
@@ -19,12 +18,12 @@ var serveCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
h, err := getHeadscaleApp()
if err != nil {
log.Fatalf("Error initializing: %s", err)
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
err = h.Serve()
if err != nil {
log.Fatalf("Error initializing: %s", err)
log.Fatal().Caller().Err(err).Msg("Error starting server")
}
},
}


@@ -1,27 +1,36 @@
package cli
import (
"context"
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"io/fs"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v2"
"inet.af/netaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
type ErrorOutput struct {
Error string
}
const (
PermissionFallback = 0o700
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
)
func LoadConfig(path string) error {
viper.SetConfigName("config")
@@ -33,47 +42,125 @@ func LoadConfig(path string) error {
// For testing
viper.AddConfigPath(path)
}
viper.SetEnvPrefix("headscale")
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
viper.AutomaticEnv()
viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
viper.SetDefault("tls_letsencrypt_challenge_type", "HTTP-01")
viper.SetDefault("ip_prefix", "100.64.0.0/10")
viper.SetDefault("tls_client_auth_mode", "relaxed")
viper.SetDefault("log_level", "info")
viper.SetDefault("dns_config", nil)
err := viper.ReadInConfig()
if err != nil {
return fmt.Errorf("Fatal error reading config file: %s \n", err)
viper.SetDefault("derp.server.enabled", false)
viper.SetDefault("derp.server.stun.enabled", true)
viper.SetDefault("unix_socket", "/var/run/headscale.sock")
viper.SetDefault("unix_socket_permission", "0o770")
viper.SetDefault("grpc_listen_addr", ":50443")
viper.SetDefault("grpc_allow_insecure", false)
viper.SetDefault("cli.timeout", "5s")
viper.SetDefault("cli.insecure", false)
viper.SetDefault("oidc.strip_email_domain", true)
if err := viper.ReadInConfig(); err != nil {
return fmt.Errorf("fatal error reading config file: %w", err)
}
// Collect any validation errors and return them all at once
var errorText string
if (viper.GetString("tls_letsencrypt_hostname") != "") && ((viper.GetString("tls_cert_path") != "") || (viper.GetString("tls_key_path") != "")) {
if (viper.GetString("tls_letsencrypt_hostname") != "") &&
((viper.GetString("tls_cert_path") != "") || (viper.GetString("tls_key_path") != "")) {
errorText += "Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both\n"
}
if (viper.GetString("tls_letsencrypt_hostname") != "") && (viper.GetString("tls_letsencrypt_challenge_type") == "TLS-ALPN-01") && (!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) {
if (viper.GetString("tls_letsencrypt_hostname") != "") &&
(viper.GetString("tls_letsencrypt_challenge_type") == "TLS-ALPN-01") &&
(!strings.HasSuffix(viper.GetString("listen_addr"), ":443")) {
// this is only a warning because there could be something sitting in front of headscale that redirects the traffic (e.g. an iptables rule)
log.Warn().
Msg("Warning: when using tls_letsencrypt_hostname with TLS-ALPN-01 as challenge type, headscale must be reachable on port 443, i.e. listen_addr should probably end in :443")
}
if (viper.GetString("tls_letsencrypt_challenge_type") != "HTTP-01") && (viper.GetString("tls_letsencrypt_challenge_type") != "TLS-ALPN-01") {
if (viper.GetString("tls_letsencrypt_challenge_type") != "HTTP-01") &&
(viper.GetString("tls_letsencrypt_challenge_type") != "TLS-ALPN-01") {
errorText += "Fatal config error: the only supported values for tls_letsencrypt_challenge_type are HTTP-01 and TLS-ALPN-01\n"
}
if !strings.HasPrefix(viper.GetString("server_url"), "http://") && !strings.HasPrefix(viper.GetString("server_url"), "https://") {
if !strings.HasPrefix(viper.GetString("server_url"), "http://") &&
!strings.HasPrefix(viper.GetString("server_url"), "https://") {
errorText += "Fatal config error: server_url must start with https:// or http://\n"
}
_, authModeValid := headscale.LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)
if !authModeValid {
errorText += fmt.Sprintf(
"Invalid tls_client_auth_mode supplied: %s. Accepted values: %s, %s, %s.",
viper.GetString("tls_client_auth_mode"),
headscale.DisabledClientAuth,
headscale.RelaxedClientAuth,
headscale.EnforcedClientAuth)
}
if errorText != "" {
//nolint
return errors.New(strings.TrimSuffix(errorText, "\n"))
} else {
return nil
}
}
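// A minimal sketch of what the environment bindings above imply (values made
// up): dots in a key become underscores and the HEADSCALE_ prefix is added,
// so any setting can be overridden without editing the config file, e.g.
//
//	os.Setenv("HEADSCALE_CLI_ADDRESS", "headscale.example.com:50443")
//	os.Setenv("HEADSCALE_LOG_LEVEL", "debug")
//	// viper.GetString("cli.address") -> "headscale.example.com:50443"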
func GetDERPConfig() headscale.DERPConfig {
serverEnabled := viper.GetBool("derp.server.enabled")
serverRegionID := viper.GetInt("derp.server.region_id")
serverRegionCode := viper.GetString("derp.server.region_code")
serverRegionName := viper.GetString("derp.server.region_name")
stunAddr := viper.GetString("derp.server.stun_listen_addr")
if serverEnabled && stunAddr == "" {
log.Fatal().Msg("derp.server.stun_listen_addr must be set if derp.server.enabled is true")
}
urlStrs := viper.GetStringSlice("derp.urls")
urls := make([]url.URL, len(urlStrs))
for index, urlStr := range urlStrs {
urlAddr, err := url.Parse(urlStr)
if err != nil {
log.Error().
Str("url", urlStr).
Err(err).
Msg("Failed to parse url, ignoring...")
}
urls[index] = *urlAddr
}
paths := viper.GetStringSlice("derp.paths")
autoUpdate := viper.GetBool("derp.auto_update_enabled")
updateFrequency := viper.GetDuration("derp.update_frequency")
return headscale.DERPConfig{
ServerEnabled: serverEnabled,
ServerRegionID: serverRegionID,
ServerRegionCode: serverRegionCode,
ServerRegionName: serverRegionName,
STUNAddr: stunAddr,
URLs: urls,
Paths: paths,
AutoUpdate: autoUpdate,
UpdateFrequency: updateFrequency,
}
}
func GetDNSConfig() (*tailcfg.DNSConfig, string) {
@@ -108,9 +195,14 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
if viper.IsSet("dns_config.restricted_nameservers") {
if len(dnsConfig.Nameservers) > 0 {
dnsConfig.Routes = make(map[string][]dnstype.Resolver)
restrictedDNS := viper.GetStringMapStringSlice("dns_config.restricted_nameservers")
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make([]dnstype.Resolver, len(restrictedNameservers))
restrictedResolvers := make(
[]dnstype.Resolver,
len(restrictedNameservers),
)
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netaddr.ParseIP(nameserverStr)
if err != nil {
@@ -167,38 +259,82 @@ func absPath(path string) string {
path = filepath.Join(dir, path)
}
}
return path
}
func getHeadscaleApp() (*headscale.Headscale, error) {
derpPath := absPath(viper.GetString("derp_map_path"))
derpMap, err := loadDerpMap(derpPath)
if err != nil {
log.Error().
Str("path", derpPath).
Err(err).
Msg("Could not load DERP servers map file")
}
// Minimum inactivity timeout is keepalive timeout (60s) plus a few seconds
// to avoid races
minInactivityTimeout, _ := time.ParseDuration("65s")
if viper.GetDuration("ephemeral_node_inactivity_timeout") <= minInactivityTimeout {
err = fmt.Errorf("ephemeral_node_inactivity_timeout (%s) is set too low, must be more than %s\n", viper.GetString("ephemeral_node_inactivity_timeout"), minInactivityTimeout)
return nil, err
}
func getHeadscaleConfig() headscale.Config {
dnsConfig, baseDomain := GetDNSConfig()
derpConfig := GetDERPConfig()
cfg := headscale.Config{
ServerURL: viper.GetString("server_url"),
Addr: viper.GetString("listen_addr"),
configuredPrefixes := viper.GetStringSlice("ip_prefixes")
parsedPrefixes := make([]netaddr.IPPrefix, 0, len(configuredPrefixes)+1)
legacyPrefixField := viper.GetString("ip_prefix")
if len(legacyPrefixField) > 0 {
log.
Warn().
Msgf(
"%s, %s",
"use of 'ip_prefix' for configuration is deprecated",
"please see 'ip_prefixes' in the shipped example.",
)
legacyPrefix, err := netaddr.ParseIPPrefix(legacyPrefixField)
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefix: %w", err))
}
parsedPrefixes = append(parsedPrefixes, legacyPrefix)
}
for i, prefixInConfig := range configuredPrefixes {
prefix, err := netaddr.ParseIPPrefix(prefixInConfig)
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
}
parsedPrefixes = append(parsedPrefixes, prefix)
}
prefixes := make([]netaddr.IPPrefix, 0, len(parsedPrefixes))
{
// dedup
normalizedPrefixes := make(map[string]int, len(parsedPrefixes))
for i, p := range parsedPrefixes {
normalized, _ := p.Range().Prefix()
normalizedPrefixes[normalized.String()] = i
}
// convert back to list
for _, i := range normalizedPrefixes {
prefixes = append(prefixes, parsedPrefixes[i])
}
}
if len(prefixes) < 1 {
prefixes = append(prefixes, netaddr.MustParseIPPrefix("100.64.0.0/10"))
log.Warn().
Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
}
tlsClientAuthMode, _ := headscale.LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)
return headscale.Config{
ServerURL: viper.GetString("server_url"),
Addr: viper.GetString("listen_addr"),
MetricsAddr: viper.GetString("metrics_listen_addr"),
GRPCAddr: viper.GetString("grpc_listen_addr"),
GRPCAllowInsecure: viper.GetBool("grpc_allow_insecure"),
IPPrefixes: prefixes,
PrivateKeyPath: absPath(viper.GetString("private_key_path")),
DerpMap: derpMap,
IPPrefix: netaddr.MustParseIPPrefix(viper.GetString("ip_prefix")),
BaseDomain: baseDomain,
EphemeralNodeInactivityTimeout: viper.GetDuration("ephemeral_node_inactivity_timeout"),
DERP: derpConfig,
EphemeralNodeInactivityTimeout: viper.GetDuration(
"ephemeral_node_inactivity_timeout",
),
DBtype: viper.GetString("db_type"),
DBpath: absPath(viper.GetString("db_path")),
@@ -208,21 +344,60 @@ func getHeadscaleApp() (*headscale.Headscale, error) {
DBuser: viper.GetString("db_user"),
DBpass: viper.GetString("db_pass"),
TLSLetsEncryptHostname: viper.GetString("tls_letsencrypt_hostname"),
TLSLetsEncryptListen: viper.GetString("tls_letsencrypt_listen"),
TLSLetsEncryptCacheDir: absPath(viper.GetString("tls_letsencrypt_cache_dir")),
TLSLetsEncryptHostname: viper.GetString("tls_letsencrypt_hostname"),
TLSLetsEncryptListen: viper.GetString("tls_letsencrypt_listen"),
TLSLetsEncryptCacheDir: absPath(
viper.GetString("tls_letsencrypt_cache_dir"),
),
TLSLetsEncryptChallengeType: viper.GetString("tls_letsencrypt_challenge_type"),
TLSCertPath: absPath(viper.GetString("tls_cert_path")),
TLSKeyPath: absPath(viper.GetString("tls_key_path")),
TLSCertPath: absPath(viper.GetString("tls_cert_path")),
TLSKeyPath: absPath(viper.GetString("tls_key_path")),
TLSClientAuthMode: tlsClientAuthMode,
DNSConfig: dnsConfig,
ACMEEmail: viper.GetString("acme_email"),
ACMEURL: viper.GetString("acme_url"),
UnixSocket: viper.GetString("unix_socket"),
UnixSocketPermission: GetFileMode("unix_socket_permission"),
OIDC: headscale.OIDCConfig{
Issuer: viper.GetString("oidc.issuer"),
ClientID: viper.GetString("oidc.client_id"),
ClientSecret: viper.GetString("oidc.client_secret"),
StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
},
CLI: headscale.CLIConfig{
Address: viper.GetString("cli.address"),
APIKey: viper.GetString("cli.api_key"),
Timeout: viper.GetDuration("cli.timeout"),
Insecure: viper.GetBool("cli.insecure"),
},
}
}
func getHeadscaleApp() (*headscale.Headscale, error) {
// Minimum inactivity timeout is keepalive timeout (60s) plus a few seconds
// to avoid races
minInactivityTimeout, _ := time.ParseDuration("65s")
if viper.GetDuration("ephemeral_node_inactivity_timeout") <= minInactivityTimeout {
// TODO: Find a better way to return this text
//nolint
err := fmt.Errorf(
"ephemeral_node_inactivity_timeout (%s) is set too low, must be more than %s",
viper.GetString("ephemeral_node_inactivity_timeout"),
minInactivityTimeout,
)
return nil, err
}
h, err := headscale.NewHeadscale(cfg)
cfg := getHeadscaleConfig()
app, err := headscale.NewHeadscale(cfg)
if err != nil {
return nil, err
}
@@ -231,7 +406,7 @@ func getHeadscaleApp() (*headscale.Headscale, error) {
if viper.GetString("acl_policy_path") != "" {
aclPath := absPath(viper.GetString("acl_policy_path"))
err = h.LoadACLPolicy(aclPath)
err = app.LoadACLPolicy(aclPath)
if err != nil {
log.Error().
Str("path", aclPath).
@@ -240,61 +415,151 @@ func getHeadscaleApp() (*headscale.Headscale, error) {
}
}
return h, nil
return app, nil
}
func loadDerpMap(path string) (*tailcfg.DERPMap, error) {
derpFile, err := os.Open(path)
if err != nil {
return nil, err
func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg := getHeadscaleConfig()
log.Debug().
Dur("timeout", cfg.CLI.Timeout).
Msgf("Setting timeout")
ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout)
grpcOptions := []grpc.DialOption{
grpc.WithBlock(),
}
defer derpFile.Close()
var derpMap tailcfg.DERPMap
b, err := io.ReadAll(derpFile)
if err != nil {
return nil, err
address := cfg.CLI.Address
// If the address is not set, we assume that we are on the server hosting headscale.
if address == "" {
log.Debug().
Str("socket", cfg.UnixSocket).
Msgf("HEADSCALE_CLI_ADDRESS environment is not set, connecting to unix socket.")
address = cfg.UnixSocket
grpcOptions = append(
grpcOptions,
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(headscale.GrpcSocketDialer),
)
} else {
// If we are not connecting to a local server, require an API key for authentication
apiKey := cfg.CLI.APIKey
if apiKey == "" {
log.Fatal().Caller().Msgf("HEADSCALE_CLI_API_KEY environment variable needs to be set.")
}
grpcOptions = append(grpcOptions,
grpc.WithPerRPCCredentials(tokenAuth{
token: apiKey,
}),
)
if cfg.CLI.Insecure {
tlsConfig := &tls.Config{
// turn off gosec as we are intentionally setting
// insecure.
//nolint:gosec
InsecureSkipVerify: true,
}
grpcOptions = append(grpcOptions,
grpc.WithTransportCredentials(credentials.NewTLS(tlsConfig)),
)
} else {
grpcOptions = append(grpcOptions,
grpc.WithTransportCredentials(credentials.NewClientTLSFromCert(nil, "")),
)
}
}
err = yaml.Unmarshal(b, &derpMap)
return &derpMap, err
log.Trace().Caller().Str("address", address).Msg("Connecting via gRPC")
conn, err := grpc.DialContext(ctx, address, grpcOptions...)
if err != nil {
log.Fatal().Caller().Err(err).Msgf("Could not connect: %v", err)
}
client := v1.NewHeadscaleServiceClient(conn)
return ctx, client, conn, cancel
}
func JsonOutput(result interface{}, errResult error, outputFormat string) {
var j []byte
func SuccessOutput(result interface{}, override string, outputFormat string) {
var jsonBytes []byte
var err error
switch outputFormat {
case "json":
if errResult != nil {
j, err = json.MarshalIndent(ErrorOutput{errResult.Error()}, "", "\t")
if err != nil {
log.Fatal().Err(err)
}
} else {
j, err = json.MarshalIndent(result, "", "\t")
if err != nil {
log.Fatal().Err(err)
}
jsonBytes, err = json.MarshalIndent(result, "", "\t")
if err != nil {
log.Fatal().Err(err)
}
case "json-line":
if errResult != nil {
j, err = json.Marshal(ErrorOutput{errResult.Error()})
if err != nil {
log.Fatal().Err(err)
}
} else {
j, err = json.Marshal(result)
if err != nil {
log.Fatal().Err(err)
}
jsonBytes, err = json.Marshal(result)
if err != nil {
log.Fatal().Err(err)
}
case "yaml":
jsonBytes, err = yaml.Marshal(result)
if err != nil {
log.Fatal().Err(err)
}
default:
//nolint
fmt.Println(override)
return
}
fmt.Println(string(j))
//nolint
fmt.Println(string(jsonBytes))
}
func HasJsonOutputFlag() bool {
func ErrorOutput(errResult error, override string, outputFormat string) {
type errOutput struct {
Error string `json:"error"`
}
SuccessOutput(errOutput{errResult.Error()}, override, outputFormat)
}
func HasMachineOutputFlag() bool {
for _, arg := range os.Args {
if arg == "json" || arg == "json-line" {
if arg == "json" || arg == "json-line" || arg == "yaml" {
return true
}
}
return false
}
type tokenAuth struct {
token string
}
// Return value is mapped to request headers.
func (t tokenAuth) GetRequestMetadata(
ctx context.Context,
in ...string,
) (map[string]string, error) {
return map[string]string{
"authorization": "Bearer " + t.token,
}, nil
}
func (tokenAuth) RequireTransportSecurity() bool {
return true
}
func GetFileMode(key string) fs.FileMode {
modeStr := viper.GetString(key)
mode, err := strconv.ParseUint(modeStr, headscale.Base8, headscale.BitSize64)
if err != nil {
return PermissionFallback
}
return fs.FileMode(mode)
}
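A quick sketch of the helper above, using the same key as the unix_socket_permission default (the value here is otherwise made up): the string is parsed as octal, and anything unparsable silently falls back to PermissionFallback (0o700).
viper.Set("unix_socket_permission", "0770")
mode := GetFileMode("unix_socket_permission") // fs.FileMode(0o770)
_ = mode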


@@ -1,9 +1,6 @@
package cli
import (
"fmt"
"strings"
"github.com/spf13/cobra"
)
@@ -18,11 +15,7 @@ var versionCmd = &cobra.Command{
Short: "Print the version.",
Long: "The version of headscale.",
Run: func(cmd *cobra.Command, args []string) {
o, _ := cmd.Flags().GetString("output")
if strings.HasPrefix(o, "json") {
JsonOutput(map[string]string{"version": Version}, nil, o)
return
}
fmt.Println(Version)
output, _ := cmd.Flags().GetString("output")
SuccessOutput(map[string]string{"version": Version}, Version, output)
},
}


@@ -23,6 +23,8 @@ func main() {
colors = true
case termcolor.LevelBasic:
colors = true
case termcolor.LevelNone:
colors = false
default:
// no color, return text as is.
colors = false
@@ -41,38 +43,41 @@ func main() {
NoColor: !colors,
})
err := cli.LoadConfig("")
if err != nil {
log.Fatal().Err(err)
if err := cli.LoadConfig(""); err != nil {
log.Fatal().Caller().Err(err)
}
machineOutput := cli.HasMachineOutputFlag()
logLevel := viper.GetString("log_level")
switch logLevel {
case "trace":
zerolog.SetGlobalLevel(zerolog.TraceLevel)
case "debug":
zerolog.SetGlobalLevel(zerolog.DebugLevel)
case "info":
zerolog.SetGlobalLevel(zerolog.InfoLevel)
case "warn":
zerolog.SetGlobalLevel(zerolog.WarnLevel)
case "error":
zerolog.SetGlobalLevel(zerolog.ErrorLevel)
default:
level, err := zerolog.ParseLevel(logLevel)
if err != nil {
zerolog.SetGlobalLevel(zerolog.DebugLevel)
} else {
zerolog.SetGlobalLevel(level)
}
jsonOutput := cli.HasJsonOutputFlag()
if !viper.GetBool("disable_check_updates") && !jsonOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") && cli.Version != "dev" {
// If the user has requested a "machine" readable format,
// then disable logging so the output remains valid.
if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled)
}
if !viper.GetBool("disable_check_updates") && !machineOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
cli.Version != "dev" {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
}
res, err := latest.Check(githubTag, cli.Version)
if err == nil && res.Outdated {
fmt.Printf("An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current, cli.Version)
//nolint
fmt.Printf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
cli.Version,
)
}
}
}
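One practical consequence of the machineOutput handling above, sketched with a hypothetical invocation: `headscale nodes list -o json` (or `-o yaml`, which is newly recognised) disables zerolog output entirely so stdout stays machine-parseable, and the update check is skipped for the same reason.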


@@ -1,7 +1,7 @@
package main
import (
"fmt"
"io/fs"
"io/ioutil"
"os"
"path/filepath"
@@ -25,10 +25,9 @@ func (s *Suite) SetUpSuite(c *check.C) {
}
func (s *Suite) TearDownSuite(c *check.C) {
}
func (*Suite) TestPostgresConfigLoading(c *check.C) {
func (*Suite) TestConfigLoading(c *check.C) {
tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil {
c.Fatal(err)
@@ -41,7 +40,10 @@ func (*Suite) TestPostgresConfigLoading(c *check.C) {
}
// Symlink the example config file
err = os.Symlink(filepath.Clean(path+"/../../config.json.postgres.example"), filepath.Join(tmpDir, "config.json"))
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
filepath.Join(tmpDir, "config.yaml"),
)
if err != nil {
c.Fatal(err)
}
@@ -53,46 +55,18 @@ func (*Suite) TestPostgresConfigLoading(c *check.C) {
// Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("derp_map_path"), check.Equals, "derp.yaml")
c.Assert(viper.GetString("db_type"), check.Equals, "postgres")
c.Assert(viper.GetString("db_port"), check.Equals, "5432")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
}
func (*Suite) TestSqliteConfigLoading(c *check.C) {
tmpDir, err := ioutil.TempDir("", "headscale")
if err != nil {
c.Fatal(err)
}
defer os.RemoveAll(tmpDir)
path, err := os.Getwd()
if err != nil {
c.Fatal(err)
}
// Symlink the example config file
err = os.Symlink(filepath.Clean(path+"/../../config.json.sqlite.example"), filepath.Join(tmpDir, "config.json"))
if err != nil {
c.Fatal(err)
}
// Load example config, it should load without validation errors
err = cli.LoadConfig(tmpDir)
c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("derp_map_path"), check.Equals, "derp.yaml")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "db.sqlite")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
c.Assert(
cli.GetFileMode("unix_socket_permission"),
check.Equals,
fs.FileMode(0o770),
)
}
func (*Suite) TestDNSConfigLoading(c *check.C) {
@@ -108,7 +82,10 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
}
// Symlink the example config file
err = os.Symlink(filepath.Clean(path+"/../../config.json.sqlite.example"), filepath.Join(tmpDir, "config.json"))
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
filepath.Join(tmpDir, "config.yaml"),
)
if err != nil {
c.Fatal(err)
}
@@ -128,7 +105,7 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
func writeConfig(c *check.C, tmpDir string, configYaml []byte) {
// Populate a custom config file
configFile := filepath.Join(tmpDir, "config.yaml")
err := ioutil.WriteFile(configFile, configYaml, 0644)
err := ioutil.WriteFile(configFile, configYaml, 0o600)
if err != nil {
c.Fatalf("Couldn't write file %s", configFile)
}
@@ -139,10 +116,11 @@ func (*Suite) TestTLSConfigValidation(c *check.C) {
if err != nil {
c.Fatal(err)
}
//defer os.RemoveAll(tmpDir)
fmt.Println(tmpDir)
// defer os.RemoveAll(tmpDir)
configYaml := []byte("---\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"\"\ntls_cert_path: \"abc.pem\"")
configYaml := []byte(
"---\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"\"\ntls_cert_path: \"abc.pem\"",
)
writeConfig(c, tmpDir, configYaml)
// Check configuration validation errors (1)
@@ -150,13 +128,26 @@ func (*Suite) TestTLSConfigValidation(c *check.C) {
c.Assert(err, check.NotNil)
// check.Matches can not handle multiline strings
tmp := strings.ReplaceAll(err.Error(), "\n", "***")
c.Assert(tmp, check.Matches, ".*Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both.*")
c.Assert(tmp, check.Matches, ".*Fatal config error: the only supported values for tls_letsencrypt_challenge_type are.*")
c.Assert(tmp, check.Matches, ".*Fatal config error: server_url must start with https:// or http://.*")
fmt.Println(tmp)
c.Assert(
tmp,
check.Matches,
".*Fatal config error: set either tls_letsencrypt_hostname or tls_cert_path/tls_key_path, not both.*",
)
c.Assert(
tmp,
check.Matches,
".*Fatal config error: the only supported values for tls_letsencrypt_challenge_type are.*",
)
c.Assert(
tmp,
check.Matches,
".*Fatal config error: server_url must start with https:// or http://.*",
)
// Check configuration validation errors (2)
configYaml = []byte("---\nserver_url: \"http://127.0.0.1:8080\"\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"TLS-ALPN-01\"")
configYaml = []byte(
"---\nserver_url: \"http://127.0.0.1:8080\"\ntls_letsencrypt_hostname: \"example.com\"\ntls_letsencrypt_challenge_type: \"TLS-ALPN-01\"",
)
writeConfig(c, tmpDir, configYaml)
err = cli.LoadConfig(tmpDir)
c.Assert(err, check.IsNil)

222
config-example.yaml Normal file

@@ -0,0 +1,222 @@
---
# headscale will look for a configuration file named `config.yaml` (or `config.json`) in the following order:
#
# - `/etc/headscale`
# - `~/.headscale`
# - current working directory
# The url clients will connect to.
# Typically this will be a domain like:
#
# https://myheadscale.example.com:443
#
server_url: http://127.0.0.1:8080
# Address to listen to / bind to on the server
#
listen_addr: 0.0.0.0:8080
# Address to listen to /metrics, you may want
# to keep this endpoint private to your internal
# network
#
metrics_listen_addr: 127.0.0.1:9090
# Address to listen for gRPC.
# gRPC is used for controlling a headscale server
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
grpc_listen_addr: 0.0.0.0:50443
# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
# be unencrypted. Only enable if you know what you
# are doing.
grpc_allow_insecure: false
# Private key used to encrypt the traffic between headscale
# and Tailscale clients.
# The private key file which will be
# autogenerated if it's missing
private_key_path: /var/lib/headscale/private.key
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
ip_prefixes:
- fd7a:115c:a1e0::/48
- 100.64.0.0/10
# DERP is a relay system that Tailscale uses when a direct
# connection cannot be established.
# https://tailscale.com/blog/how-tailscale-works/#encrypted-tcp-relays-derp
#
# headscale needs a list of DERP servers that can be presented
# to the clients.
derp:
server:
# If enabled, runs the embedded DERP server and merges it into the rest of the DERP config
# The Headscale server_url defined above MUST be using https, DERP requires TLS to be in place
enabled: false
# Region ID to use for the embedded DERP server.
# The local DERP prevails if the region ID collides with another region ID coming from
# the regular DERP config.
region_id: 999
# Region code and name are displayed in the Tailscale UI to identify a DERP region
region_code: "headscale"
region_name: "Headscale Embedded DERP"
# Listens on UDP at the configured address for STUN connections to help with NAT traversal.
# When the embedded DERP server is enabled stun_listen_addr MUST be defined.
#
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
stun_listen_addr: "0.0.0.0:3478"
# List of externally available DERP maps encoded in JSON
urls:
- https://controlplane.tailscale.com/derpmap/default
# Locally available DERP map files encoded in YAML
#
# This option is mostly interesting for people hosting
# their own DERP servers:
# https://tailscale.com/kb/1118/custom-derp-servers/
#
# paths:
# - /etc/headscale/derp-example.yaml
paths: []
# If enabled, a worker will be set up to periodically
# refresh the given sources and update the derpmap.
auto_update_enabled: true
# How often should we check for DERP updates?
update_frequency: 24h
# Disables the automatic check for headscale updates on startup
disable_check_updates: false
# Time before an inactive ephemeral node is deleted.
ephemeral_node_inactivity_timeout: 30m
# SQLite config
db_type: sqlite3
db_path: /var/lib/headscale/db.sqlite
# # Postgres config
# db_type: postgres
# db_host: localhost
# db_port: 5432
# db_name: headscale
# db_user: foo
# db_pass: bar
### TLS configuration
#
## Let's encrypt / ACME
#
# headscale supports automatically requesting and setting up
# TLS for a domain with Let's Encrypt.
#
# URL to ACME directory
acme_url: https://acme-v02.api.letsencrypt.org/directory
# Email to register with ACME provider
acme_email: ""
# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""
# Client (Tailscale/Browser) authentication mode (mTLS)
# Acceptable values:
# - disabled: client authentication disabled
# - relaxed: client certificate is required but not verified
# - enforced: client certificate is required and verified
tls_client_auth_mode: relaxed
# Path to store certificates and metadata needed by
# letsencrypt
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
# Type of ACME challenge to use, currently supported types:
# HTTP-01 or TLS-ALPN-01
# See [docs/tls.md](docs/tls.md) for more information
tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a
# verification endpoint, and it will be listening on:
# :http = port 80
tls_letsencrypt_listen: ":http"
## Use already defined certificates:
tls_cert_path: ""
tls_key_path: ""
log_level: info
# Path to a file containing ACL policies.
# ACLs can be defined as YAML or HUJSON.
# https://tailscale.com/kb/1018/acls/
acl_policy_path: ""
## DNS
#
# headscale supports Tailscale's DNS configuration and MagicDNS.
# Please have a look at their KB to better understand the concepts:
#
# - https://tailscale.com/kb/1054/dns/
# - https://tailscale.com/kb/1081/magicdns/
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
#
dns_config:
# List of DNS servers to expose to clients.
nameservers:
- 1.1.1.1
# Split DNS (see https://tailscale.com/kb/1054/dns/),
# list of search domains and the DNS to query for each one.
#
# restricted_nameservers:
# foo.bar.com:
# - 1.1.1.1
# darp.headscale.net:
# - 1.1.1.1
# - 8.8.8.8
# Search domains to inject.
domains: []
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Only works if there is at least one nameserver defined.
magic_dns: true
# Defines the base domain to create the hostnames for MagicDNS.
# `base_domain` must be an FQDN, without the trailing dot.
# The FQDN of the hosts will be
# `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
base_domain: example.com
# Unix socket used for the CLI to connect without authentication
# Note: for local development, you probably want to change this to:
# unix_socket: ./headscale.sock
unix_socket: /var/run/headscale.sock
unix_socket_permission: "0770"
#
# headscale supports experimental OpenID connect support,
# it is still being tested and might have some bugs, please
# help us test it.
# OpenID Connect
# oidc:
# issuer: "https://your-oidc.issuer.com/path"
# client_id: "your-oidc-client-id"
# client_secret: "your-oidc-client-secret"
#
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
# If `strip_email_domain` is set to `false`, the domain part will NOT be removed, resulting in the following
# namespace: `first-name.last-name.example.com`
#
# strip_email_domain: true


@@ -1,30 +0,0 @@
{
"server_url": "http://127.0.0.1:8080",
"listen_addr": "0.0.0.0:8080",
"private_key_path": "private.key",
"derp_map_path": "derp.yaml",
"ephemeral_node_inactivity_timeout": "30m",
"db_type": "postgres",
"db_host": "localhost",
"db_port": 5432,
"db_name": "headscale",
"db_user": "foo",
"db_pass": "bar",
"acme_url": "https://acme-v02.api.letsencrypt.org/directory",
"acme_email": "",
"tls_letsencrypt_hostname": "",
"tls_letsencrypt_listen": ":http",
"tls_letsencrypt_cache_dir": ".cache",
"tls_letsencrypt_challenge_type": "HTTP-01",
"tls_cert_path": "",
"tls_key_path": "",
"acl_policy_path": "",
"dns_config": {
"nameservers": [
"1.1.1.1"
],
"domains": [],
"magic_dns": true,
"base_domain": "example.com"
}
}


@@ -1,26 +0,0 @@
{
"server_url": "http://127.0.0.1:8080",
"listen_addr": "0.0.0.0:8080",
"private_key_path": "private.key",
"derp_map_path": "derp.yaml",
"ephemeral_node_inactivity_timeout": "30m",
"db_type": "sqlite3",
"db_path": "db.sqlite",
"acme_url": "https://acme-v02.api.letsencrypt.org/directory",
"acme_email": "",
"tls_letsencrypt_hostname": "",
"tls_letsencrypt_listen": ":http",
"tls_letsencrypt_cache_dir": ".cache",
"tls_letsencrypt_challenge_type": "HTTP-01",
"tls_cert_path": "",
"tls_key_path": "",
"acl_policy_path": "",
"dns_config": {
"nameservers": [
"1.1.1.1"
],
"domains": [],
"magic_dns": true,
"base_domain": "example.com"
}
}

181
db.go

@@ -1,15 +1,25 @@
package headscale
import (
"database/sql/driver"
"encoding/json"
"errors"
"fmt"
"time"
"github.com/glebarez/sqlite"
"github.com/rs/zerolog/log"
"gorm.io/driver/postgres"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"inet.af/netaddr"
"tailscale.com/tailcfg"
)
const dbVersion = "1"
const (
dbVersion = "1"
errValueNotFound = Error("not found")
)
// KV is a key-value store in a psql table. For future use...
type KV struct {
@@ -24,32 +34,73 @@ func (h *Headscale) initDB() error {
}
h.db = db
if h.dbType == "postgres" {
db.Exec("create extension if not exists \"uuid-ossp\";")
if h.dbType == Postgres {
db.Exec(`create extension if not exists "uuid-ossp";`)
}
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
// If the Machine table has a column for registered,
// find all occurrences of "false" and drop them. Then
// remove the column.
if db.Migrator().HasColumn(&Machine{}, "registered") {
log.Info().
Msg(`Database has legacy "registered" column in machine, removing...`)
machines := Machines{}
if err := h.db.Not("registered").Find(&machines).Error; err != nil {
log.Error().Err(err).Msg("Error accessing db")
}
for _, machine := range machines {
log.Info().
Str("machine", machine.Name).
Str("machine_key", machine.MachineKey).
Msg("Deleting unregistered machine")
if err := h.db.Delete(&Machine{}, machine.ID).Error; err != nil {
log.Error().
Err(err).
Str("machine", machine.Name).
Str("machine_key", machine.MachineKey).
Msg("Error deleting unregistered machine")
}
}
err := db.Migrator().DropColumn(&Machine{}, "registered")
if err != nil {
log.Error().Err(err).Msg("Error dropping registered column")
}
}
err = db.AutoMigrate(&Machine{})
if err != nil {
return err
}
err = db.AutoMigrate(&KV{})
if err != nil {
return err
}
err = db.AutoMigrate(&Namespace{})
if err != nil {
return err
}
err = db.AutoMigrate(&PreAuthKey{})
if err != nil {
return err
}
err = db.AutoMigrate(&SharedMachine{})
_ = db.Migrator().DropTable("shared_machines")
err = db.AutoMigrate(&APIKey{})
if err != nil {
return err
}
err = h.setValue("db_version", dbVersion)
return err
}
@@ -65,12 +116,26 @@ func (h *Headscale) openDB() (*gorm.DB, error) {
}
switch h.dbType {
case "sqlite3":
db, err = gorm.Open(sqlite.Open(h.dbString), &gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
})
case "postgres":
case Sqlite:
db, err = gorm.Open(
sqlite.Open(h.dbString+"?_synchronous=1&_journal_mode=WAL"),
&gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
},
)
db.Exec("PRAGMA foreign_keys=ON")
// The pure Go SQLite library does not handle locking in
// the same way as the C-based one and we can't use the gorm
// connection pool as of 2022/02/23.
sqlDB, _ := db.DB()
sqlDB.SetMaxIdleConns(1)
sqlDB.SetMaxOpenConns(1)
sqlDB.SetConnMaxIdleTime(time.Hour)
case Postgres:
db, err = gorm.Open(postgres.Open(h.dbString), &gorm.Config{
DisableForeignKeyConstraintWhenMigrating: true,
Logger: log,
@@ -84,28 +149,104 @@ func (h *Headscale) openDB() (*gorm.DB, error) {
return db, nil
}
// getValue returns the value for the given key in KV
// getValue returns the value for the given key in KV.
func (h *Headscale) getValue(key string) (string, error) {
var row KV
if result := h.db.First(&row, "key = ?", key); errors.Is(result.Error, gorm.ErrRecordNotFound) {
return "", errors.New("not found")
if result := h.db.First(&row, "key = ?", key); errors.Is(
result.Error,
gorm.ErrRecordNotFound,
) {
return "", errValueNotFound
}
return row.Value, nil
}
// setValue sets value for the given key in KV
// setValue sets value for the given key in KV.
func (h *Headscale) setValue(key string, value string) error {
kv := KV{
keyValue := KV{
Key: key,
Value: value,
}
_, err := h.getValue(key)
if err == nil {
h.db.Model(&kv).Where("key = ?", key).Update("value", value)
if _, err := h.getValue(key); err == nil {
h.db.Model(&keyValue).Where("key = ?", key).Update("value", value)
return nil
}
h.db.Create(kv)
h.db.Create(keyValue)
return nil
}
// This is a "wrapper" type around tailscales
// Hostinfo to allow us to add database "serialization"
// methods. This allows us to use a typed values throughout
// the code and not have to marshal/unmarshal and error
// check all over the code.
type HostInfo tailcfg.Hostinfo
func (hi *HostInfo) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, hi)
case string:
return json.Unmarshal([]byte(value), hi)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (hi HostInfo) Value() (driver.Value, error) {
bytes, err := json.Marshal(hi)
return string(bytes), err
}
type IPPrefixes []netaddr.IPPrefix
func (i *IPPrefixes) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (i IPPrefixes) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
type StringList []string
func (i *StringList) Scan(destination interface{}) error {
switch value := destination.(type) {
case []byte:
return json.Unmarshal(value, i)
case string:
return json.Unmarshal([]byte(value), i)
default:
return fmt.Errorf("%w: unexpected data type %T", errMachineAddressesInvalid, destination)
}
}
// Value return json value, implement driver.Valuer interface.
func (i StringList) Value() (driver.Value, error) {
bytes, err := json.Marshal(i)
return string(bytes), err
}
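A rough sketch of the round trip these wrapper types enable, in the context of this file (the prefix is arbitrary): gorm calls Value when persisting a field and Scan when loading it, so the column simply holds JSON.
prefixes := IPPrefixes{netaddr.MustParseIPPrefix("100.64.0.0/10")}
stored, _ := prefixes.Value() // `["100.64.0.0/10"]` as a string
var restored IPPrefixes
_ = restored.Scan(stored) // back to a typed []netaddr.IPPrefix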

15
derp-example.yaml Normal file

@@ -0,0 +1,15 @@
# If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
900:
regionid: 900
regioncode: custom
regionname: My Region
nodes:
- name: 900a
regionid: 900
hostname: myderp.mydomain.no
ipv4: 123.123.123.123
ipv6: "2604:a880:400:d1::828:b001"
stunport: 0
stunonly: false
derptestport: 0

167
derp.go Normal file

@@ -0,0 +1,167 @@
package headscale
import (
"context"
"encoding/json"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
"time"
"github.com/rs/zerolog/log"
"gopkg.in/yaml.v2"
"tailscale.com/tailcfg"
)
func loadDERPMapFromPath(path string) (*tailcfg.DERPMap, error) {
derpFile, err := os.Open(path)
if err != nil {
return nil, err
}
defer derpFile.Close()
var derpMap tailcfg.DERPMap
b, err := io.ReadAll(derpFile)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(b, &derpMap)
return &derpMap, err
}
func loadDERPMapFromURL(addr url.URL) (*tailcfg.DERPMap, error) {
ctx, cancel := context.WithTimeout(context.Background(), HTTPReadTimeout)
defer cancel()
req, err := http.NewRequestWithContext(ctx, "GET", addr.String(), nil)
if err != nil {
return nil, err
}
client := http.Client{
Timeout: HTTPReadTimeout,
}
resp, err := client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
var derpMap tailcfg.DERPMap
err = json.Unmarshal(body, &derpMap)
return &derpMap, err
}
// mergeDERPMaps naively merges a list of DERPMaps into a single
// DERPMap; it will _only_ look at the Regions, which are keyed by an integer ID.
// If a region exists in two of the given DERPMaps, the region
// from the _last_ DERPMap will be preserved.
// An empty DERPMap list will result in a DERPMap with no regions.
func mergeDERPMaps(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
result := tailcfg.DERPMap{
OmitDefaultRegions: false,
Regions: map[int]*tailcfg.DERPRegion{},
}
for _, derpMap := range derpMaps {
for id, region := range derpMap.Regions {
result.Regions[id] = region
}
}
return &result
}
func GetDERPMap(cfg DERPConfig) *tailcfg.DERPMap {
derpMaps := make([]*tailcfg.DERPMap, 0)
for _, path := range cfg.Paths {
log.Debug().
Str("func", "GetDERPMap").
Str("path", path).
Msg("Loading DERPMap from path")
derpMap, err := loadDERPMapFromPath(path)
if err != nil {
log.Error().
Str("func", "GetDERPMap").
Str("path", path).
Err(err).
Msg("Could not load DERP map from path")
break
}
derpMaps = append(derpMaps, derpMap)
}
for _, addr := range cfg.URLs {
derpMap, err := loadDERPMapFromURL(addr)
log.Debug().
Str("func", "GetDERPMap").
Str("url", addr.String()).
Msg("Loading DERPMap from path")
if err != nil {
log.Error().
Str("func", "GetDERPMap").
Str("url", addr.String()).
Err(err).
Msg("Could not load DERP map from path")
break
}
derpMaps = append(derpMaps, derpMap)
}
derpMap := mergeDERPMaps(derpMaps)
log.Trace().Interface("derpMap", derpMap).Msg("DERPMap loaded")
if len(derpMap.Regions) == 0 {
log.Warn().
Msg("DERP map is empty, not a single DERP map datasource was loaded correctly or contained a region")
}
return derpMap
}
func (h *Headscale) scheduledDERPMapUpdateWorker(cancelChan <-chan struct{}) {
log.Info().
Dur("frequency", h.cfg.DERP.UpdateFrequency).
Msg("Setting up a DERPMap update worker")
ticker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
for {
select {
case <-cancelChan:
return
case <-ticker.C:
log.Info().Msg("Fetching DERPMap updates")
h.DERPMap = GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled {
h.DERPMap.Regions[h.DERPServer.region.RegionID] = &h.DERPServer.region
}
namespaces, err := h.ListNamespaces()
if err != nil {
log.Error().
Err(err).
Msg("Failed to fetch namespaces")
}
for _, namespace := range namespaces {
h.setLastStateChangeToNow(namespace.Name)
}
}
}
}
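Sketching mergeDERPMaps with made-up regions: when two maps both define region 900, the entry from the later map in the slice wins, and regions unique to either map are carried over.
a := &tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{900: {RegionCode: "a"}}}
b := &tailcfg.DERPMap{Regions: map[int]*tailcfg.DERPRegion{900: {RegionCode: "b"}, 901: {RegionCode: "c"}}}
merged := mergeDERPMaps([]*tailcfg.DERPMap{a, b})
// merged.Regions[900].RegionCode == "b"; region 901 is also present.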

146
derp.yaml

@@ -1,146 +0,0 @@
# This file contains some of the official Tailscale DERP servers,
# shamelessly taken from https://github.com/tailscale/tailscale/blob/main/net/dnsfallback/dns-fallback-servers.json
#
# If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
1:
regionid: 1
regioncode: nyc
regionname: New York City
nodes:
- name: 1a
regionid: 1
hostname: derp1.tailscale.com
ipv4: 159.89.225.99
ipv6: "2604:a880:400:d1::828:b001"
stunport: 0
stunonly: false
derptestport: 0
- name: 1b
regionid: 1
hostname: derp1b.tailscale.com
ipv4: 45.55.35.93
ipv6: "2604:a880:800:a1::f:2001"
stunport: 0
stunonly: false
derptestport: 0
2:
regionid: 2
regioncode: sfo
regionname: San Francisco
nodes:
- name: 2a
regionid: 2
hostname: derp2.tailscale.com
ipv4: 167.172.206.31
ipv6: "2604:a880:2:d1::c5:7001"
stunport: 0
stunonly: false
derptestport: 0
- name: 2b
regionid: 2
hostname: derp2b.tailscale.com
ipv4: 64.227.106.23
ipv6: "2604:a880:4:1d0::29:9000"
stunport: 0
stunonly: false
derptestport: 0
3:
regionid: 3
regioncode: sin
regionname: Singapore
nodes:
- name: 3a
regionid: 3
hostname: derp3.tailscale.com
ipv4: 68.183.179.66
ipv6: "2400:6180:0:d1::67d:8001"
stunport: 0
stunonly: false
derptestport: 0
4:
regionid: 4
regioncode: fra
regionname: Frankfurt
nodes:
- name: 4a
regionid: 4
hostname: derp4.tailscale.com
ipv4: 167.172.182.26
ipv6: "2a03:b0c0:3:e0::36e:900"
stunport: 0
stunonly: false
derptestport: 0
- name: 4b
regionid: 4
hostname: derp4b.tailscale.com
ipv4: 157.230.25.0
ipv6: "2a03:b0c0:3:e0::58f:3001"
stunport: 0
stunonly: false
derptestport: 0
5:
regionid: 5
regioncode: syd
regionname: Sydney
nodes:
- name: 5a
regionid: 5
hostname: derp5.tailscale.com
ipv4: 103.43.75.49
ipv6: "2001:19f0:5801:10b7:5400:2ff:feaa:284c"
stunport: 0
stunonly: false
derptestport: 0
6:
regionid: 6
regioncode: blr
regionname: Bangalore
nodes:
- name: 6a
regionid: 6
hostname: derp6.tailscale.com
ipv4: 68.183.90.120
ipv6: "2400:6180:100:d0::982:d001"
stunport: 0
stunonly: false
derptestport: 0
7:
regionid: 7
regioncode: tok
regionname: Tokyo
nodes:
- name: 7a
regionid: 7
hostname: derp7.tailscale.com
ipv4: 167.179.89.145
ipv6: "2401:c080:1000:467f:5400:2ff:feee:22aa"
stunport: 0
stunonly: false
derptestport: 0
8:
regionid: 8
regioncode: lhr
regionname: London
nodes:
- name: 8a
regionid: 8
hostname: derp8.tailscale.com
ipv4: 167.71.139.179
ipv6: "2a03:b0c0:1:e0::3cc:e001"
stunport: 0
stunonly: false
derptestport: 0
9:
regionid: 9
regioncode: sao
regionname: São Paulo
nodes:
- name: 9a
regionid: 9
hostname: derp9.tailscale.com
ipv4: 207.148.3.137
ipv6: "2001:19f0:6401:1d9c:5400:2ff:feef:bb82"
stunport: 0
stunonly: false
derptestport: 0

231
derp_server.go Normal file
View File

@@ -0,0 +1,231 @@
package headscale
import (
"context"
"fmt"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/rs/zerolog/log"
"tailscale.com/derp"
"tailscale.com/net/stun"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
)
// fastStartHeader is the header (with value "1") that signals to the HTTP
// server that the DERP HTTP client does not want the HTTP 101 response
// headers and it will begin writing & reading the DERP protocol immediately
// following its HTTP request.
const fastStartHeader = "Derp-Fast-Start"
type DERPServer struct {
tailscaleDERP *derp.Server
region tailcfg.DERPRegion
}
func (h *Headscale) NewDERPServer() (*DERPServer, error) {
server := derp.NewServer(key.NodePrivate(*h.privateKey), log.Info().Msgf)
region, err := h.generateRegionLocalDERP()
if err != nil {
return nil, err
}
return &DERPServer{server, region}, nil
}
func (h *Headscale) generateRegionLocalDERP() (tailcfg.DERPRegion, error) {
serverURL, err := url.Parse(h.cfg.ServerURL)
if err != nil {
return tailcfg.DERPRegion{}, err
}
var host string
var port int
host, portStr, err := net.SplitHostPort(serverURL.Host)
if err != nil {
if serverURL.Scheme == "https" {
host = serverURL.Host
port = 443
} else {
host = serverURL.Host
port = 80
}
} else {
port, err = strconv.Atoi(portStr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
}
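// Example (illustrative hostnames): with server_url "https://headscale.example.com"
// and no explicit port, host stays "headscale.example.com" and port falls back to 443;
// with "http://headscale.example.com:8080" the port is parsed as 8080.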
localDERPregion := tailcfg.DERPRegion{
RegionID: h.cfg.DERP.ServerRegionID,
RegionCode: h.cfg.DERP.ServerRegionCode,
RegionName: h.cfg.DERP.ServerRegionName,
Avoid: false,
Nodes: []*tailcfg.DERPNode{
{
Name: fmt.Sprintf("%d", h.cfg.DERP.ServerRegionID),
RegionID: h.cfg.DERP.ServerRegionID,
HostName: host,
DERPPort: port,
},
},
}
_, portSTUNStr, err := net.SplitHostPort(h.cfg.DERP.STUNAddr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
portSTUN, err := strconv.Atoi(portSTUNStr)
if err != nil {
return tailcfg.DERPRegion{}, err
}
localDERPregion.Nodes[0].STUNPort = portSTUN
return localDERPregion, nil
}
func (h *Headscale) DERPHandler(ctx *gin.Context) {
log.Trace().Caller().Msgf("/derp request from %v", ctx.ClientIP())
up := strings.ToLower(ctx.Request.Header.Get("Upgrade"))
if up != "websocket" && up != "derp" {
if up != "" {
log.Warn().Caller().Msgf("Weird websockets connection upgrade: %q", up)
}
ctx.String(http.StatusUpgradeRequired, "DERP requires connection upgrade")
return
}
fastStart := ctx.Request.Header.Get(fastStartHeader) == "1"
hijacker, ok := ctx.Writer.(http.Hijacker)
if !ok {
log.Error().Caller().Msg("DERP requires Hijacker interface from Gin")
ctx.String(http.StatusInternalServerError, "HTTP server does not support general TCP connections")
return
}
netConn, conn, err := hijacker.Hijack()
if err != nil {
log.Error().Caller().Err(err).Msgf("Hijack failed")
ctx.String(http.StatusInternalServerError, "HTTP server does not support general TCP connections")
return
}
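// Without fast start we first reply with a standard "101 Switching Protocols"
// upgrade below; when the Derp-Fast-Start header is set to "1" the client skips
// those headers and both sides start speaking the DERP protocol immediately.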
if !fastStart {
pubKey := h.privateKey.Public()
pubKeyStr := pubKey.UntypedHexString() // nolint
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
"Upgrade: DERP\r\n"+
"Connection: Upgrade\r\n"+
"Derp-Version: %v\r\n"+
"Derp-Public-Key: %s\r\n\r\n",
derp.ProtocolVersion,
pubKeyStr)
}
h.DERPServer.tailscaleDERP.Accept(netConn, conn, netConn.RemoteAddr().String())
}
// DERPProbeHandler is the endpoint that js/wasm clients hit to measure
// DERP latency, since they can't do UDP STUN queries.
func (h *Headscale) DERPProbeHandler(ctx *gin.Context) {
switch ctx.Request.Method {
case "HEAD", "GET":
ctx.Writer.Header().Set("Access-Control-Allow-Origin", "*")
default:
ctx.String(http.StatusMethodNotAllowed, "bogus probe method")
}
}
// DERPBootstrapDNSHandler implements the /bootstrap-dns endpoint.
// Described in https://github.com/tailscale/tailscale/issues/1405,
// this endpoint provides a way to help a client when it fails to start up
// because its DNS is broken.
// The initial implementation is here https://github.com/tailscale/tailscale/pull/1406
// They have a cache, but it is not clear if that is really necessary at headscale, uh, scale.
// An example implementation is found here https://derp.tailscale.com/bootstrap-dns
func (h *Headscale) DERPBootstrapDNSHandler(ctx *gin.Context) {
dnsEntries := make(map[string][]net.IP)
resolvCtx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()
var r net.Resolver
for _, region := range h.DERPMap.Regions {
for _, node := range region.Nodes { // we don't care if we override some nodes
addrs, err := r.LookupIP(resolvCtx, "ip", node.HostName)
if err != nil {
log.Trace().Caller().Err(err).Msgf("bootstrap DNS lookup failed %q", node.HostName)
continue
}
dnsEntries[node.HostName] = addrs
}
}
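// The resulting JSON maps each DERP hostname to its resolved addresses, e.g.
// {"derp1.tailscale.com": ["159.89.225.99", "2604:a880:400:d1::828:b001"]}
// (illustrative; actual entries depend on the DERPMap in use).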
ctx.JSON(http.StatusOK, dnsEntries)
}
// ServeSTUN starts a STUN server on the configured addr.
func (h *Headscale) ServeSTUN() {
packetConn, err := net.ListenPacket("udp", h.cfg.DERP.STUNAddr)
if err != nil {
log.Fatal().Msgf("failed to open STUN listener: %v", err)
}
log.Info().Msgf("STUN server started at %s", packetConn.LocalAddr())
udpConn, ok := packetConn.(*net.UDPConn)
if !ok {
log.Fatal().Msg("STUN listener is not a UDP listener")
}
serverSTUNListener(context.Background(), udpConn)
}
func serverSTUNListener(ctx context.Context, packetConn *net.UDPConn) {
var buf [64 << 10]byte
var (
bytesRead int
udpAddr *net.UDPAddr
err error
)
for {
bytesRead, udpAddr, err = packetConn.ReadFromUDP(buf[:])
if err != nil {
if ctx.Err() != nil {
return
}
log.Error().Caller().Err(err).Msgf("STUN ReadFrom")
time.Sleep(time.Second)
continue
}
log.Trace().Caller().Msgf("STUN request from %v", udpAddr)
pkt := buf[:bytesRead]
if !stun.Is(pkt) {
log.Trace().Caller().Msgf("UDP packet is not STUN")
continue
}
txid, err := stun.ParseBindingRequest(pkt)
if err != nil {
log.Trace().Caller().Err(err).Msgf("STUN parse error")
continue
}
res := stun.Response(txid, udpAddr.IP, uint16(udpAddr.Port))
_, err = packetConn.WriteTo(res, udpAddr)
if err != nil {
log.Trace().Caller().Err(err).Msgf("Issue writing to UDP")
continue
}
}
}
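The handlers above are mounted on headscale's HTTP router elsewhere in the codebase (not part of this diff). A minimal sketch of one possible wiring, with the paths taken from the handler comments; the helper name is an assumption:

```go
package headscale

import "github.com/gin-gonic/gin"

// registerDERPEndpoints is a hypothetical helper; headscale's real router setup
// may differ. The paths follow the handler documentation above.
func registerDERPEndpoints(h *Headscale, router *gin.Engine) {
	router.Any("/derp", h.DERPHandler)
	router.Any("/derp/probe", h.DERPProbeHandler)
	router.Any("/bootstrap-dns", h.DERPBootstrapDNSHandler)

	// The embedded STUN server listens separately on UDP at cfg.DERP.STUNAddr.
	go h.ServeSTUN()
}
```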

131
dns.go
View File

@@ -10,6 +10,15 @@ import (
"tailscale.com/util/dnsname"
)
const (
ByteSize = 8
)
const (
ipv4AddressLength = 32
ipv6AddressLength = 128
)
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
// This list of reverse DNS entries instructs the OS on which subnets and domains the Tailscale embedded DNS
// server (listening on 100.100.100.100 udp/53) should be used for.
@@ -30,26 +39,47 @@ import (
// From the netmask we can find out the wildcard bits (the bits that are not set in the netmask).
// This allows us to then calculate the subnets included in the subsequent class block and generate the entries.
func generateMagicDNSRootDomains(ipPrefix netaddr.IPPrefix, baseDomain string) ([]dnsname.FQDN, error) {
// TODO(juanfont): we are not handing out IPv6 addresses yet
// and in fact this is Tailscale.com's range (note the fd7a:115c:a1e0: range in the fc00::/7 network)
ipv6base := dnsname.FQDN("0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa.")
fqdns := []dnsname.FQDN{ipv6base}
func generateMagicDNSRootDomains(ipPrefixes []netaddr.IPPrefix) []dnsname.FQDN {
fqdns := make([]dnsname.FQDN, 0, len(ipPrefixes))
for _, ipPrefix := range ipPrefixes {
var generateDNSRoot func(netaddr.IPPrefix) []dnsname.FQDN
switch ipPrefix.IP().BitLen() {
case ipv4AddressLength:
generateDNSRoot = generateIPv4DNSRootDomain
case ipv6AddressLength:
generateDNSRoot = generateIPv6DNSRootDomain
default:
panic(
fmt.Sprintf(
"unsupported IP version with address length %d",
ipPrefix.IP().BitLen(),
),
)
}
fqdns = append(fqdns, generateDNSRoot(ipPrefix)...)
}
return fqdns
}
func generateIPv4DNSRootDomain(ipPrefix netaddr.IPPrefix) []dnsname.FQDN {
// Conversion to the std lib net.IPNet, which is a bit easier to operate on
netRange := ipPrefix.IPNet()
maskBits, _ := netRange.Mask.Size()
// lastOctet is the last IP byte covered by the mask
lastOctet := maskBits / 8
lastOctet := maskBits / ByteSize
// wildcardBits is the number of bits not under the mask in the lastOctet
wildcardBits := 8 - maskBits%8
wildcardBits := ByteSize - maskBits%ByteSize
// min is the value in the lastOctet byte of the IP
// max is basically 2^wildcardBits - i.e., the value when all the wildcardBits are set to 1
min := uint(netRange.IP[lastOctet])
max := uint((min + 1<<uint(wildcardBits)) - 1)
max := (min + 1<<uint(wildcardBits)) - 1
// here we generate the base domain (e.g., 100.in-addr.arpa., 16.172.in-addr.arpa., etc.)
rdnsSlice := []string{}
@@ -59,6 +89,7 @@ func generateMagicDNSRootDomains(ipPrefix netaddr.IPPrefix, baseDomain string) (
rdnsSlice = append(rdnsSlice, "in-addr.arpa.")
rdnsBase := strings.Join(rdnsSlice, ".")
fqdns := make([]dnsname.FQDN, 0, max-min+1)
for i := min; i <= max; i++ {
fqdn, err := dnsname.ToFQDN(fmt.Sprintf("%d.%s", i, rdnsBase))
if err != nil {
@@ -66,27 +97,97 @@ func generateMagicDNSRootDomains(ipPrefix netaddr.IPPrefix, baseDomain string) (
}
fqdns = append(fqdns, fqdn)
}
return fqdns, nil
return fqdns
}
func getMapResponseDNSConfig(dnsConfigOrig *tailcfg.DNSConfig, baseDomain string, m Machine, peers Machines) (*tailcfg.DNSConfig, error) {
func generateIPv6DNSRootDomain(ipPrefix netaddr.IPPrefix) []dnsname.FQDN {
const nibbleLen = 4
maskBits, _ := ipPrefix.IPNet().Mask.Size()
expanded := ipPrefix.IP().StringExpanded()
nibbleStr := strings.Map(func(r rune) rune {
if r == ':' {
return -1
}
return r
}, expanded)
// TODO?: this does not look like the most efficient implementation,
// but the inputs are not so long as to cause problems,
// and from what I can see, the generateMagicDNSRootDomains
// function is called only once over the lifetime of a server process.
prefixConstantParts := []string{}
for i := 0; i < maskBits/nibbleLen; i++ {
prefixConstantParts = append(
[]string{string(nibbleStr[i])},
prefixConstantParts...)
}
makeDomain := func(variablePrefix ...string) (dnsname.FQDN, error) {
prefix := strings.Join(append(variablePrefix, prefixConstantParts...), ".")
return dnsname.ToFQDN(fmt.Sprintf("%s.ip6.arpa", prefix))
}
var fqdns []dnsname.FQDN
if maskBits%4 == 0 {
dom, _ := makeDomain()
fqdns = append(fqdns, dom)
} else {
domCount := 1 << (maskBits % nibbleLen)
fqdns = make([]dnsname.FQDN, 0, domCount)
for i := 0; i < domCount; i++ {
varNibble := fmt.Sprintf("%x", i)
dom, err := makeDomain(varNibble)
if err != nil {
continue
}
fqdns = append(fqdns, dom)
}
}
return fqdns
}
func getMapResponseDNSConfig(
dnsConfigOrig *tailcfg.DNSConfig,
baseDomain string,
machine Machine,
peers Machines,
) *tailcfg.DNSConfig {
var dnsConfig *tailcfg.DNSConfig
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
// Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
dnsConfig = dnsConfigOrig.Clone()
dnsConfig.Domains = append(dnsConfig.Domains, fmt.Sprintf("%s.%s", m.Namespace.Name, baseDomain))
dnsConfig.Domains = append(
dnsConfig.Domains,
fmt.Sprintf(
"%s.%s",
machine.Namespace.Name,
baseDomain,
),
)
namespaceSet := set.New(set.ThreadSafe)
namespaceSet.Add(m.Namespace)
namespaceSet.Add(machine.Namespace)
for _, p := range peers {
namespaceSet.Add(p.Namespace)
}
for _, namespace := range namespaceSet.List() {
dnsRoute := fmt.Sprintf("%s.%s", namespace.(Namespace).Name, baseDomain)
for _, ns := range namespaceSet.List() {
namespace, ok := ns.(Namespace)
if !ok {
dnsConfig = dnsConfigOrig
continue
}
dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
dnsConfig.Routes[dnsRoute] = nil
}
} else {
dnsConfig = dnsConfigOrig
}
return dnsConfig, nil
return dnsConfig
}

View File

@@ -10,14 +10,16 @@ import (
)
func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
prefix := netaddr.MustParseIPPrefix("100.64.0.0/10")
domains, err := generateMagicDNSRootDomains(prefix, "foobar.headscale.net")
c.Assert(err, check.IsNil)
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("100.64.0.0/10"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "64.100.in-addr.arpa." {
found = true
break
}
}
@@ -27,6 +29,7 @@ func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
for _, domain := range domains {
if domain == "100.100.in-addr.arpa." {
found = true
break
}
}
@@ -36,6 +39,7 @@ func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
for _, domain := range domains {
if domain == "127.100.in-addr.arpa." {
found = true
break
}
}
@@ -43,14 +47,16 @@ func (s *Suite) TestMagicDNSRootDomains100(c *check.C) {
}
func (s *Suite) TestMagicDNSRootDomains172(c *check.C) {
prefix := netaddr.MustParseIPPrefix("172.16.0.0/16")
domains, err := generateMagicDNSRootDomains(prefix, "headscale.net")
c.Assert(err, check.IsNil)
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("172.16.0.0/16"),
}
domains := generateMagicDNSRootDomains(prefixes)
found := false
for _, domain := range domains {
if domain == "0.16.172.in-addr.arpa." {
found = true
break
}
}
@@ -60,108 +66,160 @@ func (s *Suite) TestMagicDNSRootDomains172(c *check.C) {
for _, domain := range domains {
if domain == "255.16.172.in-addr.arpa." {
found = true
break
}
}
c.Assert(found, check.Equals, true)
}
// Happens when netmask is a multiple of 4 bits (sounds likely).
func (s *Suite) TestMagicDNSRootDomainsIPv6Single(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("fd7a:115c:a1e0::/48"),
}
domains := generateMagicDNSRootDomains(prefixes)
c.Assert(len(domains), check.Equals, 1)
c.Assert(
domains[0].WithTrailingDot(),
check.Equals,
"0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa.",
)
}
func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
prefixes := []netaddr.IPPrefix{
netaddr.MustParseIPPrefix("fd7a:115c:a1e0::/50"),
}
domains := generateMagicDNSRootDomains(prefixes)
yieldsRoot := func(dom string) bool {
for _, candidate := range domains {
if candidate.WithTrailingDot() == dom {
return true
}
}
return false
}
c.Assert(len(domains), check.Equals, 4)
c.Assert(yieldsRoot("0.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("1.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("2.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
c.Assert(yieldsRoot("3.0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa."), check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
n1, err := h.CreateNamespace("shared1")
namespaceShared1, err := app.CreateNamespace("shared1")
c.Assert(err, check.IsNil)
n2, err := h.CreateNamespace("shared2")
namespaceShared2, err := app.CreateNamespace("shared2")
c.Assert(err, check.IsNil)
n3, err := h.CreateNamespace("shared3")
namespaceShared3, err := app.CreateNamespace("shared3")
c.Assert(err, check.IsNil)
pak1n1, err := h.CreatePreAuthKey(n1.Name, false, false, nil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
pak2n2, err := h.CreatePreAuthKey(n2.Name, false, false, nil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
pak3n3, err := h.CreatePreAuthKey(n3.Name, false, false, nil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
pak4n1, err := h.CreatePreAuthKey(n1.Name, false, false, nil)
PreAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
_, err = h.GetMachine(n1.Name, "test_get_shared_nodes_1")
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
m1 := &Machine{
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Name: "test_get_shared_nodes_1",
NamespaceID: n1.ID,
Namespace: *n1,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.1",
AuthKeyID: uint(pak1n1.ID),
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
h.db.Save(m1)
app.db.Save(machineInShared1)
_, err = h.GetMachine(n1.Name, m1.Name)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Name)
c.Assert(err, check.IsNil)
m2 := &Machine{
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_2",
NamespaceID: n2.ID,
Namespace: *n2,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.2",
AuthKeyID: uint(pak2n2.ID),
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
h.db.Save(m2)
app.db.Save(machineInShared2)
_, err = h.GetMachine(n2.Name, m2.Name)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Name)
c.Assert(err, check.IsNil)
m3 := &Machine{
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_3",
NamespaceID: n3.ID,
Namespace: *n3,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.3",
AuthKeyID: uint(pak3n3.ID),
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
h.db.Save(m3)
app.db.Save(machineInShared3)
_, err = h.GetMachine(n3.Name, m3.Name)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Name)
c.Assert(err, check.IsNil)
m4 := &Machine{
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_4",
NamespaceID: n1.ID,
Namespace: *n1,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.4",
AuthKeyID: uint(pak4n1.ID),
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.4")},
AuthKeyID: uint(PreAuthKey2InShared1.ID),
}
h.db.Save(m4)
err = h.AddSharedMachineToNamespace(m2, n1)
c.Assert(err, check.IsNil)
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
@@ -170,123 +228,141 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
Proxied: true,
}
m1peers, err := h.getPeers(m1)
peersOfMachineInShared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig, err := getMapResponseDNSConfig(&dnsConfigOrig, baseDomain, *m1, m1peers)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachineInShared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 2)
routeN1 := fmt.Sprintf("%s.%s", n1.Name, baseDomain)
_, ok := dnsConfig.Routes[routeN1]
c.Assert(len(dnsConfig.Routes), check.Equals, 3)
domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
_, ok := dnsConfig.Routes[domainRouteShared1]
c.Assert(ok, check.Equals, true)
routeN2 := fmt.Sprintf("%s.%s", n2.Name, baseDomain)
_, ok = dnsConfig.Routes[routeN2]
domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared2]
c.Assert(ok, check.Equals, true)
routeN3 := fmt.Sprintf("%s.%s", n3.Name, baseDomain)
_, ok = dnsConfig.Routes[routeN3]
c.Assert(ok, check.Equals, false)
domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
_, ok = dnsConfig.Routes[domainRouteShared3]
c.Assert(ok, check.Equals, true)
}
func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
n1, err := h.CreateNamespace("shared1")
namespaceShared1, err := app.CreateNamespace("shared1")
c.Assert(err, check.IsNil)
n2, err := h.CreateNamespace("shared2")
namespaceShared2, err := app.CreateNamespace("shared2")
c.Assert(err, check.IsNil)
n3, err := h.CreateNamespace("shared3")
namespaceShared3, err := app.CreateNamespace("shared3")
c.Assert(err, check.IsNil)
pak1n1, err := h.CreatePreAuthKey(n1.Name, false, false, nil)
preAuthKeyInShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
pak2n2, err := h.CreatePreAuthKey(n2.Name, false, false, nil)
preAuthKeyInShared2, err := app.CreatePreAuthKey(
namespaceShared2.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
pak3n3, err := h.CreatePreAuthKey(n3.Name, false, false, nil)
preAuthKeyInShared3, err := app.CreatePreAuthKey(
namespaceShared3.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
pak4n1, err := h.CreatePreAuthKey(n1.Name, false, false, nil)
preAuthKey2InShared1, err := app.CreatePreAuthKey(
namespaceShared1.Name,
false,
false,
nil,
)
c.Assert(err, check.IsNil)
_, err = h.GetMachine(n1.Name, "test_get_shared_nodes_1")
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
c.Assert(err, check.NotNil)
m1 := &Machine{
machineInShared1 := &Machine{
ID: 1,
MachineKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
Name: "test_get_shared_nodes_1",
NamespaceID: n1.ID,
Namespace: *n1,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.1",
AuthKeyID: uint(pak1n1.ID),
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.1")},
AuthKeyID: uint(preAuthKeyInShared1.ID),
}
h.db.Save(m1)
app.db.Save(machineInShared1)
_, err = h.GetMachine(n1.Name, m1.Name)
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Name)
c.Assert(err, check.IsNil)
m2 := &Machine{
machineInShared2 := &Machine{
ID: 2,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_2",
NamespaceID: n2.ID,
Namespace: *n2,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.2",
AuthKeyID: uint(pak2n2.ID),
NamespaceID: namespaceShared2.ID,
Namespace: *namespaceShared2,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.2")},
AuthKeyID: uint(preAuthKeyInShared2.ID),
}
h.db.Save(m2)
app.db.Save(machineInShared2)
_, err = h.GetMachine(n2.Name, m2.Name)
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Name)
c.Assert(err, check.IsNil)
m3 := &Machine{
machineInShared3 := &Machine{
ID: 3,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_3",
NamespaceID: n3.ID,
Namespace: *n3,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.3",
AuthKeyID: uint(pak3n3.ID),
NamespaceID: namespaceShared3.ID,
Namespace: *namespaceShared3,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.3")},
AuthKeyID: uint(preAuthKeyInShared3.ID),
}
h.db.Save(m3)
app.db.Save(machineInShared3)
_, err = h.GetMachine(n3.Name, m3.Name)
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Name)
c.Assert(err, check.IsNil)
m4 := &Machine{
machine2InShared1 := &Machine{
ID: 4,
MachineKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
Name: "test_get_shared_nodes_4",
NamespaceID: n1.ID,
Namespace: *n1,
Registered: true,
RegisterMethod: "authKey",
IPAddress: "100.64.0.4",
AuthKeyID: uint(pak4n1.ID),
NamespaceID: namespaceShared1.ID,
Namespace: *namespaceShared1,
RegisterMethod: RegisterMethodAuthKey,
IPAddresses: []netaddr.IP{netaddr.MustParseIP("100.64.0.4")},
AuthKeyID: uint(preAuthKey2InShared1.ID),
}
h.db.Save(m4)
err = h.AddSharedMachineToNamespace(m2, n1)
c.Assert(err, check.IsNil)
app.db.Save(machine2InShared1)
baseDomain := "foobar.headscale.net"
dnsConfigOrig := tailcfg.DNSConfig{
@@ -295,11 +371,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
Proxied: false,
}
m1peers, err := h.getPeers(m1)
peersOfMachine1Shared1, err := app.getPeers(machineInShared1)
c.Assert(err, check.IsNil)
dnsConfig, err := getMapResponseDNSConfig(&dnsConfigOrig, baseDomain, *m1, m1peers)
c.Assert(err, check.IsNil)
dnsConfig := getMapResponseDNSConfig(
&dnsConfigOrig,
baseDomain,
*machineInShared1,
peersOfMachine1Shared1,
)
c.Assert(dnsConfig, check.NotNil)
c.Assert(len(dnsConfig.Routes), check.Equals, 0)
c.Assert(len(dnsConfig.Domains), check.Equals, 1)

View File

@@ -1,39 +0,0 @@
# DNS in Headscale
Headscale supports Tailscale's DNS configuration and MagicDNS. Please have a look at their KB to better understand what this means:
- https://tailscale.com/kb/1054/dns/
- https://tailscale.com/kb/1081/magicdns/
- https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
Long story short, you can define the DNS servers you want to use in your tailnets, activate MagicDNS (so you don't have to remember the IP addresses of your nodes), and define search domains, as well as predefined hosts. Headscale will inject these settings into your nodes.
## Configuration reference
The setup is done via the `config.yaml` file, under the `dns_config` key.
```yaml
server_url: http://127.0.0.1:8001
listen_addr: 0.0.0.0:8001
private_key_path: private.key
dns_config:
nameservers:
- 1.1.1.1
- 8.8.8.8
restricted_nameservers:
foo.bar.com:
- 1.1.1.1
darp.headscale.net:
- 1.1.1.1
- 8.8.8.8
domains: []
magic_dns: true
base_domain: example.com
```
- `nameservers`: The list of DNS servers to use.
- `domains`: Search domains to inject.
- `magic_dns`: Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/). Only works if at least one nameserver is defined.
- `base_domain`: Defines the base domain used to create the hostnames for MagicDNS. `base_domain` must be an FQDN, without the trailing dot. The FQDN of the hosts will be `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
- `restricted_nameservers`: Split DNS (see https://tailscale.com/kb/1054/dns/): a map of search domains to the DNS servers to query for each one.
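For reference, these settings end up in the `tailcfg.DNSConfig` that headscale hands to each node (see `getMapResponseDNSConfig` in `dns.go`). The sketch below is illustrative only, not headscale's actual config loader; the values mirror the example above, and the per-namespace search domain and routes are added per machine:

```go
package main

import (
	"fmt"

	"tailscale.com/tailcfg"
	"tailscale.com/types/dnstype"
)

func main() {
	// Roughly what the example dns_config above translates to.
	dnsConfig := tailcfg.DNSConfig{
		Resolvers: []*dnstype.Resolver{{Addr: "1.1.1.1"}, {Addr: "8.8.8.8"}},
		Routes: map[string][]*dnstype.Resolver{ // restricted_nameservers (split DNS)
			"foo.bar.com":        {{Addr: "1.1.1.1"}},
			"darp.headscale.net": {{Addr: "1.1.1.1"}, {Addr: "8.8.8.8"}},
		},
		Domains: []string{"example.com"}, // base_domain
		Proxied: true,                    // magic_dns
	}
	fmt.Printf("%+v\n", dnsConfig)
}
```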

52
docs/README.md Normal file
View File

@@ -0,0 +1,52 @@
# headscale documentation
This page contains the official and community contributed documentation for `headscale`.
If you are having trouble following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/XcQxk2VHjx) instead of opening an Issue.
## Official documentation
### How-to
- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)
### References
- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)
## Community documentation
Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.
**It might be outdated and it might miss necessary steps**.
- [Running headscale in a container](running-headscale-container.md)
## Misc
### Policy ACLs
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
For instance, instead of referring to users when defining groups you must
use namespaces (which are the equivalent of users/logins in Tailscale.com).
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
When ACLs are in use, namespace borders no longer apply. Machines in any
namespace can communicate with any other host, as long as the ACLs permit the
exchange.
The [ACLs](acls.md) document should help you understand a fictional case of setting
up ACLs in a small company. All concepts presented in that document can be
applied outside of business-oriented usage.
### Apple devices
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.

141
docs/acls.md Normal file
View File

@@ -0,0 +1,141 @@
# ACLs use case example
Let's build an example use case for a small business (it may be the place where
ACLs are the most useful).
We have a small company with a boss, an admin, two developers and an intern.
The boss should have access to all servers but not to the users' hosts. The admin
should also have access to all hosts, except that their permissions should be
limited to maintaining the hosts (for example purposes). The developers can do
anything they want on dev hosts, but only watch on production hosts. The intern
can only interact with the development servers.
Each user has at least one device connected to the network, and we have some
servers:
- database.prod
- database.dev
- app-server1.prod
- app-server1.dev
- billing.internal
## Setup of the network
Let's create the namespaces. Each user should have their own namespace. The users
here are represented as namespaces.
```bash
headscale namespaces create boss
headscale namespaces create admin1
headscale namespaces create dev1
headscale namespaces create dev2
headscale namespaces create intern1
```
We don't need to create namespaces for the servers because the servers will be
tagged. When registering the servers we will need to add the flag
`--advertised-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
registering the server should be allowed to do it. Since anyone can add tags to
a server they can register, the check of the tags is done on the headscale server
and only valid tags are applied. A tag is valid if the namespace that is
registering it is allowed to do it.
Here are the ACLs to implement the same permissions as above:
```json
{
// groups are collections of users having a common scope. A user can be in multiple groups
// groups cannot be composed of groups
"groups": {
"group:boss": ["boss"],
"group:dev": ["dev1", "dev2"],
"group:admin": ["admin1"],
"group:intern": ["intern1"]
},
// tagOwners in tailscale is an association between a TAG and the people allowed to set this TAG on a server.
// This is documented [here](https://tailscale.com/kb/1068/acl-tags#defining-a-tag)
// and explained [here](https://tailscale.com/blog/rbac-like-it-was-meant-to-be/)
"tagOwners": {
// the administrators can add servers in production
"tag:prod-databases": ["group:admin"],
"tag:prod-app-servers": ["group:admin"],
// the boss can tag any server as internal
"tag:internal": ["group:boss"],
// dev can add servers for dev purposes as well as admins
"tag:dev-databases": ["group:admin", "group:dev"],
"tag:dev-app-servers": ["group:admin", "group:dev"]
// interns cannot add servers
},
"acls": [
// the boss has access to all servers
{
"action": "accept",
"users": ["group:boss"],
"ports": [
"tag:prod-databases:*",
"tag:prod-app-servers:*",
"tag:internal:*",
"tag:dev-databases:*",
"tag:dev-app-servers:*"
]
},
// admins only have access to the administrative ports of the servers
{
"action": "accept",
"users": ["group:admin"],
"ports": [
"tag:prod-databases:22",
"tag:prod-app-servers:22",
"tag:internal:22",
"tag:dev-databases:22",
"tag:dev-app-servers:22"
]
},
// developers have access to database servers and application servers on all ports
// they can only view the application servers in prod and have no access to database servers in production
{
"action": "accept",
"users": ["group:dev"],
"ports": [
"tag:dev-databases:*",
"tag:dev-app-servers:*",
"tag:prod-app-servers:80,443"
]
},
// servers should be able to talk to the database. The database should not be able to initiate connections to
// application servers
{
"action": "accept",
"users": ["tag:dev-app-servers"],
"ports": ["tag:dev-databases:5432"]
},
{
"action": "accept",
"users": ["tag:prod-app-servers"],
"ports": ["tag:prod-databases:5432"]
},
// interns have read-only access to dev-app-servers
{
"action": "accept",
"users": ["group:intern"],
"ports": ["tag:dev-app-servers:80,443"]
},
// We still have to allow intra-namespace communications since nothing guarantees that each user has
// their own namespace.
{ "action": "accept", "users": ["boss"], "ports": ["boss:*"] },
{ "action": "accept", "users": ["dev1"], "ports": ["dev1:*"] },
{ "action": "accept", "users": ["dev2"], "ports": ["dev2:*"] },
{ "action": "accept", "users": ["admin1"], "ports": ["admin1:*"] },
{ "action": "accept", "users": ["intern1"], "ports": ["intern1:*"] }
]
}
```

5
docs/examples/README.md Normal file
View File

@@ -0,0 +1,5 @@
# Examples
This directory contains examples of how to run `headscale` on different platforms.
All examples are provided by the community and they are not verified by the `headscale` authors.

View File

@@ -1,7 +1,9 @@
# Deploying Headscale on Kubernetes
# Deploying headscale on Kubernetes
**Note:** This is contributed by the community and not verified by the headscale authors.
This directory contains [Kustomize](https://kustomize.io) templates that deploy
Headscale in various configurations.
headscale in various configurations.
These templates currently support Rancher k3s. Other clusters may require
adaptation, especially around volume claims and ingress.
@@ -24,6 +26,7 @@ Configure DERP servers by editing `base/site/derp.yaml` if needed.
You'll somehow need to get `headscale:latest` into your cluster image registry.
An easy way to do this with k3s:
- Reconfigure k3s to use docker instead of containerd (`k3s server --docker`)
- `docker build -t headscale:latest ..` from here
@@ -61,21 +64,21 @@ Use the wrapper script to remotely operate headscale to perform administrative
tasks like creating namespaces, authkeys, etc.
```
[c@nix-slate:~/Projects/headscale/k8s]$ ./headscale.bash
[c@nix-slate:~/Projects/headscale/k8s]$ ./headscale.bash
headscale is an open source implementation of the Tailscale control server
https://gitlab.com/juanfont/headscale
https://github.com/juanfont/headscale
Usage:
headscale [command]
Available Commands:
help Help about any command
namespace Manage the namespaces of Headscale
node Manage the nodes of Headscale
preauthkey Handle the preauthkeys in Headscale
routes Manage the routes of Headscale
namespace Manage the namespaces of headscale
node Manage the nodes of headscale
preauthkey Handle the preauthkeys in headscale
routes Manage the routes of headscale
serve Launches the headscale server
version Print the version.

View File

@@ -5,4 +5,5 @@ metadata:
data:
server_url: $(PUBLIC_PROTO)://$(PUBLIC_HOSTNAME)
listen_addr: "0.0.0.0:8080"
metrics_listen_addr: "127.0.0.1:9090"
ephemeral_node_inactivity_timeout: "30m"

View File

@@ -0,0 +1,18 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: headscale
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: $(PUBLIC_HOSTNAME)
http:
paths:
- backend:
service:
name: headscale
port:
number: 8080
path: /
pathType: Prefix

View File

@@ -0,0 +1,42 @@
namespace: headscale
resources:
- configmap.yaml
- ingress.yaml
- service.yaml
generatorOptions:
disableNameSuffixHash: true
configMapGenerator:
- name: headscale-site
files:
- derp.yaml=site/derp.yaml
envs:
- site/public.env
- name: headscale-etc
literals:
- config.json={}
secretGenerator:
- name: headscale
files:
- secrets/private-key
vars:
- name: PUBLIC_PROTO
objRef:
kind: ConfigMap
name: headscale-site
apiVersion: v1
fieldRef:
fieldPath: data.public-proto
- name: PUBLIC_HOSTNAME
objRef:
kind: ConfigMap
name: headscale-site
apiVersion: v1
fieldRef:
fieldPath: data.public-hostname
- name: CONTACT_EMAIL
objRef:
kind: ConfigMap
name: headscale-site
apiVersion: v1
fieldRef:
fieldPath: data.contact-email

View File

@@ -8,6 +8,6 @@ spec:
selector:
app: headscale
ports:
- name: http
targetPort: http
port: 8080
- name: http
targetPort: http
port: 8080

View File

@@ -0,0 +1,81 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: headscale
spec:
replicas: 2
selector:
matchLabels:
app: headscale
template:
metadata:
labels:
app: headscale
spec:
containers:
- name: headscale
image: "headscale:latest"
imagePullPolicy: IfNotPresent
command: ["/go/bin/headscale", "serve"]
env:
- name: SERVER_URL
value: $(PUBLIC_PROTO)://$(PUBLIC_HOSTNAME)
- name: LISTEN_ADDR
valueFrom:
configMapKeyRef:
name: headscale-config
key: listen_addr
- name: METRICS_LISTEN_ADDR
valueFrom:
configMapKeyRef:
name: headscale-config
key: metrics_listen_addr
- name: DERP_MAP_PATH
value: /vol/config/derp.yaml
- name: EPHEMERAL_NODE_INACTIVITY_TIMEOUT
valueFrom:
configMapKeyRef:
name: headscale-config
key: ephemeral_node_inactivity_timeout
- name: DB_TYPE
value: postgres
- name: DB_HOST
value: postgres.headscale.svc.cluster.local
- name: DB_PORT
value: "5432"
- name: DB_USER
value: headscale
- name: DB_PASS
valueFrom:
secretKeyRef:
name: postgresql
key: password
- name: DB_NAME
value: headscale
ports:
- name: http
protocol: TCP
containerPort: 8080
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: config
mountPath: /vol/config
- name: secret
mountPath: /vol/secret
- name: etc
mountPath: /etc/headscale
volumes:
- name: config
configMap:
name: headscale-site
- name: etc
configMap:
name: headscale-etc
- name: secret
secret:
secretName: headscale

View File

@@ -0,0 +1,13 @@
namespace: headscale
bases:
- ../base
resources:
- deployment.yaml
- postgres-service.yaml
- postgres-statefulset.yaml
generatorOptions:
disableNameSuffixHash: true
secretGenerator:
- name: postgresql
files:
- secrets/password

View File

@@ -8,6 +8,6 @@ spec:
selector:
app: postgres
ports:
- name: postgres
targetPort: postgres
port: 5432
- name: postgres
targetPort: postgres
port: 5432

View File

@@ -0,0 +1,49 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: postgres
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: "postgres:13"
imagePullPolicy: IfNotPresent
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgresql
key: password
- name: POSTGRES_USER
value: headscale
ports:
- name: postgres
protocol: TCP
containerPort: 5432
livenessProbe:
tcpSocket:
port: 5432
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: pgdata
mountPath: /var/lib/postgresql/data
volumeClaimTemplates:
- metadata:
name: pgdata
spec:
storageClassName: local-path
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi

View File

@@ -6,6 +6,6 @@ metadata:
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- $(PUBLIC_HOSTNAME)
secretName: production-cert
- hosts:
- $(PUBLIC_HOSTNAME)
secretName: production-cert

View File

@@ -0,0 +1,9 @@
namespace: headscale
bases:
- ../base
resources:
- production-issuer.yaml
patches:
- path: ingress-patch.yaml
target:
kind: Ingress

View File

@@ -11,6 +11,6 @@ spec:
# Secret resource used to store the account's private key.
name: letsencrypt-production-acc-key
solvers:
- http01:
ingress:
class: traefik
- http01:
ingress:
class: traefik

View File

@@ -1,5 +1,5 @@
namespace: headscale
bases:
- ../base
- ../base
resources:
- statefulset.yaml
- statefulset.yaml

View File

@@ -0,0 +1,82 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: headscale
spec:
serviceName: headscale
replicas: 1
selector:
matchLabels:
app: headscale
template:
metadata:
labels:
app: headscale
spec:
containers:
- name: headscale
image: "headscale:latest"
imagePullPolicy: IfNotPresent
command: ["/go/bin/headscale", "serve"]
env:
- name: SERVER_URL
value: $(PUBLIC_PROTO)://$(PUBLIC_HOSTNAME)
- name: LISTEN_ADDR
valueFrom:
configMapKeyRef:
name: headscale-config
key: listen_addr
- name: METRICS_LISTEN_ADDR
valueFrom:
configMapKeyRef:
name: headscale-config
key: metrics_listen_addr
- name: DERP_MAP_PATH
value: /vol/config/derp.yaml
- name: EPHEMERAL_NODE_INACTIVITY_TIMEOUT
valueFrom:
configMapKeyRef:
name: headscale-config
key: ephemeral_node_inactivity_timeout
- name: DB_TYPE
value: sqlite3
- name: DB_PATH
value: /vol/data/db.sqlite
ports:
- name: http
protocol: TCP
containerPort: 8080
livenessProbe:
tcpSocket:
port: http
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: config
mountPath: /vol/config
- name: data
mountPath: /vol/data
- name: secret
mountPath: /vol/secret
- name: etc
mountPath: /etc/headscale
volumes:
- name: config
configMap:
name: headscale-site
- name: etc
configMap:
name: headscale-etc
- name: secret
secret:
secretName: headscale
volumeClaimTemplates:
- metadata:
name: data
spec:
storageClassName: local-path
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi

View File

@@ -6,6 +6,6 @@ metadata:
traefik.ingress.kubernetes.io/router.tls: "true"
spec:
tls:
- hosts:
- $(PUBLIC_HOSTNAME)
secretName: staging-cert
- hosts:
- $(PUBLIC_HOSTNAME)
secretName: staging-cert

View File

@@ -0,0 +1,9 @@
namespace: headscale
bases:
- ../base
resources:
- staging-issuer.yaml
patches:
- path: ingress-patch.yaml
target:
kind: Ingress

View File

@@ -11,6 +11,6 @@ spec:
# Secret resource used to store the account's private key.
name: letsencrypt-staging-acc-key
solvers:
- http01:
ingress:
class: traefik
- http01:
ingress:
class: traefik

6
docs/glossary.md Normal file
View File

@@ -0,0 +1,6 @@
# Glossary
| Term | Description |
| --------- | --------------------------------------------------------------------------------------------------------------------- |
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
| Namespace | A namespace is a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User |

Binary file not shown (image added; 101 KiB).

362
docs/proposals/001-acls.md Normal file
View File

@@ -0,0 +1,362 @@
# ACLs
A key component of tailscale is the notion of a tailnet. This notion is hidden,
but the implications that it has on how to use tailscale are not.
For tailscale, a [tailnet](https://tailscale.com/kb/1136/tailnet/) is the
following:
> For personal users, you are a tailnet of many devices and one person. Each
> device gets a private Tailscale IP address in the CGNAT range and every
> device can talk directly to every other device, wherever they are on the
> internet.
>
> For businesses and organizations, a tailnet is many devices and many users.
> It can be based on your Microsoft Active Directory, your Google Workspace, a
> GitHub organization, Okta tenancy, or other identity provider namespace. All
> of the devices and users in your tailnet can be seen by the tailnet
> administrators in the Tailscale admin console. There you can apply
> tailnet-wide configuration, such as ACLs that affect visibility of devices
> inside your tailnet, DNS settings, and more.
## Current implementation and issues
Currently in headscale, namespaces are used both as tailnets and as users. The
issue is that if we want to use ACLs we can't use both at the same time.
Tailnets cannot communicate with each other, so we can't have an ACL that
authorizes tailnet (namespace) A to talk to tailnet (namespace) B.
We also can't write ACLs based on users (namespaces in headscale) since all
devices belong to the same user.
With the current implementation, the only way to use ACLs is to associate
each headscale IP to a host manually and then write the ACLs according to this
manual mapping.
```json
{
"hosts": {
"host1": "100.64.0.1",
"server": "100.64.0.2"
},
"acls": [
{ "action": "accept", "users": ["host1"], "ports": ["host2:80,443"] }
]
}
```
While this works, it requires a lot of manual editing of the configuration and
keeping track of all the devices' IP addresses.
## Proposition for a next implementation
In order to ease the use of ACLs we need to split the tailnet and user
notions.
A solution could be to consider a headscale server (in its entirety) as a
tailnet.
For personal users the default behavior could either allow all communications
between all namespaces (like tailscale) or disallow all communications between
namespaces (the current behavior).
For businesses and organisations, viewing a headscale instance as a single tailnet
would allow users (namespaces) to talk to each other with the ACLs. As described
in tailscale's documentation [[1]], a server should be tagged and personal
devices should be tied to a user. Translated into headscale's terms, each user can
have multiple devices and all those devices should be in the same namespace.
The servers should be tagged and used as such.
This implementation would render the currently implemented sharing feature
useless, since an ACL could do the same. Simplifying to only one user
interface to do one thing is easier and less confusing for the users.
To better suit the ACLs in this proposition, it's advised to consider that each
namespace belongs to one person. This person can have multiple devices; they
will all be considered as the same user in the ACLs. The OIDC feature wouldn't need
to map people to namespaces, just create a namespace if the person isn't
registered yet.
As a sidenote, users would like to write ACLs as YAML. We should offer users
the ability to write rules in either format (HuJSON or YAML).
[1]: https://tailscale.com/kb/1068/acl-tags/
## Example
Let's build an example use case for a small business (it may be the place where
ACLs are the most useful).
We have a small company with a boss, an admin, two developers and an intern.
The boss should have access to all servers but not to the users' hosts. The admin
should also have access to all hosts, except that their permissions should be
limited to maintaining the hosts (for example purposes). The developers can do
anything they want on dev hosts, but only watch on production hosts. The intern
can only interact with the development servers.
Each user has at least one device connected to the network, and we have some
servers:
- database.prod
- database.dev
- app-server1.prod
- app-server1.dev
- billing.internal
### Current headscale implementation
Let's create some namespaces
```bash
headscale namespaces create prod
headscale namespaces create dev
headscale namespaces create internal
headscale namespaces create users
headscale nodes register -n users boss-computer
headscale nodes register -n users admin1-computer
headscale nodes register -n users dev1-computer
headscale nodes register -n users dev1-phone
headscale nodes register -n users dev2-computer
headscale nodes register -n users intern1-computer
headscale nodes register -n prod database
headscale nodes register -n prod app-server1
headscale nodes register -n dev database
headscale nodes register -n dev app-server1
headscale nodes register -n internal billing
headscale nodes list
ID | Name | Namespace | IP address
1 | boss-computer | users | 100.64.0.1
2 | admin1-computer | users | 100.64.0.2
3 | dev1-computer | users | 100.64.0.3
4 | dev1-phone | users | 100.64.0.4
5 | dev2-computer | users | 100.64.0.5
6 | intern1-computer | users | 100.64.0.6
7 | database | prod | 100.64.0.7
8 | app-server1 | prod | 100.64.0.8
9 | database | dev | 100.64.0.9
10 | app-server1 | dev | 100.64.0.10
11 | internal | internal | 100.64.0.11
```
In order to only allow the communications described above, we
need to add the following ACLs:
```json
{
"hosts": {
"boss-computer": "100.64.0.1",
"admin1-computer": "100.64.0.2",
"dev1-computer": "100.64.0.3",
"dev1-phone": "100.64.0.4",
"dev2-computer": "100.64.0.5",
"intern1-computer": "100.64.0.6",
"prod-app-server1": "100.64.0.8"
},
"groups": {
"group:dev": ["dev1-computer", "dev1-phone", "dev2-computer"],
"group:admin": ["admin1-computer"],
"group:boss": ["boss-computer"],
"group:intern": ["intern1-computer"]
},
"acls": [
// the boss has access to all servers but not to users' hosts
{
"action": "accept",
"users": ["group:boss"],
"ports": ["prod:*", "dev:*", "internal:*"]
},
// admins have access to the administration port (let's only consider port 22 here)
{
"action": "accept",
"users": ["group:admin"],
"ports": ["prod:22", "dev:22", "internal:22"]
},
// dev can do anything on dev servers and check access on prod servers
{
"action": "accept",
"users": ["group:dev"],
"ports": ["dev:*", "prod-app-server1:80,443"]
},
// interns only have access to ports 80 and 443 on dev servers (lame internship)
{ "action": "accept", "users": ["group:intern"], "ports": ["dev:80,443"] },
// users can access their own devices
{
"action": "accept",
"users": ["dev1-computer"],
"ports": ["dev1-phone:*"]
},
{
"action": "accept",
"users": ["dev1-phone"],
"ports": ["dev1-computer:*"]
},
// internal namespace communications should still be allowed within the namespace
{ "action": "accept", "users": ["dev"], "ports": ["dev:*"] },
{ "action": "accept", "users": ["prod"], "ports": ["prod:*"] },
{ "action": "accept", "users": ["internal"], "ports": ["internal:*"] }
]
}
```
Since communication between namespaces isn't possible, we also have to share the
devices between the namespaces.
```bash
// add boss host to prod, dev and internal network
headscale nodes share -i 1 -n prod
headscale nodes share -i 1 -n dev
headscale nodes share -i 1 -n internal
// add admin computer to prod, dev and internal network
headscale nodes share -i 2 -n prod
headscale nodes share -i 2 -n dev
headscale nodes share -i 2 -n internal
// add all dev to prod and dev network
headscale nodes share -i 3 -n dev
headscale nodes share -i 4 -n dev
headscale nodes share -i 3 -n prod
headscale nodes share -i 4 -n prod
headscale nodes share -i 5 -n dev
headscale nodes share -i 5 -n prod
headscale nodes share -i 6 -n dev
```
This fake network has not been tested, but it should work. Operating it could
be quite tedious if the company grows. Each time a new user joins we have to add
them to a group and share them with the correct namespaces. If the user wants
multiple devices we have to allow communication to each of them one by one. If
the business changes its organisation we may have to rewrite all ACLs and
reorganise all namespaces.
If we add servers in production we should also update the ACLs to allow dev
access to certain categories of them (only app servers, for example).
### Example based on the proposition in this document
Let's create the namespaces
```bash
headscale namespaces create boss
headscale namespaces create admin1
headscale namespaces create dev1
headscale namespaces create dev2
headscale namespaces create intern1
```
We don't need to create namespaces for the servers because the servers will be
tagged. When registering the servers we will need to add the flag
`--advertised-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
registering the server should be allowed to do it. Since anyone can add tags to
a server they can register, the check of the tags is done on the headscale server
and only valid tags are applied. A tag is valid if the namespace that is
registering it is allowed to do it.
Here are the ACLs to implement the same permissions as above:
```json
{
// groups are simpler and only list the namespace names
"groups": {
"group:boss": ["boss"],
"group:dev": ["dev1", "dev2"],
"group:admin": ["admin1"],
"group:intern": ["intern1"]
},
"tagOwners": {
// the administrators can add servers in production
"tag:prod-databases": ["group:admin"],
"tag:prod-app-servers": ["group:admin"],
// the boss can tag any server as internal
"tag:internal": ["group:boss"],
// dev can add servers for dev purposes as well as admins
"tag:dev-databases": ["group:admin", "group:dev"],
"tag:dev-app-servers": ["group:admin", "group:dev"]
// interns cannot add servers
},
"acls": [
// the boss has access to all servers
{
"action": "accept",
"users": ["group:boss"],
"ports": [
"tag:prod-databases:*",
"tag:prod-app-servers:*",
"tag:internal:*",
"tag:dev-databases:*",
"tag:dev-app-servers:*"
]
},
// admins only have access to the administrative ports of the servers
{
"action": "accept",
"users": ["group:admin"],
"ports": [
"tag:prod-databases:22",
"tag:prod-app-servers:22",
"tag:internal:22",
"tag:dev-databases:22",
"tag:dev-app-servers:22"
]
},
{
"action": "accept",
"users": ["group:dev"],
"ports": [
"tag:dev-databases:*",
"tag:dev-app-servers:*",
"tag:prod-app-servers:80,443"
]
},
// servers should be able to talk to the database. The database should not be able to initiate connections to servers
{
"action": "accept",
"users": ["tag:dev-app-servers"],
"ports": ["tag:dev-databases:5432"]
},
{
"action": "accept",
"users": ["tag:prod-app-servers"],
"ports": ["tag:prod-databases:5432"]
},
// interns have read-only access to dev-app-servers
{
"action": "accept",
"users": ["group:intern"],
"ports": ["tag:dev-app-servers:80,443"]
},
// we still have to allow intra-namespace communications since nothing guarantees that each user has their own namespace. This could be discussed further.
{ "action": "accept", "users": ["boss"], "ports": ["boss:*"] },
{ "action": "accept", "users": ["dev1"], "ports": ["dev1:*"] },
{ "action": "accept", "users": ["dev2"], "ports": ["dev2:*"] },
{ "action": "accept", "users": ["admin1"], "ports": ["admin1:*"] },
{ "action": "accept", "users": ["intern1"], "ports": ["intern1:*"] }
]
}
```
With this implementation, the sharing step is not necessary. The maintenance cost
of the ACL file is lower and less tedious (no need to map hostnames and IPs
into it).

100
docs/remote-cli.md Normal file
View File

@@ -0,0 +1,100 @@
# Controlling `headscale` with remote CLI
## Prerequisites
- A workstation to run `headscale` (could be Linux, macOS, other supported platforms)
- A `headscale` server (version `0.13.0` or newer)
- Access to create API keys (local access to the `headscale` server)
- `headscale` _must_ be served over TLS/HTTPS
- Remote access does _not_ support unencrypted traffic.
- Port `50443` must be open in the firewall (or the port overridden by the `grpc_listen_addr` option)
## Goal
This documentation has the goal of showing a user how to control a `headscale` instance
from a remote machine with the `headscale` command line binary.
## Create an API key
We need to create an API key to authenticate our remote `headscale` client when using it from our workstation.
To create an API key, log into your `headscale` server and generate a key:
```shell
headscale apikeys create --expiration 90d
```
Copy the output of the command and save it for later. Please note that you cannot retrieve a key again;
if the key is lost, expire the old one and create a new one.
To list the keys currently associated with the server:
```shell
headscale apikeys list
```
and to expire a key:
```shell
headscale apikeys expire --prefix "<PREFIX>"
```
## Download and configure `headscale`
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
2. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`
3. Make `headscale` executable:
```shell
chmod +x /usr/local/bin/headscale
```
4. Configure the CLI through Environment Variables
```shell
export HEADSCALE_CLI_ADDRESS="<HEADSCALE ADDRESS>:<PORT>"
export HEADSCALE_CLI_API_KEY="<API KEY FROM PREVIOUS STAGE>"
```
for example:
```shell
export HEADSCALE_CLI_ADDRESS="headscale.example.com:50443"
export HEADSCALE_CLI_API_KEY="abcde12345"
```
This will tell the `headscale` binary to connect to a remote instance, instead of looking
for a local instance (which is what it does on the server).
The API key is needed to make sure that you are allowed to access the server. The key is _not_
needed when running directly on the server, as the connection is local.
5. Test the connection
Let us run the headscale command to verify that we can connect by listing our nodes:
```shell
headscale nodes list
```
You should now be able to see a list of your nodes from your workstation, and you can
now control the `headscale` server from your workstation.
## Behind a proxy
It is possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as `headscale`.
While this is _not a supported_ feature, an example of how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).
## Troubleshooting
Checklist:
- Make sure you have the _same_ `headscale` version on your server and workstation
- Make sure you use version `0.13.0` or newer.
- Verify that your TLS certificate is valid and trusted
- If you do not have access to a trusted certificate (e.g. from Let's Encrypt), add your self-signed certificate to the trust store of your OS, or
- Set `HEADSCALE_CLI_INSECURE` to 1 in your environment to skip certificate verification (see the example below)
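For example, to temporarily skip certificate verification while testing against a self-signed certificate (a sketch, assuming the variable is interpreted as a boolean; do not leave this set outside of testing):
```shell
export HEADSCALE_CLI_INSECURE=1
```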


@@ -0,0 +1,149 @@
# Running headscale in a container
**Note:** the container documentation is maintained by the _community_ and there is no guarantee
it is up to date, or working.
## Goal
This documentation has the goal of showing a user how to set up and run `headscale` in a container.
[Docker](https://www.docker.com) is used as the reference container implementation, but there is no reason that it should
not work with alternatives like [Podman](https://podman.io). The Docker image can be found on Docker Hub [here](https://hub.docker.com/r/headscale/headscale).
## Configure and run `headscale`
1. Prepare a directory on the host Docker node in your directory of choice, used to hold `headscale` configuration and the [SQLite](https://www.sqlite.org/) database:
```shell
mkdir ./headscale && cd ./headscale
mkdir ./config
```
2. Create an empty SQLite database in the headscale directory:
```shell
touch ./config/db.sqlite
```
3. **(Strongly Recommended)** Download a copy of the [example configuration](../config-example.yaml) from the [headscale repository](https://github.com/juanfont/headscale/).
Using wget:
```shell
wget -O ./config/config.yaml https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml
```
Using curl:
```shell
curl https://raw.githubusercontent.com/juanfont/headscale/main/config-example.yaml -o ./config/config.yaml
```
**(Advanced)** If you would like to hand craft a config file **instead** of downloading the example config file, create a blank `headscale` configuration in the headscale directory to edit:
```shell
touch ./config/config.yaml
```
Modify the config file to your preferences before launching the Docker container.
4. Start the headscale server while working in the host headscale directory:
```shell
docker run \
--name headscale \
--detach \
--rm \
--volume $(pwd)/config:/etc/headscale/ \
--publish 127.0.0.1:8080:8080 \
--publish 127.0.0.1:9090:9090 \
headscale/headscale:<VERSION> \
headscale serve
```
This command will mount `config/` under `/etc/headscale`, forward ports 8080 and 9090 out of the container so the
`headscale` instance becomes available, and then detach so headscale runs in the background.
5. Verify `headscale` is running:
Follow the container logs:
```shell
docker logs --follow headscale
```
Verify running containers:
```shell
docker ps
```
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
6. Create a namespace ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
docker exec headscale -- headscale namespaces create myfirstnamespace
```
### Register a machine (normal login)
On a client machine, execute the `tailscale` login command:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
```
To register a machine when running `headscale` in a container, take the headscale command and pass it to the container:
```shell
docker exec headscale -- \
headscale --namespace myfirstnamespace nodes register --key <YOUR_MACHINE_KEY>
```
### Register a machine using a pre-authenticated key
Generate a key using the command line:
```shell
docker exec headscale -- \
headscale --namespace myfirstnamespace preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` with the `tailscale up` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Debugging headscale running in Docker
The `headscale/headscale` Docker container is based on a "distroless" image that does not contain a shell or any other debug tools. If you need to debug your application running in the Docker container, you can use the `-debug` variant, for example `headscale/headscale:x.x.x-debug`.
### Running the debug Docker container
To run the debug Docker container, use the exact same commands as above, but replace `headscale/headscale:x.x.x` with `headscale/headscale:x.x.x-debug` (`x.x.x` is the version of headscale). The two containers are compatible with each other, so you can alternate between them.
### Executing commands in the debug container
The default command in the debug container is to run `headscale`, which is located at `/bin/headscale` inside the container.
Additionally, the debug container includes a minimalist Busybox shell.
To launch a shell in the container, use:
```
docker run -it headscale/headscale:x.x.x-debug sh
```
You can also execute commands directly, such as `ls /bin` in this example:
```
docker run headscale/headscale:x.x.x-debug ls /bin
```
Using `docker exec` allows you to run commands in an existing container.
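For example, to get an interactive shell inside the running debug container started above (assuming it is still named `headscale`):
```
docker exec -it headscale sh
```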


@@ -0,0 +1,184 @@
# Running headscale on Linux
## Goal
This documentation has the goal of showing a user how to set up and run `headscale` on Linux.
In addition to the "get up and running" section, there is an optional [SystemD section](#running-headscale-in-the-background-with-systemd)
describing how to make `headscale` run properly in a server environment.
## Configure and run `headscale`
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
```
2. Make `headscale` executable:
```shell
chmod +x /usr/local/bin/headscale
```
3. Prepare a directory to hold `headscale` configuration and the [SQLite](https://www.sqlite.org/) database:
```shell
# Directory for configuration
mkdir -p /etc/headscale
# Directory for Database, and other variable data (like certificates)
mkdir -p /var/lib/headscale
```
4. Create an empty SQLite database:
```shell
touch /var/lib/headscale/db.sqlite
```
5. Create a `headscale` configuration:
```shell
touch /etc/headscale/config.yaml
```
It is **strongly recommended** to copy and modify the [example configuration](../config-example.yaml)
from the [headscale repository](../)
6. Start the headscale server:
```shell
headscale serve
```
This command will start `headscale` in the current terminal session.
---
To continue the tutorial, open a new terminal and let it run in the background.
Alternatively use terminal emulators like [tmux](https://github.com/tmux/tmux) or [screen](https://www.gnu.org/software/screen/).
To run `headscale` in the background, please follow the steps in the [SystemD section](#running-headscale-in-the-background-with-systemd) before continuing.
7. Verify `headscale` is running:
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
8. Create a namespace ([tailnet](https://tailscale.com/kb/1136/tailnet/)):
```shell
headscale namespaces create myfirstnamespace
```
### Register a machine (normal login)
On a client machine, execute the `tailscale` login command:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
```
Register the machine:
```shell
headscale --namespace myfirstnamespace nodes register --key <YOUR_MACHINE_KEY>
```
### Register a machine using a pre-authenticated key
Generate a key using the command line:
```shell
headscale --namespace myfirstnamespace preauthkeys create --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to `headscale` with the `tailscale up` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Running `headscale` in the background with SystemD
This section demonstrates how to run `headscale` as a service in the background with [SystemD](https://www.freedesktop.org/wiki/Software/systemd/).
This should work on most modern Linux distributions.
1. Create a SystemD service configuration at `/etc/systemd/system/headscale.service` containing:
```systemd
[Unit]
Description=headscale controller
After=syslog.target
After=network.target
[Service]
Type=simple
User=headscale
Group=headscale
ExecStart=/usr/local/bin/headscale serve
Restart=always
RestartSec=5
# Optional security enhancements
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/headscale /var/run/headscale
AmbientCapabilities=CAP_NET_BIND_SERVICE
RuntimeDirectory=headscale
[Install]
WantedBy=multi-user.target
```
Note that when running as the headscale user, ensure that you either add your current user to the headscale group:
```shell
usermod -a -G headscale current_user
```
or run all headscale commands as the headscale user:
```shell
su - headscale
```
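The unit file above runs `headscale` as a dedicated `headscale` user and group. If they do not exist yet, one way to create them (a sketch assuming a distribution that provides `useradd`; the nologin shell path may differ on your system) is:
```shell
# Create a dedicated system account without a login shell
useradd --system --home-dir /var/lib/headscale --shell /usr/sbin/nologin headscale
# Give it ownership of the state directory created earlier
chown -R headscale:headscale /var/lib/headscale
```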
2. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with a SystemD friendly path:
```yaml
unix_socket: /var/run/headscale/headscale.sock
```
3. Reload SystemD to load the new configuration file:
```shell
systemctl daemon-reload
```
4. Enable and start the new `headscale` service:
```shell
systemctl enable headscale
systemctl start headscale
```
5. Verify the headscale service:
```shell
systemctl status headscale
```
Verify `headscale` is available:
```shell
curl http://127.0.0.1:9090/metrics
```
`headscale` will now run in the background and start at boot.
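To follow the service logs while it runs, standard systemd tooling can be used, for example:
```shell
journalctl --unit headscale --follow
```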

45
docs/tls.md Normal file

@@ -0,0 +1,45 @@
# Running the service via TLS (optional)
## Let's Encrypt / ACME
To get a certificate automatically via [Let's Encrypt](https://letsencrypt.org/), set `tls_letsencrypt_hostname` to the desired certificate hostname. This name must resolve to the IP address(es) headscale is reachable on (i.e., it must correspond to the `server_url` configuration parameter). The certificate and Let's Encrypt account credentials will be stored in the directory configured in `tls_letsencrypt_cache_dir`. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from. The certificate will automatically be renewed as needed.
```yaml
tls_letsencrypt_hostname: ""
tls_letsencrypt_listen: ":http"
tls_letsencrypt_cache_dir: ".cache"
tls_letsencrypt_challenge_type: HTTP-01
```
### Challenge type HTTP-01
The default challenge type `HTTP-01` requires that headscale is reachable on port 80 for the Let's Encrypt automated validation, in addition to whatever port is configured in `listen_addr`. By default, headscale listens on port 80 on all local IPs for Let's Encrypt automated validation.
If you need to change the ip and/or port used by headscale for the Let's Encrypt validation process, set `tls_letsencrypt_listen` to the appropriate value. This can be handy if you are running headscale as a non-root user (or can't run `setcap`). Keep in mind, however, that Let's Encrypt will _only_ connect to port 80 for the validation callback, so if you change `tls_letsencrypt_listen` you will also need to configure something else (e.g. a firewall rule) to forward the traffic from port 80 to the ip:port combination specified in `tls_letsencrypt_listen`.
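As an illustration (assuming a firewall rule or similar redirect from port 80 is in place), headscale could be told to listen for the validation traffic on a non-privileged port:
```yaml
tls_letsencrypt_listen: ":8081"
```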
### Challenge type TLS-ALPN-01
Alternatively, `tls_letsencrypt_challenge_type` can be set to `TLS-ALPN-01`. In this configuration, headscale listens on the ip:port combination defined in `listen_addr`. Let's Encrypt will _only_ connect to port 443 for the validation callback, so if `listen_addr` is not set to port 443, something else (e.g. a firewall rule) will be required to forward the traffic from port 443 to the ip:port combination specified in `listen_addr`.
## Bring your own certificate
headscale can also be configured to expose its web service via TLS. To configure the certificate and key file manually, set the `tls_cert_path` and `tls_key_path` configuration parameters. If the path is relative, it will be interpreted as relative to the directory the configuration file was read from.
```yaml
tls_cert_path: ""
tls_key_path: ""
```
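For example, with certificates kept outside of headscale's own state directory (illustrative paths, not defaults):
```yaml
tls_cert_path: "/etc/headscale/certs/server.crt"
tls_key_path: "/etc/headscale/certs/server.key"
```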
### Configuring Mutual TLS Authentication (mTLS)
mTLS is a method by which an HTTPS server authenticates clients, e.g. Tailscale, using TLS certificates. This can be configured by applying one of the following values to the `tls_client_auth_mode` setting in the configuration file.
| Value | Behavior |
| ------------------- | ---------------------------------------------------------- |
| `disabled` | Disable mTLS. |
| `relaxed` (default) | A client certificate is required, but it is not verified. |
| `enforced` | Requires clients to supply a certificate that is verified. |
```yaml
tls_client_auth_mode: ""
```

50
docs/windows-client.md Normal file

@@ -0,0 +1,50 @@
# Connecting a Windows client
## Goal
This documentation has the goal of showing how a user can use the official Windows [Tailscale](https://tailscale.com) client with `headscale`.
## Add registry keys
To make the Windows client behave as expected and to run well with `headscale`, two registry keys **must** be set:
- `HKLM:\SOFTWARE\Tailscale IPN\UnattendedMode` must be set to `always` as a `string` type, to allow Tailscale to run properly in the background
- `HKLM:\SOFTWARE\Tailscale IPN\LoginURL` must be set to `<YOUR HEADSCALE URL>` as a `string` type, to ensure Tailscale contacts the correct control server.
![windows-registry](./images/windows-registry.png)
The Tailscale Windows client has been observed to reset its configuration on logout/reboot, and these two keys [resolve that issue](https://github.com/tailscale/tailscale/issues/2798).
For a guide on how to edit registry keys, [check out Computer Hope](https://www.computerhope.com/issues/ch001348.htm).
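If you prefer the command line, the same keys can be set from an elevated command prompt with `reg add` (a sketch; replace the URL with your own `headscale` instance):
```
reg add "HKLM\SOFTWARE\Tailscale IPN" /v UnattendedMode /t REG_SZ /d always
reg add "HKLM\SOFTWARE\Tailscale IPN" /v LoginURL /t REG_SZ /d "https://headscale.example.com"
```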
## Installation
Download the [Official Windows Client](https://tailscale.com/download/windows) and install it.
When the installation has finished, start Tailscale and log in (you might have to click the icon in the system tray).
The login should open a browser window and direct you to your `headscale` instance.
## Troubleshooting
If you are seeing repeated messages like:
```
[GIN] 2022/02/10 - 16:39:34 | 200 | 1.105306ms | 127.0.0.1 | POST "/machine/redacted"
```
in your `headscale` output, turn on `DEBUG` logging and look for:
```
2022-02-11T00:59:29Z DBG Machine registration has expired. Sending a authurl to register machine=redacted
```
This typically means that the registry keys above were not set appropriately.
To reset and try again, it is important to do the following:
1. Ensure the registry keys from the previous guide are correctly set.
2. Shut down the Tailscale service (or the client running in the tray)
3. Delete the Tailscale application data folder, located at `C:\Users\<USERNAME>\AppData\Local\Tailscale`, and try to connect again.
4. Ensure the Windows node is deleted from headscale (to ensure fresh setup)
5. Start Tailscale on the Windows machine and retry the login.


@@ -0,0 +1,558 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.27.1
// protoc (unknown)
// source: headscale/v1/apikey.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
timestamppb "google.golang.org/protobuf/types/known/timestamppb"
reflect "reflect"
sync "sync"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type ApiKey struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Id uint64 `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
Prefix string `protobuf:"bytes,2,opt,name=prefix,proto3" json:"prefix,omitempty"`
Expiration *timestamppb.Timestamp `protobuf:"bytes,3,opt,name=expiration,proto3" json:"expiration,omitempty"`
CreatedAt *timestamppb.Timestamp `protobuf:"bytes,4,opt,name=created_at,json=createdAt,proto3" json:"created_at,omitempty"`
LastSeen *timestamppb.Timestamp `protobuf:"bytes,5,opt,name=last_seen,json=lastSeen,proto3" json:"last_seen,omitempty"`
}
func (x *ApiKey) Reset() {
*x = ApiKey{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ApiKey) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ApiKey) ProtoMessage() {}
func (x *ApiKey) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[0]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ApiKey.ProtoReflect.Descriptor instead.
func (*ApiKey) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{0}
}
func (x *ApiKey) GetId() uint64 {
if x != nil {
return x.Id
}
return 0
}
func (x *ApiKey) GetPrefix() string {
if x != nil {
return x.Prefix
}
return ""
}
func (x *ApiKey) GetExpiration() *timestamppb.Timestamp {
if x != nil {
return x.Expiration
}
return nil
}
func (x *ApiKey) GetCreatedAt() *timestamppb.Timestamp {
if x != nil {
return x.CreatedAt
}
return nil
}
func (x *ApiKey) GetLastSeen() *timestamppb.Timestamp {
if x != nil {
return x.LastSeen
}
return nil
}
type CreateApiKeyRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Expiration *timestamppb.Timestamp `protobuf:"bytes,1,opt,name=expiration,proto3" json:"expiration,omitempty"`
}
func (x *CreateApiKeyRequest) Reset() {
*x = CreateApiKeyRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CreateApiKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateApiKeyRequest) ProtoMessage() {}
func (x *CreateApiKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[1]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateApiKeyRequest.ProtoReflect.Descriptor instead.
func (*CreateApiKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{1}
}
func (x *CreateApiKeyRequest) GetExpiration() *timestamppb.Timestamp {
if x != nil {
return x.Expiration
}
return nil
}
type CreateApiKeyResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ApiKey string `protobuf:"bytes,1,opt,name=api_key,json=apiKey,proto3" json:"api_key,omitempty"`
}
func (x *CreateApiKeyResponse) Reset() {
*x = CreateApiKeyResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *CreateApiKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*CreateApiKeyResponse) ProtoMessage() {}
func (x *CreateApiKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[2]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use CreateApiKeyResponse.ProtoReflect.Descriptor instead.
func (*CreateApiKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{2}
}
func (x *CreateApiKeyResponse) GetApiKey() string {
if x != nil {
return x.ApiKey
}
return ""
}
type ExpireApiKeyRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
Prefix string `protobuf:"bytes,1,opt,name=prefix,proto3" json:"prefix,omitempty"`
}
func (x *ExpireApiKeyRequest) Reset() {
*x = ExpireApiKeyRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExpireApiKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpireApiKeyRequest) ProtoMessage() {}
func (x *ExpireApiKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[3]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpireApiKeyRequest.ProtoReflect.Descriptor instead.
func (*ExpireApiKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{3}
}
func (x *ExpireApiKeyRequest) GetPrefix() string {
if x != nil {
return x.Prefix
}
return ""
}
type ExpireApiKeyResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *ExpireApiKeyResponse) Reset() {
*x = ExpireApiKeyResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ExpireApiKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ExpireApiKeyResponse) ProtoMessage() {}
func (x *ExpireApiKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[4]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ExpireApiKeyResponse.ProtoReflect.Descriptor instead.
func (*ExpireApiKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{4}
}
type ListApiKeysRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
}
func (x *ListApiKeysRequest) Reset() {
*x = ListApiKeysRequest{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ListApiKeysRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListApiKeysRequest) ProtoMessage() {}
func (x *ListApiKeysRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[5]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListApiKeysRequest.ProtoReflect.Descriptor instead.
func (*ListApiKeysRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{5}
}
type ListApiKeysResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
unknownFields protoimpl.UnknownFields
ApiKeys []*ApiKey `protobuf:"bytes,1,rep,name=api_keys,json=apiKeys,proto3" json:"api_keys,omitempty"`
}
func (x *ListApiKeysResponse) Reset() {
*x = ListApiKeysResponse{}
if protoimpl.UnsafeEnabled {
mi := &file_headscale_v1_apikey_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
}
func (x *ListApiKeysResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*ListApiKeysResponse) ProtoMessage() {}
func (x *ListApiKeysResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_apikey_proto_msgTypes[6]
if protoimpl.UnsafeEnabled && x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use ListApiKeysResponse.ProtoReflect.Descriptor instead.
func (*ListApiKeysResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_apikey_proto_rawDescGZIP(), []int{6}
}
func (x *ListApiKeysResponse) GetApiKeys() []*ApiKey {
if x != nil {
return x.ApiKeys
}
return nil
}
var File_headscale_v1_apikey_proto protoreflect.FileDescriptor
var file_headscale_v1_apikey_proto_rawDesc = []byte{
0x0a, 0x19, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x61,
0x70, 0x69, 0x6b, 0x65, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0c, 0x68, 0x65, 0x61,
0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x1a, 0x1f, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x74, 0x69, 0x6d, 0x65, 0x73,
0x74, 0x61, 0x6d, 0x70, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0xe0, 0x01, 0x0a, 0x06, 0x41,
0x70, 0x69, 0x4b, 0x65, 0x79, 0x12, 0x0e, 0x0a, 0x02, 0x69, 0x64, 0x18, 0x01, 0x20, 0x01, 0x28,
0x04, 0x52, 0x02, 0x69, 0x64, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x18,
0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78, 0x12, 0x3a, 0x0a,
0x0a, 0x65, 0x78, 0x70, 0x69, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x18, 0x03, 0x20, 0x01, 0x28,
0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x0a, 0x65,
0x78, 0x70, 0x69, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x39, 0x0a, 0x0a, 0x63, 0x72, 0x65,
0x61, 0x74, 0x65, 0x64, 0x5f, 0x61, 0x74, 0x18, 0x04, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e,
0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e,
0x54, 0x69, 0x6d, 0x65, 0x73, 0x74, 0x61, 0x6d, 0x70, 0x52, 0x09, 0x63, 0x72, 0x65, 0x61, 0x74,
0x65, 0x64, 0x41, 0x74, 0x12, 0x37, 0x0a, 0x09, 0x6c, 0x61, 0x73, 0x74, 0x5f, 0x73, 0x65, 0x65,
0x6e, 0x18, 0x05, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x74,
0x61, 0x6d, 0x70, 0x52, 0x08, 0x6c, 0x61, 0x73, 0x74, 0x53, 0x65, 0x65, 0x6e, 0x22, 0x51, 0x0a,
0x13, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x12, 0x3a, 0x0a, 0x0a, 0x65, 0x78, 0x70, 0x69, 0x72, 0x61, 0x74, 0x69,
0x6f, 0x6e, 0x18, 0x01, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x1a, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x54, 0x69, 0x6d, 0x65, 0x73,
0x74, 0x61, 0x6d, 0x70, 0x52, 0x0a, 0x65, 0x78, 0x70, 0x69, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e,
0x22, 0x2f, 0x0a, 0x14, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79,
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x17, 0x0a, 0x07, 0x61, 0x70, 0x69, 0x5f,
0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x61, 0x70, 0x69, 0x4b, 0x65,
0x79, 0x22, 0x2d, 0x0a, 0x13, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65,
0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x12, 0x16, 0x0a, 0x06, 0x70, 0x72, 0x65, 0x66,
0x69, 0x78, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x70, 0x72, 0x65, 0x66, 0x69, 0x78,
0x22, 0x16, 0x0a, 0x14, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79,
0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x14, 0x0a, 0x12, 0x4c, 0x69, 0x73, 0x74,
0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x22, 0x46,
0x0a, 0x13, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x73, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x12, 0x2f, 0x0a, 0x08, 0x61, 0x70, 0x69, 0x5f, 0x6b, 0x65, 0x79,
0x73, 0x18, 0x01, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x14, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63,
0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x52, 0x07, 0x61,
0x70, 0x69, 0x4b, 0x65, 0x79, 0x73, 0x42, 0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62,
0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6a, 0x75, 0x61, 0x6e, 0x66, 0x6f, 0x6e, 0x74, 0x2f, 0x68, 0x65,
0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2f, 0x67, 0x65, 0x6e, 0x2f, 0x67, 0x6f, 0x2f, 0x76,
0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
}
var (
file_headscale_v1_apikey_proto_rawDescOnce sync.Once
file_headscale_v1_apikey_proto_rawDescData = file_headscale_v1_apikey_proto_rawDesc
)
func file_headscale_v1_apikey_proto_rawDescGZIP() []byte {
file_headscale_v1_apikey_proto_rawDescOnce.Do(func() {
file_headscale_v1_apikey_proto_rawDescData = protoimpl.X.CompressGZIP(file_headscale_v1_apikey_proto_rawDescData)
})
return file_headscale_v1_apikey_proto_rawDescData
}
var file_headscale_v1_apikey_proto_msgTypes = make([]protoimpl.MessageInfo, 7)
var file_headscale_v1_apikey_proto_goTypes = []interface{}{
(*ApiKey)(nil), // 0: headscale.v1.ApiKey
(*CreateApiKeyRequest)(nil), // 1: headscale.v1.CreateApiKeyRequest
(*CreateApiKeyResponse)(nil), // 2: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyRequest)(nil), // 3: headscale.v1.ExpireApiKeyRequest
(*ExpireApiKeyResponse)(nil), // 4: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysRequest)(nil), // 5: headscale.v1.ListApiKeysRequest
(*ListApiKeysResponse)(nil), // 6: headscale.v1.ListApiKeysResponse
(*timestamppb.Timestamp)(nil), // 7: google.protobuf.Timestamp
}
var file_headscale_v1_apikey_proto_depIdxs = []int32{
7, // 0: headscale.v1.ApiKey.expiration:type_name -> google.protobuf.Timestamp
7, // 1: headscale.v1.ApiKey.created_at:type_name -> google.protobuf.Timestamp
7, // 2: headscale.v1.ApiKey.last_seen:type_name -> google.protobuf.Timestamp
7, // 3: headscale.v1.CreateApiKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.ListApiKeysResponse.api_keys:type_name -> headscale.v1.ApiKey
5, // [5:5] is the sub-list for method output_type
5, // [5:5] is the sub-list for method input_type
5, // [5:5] is the sub-list for extension type_name
5, // [5:5] is the sub-list for extension extendee
0, // [0:5] is the sub-list for field type_name
}
func init() { file_headscale_v1_apikey_proto_init() }
func file_headscale_v1_apikey_proto_init() {
if File_headscale_v1_apikey_proto != nil {
return
}
if !protoimpl.UnsafeEnabled {
file_headscale_v1_apikey_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ApiKey); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_headscale_v1_apikey_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CreateApiKeyRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_headscale_v1_apikey_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*CreateApiKeyResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_headscale_v1_apikey_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExpireApiKeyRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_headscale_v1_apikey_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ExpireApiKeyResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_headscale_v1_apikey_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ListApiKeysRequest); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
file_headscale_v1_apikey_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} {
switch v := v.(*ListApiKeysResponse); i {
case 0:
return &v.state
case 1:
return &v.sizeCache
case 2:
return &v.unknownFields
default:
return nil
}
}
}
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: file_headscale_v1_apikey_proto_rawDesc,
NumEnums: 0,
NumMessages: 7,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_apikey_proto_goTypes,
DependencyIndexes: file_headscale_v1_apikey_proto_depIdxs,
MessageInfos: file_headscale_v1_apikey_proto_msgTypes,
}.Build()
File_headscale_v1_apikey_proto = out.File
file_headscale_v1_apikey_proto_rawDesc = nil
file_headscale_v1_apikey_proto_goTypes = nil
file_headscale_v1_apikey_proto_depIdxs = nil
}

File diff suppressed because it is too large


@@ -0,0 +1,313 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.27.1
// protoc (unknown)
// source: headscale/v1/headscale.proto
package v1
import (
_ "google.golang.org/genproto/googleapis/api/annotations"
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
var File_headscale_v1_headscale_proto protoreflect.FileDescriptor
var file_headscale_v1_headscale_proto_rawDesc = []byte{
0x0a, 0x1c, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x68,
0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0c,
0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x1a, 0x1c, 0x67, 0x6f,
0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x61, 0x6e, 0x6e, 0x6f, 0x74, 0x61, 0x74,
0x69, 0x6f, 0x6e, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1c, 0x68, 0x65, 0x61, 0x64,
0x73, 0x63, 0x61, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61,
0x63, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1d, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63,
0x61, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x70, 0x72, 0x65, 0x61, 0x75, 0x74, 0x68, 0x6b, 0x65,
0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1a, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61,
0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x1a, 0x19, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2f, 0x76,
0x31, 0x2f, 0x72, 0x6f, 0x75, 0x74, 0x65, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x19,
0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2f, 0x76, 0x31, 0x2f, 0x61, 0x70, 0x69,
0x6b, 0x65, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x32, 0xa3, 0x13, 0x0a, 0x10, 0x48, 0x65,
0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x77,
0x0a, 0x0c, 0x47, 0x65, 0x74, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x21,
0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x47, 0x65,
0x74, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73,
0x74, 0x1a, 0x22, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31,
0x2e, 0x47, 0x65, 0x74, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52, 0x65, 0x73,
0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x20, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1a, 0x12, 0x18, 0x2f,
0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65,
0x2f, 0x7b, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x12, 0x7c, 0x0a, 0x0f, 0x43, 0x72, 0x65, 0x61, 0x74,
0x65, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x24, 0x2e, 0x68, 0x65, 0x61,
0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65,
0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x1a, 0x25, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e,
0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1c, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x16, 0x22,
0x11, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61,
0x63, 0x65, 0x3a, 0x01, 0x2a, 0x12, 0x96, 0x01, 0x0a, 0x0f, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65,
0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x24, 0x2e, 0x68, 0x65, 0x61, 0x64,
0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x52, 0x65, 0x6e, 0x61, 0x6d, 0x65, 0x4e,
0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
0x25, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x52,
0x65, 0x6e, 0x61, 0x6d, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x36, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x30, 0x22, 0x2e,
0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63,
0x65, 0x2f, 0x7b, 0x6f, 0x6c, 0x64, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x2f, 0x72, 0x65, 0x6e,
0x61, 0x6d, 0x65, 0x2f, 0x7b, 0x6e, 0x65, 0x77, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x7d, 0x12, 0x80,
0x01, 0x0a, 0x0f, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61,
0x63, 0x65, 0x12, 0x24, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76,
0x31, 0x2e, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63,
0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73,
0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x4e, 0x61,
0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
0x20, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1a, 0x2a, 0x18, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31,
0x2f, 0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x2f, 0x7b, 0x6e, 0x61, 0x6d, 0x65,
0x7d, 0x12, 0x76, 0x0a, 0x0e, 0x4c, 0x69, 0x73, 0x74, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61,
0x63, 0x65, 0x73, 0x12, 0x23, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e,
0x76, 0x31, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x4e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65,
0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x24, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73,
0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x4e, 0x61, 0x6d, 0x65,
0x73, 0x70, 0x61, 0x63, 0x65, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x19,
0x82, 0xd3, 0xe4, 0x93, 0x02, 0x13, 0x12, 0x11, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f,
0x6e, 0x61, 0x6d, 0x65, 0x73, 0x70, 0x61, 0x63, 0x65, 0x12, 0x80, 0x01, 0x0a, 0x10, 0x43, 0x72,
0x65, 0x61, 0x74, 0x65, 0x50, 0x72, 0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65, 0x79, 0x12, 0x25,
0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72,
0x65, 0x61, 0x74, 0x65, 0x50, 0x72, 0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65, 0x79, 0x52, 0x65,
0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c,
0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x50, 0x72, 0x65, 0x41, 0x75,
0x74, 0x68, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1d, 0x82,
0xd3, 0xe4, 0x93, 0x02, 0x17, 0x22, 0x12, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x70,
0x72, 0x65, 0x61, 0x75, 0x74, 0x68, 0x6b, 0x65, 0x79, 0x3a, 0x01, 0x2a, 0x12, 0x87, 0x01, 0x0a,
0x10, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x50, 0x72, 0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65,
0x79, 0x12, 0x25, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31,
0x2e, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x50, 0x72, 0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65,
0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x26, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73,
0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65, 0x50, 0x72,
0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65,
0x22, 0x24, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1e, 0x22, 0x19, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76,
0x31, 0x2f, 0x70, 0x72, 0x65, 0x61, 0x75, 0x74, 0x68, 0x6b, 0x65, 0x79, 0x2f, 0x65, 0x78, 0x70,
0x69, 0x72, 0x65, 0x3a, 0x01, 0x2a, 0x12, 0x7a, 0x0a, 0x0f, 0x4c, 0x69, 0x73, 0x74, 0x50, 0x72,
0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65, 0x79, 0x73, 0x12, 0x24, 0x2e, 0x68, 0x65, 0x61, 0x64,
0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x50, 0x72, 0x65,
0x41, 0x75, 0x74, 0x68, 0x4b, 0x65, 0x79, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
0x25, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x4c,
0x69, 0x73, 0x74, 0x50, 0x72, 0x65, 0x41, 0x75, 0x74, 0x68, 0x4b, 0x65, 0x79, 0x73, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x1a, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x14, 0x12, 0x12,
0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x70, 0x72, 0x65, 0x61, 0x75, 0x74, 0x68, 0x6b,
0x65, 0x79, 0x12, 0x89, 0x01, 0x0a, 0x12, 0x44, 0x65, 0x62, 0x75, 0x67, 0x43, 0x72, 0x65, 0x61,
0x74, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x12, 0x27, 0x2e, 0x68, 0x65, 0x61, 0x64,
0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x62, 0x75, 0x67, 0x43, 0x72,
0x65, 0x61, 0x74, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65,
0x73, 0x74, 0x1a, 0x28, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76,
0x31, 0x2e, 0x44, 0x65, 0x62, 0x75, 0x67, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x4d, 0x61, 0x63,
0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x20, 0x82, 0xd3,
0xe4, 0x93, 0x02, 0x1a, 0x22, 0x15, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x64, 0x65,
0x62, 0x75, 0x67, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x3a, 0x01, 0x2a, 0x12, 0x75,
0x0a, 0x0a, 0x47, 0x65, 0x74, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x12, 0x1f, 0x2e, 0x68,
0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x47, 0x65, 0x74, 0x4d,
0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x20, 0x2e,
0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x47, 0x65, 0x74,
0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
0x24, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1e, 0x12, 0x1c, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31,
0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x2f, 0x7b, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e,
0x65, 0x5f, 0x69, 0x64, 0x7d, 0x12, 0x80, 0x01, 0x0a, 0x0f, 0x52, 0x65, 0x67, 0x69, 0x73, 0x74,
0x65, 0x72, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x12, 0x24, 0x2e, 0x68, 0x65, 0x61, 0x64,
0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x52, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65,
0x72, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a,
0x25, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x52,
0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65,
0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x20, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1a, 0x22, 0x18,
0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x2f,
0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x12, 0x7e, 0x0a, 0x0d, 0x44, 0x65, 0x6c, 0x65,
0x74, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x12, 0x22, 0x2e, 0x68, 0x65, 0x61, 0x64,
0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x6c, 0x65, 0x74, 0x65, 0x4d,
0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x23, 0x2e,
0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x44, 0x65, 0x6c,
0x65, 0x74, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e,
0x73, 0x65, 0x22, 0x24, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1e, 0x2a, 0x1c, 0x2f, 0x61, 0x70, 0x69,
0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x2f, 0x7b, 0x6d, 0x61, 0x63,
0x68, 0x69, 0x6e, 0x65, 0x5f, 0x69, 0x64, 0x7d, 0x12, 0x85, 0x01, 0x0a, 0x0d, 0x45, 0x78, 0x70,
0x69, 0x72, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x12, 0x22, 0x2e, 0x68, 0x65, 0x61,
0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65,
0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x23,
0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78,
0x70, 0x69, 0x72, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f,
0x6e, 0x73, 0x65, 0x22, 0x2b, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x25, 0x22, 0x23, 0x2f, 0x61, 0x70,
0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x2f, 0x7b, 0x6d, 0x61,
0x63, 0x68, 0x69, 0x6e, 0x65, 0x5f, 0x69, 0x64, 0x7d, 0x2f, 0x65, 0x78, 0x70, 0x69, 0x72, 0x65,
0x12, 0x6e, 0x0a, 0x0c, 0x4c, 0x69, 0x73, 0x74, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x73,
0x12, 0x21, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e,
0x4c, 0x69, 0x73, 0x74, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x73, 0x52, 0x65, 0x71, 0x75,
0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e,
0x76, 0x31, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x73, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x17, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x11, 0x12,
0x0f, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65,
0x12, 0x8b, 0x01, 0x0a, 0x0f, 0x47, 0x65, 0x74, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52,
0x6f, 0x75, 0x74, 0x65, 0x12, 0x24, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65,
0x2e, 0x76, 0x31, 0x2e, 0x47, 0x65, 0x74, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x6f,
0x75, 0x74, 0x65, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x25, 0x2e, 0x68, 0x65, 0x61,
0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x47, 0x65, 0x74, 0x4d, 0x61, 0x63,
0x68, 0x69, 0x6e, 0x65, 0x52, 0x6f, 0x75, 0x74, 0x65, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x22, 0x2b, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x25, 0x12, 0x23, 0x2f, 0x61, 0x70, 0x69, 0x2f,
0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x2f, 0x7b, 0x6d, 0x61, 0x63, 0x68,
0x69, 0x6e, 0x65, 0x5f, 0x69, 0x64, 0x7d, 0x2f, 0x72, 0x6f, 0x75, 0x74, 0x65, 0x73, 0x12, 0x97,
0x01, 0x0a, 0x13, 0x45, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65,
0x52, 0x6f, 0x75, 0x74, 0x65, 0x73, 0x12, 0x28, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61,
0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x4d, 0x61, 0x63, 0x68,
0x69, 0x6e, 0x65, 0x52, 0x6f, 0x75, 0x74, 0x65, 0x73, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74,
0x1a, 0x29, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e,
0x45, 0x6e, 0x61, 0x62, 0x6c, 0x65, 0x4d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x52, 0x6f, 0x75,
0x74, 0x65, 0x73, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x2b, 0x82, 0xd3, 0xe4,
0x93, 0x02, 0x25, 0x22, 0x23, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x6d, 0x61, 0x63,
0x68, 0x69, 0x6e, 0x65, 0x2f, 0x7b, 0x6d, 0x61, 0x63, 0x68, 0x69, 0x6e, 0x65, 0x5f, 0x69, 0x64,
0x7d, 0x2f, 0x72, 0x6f, 0x75, 0x74, 0x65, 0x73, 0x12, 0x70, 0x0a, 0x0c, 0x43, 0x72, 0x65, 0x61,
0x74, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x12, 0x21, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73,
0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74, 0x65, 0x41, 0x70,
0x69, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e, 0x68, 0x65,
0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x43, 0x72, 0x65, 0x61, 0x74,
0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22,
0x19, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x13, 0x22, 0x0e, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31,
0x2f, 0x61, 0x70, 0x69, 0x6b, 0x65, 0x79, 0x3a, 0x01, 0x2a, 0x12, 0x77, 0x0a, 0x0c, 0x45, 0x78,
0x70, 0x69, 0x72, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x12, 0x21, 0x2e, 0x68, 0x65, 0x61,
0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x70, 0x69, 0x72, 0x65,
0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x71, 0x75, 0x65, 0x73, 0x74, 0x1a, 0x22, 0x2e,
0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76, 0x31, 0x2e, 0x45, 0x78, 0x70,
0x69, 0x72, 0x65, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x52, 0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73,
0x65, 0x22, 0x20, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x1a, 0x22, 0x15, 0x2f, 0x61, 0x70, 0x69, 0x2f,
0x76, 0x31, 0x2f, 0x61, 0x70, 0x69, 0x6b, 0x65, 0x79, 0x2f, 0x65, 0x78, 0x70, 0x69, 0x72, 0x65,
0x3a, 0x01, 0x2a, 0x12, 0x6a, 0x0a, 0x0b, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x69, 0x4b, 0x65,
0x79, 0x73, 0x12, 0x20, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65, 0x2e, 0x76,
0x31, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x73, 0x52, 0x65, 0x71,
0x75, 0x65, 0x73, 0x74, 0x1a, 0x21, 0x2e, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65,
0x2e, 0x76, 0x31, 0x2e, 0x4c, 0x69, 0x73, 0x74, 0x41, 0x70, 0x69, 0x4b, 0x65, 0x79, 0x73, 0x52,
0x65, 0x73, 0x70, 0x6f, 0x6e, 0x73, 0x65, 0x22, 0x16, 0x82, 0xd3, 0xe4, 0x93, 0x02, 0x10, 0x12,
0x0e, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x76, 0x31, 0x2f, 0x61, 0x70, 0x69, 0x6b, 0x65, 0x79, 0x42,
0x29, 0x5a, 0x27, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x6a, 0x75,
0x61, 0x6e, 0x66, 0x6f, 0x6e, 0x74, 0x2f, 0x68, 0x65, 0x61, 0x64, 0x73, 0x63, 0x61, 0x6c, 0x65,
0x2f, 0x67, 0x65, 0x6e, 0x2f, 0x67, 0x6f, 0x2f, 0x76, 0x31, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
0x6f, 0x33,
}
var file_headscale_v1_headscale_proto_goTypes = []interface{}{
(*GetNamespaceRequest)(nil), // 0: headscale.v1.GetNamespaceRequest
(*CreateNamespaceRequest)(nil), // 1: headscale.v1.CreateNamespaceRequest
(*RenameNamespaceRequest)(nil), // 2: headscale.v1.RenameNamespaceRequest
(*DeleteNamespaceRequest)(nil), // 3: headscale.v1.DeleteNamespaceRequest
(*ListNamespacesRequest)(nil), // 4: headscale.v1.ListNamespacesRequest
(*CreatePreAuthKeyRequest)(nil), // 5: headscale.v1.CreatePreAuthKeyRequest
(*ExpirePreAuthKeyRequest)(nil), // 6: headscale.v1.ExpirePreAuthKeyRequest
(*ListPreAuthKeysRequest)(nil), // 7: headscale.v1.ListPreAuthKeysRequest
(*DebugCreateMachineRequest)(nil), // 8: headscale.v1.DebugCreateMachineRequest
(*GetMachineRequest)(nil), // 9: headscale.v1.GetMachineRequest
(*RegisterMachineRequest)(nil), // 10: headscale.v1.RegisterMachineRequest
(*DeleteMachineRequest)(nil), // 11: headscale.v1.DeleteMachineRequest
(*ExpireMachineRequest)(nil), // 12: headscale.v1.ExpireMachineRequest
(*ListMachinesRequest)(nil), // 13: headscale.v1.ListMachinesRequest
(*GetMachineRouteRequest)(nil), // 14: headscale.v1.GetMachineRouteRequest
(*EnableMachineRoutesRequest)(nil), // 15: headscale.v1.EnableMachineRoutesRequest
(*CreateApiKeyRequest)(nil), // 16: headscale.v1.CreateApiKeyRequest
(*ExpireApiKeyRequest)(nil), // 17: headscale.v1.ExpireApiKeyRequest
(*ListApiKeysRequest)(nil), // 18: headscale.v1.ListApiKeysRequest
(*GetNamespaceResponse)(nil), // 19: headscale.v1.GetNamespaceResponse
(*CreateNamespaceResponse)(nil), // 20: headscale.v1.CreateNamespaceResponse
(*RenameNamespaceResponse)(nil), // 21: headscale.v1.RenameNamespaceResponse
(*DeleteNamespaceResponse)(nil), // 22: headscale.v1.DeleteNamespaceResponse
(*ListNamespacesResponse)(nil), // 23: headscale.v1.ListNamespacesResponse
(*CreatePreAuthKeyResponse)(nil), // 24: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyResponse)(nil), // 25: headscale.v1.ExpirePreAuthKeyResponse
(*ListPreAuthKeysResponse)(nil), // 26: headscale.v1.ListPreAuthKeysResponse
(*DebugCreateMachineResponse)(nil), // 27: headscale.v1.DebugCreateMachineResponse
(*GetMachineResponse)(nil), // 28: headscale.v1.GetMachineResponse
(*RegisterMachineResponse)(nil), // 29: headscale.v1.RegisterMachineResponse
(*DeleteMachineResponse)(nil), // 30: headscale.v1.DeleteMachineResponse
(*ExpireMachineResponse)(nil), // 31: headscale.v1.ExpireMachineResponse
(*ListMachinesResponse)(nil), // 32: headscale.v1.ListMachinesResponse
(*GetMachineRouteResponse)(nil), // 33: headscale.v1.GetMachineRouteResponse
(*EnableMachineRoutesResponse)(nil), // 34: headscale.v1.EnableMachineRoutesResponse
(*CreateApiKeyResponse)(nil), // 35: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyResponse)(nil), // 36: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysResponse)(nil), // 37: headscale.v1.ListApiKeysResponse
}
var file_headscale_v1_headscale_proto_depIdxs = []int32{
	0,  // 0: headscale.v1.HeadscaleService.GetNamespace:input_type -> headscale.v1.GetNamespaceRequest
	1,  // 1: headscale.v1.HeadscaleService.CreateNamespace:input_type -> headscale.v1.CreateNamespaceRequest
	2,  // 2: headscale.v1.HeadscaleService.RenameNamespace:input_type -> headscale.v1.RenameNamespaceRequest
	3,  // 3: headscale.v1.HeadscaleService.DeleteNamespace:input_type -> headscale.v1.DeleteNamespaceRequest
	4,  // 4: headscale.v1.HeadscaleService.ListNamespaces:input_type -> headscale.v1.ListNamespacesRequest
	5,  // 5: headscale.v1.HeadscaleService.CreatePreAuthKey:input_type -> headscale.v1.CreatePreAuthKeyRequest
	6,  // 6: headscale.v1.HeadscaleService.ExpirePreAuthKey:input_type -> headscale.v1.ExpirePreAuthKeyRequest
	7,  // 7: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest
	8,  // 8: headscale.v1.HeadscaleService.DebugCreateMachine:input_type -> headscale.v1.DebugCreateMachineRequest
	9,  // 9: headscale.v1.HeadscaleService.GetMachine:input_type -> headscale.v1.GetMachineRequest
	10, // 10: headscale.v1.HeadscaleService.RegisterMachine:input_type -> headscale.v1.RegisterMachineRequest
	11, // 11: headscale.v1.HeadscaleService.DeleteMachine:input_type -> headscale.v1.DeleteMachineRequest
	12, // 12: headscale.v1.HeadscaleService.ExpireMachine:input_type -> headscale.v1.ExpireMachineRequest
	13, // 13: headscale.v1.HeadscaleService.ListMachines:input_type -> headscale.v1.ListMachinesRequest
	14, // 14: headscale.v1.HeadscaleService.GetMachineRoute:input_type -> headscale.v1.GetMachineRouteRequest
	15, // 15: headscale.v1.HeadscaleService.EnableMachineRoutes:input_type -> headscale.v1.EnableMachineRoutesRequest
	16, // 16: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
	17, // 17: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
	18, // 18: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
	19, // 19: headscale.v1.HeadscaleService.GetNamespace:output_type -> headscale.v1.GetNamespaceResponse
	20, // 20: headscale.v1.HeadscaleService.CreateNamespace:output_type -> headscale.v1.CreateNamespaceResponse
	21, // 21: headscale.v1.HeadscaleService.RenameNamespace:output_type -> headscale.v1.RenameNamespaceResponse
	22, // 22: headscale.v1.HeadscaleService.DeleteNamespace:output_type -> headscale.v1.DeleteNamespaceResponse
	23, // 23: headscale.v1.HeadscaleService.ListNamespaces:output_type -> headscale.v1.ListNamespacesResponse
	24, // 24: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
	25, // 25: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
	26, // 26: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
	27, // 27: headscale.v1.HeadscaleService.DebugCreateMachine:output_type -> headscale.v1.DebugCreateMachineResponse
	28, // 28: headscale.v1.HeadscaleService.GetMachine:output_type -> headscale.v1.GetMachineResponse
	29, // 29: headscale.v1.HeadscaleService.RegisterMachine:output_type -> headscale.v1.RegisterMachineResponse
	30, // 30: headscale.v1.HeadscaleService.DeleteMachine:output_type -> headscale.v1.DeleteMachineResponse
	31, // 31: headscale.v1.HeadscaleService.ExpireMachine:output_type -> headscale.v1.ExpireMachineResponse
	32, // 32: headscale.v1.HeadscaleService.ListMachines:output_type -> headscale.v1.ListMachinesResponse
	33, // 33: headscale.v1.HeadscaleService.GetMachineRoute:output_type -> headscale.v1.GetMachineRouteResponse
	34, // 34: headscale.v1.HeadscaleService.EnableMachineRoutes:output_type -> headscale.v1.EnableMachineRoutesResponse
	35, // 35: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
	36, // 36: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
	37, // 37: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
	19, // [19:38] is the sub-list for method output_type
	0,  // [0:19] is the sub-list for method input_type
	0,  // [0:0] is the sub-list for extension type_name
	0,  // [0:0] is the sub-list for extension extendee
	0,  // [0:0] is the sub-list for field type_name
}

func init() { file_headscale_v1_headscale_proto_init() }
func file_headscale_v1_headscale_proto_init() {
	// Only build the file descriptor once.
	if File_headscale_v1_headscale_proto != nil {
		return
	}
	// The proto files this service depends on must be initialized first.
	file_headscale_v1_namespace_proto_init()
	file_headscale_v1_preauthkey_proto_init()
	file_headscale_v1_machine_proto_init()
	file_headscale_v1_routes_proto_init()
	file_headscale_v1_apikey_proto_init()
	type x struct{}
	out := protoimpl.TypeBuilder{
		File: protoimpl.DescBuilder{
			GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
			RawDescriptor: file_headscale_v1_headscale_proto_rawDesc,
			NumEnums:      0,
			NumMessages:   0,
			NumExtensions: 0,
			NumServices:   1,
		},
		GoTypes:           file_headscale_v1_headscale_proto_goTypes,
		DependencyIndexes: file_headscale_v1_headscale_proto_depIdxs,
	}.Build()
	File_headscale_v1_headscale_proto = out.File
	// Release the raw descriptor and lookup tables so they can be garbage collected.
	file_headscale_v1_headscale_proto_rawDesc = nil
	file_headscale_v1_headscale_proto_goTypes = nil
	file_headscale_v1_headscale_proto_depIdxs = nil
}
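For context, the request/response types indexed above are the messages exchanged with the HeadscaleService gRPC API. The following is a minimal client sketch, not part of the generated file; it assumes the standard protoc-gen-go-grpc stub (NewHeadscaleServiceClient) from the companion headscale_grpc.pb.go, the gen/go import path used by headscale, and an assumed local gRPC listen address.

package main

import (
	"context"
	"fmt"
	"log"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the headscale gRPC endpoint (address and plaintext credentials are
	// assumptions for this sketch; a real deployment would use TLS and an API key).
	conn, err := grpc.Dial("127.0.0.1:50443",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := v1.NewHeadscaleServiceClient(conn)

	// ListMachines maps ListMachinesRequest to ListMachinesResponse
	// (goTypes indices 13 and 32 in the tables above).
	resp, err := client.ListMachines(context.Background(), &v1.ListMachinesRequest{})
	if err != nil {
		log.Fatalf("ListMachines: %v", err)
	}
	for _, m := range resp.Machines {
		fmt.Println(m.Name)
	}
}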

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff