Mirror of https://github.com/juanfont/headscale.git
Synced 2025-08-17 16:17:30 +00:00

Compare commits: web-auth-f ... v0.22.2 (343 commits)
Commits in this range (abbreviated SHAs):

22e397e0b6 c7db99d6ca f73354b4f4 4c8f8c6a1c 997e93455d 9f381256c4 f60c5a1398 5706f84cb0
9478c288f6 6043ec87cf dcf2439c61 ba45d7dbd3 bab4e14828 526e568e1e 02ab0df2de 7338775de7
00c514608e 6c5723a463 57fd5cf310 f113cc7846 ca54fb9f56 735b185e7f 1a7ae11697 644be822d5
56b63c6e10 ccedf276ab 10320a5f1f ecd62fb785 0d24e878d0 889d5a1b29 1700a747f6 200e3b88cc
5bbbe437df 6de53e2f8d b23a9153df 80772033ee a2b760834f 493bcfcf18 df72508089 0f8d8fc2d8
744e5a11b6 3ea1750ea0 a45777d22e 56dd734300 d0113732fe 6215eb6471 1d2b4bca8a 96f9680afd
b465592c07 991ff25362 eacd687dbf 549f5a164d bb07aec82c a5afe4bd06 a71cc81fe7 679305c3e4
c0680f34f1 64ebe6b0c8 e6b26499f7 977eb1dee3 b2e2b02210 2abff4bb08 54c00645d1 cad5ce0ebd
b12a167fa2 667295e15e bea52678e3 307cfc3304 5e74ca9414 9836b097a4 d0b3b1bfc4 6eea96eabc
d08fee78c3 bb5f0d456c c186c49e25 4ec6894773 dd9b4b1cb7 a43bb9c958 ba905ff6fc 99bd09f688
a6bc792a61 6381d3660a 66c5f74d78 1723a6bf40 353f191e4f 8d865bb61b c6815c5334 b684ac0668
dfc5d861c7 50b706eeed 036ff1cbb9 ceeef40cdf 681c86cc95 c7b459b615 56a7b1e349 f1eee841cb
45fbd34480 248abcf353 2560c32378 e38efd3cfa d12f247490 003036a779 ed79f977a7 8012e1cbd2
a5562850a7 bb786ac8e4 ea82035222 c9ecdd6ef1 54f5c249f1 a82a603db6 f49930c514 2baeb79aa0
b3f78a209a 5e6868a858 5caf848f94 3e097123bf 74447b02e8 20e96de963 7c765fb3dc dcc246c869
cf7767d8f9 61c578f82b 6950ff7841 e65ce17f7b b190ec8edc c39085911f 3c20d2a178 9187e4287c
2b7bcb77a5 97a909866d feeb5d334b a840a2e6ee 4183345020 50fb7ad6ce 88a9f4b44c 00fbd8dd93
ce587d2421 e1eb30084d 673638afe7 da48cf64b3 385fd93e73 26edf24477 83a538cc95 cffa040474
727d95b477 640bb94119 0f65918a25 3ac2e0b253 b322cdf251 e128796b59 6d669c6b9c 8dadb045cf
9f6e546522 9714900db9 cb25f0d650 9c2e580ab5 0ffff2c994 c720af66d6 86a7129027 9eaa8dd049
81441afe70 f19e8aa7f0 90287a6735 fb3e2dcf10 bf0b85f382 5da0963aac da5c051d73 b98bf199dd
428d7c86ce af1ec5a593 e3a2593344 bafb6791d3 6edac4863a e27e01c09f dd173ecc1f 8ca0fb7ed0
6c714e88ee a6c8718a97 26282b7a54 93aca81c1c 81254cdf7a b3a0c4a63b 376235c9de 7274fdacc6
91c1f54b49 efd0f79fbc 2084464225 66ebbf3ecb 55a3885614 afae1ff7b6 4de49f5f49 6db9656008
fecb13b24b 23a595c26f 085912cfb4 7157e14aff 4e2c4f92d3 893b0de8fa 9b98c3b79f 6de26b1d7c
1f1931fb00 1f4efbcd3b 711fe1d806 e2c62a7b0c ab6565723e 7bb6f1a7eb 549b82df11 036cdf922f
b4ff22935c 5feadbf3fc 3e9ee816f9 2494e27a73 8e8b65bb84 b7d7fc57c4 b54c0e3d22 593040b73d
6e890afc5f 2afba0233b 91900b7310 55b198a16a ca37dc6268 000c02dad9 4532915be1 4b8d6e7c64
579c5827b3 01628f76ff 53858a32f1 2bf576ea8a 1faac0b3d7 134c72f4fb 70f2f5d750 4453728614
34107f9a0f 52862b8a22 946d38e5d7 78819be03c 34631dfcf5 8fa9755b55 1b557ac1ea 8170f5e693
a506d0fcc8 6718ff71d3 b62acff2e3 ac8bff716d fba77de4eb d1bca105ef 6c2d6fa302 1015bc3e02
68c72d03b5 bd4b2da06e 7b8cf5ef1a 638a3d48ec 4de676c64e a58a552f0e 0db16c7bbe 06f7e7cfd8
19f12f94c0 95d3062c21 86fa136a63 89c12072ba 54f701ff92 5a70ea7326 63cd3122e6 6f4c6c1876
eb072a1a74 36b8862e7c d4e3bf184b c28ca27133 c02e105065 22da5bfc1d c6d31747f7 91ed6e2197
d71aef3b98 8a79c2e7ed f34e7c341b e28d308796 f610be632e fd6d25b5c1 3695284286 cfaa36e51a
d207c30949 519f22f9bf 52a323b90d 91559d0558 25195b8d73 e69176e200 d29d0222af 72b9803a08
99e33181b2 e7f322b9b6 1d36e1775f 0525bea593 2770c7cc07 1b0e80bb10 4ccc528d96 6a311f4ab6
a49a405413 24f946e2e9 c3cdb340de 935319a218 4c7e15a7ce d461097247 f90a3c196c 751cc173d4
ff134f2b8e 6d3ede1367 c0884f94b8 3d8dd68b14 b02e88364e 9790831afb 2d79179141 275cc28193
c5ba7552c5 8909f801bb 3d4af52b3a 6391555dab 8cc5b2174b 9269dd01f5 ef68f17a96 f74266f8f8
46df219ed3 835288d864 93d56362af 4799859be0 8e44596171 d479234058 3fc5866de0 f3c40086ac
09ed21edd8 456479eaa1 cb87852825 69440058bb 9bc6ac0f35 89ff5c83d2 0a47d694be 73c84d4f6a
a9251d6652 f9c44f11d6 1f8bd24a0d 7bf2eb3d71 f5a5437917 9989657c0f cb2790984f
.github/FUNDING.yml (vendored): new file, 3 lines
@@ -0,0 +1,3 @@
# These are supported funding model platforms

ko_fi: headscale
.github/ISSUE_TEMPLATE/bug_report.md (vendored): 36 lines changed

@@ -6,19 +6,24 @@ labels: ["bug"]
 assignees: ""
 ---
 
-<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
+<!--
+Before posting a bug report, discuss the behaviour you are expecting with the Discord community
+to make sure that it is truly a bug.
+The issue tracker is not the place to ask for support or how to set up Headscale.
 
-**Bug description**
+Bug reports without the sufficient information will be closed.
 
+Headscale is a multinational community across the globe. Our language is English.
+All bug reports needs to be in English.
+-->
+
+## Bug description
+
 <!-- A clear and concise description of what the bug is. Describe the expected bahavior
 and how it is currently different. If you are unsure if it is a bug, consider discussing
 it on our Discord server first. -->
 
-**To Reproduce**
+## Environment
 
-<!-- Steps to reproduce the behavior. -->
-
-**Context info**
-
 <!-- Please add relevant information about your system. For example:
 - Version of headscale used

@@ -28,3 +33,20 @@ assignees: ""
 - The relevant config parameters you used
 - Log output
 -->
+
+- OS:
+- Headscale version:
+- Tailscale version:
+
+<!--
+We do not support running Headscale in a container nor behind a (reverse) proxy.
+If either of these are true for your environment, ask the community in Discord
+instead of filing a bug report.
+-->
+
+- [ ] Headscale is behind a (reverse) proxy
+- [ ] Headscale runs in a container
+
+## To Reproduce
+
+<!-- Steps to reproduce the behavior. -->
.github/ISSUE_TEMPLATE/feature_request.md (vendored): 17 lines changed

@@ -6,12 +6,21 @@ labels: ["enhancement"]
 assignees: ""
 ---
 
-<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
+<!--
+We typically have a clear roadmap for what we want to improve and reserve the right
+to close feature requests that does not fit in the roadmap, or fit with the scope
+of the project, or we actually want to implement ourselves.
 
-**Feature request**
+Headscale is a multinational community across the globe. Our language is English.
+All bug reports needs to be in English.
+-->
 
-<!-- A clear and precise description of what new or changed feature you want. -->
+## Why
 
-<!-- Please include the reason, why you would need the feature. E.g. what problem
+<!-- Include the reason, why you would need the feature. E.g. what problem
 does it solve? Or which workflow is currently frustrating and will be improved by
 this? -->
+
+## Description
+
+<!-- A clear and precise description of what new or changed feature you want. -->
.github/ISSUE_TEMPLATE/other_issue.md (vendored): deleted file, 30 lines
@@ -1,30 +0,0 @@
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---

<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->

<!-- If you have a question, please consider using our Discord for asking questions -->

**Issue description**

<!-- Please add your issue description. -->

**To Reproduce**

<!-- Steps to reproduce the behavior. -->

**Context info**

<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->
.github/pull_request_template.md (vendored): 12 lines changed

@@ -1,3 +1,15 @@
+<!--
+Headscale is "Open Source, acknowledged contribution", this means that any
+contribution will have to be discussed with the Maintainers before being submitted.
+
+This model has been chosen to reduce the risk of burnout by limiting the
+maintenance overhead of reviewing and validating third-party code.
+
+Headscale is open to code contributions for bug fixes without discussion.
+
+If you find mistakes in the documentation, please submit a fix to the documentation.
+-->
+
 <!-- Please tick if the following things apply. You… -->
 
 - [ ] read the [CONTRIBUTING guidelines](README.md#contributing)
.github/renovate.json (vendored): 10 lines changed

@@ -6,7 +6,7 @@
   "onboarding": false,
   "extends": ["config:base", ":rebaseStalePrs"],
   "ignorePresets": [":prHourlyLimit2"],
-  "enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
+  "enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
   "includeForks": true,
   "repositories": ["juanfont/headscale"],
   "platform": "github",

@@ -25,12 +25,8 @@
   ],
   "regexManagers": [
     {
-      "fileMatch": [
-        ".github/workflows/.*.yml$"
-      ],
-      "matchStrings": [
-        "\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
-      ],
+      "fileMatch": [".github/workflows/.*.yml$"],
+      "matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
       "datasourceTemplate": "golang-version",
       "depNameTemplate": "actions/go-version"
     }
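The consolidated matchStrings entry above is the pattern Renovate uses to find the pinned Go version inside the workflow files. The following is a minimal, hypothetical check of that pattern in Go, the project's language; note that Go's regexp package expects the (?P<name>...) named-group syntax rather than the (?<name>...) form in the JSON, and the sample input line is invented.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Renovate's matchStrings pattern from renovate.json above, rewritten with
	// Go's (?P<name>...) named-group syntax. The sample line is hypothetical.
	re := regexp.MustCompile(`\s*go-version:\s*"?(?P<currentValue>.*?)"?\n`)
	sample := "      go-version: \"1.20\"\n"

	m := re.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("captured go-version:", m[re.SubexpIndex("currentValue")]) // 1.20
}
```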
.github/workflows/build.yml (vendored): 33 lines changed

@@ -8,9 +8,14 @@ on:
     branches:
       - main
 
+concurrency:
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
 jobs:
   build:
     runs-on: ubuntu-latest
+    permissions: write-all
 
     steps:
       - uses: actions/checkout@v3

@@ -32,10 +37,34 @@ jobs:
         if: steps.changed-files.outputs.any_changed == 'true'
 
       - name: Run build
+        id: build
         if: steps.changed-files.outputs.any_changed == 'true'
-        run: nix build
+        run: |
+          nix build |& tee build-result
+          BUILD_STATUS="${PIPESTATUS[0]}"
 
-      - uses: actions/upload-artifact@v2
+          OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
+          NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
+
+          echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
+          echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
+
+          exit $BUILD_STATUS
+
+      - name: Nix gosum diverging
+        uses: actions/github-script@v6
+        if: failure() && steps.build.outcome == 'failure'
+        with:
+          github-token: ${{secrets.GITHUB_TOKEN}}
+          script: |
+            github.rest.pulls.createReviewComment({
+              pull_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
+            })
+
+      - uses: actions/upload-artifact@v3
         if: steps.changed-files.outputs.any_changed == 'true'
         with:
           name: headscale-linux
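The new "Run build" step above tees the nix build output and pulls the expected and actual vendor hashes out of the hash-mismatch message, i.e. the lines containing "specified:" and "got:". Below is a rough Go equivalent of that grep/awk/sed pipeline, fed with invented sample output, only to illustrate the extraction; it is not part of the workflow.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// extractHashes mirrors the shell pipeline in the workflow step above: it
// returns the values that follow "specified:" and "got:" in nix build output.
func extractHashes(buildOutput string) (oldHash, newHash string) {
	sc := bufio.NewScanner(strings.NewReader(buildOutput))
	for sc.Scan() {
		line := sc.Text()
		_, value, found := strings.Cut(line, ":")
		if !found {
			continue
		}
		value = strings.TrimSpace(value)
		switch {
		case strings.Contains(line, "specified:"):
			oldHash = value
		case strings.Contains(line, "got:"):
			newHash = value
		}
	}
	return oldHash, newHash
}

func main() {
	// Hypothetical nix output for a vendorSha256 mismatch.
	sample := "error: hash mismatch in fixed-output derivation:\n" +
		"         specified: sha256-AAAA\n" +
		"            got:    sha256-BBBB\n"
	oldHash, newHash := extractHashes(sample)
	fmt.Println("OLD_HASH:", oldHash)
	fmt.Println("NEW_HASH:", newHash)
}
```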
.github/workflows/docs.yml (vendored): new file, 45 lines
@@ -0,0 +1,45 @@
name: Build documentation
on:
  push:
    branches:
      - main
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - name: Setup cache
        uses: actions/cache@v2
        with:
          key: ${{ github.ref }}
          path: .cache
      - name: Setup dependencies
        run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
      - name: Build docs
        run: mkdocs build --strict
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: ./site
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
.github/workflows/gh-actions-updater.yaml (vendored): new file, 23 lines
@@ -0,0 +1,23 @@
name: GitHub Actions Version Updater

# Controls when the action will run.
on:
  schedule:
    # Automatically run on every Sunday
    - cron: "0 0 * * 0"

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          # [Required] Access token with `workflow` scope.
          token: ${{ secrets.WORKFLOW_SECRET }}

      - name: Run GitHub Actions Version Updater
        uses: saadmk11/github-actions-version-updater@v0.7.1
        with:
          # [Required] Access token with `workflow` scope.
          token: ${{ secrets.WORKFLOW_SECRET }}
.github/workflows/lint.yml (vendored): 8 lines changed

@@ -3,6 +3,10 @@ name: Lint
 
 on: [push, pull_request]
 
+concurrency:
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
 jobs:
   golangci-lint:
     runs-on: ubuntu-latest

@@ -26,7 +30,7 @@ jobs:
         if: steps.changed-files.outputs.any_changed == 'true'
         uses: golangci/golangci-lint-action@v2
         with:
-          version: v1.49.0
+          version: v1.51.2
 
           # Only block PRs on new problems.
           # If this is not enabled, we will end up having PRs

@@ -59,7 +63,7 @@ jobs:
 
       - name: Prettify code
        if: steps.changed-files.outputs.any_changed == 'true'
-        uses: creyD/prettier_action@v4.0
+        uses: creyD/prettier_action@v4.3
        with:
          prettier_options: >-
            --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
.github/workflows/release-docker.yml (vendored): new file, 138 lines
@@ -0,0 +1,138 @@
---
name: Release Docker

on:
  push:
    tags:
      - "*" # triggers only if push new tag version
  workflow_dispatch:

jobs:
  docker-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Set up QEMU for multiple platforms
        uses: docker/setup-qemu-action@master
        with:
          platforms: arm64,amd64
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          # list of Docker images to use as base name for tags
          images: |
            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
            ghcr.io/${{ github.repository_owner }}/headscale
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
            type=raw,value=develop
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to GHCR
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          context: .
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
          build-args: |
            VERSION=${{ steps.meta.outputs.version }}
      - name: Prepare cache for next build
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

  docker-debug-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Set up QEMU for multiple platforms
        uses: docker/setup-qemu-action@master
        with:
          platforms: arm64,amd64
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache-debug
          key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-debug-
      - name: Docker meta
        id: meta-debug
        uses: docker/metadata-action@v3
        with:
          # list of Docker images to use as base name for tags
          images: |
            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
            ghcr.io/${{ github.repository_owner }}/headscale
          flavor: |
            suffix=-debug,onlatest=true
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
            type=raw,value=develop
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to GHCR
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          context: .
          file: Dockerfile.debug
          tags: ${{ steps.meta-debug.outputs.tags }}
          labels: ${{ steps.meta-debug.outputs.labels }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=local,src=/tmp/.buildx-cache-debug
          cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
          build-args: |
            VERSION=${{ steps.meta-debug.outputs.version }}
      - name: Prepare cache for next build
        run: |
          rm -rf /tmp/.buildx-cache-debug
          mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug
.github/workflows/release.yml (vendored): 215 lines changed

@@ -9,221 +9,16 @@ on:
 jobs:
   goreleaser:
-    runs-on: ubuntu-18.04 # due to CGO we need to user an older version
+    runs-on: ubuntu-latest
     steps:
       - name: Checkout
         uses: actions/checkout@v3
         with:
           fetch-depth: 0
-      - name: Set up Go
-        uses: actions/setup-go@v3
-        with:
-          go-version: 1.19.0
 
-      - name: Install dependencies
-        run: |
-          sudo apt update
-          sudo apt install -y gcc-aarch64-linux-gnu
-      - name: Run GoReleaser
-        uses: goreleaser/goreleaser-action@v2
-        with:
-          distribution: goreleaser
-          version: latest
-          args: release --rm-dist
+      - uses: cachix/install-nix-action@v16
+      - name: Run goreleaser
+        run: nix develop --command -- goreleaser release --clean
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
 

The remainder of the hunk deletes the docker-release, docker-debug-release, and docker-alpine-release jobs from this workflow (roughly 200 removed lines). The first two correspond to the jobs added in .github/workflows/release-docker.yml above, except that the removed versions tagged release images (type=raw,value=latest, flavor latest=false, and -debug / -alpine suffixes on the semver and sha tags) instead of the develop tag, built the variants from Dockerfile.debug and Dockerfile.alpine, and used the /tmp/.buildx-cache-debug and /tmp/.buildx-cache-alpine layer caches. The docker-alpine-release job is removed without a replacement.
.github/workflows/renovatebot.yml (vendored): deleted file, 27 lines
@@ -1,27 +0,0 @@
---
name: Renovate
on:
  schedule:
    - cron: "* * 5,20 * *" # Every 5th and 20th of the month
  workflow_dispatch:
jobs:
  renovate:
    runs-on: ubuntu-latest
    steps:
      - name: Get token
        id: get_token
        uses: machine-learning-apps/actions-app-token@master
        with:
          APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
          APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}

      - name: Checkout
        uses: actions/checkout@v3

      - name: Self-hosted Renovate
        uses: renovatebot/github-action@v31.81.3
        with:
          configurationFile: .github/renovate.json
          token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
        # env:
        #   LOG_LEVEL: "debug"
.github/workflows/test-integration-derp.yml (vendored): deleted file, 35 lines
@@ -1,35 +0,0 @@
name: Integration Test DERP

on: [pull_request]

jobs:
  integration-test-derp:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run Embedded DERP server integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_derp
.github/workflows/test-integration-oidc.yml (vendored): deleted file, 35 lines
@@ -1,35 +0,0 @@
name: Integration Test OIDC

on: [pull_request]

jobs:
  integration-test-oidc:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run OIDC integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_oidc
.github/workflows/test-integration-v2-TestACLAllowStarDst.yaml (vendored): new file, 63 lines
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLAllowStarDst

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLAllowStarDst$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
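The header comment marks this workflow as generated by cmd/gh-action-integration-generator/main.go, which is not part of this diff. The following is only a sketch, under the assumption that the generator renders one workflow per integration test name from a template; the test names, output paths, and the trimmed template below are illustrative, not the real tool.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"text/template"
)

// Illustrative only: the real cmd/gh-action-integration-generator/main.go is
// not shown in this diff. This sketch renders one workflow file per test name
// from a template, which is the shape the generated files above suggest.
var workflowTmpl = template.Must(template.New("wf").Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - {{.Name}}

on: [pull_request]
# ... remaining steps omitted; the go test -run filter would use "^{{.Name}}$" ...
`))

func main() {
	// Example subset of test names; the real generator would list them all.
	tests := []string{"TestACLAllowStarDst", "TestACLAllowUser80Dst"}

	outDir := filepath.Join(".github", "workflows")
	if err := os.MkdirAll(outDir, 0o755); err != nil {
		log.Fatal(err)
	}

	for _, name := range tests {
		path := filepath.Join(outDir, "test-integration-v2-"+name+".yaml")
		f, err := os.Create(path)
		if err != nil {
			log.Fatal(err)
		}
		if err := workflowTmpl.Execute(f, struct{ Name string }{name}); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}
```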
63
.github/workflows/test-integration-v2-TestACLAllowUser80Dst.yaml
vendored
Normal file
63
.github/workflows/test-integration-v2-TestACLAllowUser80Dst.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
|
|||||||
|
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||||
|
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||||
|
|
||||||
|
name: Integration Test v2 - TestACLAllowUser80Dst
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
concurrency:
|
||||||
|
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
||||||
|
cancel-in-progress: true
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v18
|
||||||
|
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: |
|
||||||
|
nix develop --command -- docker run \
|
||||||
|
--tty --rm \
|
||||||
|
--volume ~/.cache/hs-integration-go:/go \
|
||||||
|
--name headscale-test-suite \
|
||||||
|
--volume $PWD:$PWD -w $PWD/integration \
|
||||||
|
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||||
|
--volume $PWD/control_logs:/tmp/control \
|
||||||
|
golang:1 \
|
||||||
|
go test ./... \
|
||||||
|
-tags ts2019 \
|
||||||
|
-failfast \
|
||||||
|
-timeout 120m \
|
||||||
|
-parallel 1 \
|
||||||
|
-run "^TestACLAllowUser80Dst$"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: logs
|
||||||
|
path: "control_logs/*.log"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: pprof
|
||||||
|
path: "control_logs/*.pprof.tar"
|
63
.github/workflows/test-integration-v2-TestACLAllowUserDst.yaml
vendored
Normal file
63
.github/workflows/test-integration-v2-TestACLAllowUserDst.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
|
|||||||
|
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||||
|
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||||
|
|
||||||
|
name: Integration Test v2 - TestACLAllowUserDst
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
concurrency:
|
||||||
|
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
||||||
|
cancel-in-progress: true
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v18
|
||||||
|
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: |
|
||||||
|
nix develop --command -- docker run \
|
||||||
|
--tty --rm \
|
||||||
|
--volume ~/.cache/hs-integration-go:/go \
|
||||||
|
--name headscale-test-suite \
|
||||||
|
--volume $PWD:$PWD -w $PWD/integration \
|
||||||
|
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||||
|
--volume $PWD/control_logs:/tmp/control \
|
||||||
|
golang:1 \
|
||||||
|
go test ./... \
|
||||||
|
-tags ts2019 \
|
||||||
|
-failfast \
|
||||||
|
-timeout 120m \
|
||||||
|
-parallel 1 \
|
||||||
|
-run "^TestACLAllowUserDst$"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: logs
|
||||||
|
path: "control_logs/*.log"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: pprof
|
||||||
|
path: "control_logs/*.pprof.tar"
|
63
.github/workflows/test-integration-v2-TestACLDenyAllPort80.yaml
vendored
Normal file
63
.github/workflows/test-integration-v2-TestACLDenyAllPort80.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
|
|||||||
|
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||||
|
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||||
|
|
||||||
|
name: Integration Test v2 - TestACLDenyAllPort80
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
concurrency:
|
||||||
|
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
||||||
|
cancel-in-progress: true
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v18
|
||||||
|
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: |
|
||||||
|
nix develop --command -- docker run \
|
||||||
|
--tty --rm \
|
||||||
|
--volume ~/.cache/hs-integration-go:/go \
|
||||||
|
--name headscale-test-suite \
|
||||||
|
--volume $PWD:$PWD -w $PWD/integration \
|
||||||
|
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||||
|
--volume $PWD/control_logs:/tmp/control \
|
||||||
|
golang:1 \
|
||||||
|
go test ./... \
|
||||||
|
-tags ts2019 \
|
||||||
|
-failfast \
|
||||||
|
-timeout 120m \
|
||||||
|
-parallel 1 \
|
||||||
|
-run "^TestACLDenyAllPort80$"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: logs
|
||||||
|
path: "control_logs/*.log"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: pprof
|
||||||
|
path: "control_logs/*.pprof.tar"
|
63
.github/workflows/test-integration-v2-TestACLDevice1CanAccessDevice2.yaml
vendored
Normal file
63
.github/workflows/test-integration-v2-TestACLDevice1CanAccessDevice2.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
|
|||||||
|
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||||
|
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||||
|
|
||||||
|
name: Integration Test v2 - TestACLDevice1CanAccessDevice2
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
concurrency:
|
||||||
|
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
||||||
|
cancel-in-progress: true
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v18
|
||||||
|
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: |
|
||||||
|
nix develop --command -- docker run \
|
||||||
|
--tty --rm \
|
||||||
|
--volume ~/.cache/hs-integration-go:/go \
|
||||||
|
--name headscale-test-suite \
|
||||||
|
--volume $PWD:$PWD -w $PWD/integration \
|
||||||
|
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||||
|
--volume $PWD/control_logs:/tmp/control \
|
||||||
|
golang:1 \
|
||||||
|
go test ./... \
|
||||||
|
-tags ts2019 \
|
||||||
|
-failfast \
|
||||||
|
-timeout 120m \
|
||||||
|
-parallel 1 \
|
||||||
|
-run "^TestACLDevice1CanAccessDevice2$"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: logs
|
||||||
|
path: "control_logs/*.log"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: pprof
|
||||||
|
path: "control_logs/*.pprof.tar"
|
63
.github/workflows/test-integration-v2-TestACLHostsInNetMapTable.yaml
vendored
Normal file
63
.github/workflows/test-integration-v2-TestACLHostsInNetMapTable.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
|
|||||||
|
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||||
|
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||||
|
|
||||||
|
name: Integration Test v2 - TestACLHostsInNetMapTable
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
concurrency:
|
||||||
|
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
||||||
|
cancel-in-progress: true
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v18
|
||||||
|
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: |
|
||||||
|
nix develop --command -- docker run \
|
||||||
|
--tty --rm \
|
||||||
|
--volume ~/.cache/hs-integration-go:/go \
|
||||||
|
--name headscale-test-suite \
|
||||||
|
--volume $PWD:$PWD -w $PWD/integration \
|
||||||
|
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||||
|
--volume $PWD/control_logs:/tmp/control \
|
||||||
|
golang:1 \
|
||||||
|
go test ./... \
|
||||||
|
-tags ts2019 \
|
||||||
|
-failfast \
|
||||||
|
-timeout 120m \
|
||||||
|
-parallel 1 \
|
||||||
|
-run "^TestACLHostsInNetMapTable$"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: logs
|
||||||
|
path: "control_logs/*.log"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: pprof
|
||||||
|
path: "control_logs/*.pprof.tar"
|
63
.github/workflows/test-integration-v2-TestACLNamedHostsCanReach.yaml
vendored
Normal file
63
.github/workflows/test-integration-v2-TestACLNamedHostsCanReach.yaml
vendored
Normal file
@@ -0,0 +1,63 @@
|
|||||||
|
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||||
|
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||||
|
|
||||||
|
name: Integration Test v2 - TestACLNamedHostsCanReach
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
concurrency:
|
||||||
|
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
||||||
|
cancel-in-progress: true
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
test:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v18
|
||||||
|
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: |
|
||||||
|
nix develop --command -- docker run \
|
||||||
|
--tty --rm \
|
||||||
|
--volume ~/.cache/hs-integration-go:/go \
|
||||||
|
--name headscale-test-suite \
|
||||||
|
--volume $PWD:$PWD -w $PWD/integration \
|
||||||
|
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||||
|
--volume $PWD/control_logs:/tmp/control \
|
||||||
|
golang:1 \
|
||||||
|
go test ./... \
|
||||||
|
-tags ts2019 \
|
||||||
|
-failfast \
|
||||||
|
-timeout 120m \
|
||||||
|
-parallel 1 \
|
||||||
|
-run "^TestACLNamedHostsCanReach$"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: logs
|
||||||
|
path: "control_logs/*.log"
|
||||||
|
|
||||||
|
- uses: actions/upload-artifact@v3
|
||||||
|
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
with:
|
||||||
|
name: pprof
|
||||||
|
path: "control_logs/*.pprof.tar"
|
.github/workflows/test-integration-v2-TestACLNamedHostsCanReachBySubnet.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLNamedHostsCanReachBySubnet$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestAuthKeyLogoutAndRelogin.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestAuthKeyLogoutAndRelogin

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestAuthKeyLogoutAndRelogin$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestAuthWebFlowAuthenticationPingAll.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestAuthWebFlowAuthenticationPingAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestAuthWebFlowLogoutAndRelogin.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestAuthWebFlowLogoutAndRelogin$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestCreateTailscale.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestCreateTailscale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestCreateTailscale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestDERPServerScenario.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestDERPServerScenario

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestDERPServerScenario$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestEnablingRoutes.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestEnablingRoutes

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestEnablingRoutes$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestEphemeral.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestEphemeral

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestEphemeral$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestExpireNode.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestExpireNode

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestExpireNode$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestHeadscale.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestHeadscale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestHeadscale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestOIDCAuthenticationPingAll.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestOIDCAuthenticationPingAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestOIDCAuthenticationPingAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestOIDCExpireNodesBasedOnTokenExpiry.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestOIDCExpireNodesBasedOnTokenExpiry$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPingAllByHostname.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPingAllByHostname

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPingAllByHostname$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPingAllByIP.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPingAllByIP

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPingAllByIP$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPreAuthKeyCommand.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommand

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPreAuthKeyCommand$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPreAuthKeyCommandReusableEphemeral.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPreAuthKeyCommandReusableEphemeral$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPreAuthKeyCommandWithoutExpiry.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPreAuthKeyCommandWithoutExpiry$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestResolveMagicDNS.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestResolveMagicDNS

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestResolveMagicDNS$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHIsBlockedInACL.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHIsBlockedInACL

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHIsBlockedInACL$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHMultipleUsersAllToAll.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHMultipleUsersAllToAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHMultipleUsersAllToAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHNoSSHConfigured.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHNoSSHConfigured

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHNoSSHConfigured$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHOneUserAllToAll.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHOneUserAllToAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHOneUserAllToAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSUserOnlyIsolation.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSUserOnlyIsolation

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSUserOnlyIsolation$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestTaildrop.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestTaildrop

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestTaildrop$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestTailscaleNodesJoiningHeadcale.yaml (vendored, new file, 63 lines added)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestTailscaleNodesJoiningHeadcale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestUserCommand.yaml (new file, 63 lines, vendored)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestUserCommand

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestUserCommand$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
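For reference, each of these generated workflows differs only in its name and in the -run pattern passed to go test; the docker invocation is otherwise identical. A roughly equivalent local run of the TestUserCommand job, copied from the run: step above and assuming Docker and Nix are available on the host, looks like this:

    # mirrors the "Run general integration tests" step of the TestUserCommand workflow
    nix develop --command -- docker run \
      --tty --rm \
      --volume ~/.cache/hs-integration-go:/go \
      --name headscale-test-suite \
      --volume $PWD:$PWD -w $PWD/integration \
      --volume /var/run/docker.sock:/var/run/docker.sock \
      --volume $PWD/control_logs:/tmp/control \
      golang:1 \
      go test ./... -tags ts2019 -failfast -timeout 120m -parallel 1 -run "^TestUserCommand$"

Substituting another test name in the -run pattern reproduces the corresponding generated workflow.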
Deleted workflow (35 lines removed):
@@ -1,35 +0,0 @@
name: Integration Test v2

on: [pull_request]

jobs:
  integration-test-v2:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v14.1
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_v2_auth_web_flow
Deleted workflow (27 lines removed):
@@ -1,27 +0,0 @@
name: Integration Test v2 - kradalby

on: [pull_request]

jobs:
  integration-test-v2-kradalby:
    runs-on: [self-hosted, linux, x64, nixos, docker]

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_v2_general
.github/workflows/test.yml (4 changed lines, vendored)
@@ -2,6 +2,10 @@ name: Tests
 
 on: [push, pull_request]
 
+concurrency:
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
 jobs:
   test:
     runs-on: ubuntu-latest
.gitignore (10 changed lines, vendored)
@@ -1,3 +1,5 @@
+ignored/
+
 # Binaries for programs and plugins
 *.exe
 *.exe~
@@ -12,8 +14,9 @@
 *.out
 
 # Dependency directories (remove the comment below to include it)
-# vendor/
+vendor/
 
+dist/
 /headscale
 config.json
 config.yaml
@@ -27,9 +30,14 @@ derp.yaml
 .idea
 
 test_output/
+control_logs/
 
 # Nix build output
 result
 .direnv/
 
 integration_test/etc/config.dump.yaml
+
+# MkDocs
+.cache
+/site
@@ -29,6 +29,14 @@ linters:
     - execinquery
     - exhaustruct
     - nolintlint
+    - musttag # causes issues with imported libs
+
+    # deprecated
+    - structcheck # replaced by unused
+    - ifshort # deprecated by the owner
+    - varcheck # replaced by unused
+    - nosnakecase # replaced by revive
+    - deadcode # replaced by unused
 
     # We should strive to enable these:
     - wrapcheck
.goreleaser.yml (115 changed lines)
@@ -1,21 +1,28 @@
 ---
 before:
   hooks:
-    - go mod tidy -compat=1.19
+    - go mod tidy -compat=1.20
+    - go mod vendor
 
 release:
   prerelease: auto
 
 builds:
-  - id: darwin-amd64
+  - id: headscale
     main: ./cmd/headscale/headscale.go
     mod_timestamp: "{{ .CommitTimestamp }}"
     env:
       - CGO_ENABLED=0
-    goos:
-      - darwin
-    goarch:
-      - amd64
+    targets:
+      - darwin_amd64
+      - darwin_arm64
+      - freebsd_amd64
+      - linux_386
+      - linux_amd64
+      - linux_arm64
+      - linux_arm_5
+      - linux_arm_6
+      - linux_arm_7
     flags:
       - -mod=readonly
     ldflags:
@@ -23,60 +30,56 @@ builds:
     tags:
       - ts2019
 
-  - id: darwin-arm64
-    main: ./cmd/headscale/headscale.go
-    mod_timestamp: "{{ .CommitTimestamp }}"
-    env:
-      - CGO_ENABLED=0
-    goos:
-      - darwin
-    goarch:
-      - arm64
-    flags:
-      - -mod=readonly
-    ldflags:
-      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
-    tags:
-      - ts2019
-
-  - id: linux-amd64
-    mod_timestamp: "{{ .CommitTimestamp }}"
-    env:
-      - CGO_ENABLED=0
-    goos:
-      - linux
-    goarch:
-      - amd64
-    main: ./cmd/headscale/headscale.go
-    ldflags:
-      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
-    tags:
-      - ts2019
-
-  - id: linux-arm64
-    mod_timestamp: "{{ .CommitTimestamp }}"
-    env:
-      - CGO_ENABLED=0
-    goos:
-      - linux
-    goarch:
-      - arm64
-    main: ./cmd/headscale/headscale.go
-    ldflags:
-      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
-    tags:
-      - ts2019
-
 archives:
   - id: golang-cross
-    builds:
-      - darwin-amd64
-      - darwin-arm64
-      - linux-amd64
-      - linux-arm64
-    name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
+    name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
     format: binary
 
+source:
+  enabled: true
+  name_template: "{{ .ProjectName }}_{{ .Version }}"
+  format: tar.gz
+  files:
+    - "vendor/"
+
+nfpms:
+  # Configure nFPM for .deb and .rpm releases
+  #
+  # See https://nfpm.goreleaser.com/configuration/
+  # and https://goreleaser.com/customization/nfpm/
+  #
+  # Useful tools for debugging .debs:
+  # List file contents: dpkg -c dist/headscale...deb
+  # Package metadata: dpkg --info dist/headscale....deb
+  #
+  - builds:
+      - headscale
+    package_name: headscale
+    priority: optional
+    vendor: headscale
+    maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
+    homepage: https://github.com/juanfont/headscale
+    license: BSD
+    bindir: /usr/bin
+    formats:
+      - deb
+      # - rpm
+    contents:
+      - src: ./config-example.yaml
+        dst: /etc/headscale/config.yaml
+        type: config|noreplace
+        file_info:
+          mode: 0644
+      - src: ./docs/packaging/headscale.systemd.service
+        dst: /usr/lib/systemd/system/headscale.service
+      - dst: /var/lib/headscale
+        type: dir
+      - dst: /var/run/headscale
+        type: dir
+    scripts:
+      postinstall: ./docs/packaging/postinstall.sh
+      postremove: ./docs/packaging/postremove.sh
+
 checksum:
   name_template: "checksums.txt"
 snapshot:
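As a usage note for the nfpm packaging added above: the package ships the config to /etc/headscale/config.yaml and a unit file to /usr/lib/systemd/system/headscale.service, so a typical install on a Debian-based host would look roughly like the following (the package filename is a placeholder, not taken from the diff):

    # install the built .deb and start the packaged systemd service
    sudo dpkg -i headscale_<version>_linux_amd64.deb
    sudo systemctl enable --now headscale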
.prettierignore (new file, 1 line)
@@ -0,0 +1 @@
.github/workflows/test-integration-v2*
CHANGELOG.md (110 changed lines)
@@ -1,14 +1,111 @@
 # CHANGELOG
 
-## 0.17.0 (2022-XX-XX)
+## 0.23.0 (2023-XX-XX)
 
-### BREAKING
-
-- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
-
 ### Changes
 
+## 0.22.2 (2023-05-10)
+
+### Changes
+
+- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
+  - Profiles are continously generated in our integration tests.
+- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
+- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
+- Replace node filter logic, ensuring nodes with access can see eachother [#1381](https://github.com/juanfont/headscale/pull/1381)
+- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
+- Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)
+
+## 0.22.1 (2023-04-20)
+
+### Changes
+
+- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
+
+## 0.22.0 (2023-04-20)
+
+### Changes
+
+- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
+- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
+- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
+- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
+- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
+- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
+
+## 0.21.0 (2023-03-20)
+
+### Changes
+
+- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
+- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
+- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
+- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
+
+## 0.20.0 (2023-02-03)
+
+### Changes
+
+- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
+- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
+- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
+  - defaults to 180 days like Tailscale SaaS
+  - adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
+- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
+- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
+
+## 0.19.0 (2023-01-29)
+
+### BREAKING
+
+- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
+  - **BACKUP your database before upgrading**
+  - Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
+
+## 0.18.0 (2023-01-14)
+
+### Changes
+
+- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
+- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
+- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
+- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
+- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
+- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
+- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
+- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
+- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
+- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
+
+## 0.17.1 (2022-12-05)
+
+### Changes
+
+- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
+- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
+
+## 0.17.0 (2022-11-26)
+
+### BREAKING
+
+- `noise.private_key_path` has been added and is required for the new noise protocol.
+- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
+- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)
+
+### Important Changes
+
 - Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
+- Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847)
+  - Please note that this support should be considered _partially_ implemented
+  - SSH ACLs status:
+    - Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
+    - Rejecting connections **are not supported**, meaning that if you enable SSH, then assume that _all_ `ssh` connections **will be allowed**.
+    - If you decied to try this feature, please carefully managed permissions by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your clients.
+    - We are currently improving our testing of the SSH ACLs, help us get an overview by testing and giving feedback.
+  - This feature should be considered dangerous and it is disabled by default. Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
+
+### Changes
+
 - Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
 - Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
 - Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)
@@ -25,6 +122,9 @@
 - Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
 - Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
 - Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
+- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971)
+- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
+- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927)
 
 ## 0.16.4 (2022-08-21)
 
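As a quick illustration of the 0.19.0 breaking change listed above (Namespace renamed to User), the same CLI call changes only in its flag; the subcommand and names below are illustrative placeholders:

    # before 0.19.0
    headscale nodes list --namespace my-namespace
    # 0.19.0 and later
    headscale nodes list --user my-user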
@@ -1,5 +1,5 @@
 # Builder image
-FROM docker.io/golang:1.19.0-bullseye AS build
+FROM docker.io/golang:1.20-bullseye AS build
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale
@@ -14,10 +14,12 @@ RUN strip /go/bin/headscale
 RUN test -e /go/bin/headscale
 
 # Production image
-FROM gcr.io/distroless/base-debian11
+FROM docker.io/debian:bullseye-slim
 
 COPY --from=build /go/bin/headscale /bin/headscale
 ENV TZ UTC
 
+RUN mkdir -p /var/run/headscale
+
 EXPOSE 8080/tcp
 CMD ["headscale"]
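A minimal sketch of using the reworked production image above, assuming this is the repository's default Dockerfile, that the image is tagged locally as headscale:latest, and that a host directory with a config.yaml is mounted at /etc/headscale (all assumptions, not taken from the diff):

    # build the image from the repository root, then run the server on the exposed port
    docker build -t headscale:latest .
    docker run --rm -p 8080:8080 \
      -v $(pwd)/config:/etc/headscale \
      headscale:latest headscale serve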
Deleted Dockerfile (Alpine-based image, 24 lines removed):
@@ -1,24 +0,0 @@
# Builder image
FROM docker.io/golang:1.19.0-alpine AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale

COPY go.mod go.sum /go/src/headscale/
RUN apk add gcc musl-dev
RUN go mod download

COPY . .

RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale

# Production image
FROM docker.io/alpine:latest

COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC

EXPOSE 8080/tcp
CMD ["headscale"]
@@ -1,5 +1,5 @@
 # Builder image
-FROM docker.io/golang:1.19.0-bullseye AS build
+FROM docker.io/golang:1.20-bullseye AS build
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale
@@ -13,11 +13,13 @@ RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.c
 RUN test -e /go/bin/headscale
 
 # Debug image
-FROM docker.io/golang:1.19.0-bullseye
+FROM docker.io/golang:1.20.0-bullseye
 
 COPY --from=build /go/bin/headscale /bin/headscale
 ENV TZ UTC
 
+RUN mkdir -p /var/run/headscale
+
 # Need to reset the entrypoint or everything will run as a busybox script
 ENTRYPOINT []
 EXPOSE 8080/tcp
@@ -1,17 +1,16 @@
-FROM ubuntu:latest
+FROM ubuntu:22.04
 
 ARG TAILSCALE_VERSION=*
 ARG TAILSCALE_CHANNEL=stable
 
 RUN apt-get update \
-    && apt-get install -y gnupg curl \
-    && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
+    && apt-get install -y gnupg curl ssh dnsutils ca-certificates \
+    && adduser --shell=/bin/bash ssh-it-user
+
+# Tailscale is deliberately split into a second stage so we can cash utils as a seperate layer.
+RUN curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
     && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
     && apt-get update \
-    && apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
+    && apt-get install -y tailscale=${TAILSCALE_VERSION} \
+    && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
-
-ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
-RUN chmod 644 /usr/local/share/ca-certificates/server.crt
-
-RUN update-ca-certificates

@@ -1,23 +1,17 @@
 FROM golang:latest
 
 RUN apt-get update \
-    && apt-get install -y ca-certificates dnsutils git iptables \
+    && apt-get install -y dnsutils git iptables ssh ca-certificates \
     && rm -rf /var/lib/apt/lists/*
 
+RUN useradd --shell=/bin/bash --create-home ssh-it-user
+
 RUN git clone https://github.com/tailscale/tailscale.git
 
 WORKDIR /go/tailscale
 
-RUN git checkout main
-RUN sh build_dist.sh tailscale.com/cmd/tailscale
-RUN sh build_dist.sh tailscale.com/cmd/tailscaled
-RUN cp tailscale /usr/local/bin/
-RUN cp tailscaled /usr/local/bin/
-
-ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
-RUN chmod 644 /usr/local/share/ca-certificates/server.crt
-
-RUN update-ca-certificates
+RUN git checkout main \
+    && sh build_dist.sh tailscale.com/cmd/tailscale \
+    && sh build_dist.sh tailscale.com/cmd/tailscaled \
+    && cp tailscale /usr/local/bin/ \
+    && cp tailscaled /usr/local/bin/

Makefile (52 changed lines)
@@ -26,7 +26,7 @@ dev: lint test build
 test:
 	@go test $(TAGS) -short -coverprofile=coverage.out ./...
 
-test_integration: test_integration_cli test_integration_derp test_integration_oidc test_integration_v2_general
+test_integration: test_integration_cli test_integration_derp test_integration_v2_general
 
 test_integration_cli:
 	docker network rm $$(docker network ls --filter name=headscale --quiet) || true
@@ -36,27 +36,7 @@ test_integration_cli:
 	-v ~/.cache/hs-integration-go:/go \
 	-v $$PWD:$$PWD -w $$PWD \
 	-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
-	go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
+	go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
 
-test_integration_derp:
-	docker network rm $$(docker network ls --filter name=headscale --quiet) || true
-	docker network create headscale-test || true
-	docker run -t --rm \
-	--network headscale-test \
-	-v ~/.cache/hs-integration-go:/go \
-	-v $$PWD:$$PWD -w $$PWD \
-	-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
-	go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationDERP ./...
-
-test_integration_oidc:
-	docker network rm $$(docker network ls --filter name=headscale --quiet) || true
-	docker network create headscale-test || true
-	docker run -t --rm \
-	--network headscale-test \
-	-v ~/.cache/hs-integration-go:/go \
-	-v $$PWD:$$PWD -w $$PWD \
-	-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
-	go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationOIDC ./...
-
 test_integration_v2_general:
 	docker run \
@@ -66,24 +46,7 @@ test_integration_v2_general:
 	-v $$PWD:$$PWD -w $$PWD/integration \
 	-v /var/run/docker.sock:/var/run/docker.sock \
 	golang:1 \
-	go test $(TAGS) -failfast ./... -timeout 60m -parallel 6
+	go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8
 
-test_integration_v2_auth_web_flow:
-	docker run \
-	-t --rm \
-	-v ~/.cache/hs-integration-go:/go \
-	--name headscale-test-suite \
-	-v $$PWD:$$PWD -w $$PWD/integration \
-	-v /var/run/docker.sock:/var/run/docker.sock \
-	golang:1 \
-	go test ./... -timeout 60m -parallel 6 -run TestAuthWebFlow
-
-coverprofile_func:
-	go tool cover -func=coverage.out
-
-coverprofile_html:
-	go tool cover -html=coverage.out
-
 lint:
 	golangci-lint run --fix --timeout 10m
@@ -101,11 +64,4 @@ compress: build
 
 generate:
 	rm -rf gen
-	go run github.com/bufbuild/buf/cmd/buf generate proto
-
-install-protobuf-plugins:
-	go install \
-	github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
-	github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
-	google.golang.org/protobuf/cmd/protoc-gen-go \
-	google.golang.org/grpc/cmd/protoc-gen-go-grpc
+	buf generate proto

README.md (397 changed lines)
@@ -32,22 +32,18 @@ organisation.
 
 ## Design goal
 
-`headscale` aims to implement a self-hosted, open source alternative to the Tailscale
-control server. `headscale` has a narrower scope and an instance of `headscale`
-implements a _single_ Tailnet, which is typically what a single organisation, or
-home/personal setup would use.
+Headscale aims to implement a self-hosted, open source alternative to the Tailscale
+control server.
+Headscale's goal is to provide self-hosters and hobbyists with an open-source
+server they can use for their projects and labs.
+It implements a narrow scope, a single Tailnet, suitable for a personal use, or a small
+open-source organisation.
 
-`headscale` uses terms that maps to Tailscale's control server, consult the
-[glossary](./docs/glossary.md) for explainations.
-
-## Support
+## Supporting Headscale
 
 If you like `headscale` and find it useful, there is a sponsorship and donation
 buttons available in the repo.
 
-If you would like to sponsor features, bugs or prioritisation, reach out to
-one of the maintainers.
-
 ## Features
 
 - Full "base" support of Tailscale's features
@@ -75,19 +71,39 @@ one of the maintainers.
 | macOS | Yes (see `/apple` on your headscale for more information) |
 | Windows | Yes [docs](./docs/windows-client.md) |
 | Android | Yes [docs](./docs/android-client.md) |
-| iOS | Not yet |
+| iOS | Yes [docs](./docs/iOS-client.md) |
 
 ## Running headscale
 
-Please have a look at the documentation under [`docs/`](docs/).
+**Please note that we do not support nor encourage the use of reverse proxies
+and container to run Headscale.**
+
+Please have a look at the [`documentation`](https://headscale.net/).
+
+## Talks
+
+- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
+  - presented by Juan Font Alonso and Kristoffer Dalby
 
 ## Disclaimer
 
-1. We have nothing to do with Tailscale, or Tailscale Inc.
+1. This project is not associated with Tailscale Inc.
 2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.
 
 ## Contributing
 
+Headscale is "Open Source, acknowledged contribution", this means that any
+contribution will have to be discussed with the Maintainers before being submitted.
+
+This model has been chosen to reduce the risk of burnout by limiting the
+maintenance overhead of reviewing and validating third-party code.
+
+Headscale is open to code contributions for bug fixes without discussion.
+
+If you find mistakes in the documentation, please submit a fix to the documentation.
+
+### Requirements
+
 To contribute to headscale you would need the lastest version of [Go](https://golang.org)
 and [Buf](https://buf.build)(Protobuf generator).
 
@@ -95,8 +111,6 @@ We recommend using [Nix](https://nixos.org/) to setup a development environment.
 be done with `nix develop`, which will install the tools and give you a shell.
 This guarantees that you will have the same dev env as `headscale` maintainers.
 
-PRs and suggestions are welcome.
-
 ### Code style
 
 To ensure we have some consistency with a growing number of contributions,

[The remaining README.md hunks, from @@ -174,13 +188,6 @@ through @@ -452,13 +533,6 @@ and beyond, touch only the HTML contributors grid of avatar images. Cells are reshuffled into new table rows; entries for Even Holthe, Christian Heusel, Orville Q. Song, Snack, dbevacqua, Josh Taylor, Motiejus Jakstys and Steven Honson are added, a duplicate bravechamp cell is introduced, the Adrien Raffin-Caboisse, Moritz Poldrack, Anton Schubert and Stefan Majer cells are moved, and the lachy2849 cell is removed. The page extract ends mid-cell in this table, at the entry for t56k (thomas).]
|
<img src=https://avatars.githubusercontent.com/u/12165422?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=thomas/>
|
||||||
@@ -466,8 +540,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>thomas</b></sub>
|
<sub style="font-size:14px"><b>thomas</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<tr>
|
<a href=https://github.com/linsomniac>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/466380?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Sean Reifschneider/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Sean Reifschneider</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/aberoham>
|
<a href=https://github.com/aberoham>
|
||||||
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
|
<img src=https://avatars.githubusercontent.com/u/586805?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Abraham Ingersoll/>
|
||||||
@@ -475,6 +554,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
|
<sub style="font-size:14px"><b>Abraham Ingersoll</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/iFargle>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/124551390?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Albert Copeland/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Albert Copeland</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/puzpuzpuz>
|
<a href=https://github.com/puzpuzpuz>
|
||||||
<img src=https://avatars.githubusercontent.com/u/37772591?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andrei Pechkurov/>
|
<img src=https://avatars.githubusercontent.com/u/37772591?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Andrei Pechkurov/>
|
||||||
@@ -482,6 +568,15 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Andrei Pechkurov</b></sub>
|
<sub style="font-size:14px"><b>Andrei Pechkurov</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/theryecatcher>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/16442416?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Anoop Sundaresh/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Anoop Sundaresh</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/apognu>
|
<a href=https://github.com/apognu>
|
||||||
<img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/>
|
<img src=https://avatars.githubusercontent.com/u/3017182?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antoine POPINEAU/>
|
||||||
@@ -489,6 +584,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Antoine POPINEAU</b></sub>
|
<sub style="font-size:14px"><b>Antoine POPINEAU</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/tony1661>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/5287266?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Antonio Fernandez/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Antonio Fernandez</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/aofei>
|
<a href=https://github.com/aofei>
|
||||||
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
|
<img src=https://avatars.githubusercontent.com/u/5037285?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Aofei Sheng/>
|
||||||
@@ -497,12 +599,21 @@ make build
|
|||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/awoimbee>
|
<a href=https://github.com/arnarg>
|
||||||
<img src=https://avatars.githubusercontent.com/u/22431493?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arthur Woimbée/>
|
<img src=https://avatars.githubusercontent.com/u/1291396?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Arnar/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>Arthur Woimbée</b></sub>
|
<sub style="font-size:14px"><b>Arnar</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/avirut>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/27095602?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Avirut Mehta/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Avirut Mehta</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/stensonb>
|
<a href=https://github.com/stensonb>
|
||||||
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
|
<img src=https://avatars.githubusercontent.com/u/933389?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Bryan Stenson/>
|
||||||
@@ -510,8 +621,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Bryan Stenson</b></sub>
|
<sub style="font-size:14px"><b>Bryan Stenson</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/yangchuansheng>
|
<a href=https://github.com/yangchuansheng>
|
||||||
<img src=https://avatars.githubusercontent.com/u/15308462?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt= Carson Yang/>
|
<img src=https://avatars.githubusercontent.com/u/15308462?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt= Carson Yang/>
|
||||||
@@ -526,6 +635,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>kundel</b></sub>
|
<sub style="font-size:14px"><b>kundel</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/fatih-acar>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/15028881?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=fatih-acar/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>fatih-acar</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/fkr>
|
<a href=https://github.com/fkr>
|
||||||
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
|
<img src=https://avatars.githubusercontent.com/u/51063?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Felix Kronlage-Dammers/>
|
||||||
@@ -540,6 +656,15 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Felix Yan</b></sub>
|
<sub style="font-size:14px"><b>Felix Yan</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/gabe565>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/7717888?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Gabe Cook/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Gabe Cook</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/JJGadgets>
|
<a href=https://github.com/JJGadgets>
|
||||||
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
|
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
|
||||||
@@ -547,6 +672,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>JJGadgets</b></sub>
|
<sub style="font-size:14px"><b>JJGadgets</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/hrtkpf>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>hrtkpf</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/jimt>
|
<a href=https://github.com/jimt>
|
||||||
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
|
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
|
||||||
@@ -554,6 +686,20 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
|
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/jsiebens>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/johnae>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -563,6 +709,43 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
|
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/JulienFloris>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/20380255?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Julien Zweverink/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Julien Zweverink</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/win-t>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/1589120?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kurnia D Win/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Kurnia D Win</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/foxtrot>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Marc</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/magf>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/mikejsavage>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Michael Savage</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/piec>
|
<a href=https://github.com/piec>
|
||||||
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
|
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
|
||||||
@@ -598,8 +781,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Mend Renovate</b></sub>
|
<sub style="font-size:14px"><b>Mend Renovate</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/ryanfowler>
|
<a href=https://github.com/ryanfowler>
|
||||||
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
|
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
|
||||||
@@ -607,6 +788,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
|
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/shaananc>
|
<a href=https://github.com/shaananc>
|
||||||
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
|
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
|
||||||
@@ -642,6 +825,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Teteros</b></sub>
|
<sub style="font-size:14px"><b>Teteros</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/Teteros>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Teteros</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -688,6 +878,13 @@ make build
|
|||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/newellz2>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/52436542?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zachary Newell/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Zachary Newell</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/zekker6>
|
<a href=https://github.com/zekker6>
|
||||||
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
|
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
|
||||||
@@ -709,6 +906,13 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
|
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/caelansar>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/31852257?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=caelansar/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>caelansar</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/derelm>
|
<a href=https://github.com/derelm>
|
||||||
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
|
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
|
||||||
@@ -716,6 +920,15 @@ make build
|
|||||||
<sub style="font-size:14px"><b>derelm</b></sub>
|
<sub style="font-size:14px"><b>derelm</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/dnaq>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/1299717?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dnaq/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>dnaq</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/nning>
|
<a href=https://github.com/nning>
|
||||||
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
|
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
|
||||||
@@ -730,8 +943,20 @@ make build
|
|||||||
<sub style="font-size:14px"><b>ignoramous</b></sub>
|
<sub style="font-size:14px"><b>ignoramous</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<tr>
|
<a href=https://github.com/jimyag>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/69233189?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=jimyag/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>jimyag</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/magichuihui>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/10866198?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=suhelen/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>suhelen</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/lion24>
|
<a href=https://github.com/lion24>
|
||||||
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/>
|
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/>
|
||||||
@@ -739,11 +964,34 @@ make build
|
|||||||
<sub style="font-size:14px"><b>sharkonet</b></sub>
|
<sub style="font-size:14px"><b>sharkonet</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/ma6174>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/1449133?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ma6174/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>ma6174</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/manju-rn>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/26291847?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=manju-rn/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>manju-rn</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/nicholas-yap>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/38109533?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=nicholas-yap/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>nicholas-yap</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/pernila>
|
<a href=https://github.com/pernila>
|
||||||
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
|
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tommi Pernila/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>pernila</b></sub>
|
<sub style="font-size:14px"><b>Tommi Pernila</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -760,6 +1008,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
|
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/xpzouying>
|
<a href=https://github.com/xpzouying>
|
||||||
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
|
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
|
||||||
@@ -767,5 +1017,12 @@ make build
|
|||||||
<sub style="font-size:14px"><b>zy</b></sub>
|
<sub style="font-size:14px"><b>zy</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
|
<a href=https://github.com/atorregrosa-smd>
|
||||||
|
<img src=https://avatars.githubusercontent.com/u/78434679?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Àlex Torregrosa/>
|
||||||
|
<br />
|
||||||
|
<sub style="font-size:14px"><b>Àlex Torregrosa</b></sub>
|
||||||
|
</a>
|
||||||
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
</table>
|
</table>
|
||||||
|
591 acls.go

@@ -10,10 +10,13 @@ import (
     "path/filepath"
     "strconv"
     "strings"
+    "time"
 
     "github.com/rs/zerolog/log"
     "github.com/tailscale/hujson"
+    "go4.org/netipx"
     "gopkg.in/yaml.v3"
+    "tailscale.com/envknob"
     "tailscale.com/tailcfg"
 )
 
@@ -54,6 +57,8 @@ const (
     ProtocolFC = 133 // Fibre Channel
 )
 
+var featureEnableSSH = envknob.RegisterBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")
+
 // LoadACLPolicy loads the ACL policy from the specify path, and generates the ACL rules.
 func (h *Headscale) LoadACLPolicy(path string) error {
     log.Debug().
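
A rough, self-contained sketch of the feature-flag gating pattern introduced in the hunk above. The real code uses tailscale.com/envknob; here os.Getenv stands in for it, the flag name is taken from the diff, and everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"strconv"
)

// registerBool mimics the envknob-style pattern from the hunk above: the flag
// is read once from the environment and exposed as a function returning bool.
func registerBool(name string) func() bool {
	val, _ := strconv.ParseBool(os.Getenv(name)) // unset or invalid -> false
	return func() bool { return val }
}

var featureEnableSSH = registerBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")

func main() {
	if featureEnableSSH() {
		fmt.Println("SSH policy rules would be generated")
	} else {
		fmt.Println("SSH policy rules are skipped (experimental flag off)")
	}
}
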
@@ -113,39 +118,62 @@ func (h *Headscale) LoadACLPolicy(path string) error {
 }
 
 func (h *Headscale) UpdateACLRules() error {
-    rules, err := h.generateACLRules()
+    machines, err := h.ListMachines()
     if err != nil {
         return err
     }
 
+    if h.aclPolicy == nil {
+        return errEmptyPolicy
+    }
+
+    rules, err := h.aclPolicy.generateFilterRules(machines, h.cfg.OIDC.StripEmaildomain)
+    if err != nil {
+        return err
+    }
+
     log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
     h.aclRules = rules
 
+    if featureEnableSSH() {
+        sshRules, err := h.generateSSHRules()
+        if err != nil {
+            return err
+        }
+        log.Trace().Interface("SSH", sshRules).Msg("SSH rules generated")
+        if h.sshPolicy == nil {
+            h.sshPolicy = &tailcfg.SSHPolicy{}
+        }
+        h.sshPolicy.Rules = sshRules
+    } else if h.aclPolicy != nil && len(h.aclPolicy.SSHs) > 0 {
+        log.Info().Msg("SSH ACLs has been defined, but HEADSCALE_EXPERIMENTAL_FEATURE_SSH is not enabled, this is a unstable feature, check docs before activating")
+    }
+
     return nil
 }
 
-func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
+// generateFilterRules takes a set of machines and an ACLPolicy and generates a
+// set of Tailscale compatible FilterRules used to allow traffic on clients.
+func (pol *ACLPolicy) generateFilterRules(
+    machines []Machine,
+    stripEmailDomain bool,
+) ([]tailcfg.FilterRule, error) {
     rules := []tailcfg.FilterRule{}
 
-    if h.aclPolicy == nil {
-        return nil, errEmptyPolicy
-    }
-
-    machines, err := h.ListMachines()
-    if err != nil {
-        return nil, err
-    }
-
-    for index, acl := range h.aclPolicy.ACLs {
+    for index, acl := range pol.ACLs {
         if acl.Action != "accept" {
             return nil, errInvalidAction
         }
 
         srcIPs := []string{}
-        for innerIndex, src := range acl.Sources {
-            srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, src)
+        for srcIndex, src := range acl.Sources {
+            srcs, err := pol.getIPsFromSource(src, machines, stripEmailDomain)
             if err != nil {
                 log.Error().
-                    Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
+                    Interface("src", src).
+                    Int("ACL index", index).
+                    Int("Src index", srcIndex).
+                    Msgf("Error parsing ACL")
+
                 return nil, err
             }
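
The refactored generateFilterRules above reduces each ACL entry to expanded source prefixes plus destination prefix/port pairs. A minimal standalone sketch of that rule shape, using local stand-in structs instead of the real tailcfg types; all names and addresses here are illustrative.

package main

import "fmt"

// Local stand-ins for tailcfg.FilterRule and tailcfg.NetPortRange.
type portRange struct{ First, Last uint16 }

type netPortRange struct {
	IP    string
	Ports portRange
}

type filterRule struct {
	SrcIPs   []string
	DstPorts []netPortRange
}

func main() {
	// One ACL entry after expansion: sources became prefixes, destinations
	// became prefix+port pairs.
	rule := filterRule{
		SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32"},
		DstPorts: []netPortRange{
			{IP: "100.64.0.10/32", Ports: portRange{First: 22, Last: 22}},
			{IP: "100.64.0.10/32", Ports: portRange{First: 443, Last: 443}},
		},
	}
	fmt.Printf("%+v\n", rule)
}
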
@@ -161,16 +189,19 @@ func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
         }
 
         destPorts := []tailcfg.NetPortRange{}
-        for innerIndex, dest := range acl.Destinations {
-            dests, err := h.generateACLPolicyDest(
-                machines,
-                *h.aclPolicy,
+        for destIndex, dest := range acl.Destinations {
+            dests, err := pol.getNetPortRangeFromDestination(
                 dest,
+                machines,
                 needsWildcard,
+                stripEmailDomain,
             )
             if err != nil {
                 log.Error().
-                    Msgf("Error parsing ACL %d, Destination %d", index, innerIndex)
+                    Interface("dest", dest).
+                    Int("ACL index", index).
+                    Int("dest index", destIndex).
+                    Msgf("Error parsing ACL")
+
                 return nil, err
             }
@@ -187,29 +218,192 @@ func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
     return rules, nil
 }
 
-func (h *Headscale) generateACLPolicySrcIP(
-    machines []Machine,
-    aclPolicy ACLPolicy,
-    src string,
-) ([]string, error) {
-    return expandAlias(machines, aclPolicy, src, h.cfg.OIDC.StripEmaildomain)
+func (h *Headscale) generateSSHRules() ([]*tailcfg.SSHRule, error) {
+    rules := []*tailcfg.SSHRule{}
+
+    if h.aclPolicy == nil {
+        return nil, errEmptyPolicy
+    }
+
+    machines, err := h.ListMachines()
+    if err != nil {
+        return nil, err
+    }
+
+    acceptAction := tailcfg.SSHAction{
+        Message:                  "",
+        Reject:                   false,
+        Accept:                   true,
+        SessionDuration:          0,
+        AllowAgentForwarding:     false,
+        HoldAndDelegate:          "",
+        AllowLocalPortForwarding: true,
+    }
+
+    rejectAction := tailcfg.SSHAction{
+        Message:                  "",
+        Reject:                   true,
+        Accept:                   false,
+        SessionDuration:          0,
+        AllowAgentForwarding:     false,
+        HoldAndDelegate:          "",
+        AllowLocalPortForwarding: false,
+    }
+
+    for index, sshACL := range h.aclPolicy.SSHs {
+        action := rejectAction
+        switch sshACL.Action {
+        case "accept":
+            action = acceptAction
+        case "check":
+            checkAction, err := sshCheckAction(sshACL.CheckPeriod)
+            if err != nil {
+                log.Error().
+                    Msgf("Error parsing SSH %d, check action with unparsable duration '%s'", index, sshACL.CheckPeriod)
+            } else {
+                action = *checkAction
+            }
+        default:
+            log.Error().
+                Msgf("Error parsing SSH %d, unknown action '%s'", index, sshACL.Action)
+
+            return nil, err
+        }
+
+        principals := make([]*tailcfg.SSHPrincipal, 0, len(sshACL.Sources))
+        for innerIndex, rawSrc := range sshACL.Sources {
+            if isWildcard(rawSrc) {
+                principals = append(principals, &tailcfg.SSHPrincipal{
+                    Any: true,
+                })
+            } else if isGroup(rawSrc) {
+                users, err := h.aclPolicy.getUsersInGroup(rawSrc, h.cfg.OIDC.StripEmaildomain)
+                if err != nil {
+                    log.Error().
+                        Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
+
+                    return nil, err
+                }
+
+                for _, user := range users {
+                    principals = append(principals, &tailcfg.SSHPrincipal{
+                        UserLogin: user,
+                    })
+                }
+            } else {
+                expandedSrcs, err := h.aclPolicy.expandAlias(
+                    machines,
+                    rawSrc,
+                    h.cfg.OIDC.StripEmaildomain,
+                )
+                if err != nil {
+                    log.Error().
+                        Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
+
+                    return nil, err
+                }
+                for _, expandedSrc := range expandedSrcs.Prefixes() {
+                    principals = append(principals, &tailcfg.SSHPrincipal{
+                        NodeIP: expandedSrc.Addr().String(),
+                    })
+                }
+            }
+        }
+
+        userMap := make(map[string]string, len(sshACL.Users))
+        for _, user := range sshACL.Users {
+            userMap[user] = "="
+        }
+        rules = append(rules, &tailcfg.SSHRule{
+            Principals: principals,
+            SSHUsers:   userMap,
+            Action:     &action,
+        })
+    }
+
+    return rules, nil
 }
 
-func (h *Headscale) generateACLPolicyDest(
-    machines []Machine,
-    aclPolicy ACLPolicy,
-    dest string,
-    needsWildcard bool,
-) ([]tailcfg.NetPortRange, error) {
-    tokens := strings.Split(dest, ":")
-    if len(tokens) < expectedTokenItems || len(tokens) > 3 {
-        return nil, errInvalidPortFormat
+func sshCheckAction(duration string) (*tailcfg.SSHAction, error) {
+    sessionLength, err := time.ParseDuration(duration)
+    if err != nil {
+        return nil, err
     }
 
+    return &tailcfg.SSHAction{
+        Message:                  "",
+        Reject:                   false,
+        Accept:                   true,
+        SessionDuration:          sessionLength,
+        AllowAgentForwarding:     false,
+        HoldAndDelegate:          "",
+        AllowLocalPortForwarding: true,
+    }, nil
+}
+
+// getIPsFromSource returns a set of Source IPs that would be associated
+// with the given src alias.
+func (pol *ACLPolicy) getIPsFromSource(
+    src string,
+    machines []Machine,
+    stripEmaildomain bool,
+) ([]string, error) {
+    ipSet, err := pol.expandAlias(machines, src, stripEmaildomain)
+    if err != nil {
+        return []string{}, err
+    }
+
+    prefixes := []string{}
+
+    for _, prefix := range ipSet.Prefixes() {
+        prefixes = append(prefixes, prefix.String())
+    }
+
+    return prefixes, nil
+}
+
+// getNetPortRangeFromDestination returns a set of tailcfg.NetPortRange
+// which are associated with the dest alias.
+func (pol *ACLPolicy) getNetPortRangeFromDestination(
+    dest string,
+    machines []Machine,
+    needsWildcard bool,
+    stripEmaildomain bool,
+) ([]tailcfg.NetPortRange, error) {
+    var tokens []string
+
+    log.Trace().Str("destination", dest).Msg("generating policy destination")
+
+    // Check if there is a IPv4/6:Port combination, IPv6 has more than
+    // three ":".
+    tokens = strings.Split(dest, ":")
+    if len(tokens) < expectedTokenItems || len(tokens) > 3 {
+        port := tokens[len(tokens)-1]
+
+        maybeIPv6Str := strings.TrimSuffix(dest, ":"+port)
+        log.Trace().Str("maybeIPv6Str", maybeIPv6Str).Msg("")
+
+        if maybeIPv6, err := netip.ParseAddr(maybeIPv6Str); err != nil && !maybeIPv6.Is6() {
+            log.Trace().Err(err).Msg("trying to parse as IPv6")
+
+            return nil, fmt.Errorf(
+                "failed to parse destination, tokens %v: %w",
+                tokens,
+                errInvalidPortFormat,
+            )
+        } else {
+            tokens = []string{maybeIPv6Str, port}
+        }
+    }
+
+    log.Trace().Strs("tokens", tokens).Msg("generating policy destination")
+
     var alias string
     // We can have here stuff like:
     // git-server:*
     // 192.168.1.0/24:22
+    // fd7a:115c:a1e0::2:22
+    // fd7a:115c:a1e0::2/128:22
     // tag:montreal-webserver:80,443
     // tag:api-server:443
     // example-host-1:*
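
sshCheckAction in the hunk above turns an SSH "check" period into an accept action bounded by a session duration. A hedged, self-contained sketch of that conversion; sshAction is a local stand-in for tailcfg.SSHAction and checkAction is a hypothetical helper name.

package main

import (
	"fmt"
	"time"
)

type sshAction struct {
	Accept                   bool
	Reject                   bool
	SessionDuration          time.Duration
	AllowLocalPortForwarding bool
}

// checkAction parses the check period (e.g. "12h", "30m") and returns an
// accept action whose session length equals that period.
func checkAction(period string) (*sshAction, error) {
	d, err := time.ParseDuration(period)
	if err != nil {
		return nil, err
	}

	return &sshAction{
		Accept:                   true,
		SessionDuration:          d,
		AllowLocalPortForwarding: true,
	}, nil
}

func main() {
	action, err := checkAction("24h")
	if err != nil {
		fmt.Println("unparsable check period:", err)
		return
	}
	fmt.Printf("accept=%v session=%s\n", action.Accept, action.SessionDuration)
}
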
@@ -219,11 +413,10 @@ func (h *Headscale) generateACLPolicyDest(
         alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
     }
 
-    expanded, err := expandAlias(
+    expanded, err := pol.expandAlias(
         machines,
-        aclPolicy,
         alias,
-        h.cfg.OIDC.StripEmaildomain,
+        stripEmaildomain,
     )
     if err != nil {
         return nil, err
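
getNetPortRangeFromDestination above has to split "host:port" destinations while tolerating IPv6 addresses, which contain ":" themselves. A standalone sketch of that token logic; splitDestination is a hypothetical helper, and the real code additionally consults the expectedTokenItems constant and the needsWildcard/port handling.

package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// splitDestination splits on ":". If that yields more than host+port tokens,
// the host part is assumed to be an IPv6 address and is recovered by trimming
// the trailing port from the original string.
func splitDestination(dest string) ([]string, error) {
	tokens := strings.Split(dest, ":")
	if len(tokens) == 2 || len(tokens) == 3 {
		return tokens, nil
	}

	port := tokens[len(tokens)-1]
	maybeIPv6 := strings.TrimSuffix(dest, ":"+port)

	addr, err := netip.ParseAddr(maybeIPv6)
	if err != nil || !addr.Is6() {
		return nil, fmt.Errorf("failed to parse destination %q", dest)
	}

	return []string{maybeIPv6, port}, nil
}

func main() {
	for _, dest := range []string{"git-server:*", "192.168.1.0/24:22", "fd7a:115c:a1e0::2:22"} {
		tokens, err := splitDestination(dest)
		fmt.Println(tokens, err)
	}
}
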
@@ -234,11 +427,11 @@ func (h *Headscale) generateACLPolicyDest(
     }
 
     dests := []tailcfg.NetPortRange{}
-    for _, d := range expanded {
-        for _, p := range *ports {
+    for _, dest := range expanded.Prefixes() {
+        for _, port := range *ports {
             pr := tailcfg.NetPortRange{
-                IP:    d,
-                Ports: p,
+                IP:    dest.String(),
+                Ports: port,
             }
             dests = append(dests, pr)
         }
@@ -260,12 +453,7 @@ func (h *Headscale) generateACLPolicyDest(
 func parseProtocol(protocol string) ([]int, bool, error) {
     switch protocol {
     case "":
-        return []int{
-            protocolICMP,
-            protocolIPv6ICMP,
-            protocolTCP,
-            protocolUDP,
-        }, false, nil
+        return nil, false, nil
     case "igmp":
         return []int{protocolIGMP}, true, nil
     case "ipv4", "ip-in-ip":
@@ -303,128 +491,81 @@ func parseProtocol(protocol string) ([]int, bool, error) {
 }
 
 // expandalias has an input of either
-// - a namespace
+// - a user
 // - a group
 // - a tag
+// - a host
+// - an ip
+// - a cidr
 // and transform these in IPAddresses.
-func expandAlias(
-    machines []Machine,
-    aclPolicy ACLPolicy,
+func (pol *ACLPolicy) expandAlias(
+    machines Machines,
     alias string,
     stripEmailDomain bool,
-) ([]string, error) {
-    ips := []string{}
-    if alias == "*" {
-        return []string{"*"}, nil
+) (*netipx.IPSet, error) {
+    if isWildcard(alias) {
+        return parseIPSet("*", nil)
     }
 
+    build := netipx.IPSetBuilder{}
+
     log.Debug().
         Str("alias", alias).
         Msg("Expanding")
 
-    if strings.HasPrefix(alias, "group:") {
-        namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
-        if err != nil {
+    // if alias is a group
+    if isGroup(alias) {
+        return pol.getIPsFromGroup(alias, machines, stripEmailDomain)
+    }
+
+    // if alias is a tag
+    if isTag(alias) {
+        return pol.getIPsFromTag(alias, machines, stripEmailDomain)
+    }
+
+    // if alias is a user
+    if ips, err := pol.getIPsForUser(alias, machines, stripEmailDomain); ips != nil {
         return ips, err
     }
-        for _, n := range namespaces {
-            nodes := filterMachinesByNamespace(machines, n)
-            for _, node := range nodes {
-                ips = append(ips, node.IPAddresses.ToStringSlice()...)
-            }
-        }
-
-        return ips, nil
-    }
-
-    if strings.HasPrefix(alias, "tag:") {
-        // check for forced tags
-        for _, machine := range machines {
-            if contains(machine.ForcedTags, alias) {
-                ips = append(ips, machine.IPAddresses.ToStringSlice()...)
-            }
-        }
-
-        // find tag owners
-        owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
-        if err != nil {
-            if errors.Is(err, errInvalidTag) {
-                if len(ips) == 0 {
-                    return ips, fmt.Errorf(
-                        "%w. %v isn't owned by a TagOwner and no forced tags are defined",
-                        errInvalidTag,
-                        alias,
-                    )
-                }
-
-                return ips, nil
-            } else {
-                return ips, err
-            }
-        }
-
-        // filter out machines per tag owner
-        for _, namespace := range owners {
-            machines := filterMachinesByNamespace(machines, namespace)
-            for _, machine := range machines {
-                hi := machine.GetHostInfo()
-                if contains(hi.RequestTags, alias) {
-                    ips = append(ips, machine.IPAddresses.ToStringSlice()...)
-                }
-            }
-        }
-
-        return ips, nil
-    }
-
-    // if alias is a namespace
-    nodes := filterMachinesByNamespace(machines, alias)
-    nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)
-
-    for _, n := range nodes {
-        ips = append(ips, n.IPAddresses.ToStringSlice()...)
-    }
-    if len(ips) > 0 {
-        return ips, nil
-    }
 
     // if alias is an host
-    if h, ok := aclPolicy.Hosts[alias]; ok {
-        return []string{h.String()}, nil
+    // Note, this is recursive.
+    if h, ok := pol.Hosts[alias]; ok {
+        log.Trace().Str("host", h.String()).Msg("expandAlias got hosts entry")
+
+        return pol.expandAlias(machines, h.String(), stripEmailDomain)
     }
 
     // if alias is an IP
-    ip, err := netip.ParseAddr(alias)
-    if err == nil {
-        return []string{ip.String()}, nil
+    if ip, err := netip.ParseAddr(alias); err == nil {
+        return pol.getIPsFromSingleIP(ip, machines)
     }
 
-    // if alias is an CIDR
-    cidr, err := netip.ParsePrefix(alias)
-    if err == nil {
-        return []string{cidr.String()}, nil
+    // if alias is an IP Prefix (CIDR)
+    if prefix, err := netip.ParsePrefix(alias); err == nil {
+        return pol.getIPsFromIPPrefix(prefix, machines)
     }
 
     log.Warn().Msgf("No IPs found with the alias %v", alias)
 
-    return ips, nil
+    return build.IPSet()
 }
 
 // excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
-// that are correctly tagged since they should not be listed as being in the namespace
-// we assume in this function that we only have nodes from 1 namespace.
+// that are correctly tagged since they should not be listed as being in the user
+// we assume in this function that we only have nodes from 1 user.
 func excludeCorrectlyTaggedNodes(
-    aclPolicy ACLPolicy,
+    aclPolicy *ACLPolicy,
     nodes []Machine,
-    namespace string,
+    user string,
     stripEmailDomain bool,
 ) []Machine {
     out := []Machine{}
     tags := []string{}
     for tag := range aclPolicy.TagOwners {
-        owners, _ := expandTagOwners(aclPolicy, namespace, stripEmailDomain)
-        ns := append(owners, namespace)
-        if contains(ns, namespace) {
+        owners, _ := getTagOwners(aclPolicy, user, stripEmailDomain)
+        ns := append(owners, user)
+        if contains(ns, user) {
             tags = append(tags, tag)
         }
     }
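
The rewritten expandAlias above accumulates node addresses and CIDRs in a go4.org/netipx IP set instead of a string slice. A minimal sketch of that builder/Prefixes() round trip with made-up example addresses; error handling is reduced to a panic for brevity.

package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
)

func main() {
	var build netipx.IPSetBuilder

	build.Add(netip.MustParseAddr("100.64.0.1"))             // a single node address
	build.Add(netip.MustParseAddr("fd7a:115c:a1e0::2"))      // an IPv6 node address
	build.AddPrefix(netip.MustParsePrefix("192.168.1.0/24")) // a hosts-section CIDR

	ipSet, err := build.IPSet()
	if err != nil {
		panic(err)
	}

	// The consumers above (getIPsFromSource, the destination expansion) walk
	// the normalized prefixes of the resulting set.
	for _, prefix := range ipSet.Prefixes() {
		fmt.Println(prefix.String())
	}
}
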
@@ -452,7 +593,7 @@ func excludeCorrectlyTaggedNodes(
 }
 
 func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
-    if portsStr == "*" {
+    if isWildcard(portsStr) {
         return &[]tailcfg.PortRange{
             {First: portRangeBegin, Last: portRangeEnd},
         }, nil
@@ -464,6 +605,7 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
 
     ports := []tailcfg.PortRange{}
     for _, portStr := range strings.Split(portsStr, ",") {
+        log.Trace().Msgf("parsing portstring: %s", portStr)
         rang := strings.Split(portStr, "-")
         switch len(rang) {
         case 1:
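
expandPorts above parses the port part of a destination ("22", "80,443", "8000-8100"). A stripped-down standalone sketch of the same parsing; wildcard and needsWildcard handling are omitted and portRange is a local stand-in for tailcfg.PortRange.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

type portRange struct{ First, Last uint16 }

func parsePorts(portsStr string) ([]portRange, error) {
	ports := []portRange{}
	for _, portStr := range strings.Split(portsStr, ",") {
		rang := strings.Split(portStr, "-")
		switch len(rang) {
		case 1: // single port, e.g. "22"
			port, err := strconv.ParseUint(rang[0], 10, 16)
			if err != nil {
				return nil, err
			}
			ports = append(ports, portRange{First: uint16(port), Last: uint16(port)})
		case 2: // range, e.g. "8000-8100"
			first, err := strconv.ParseUint(rang[0], 10, 16)
			if err != nil {
				return nil, err
			}
			last, err := strconv.ParseUint(rang[1], 10, 16)
			if err != nil {
				return nil, err
			}
			ports = append(ports, portRange{First: uint16(first), Last: uint16(last)})
		default:
			return nil, fmt.Errorf("invalid port format: %q", portStr)
		}
	}

	return ports, nil
}

func main() {
	fmt.Println(parsePorts("22,80,8000-8100"))
}
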
@@ -498,10 +640,10 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
     return &ports, nil
 }
 
-func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
+func filterMachinesByUser(machines []Machine, user string) []Machine {
     out := []Machine{}
     for _, machine := range machines {
-        if machine.Namespace.Name == namespace {
+        if machine.User.Name == user {
             out = append(out, machine)
         }
     }
@@ -509,15 +651,15 @@ func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
     return out
 }
 
-// expandTagOwners will return a list of namespace. An owner can be either a namespace or a group
+// getTagOwners will return a list of user. An owner can be either a user or a group
 // a group cannot be composed of groups.
-func expandTagOwners(
-    aclPolicy ACLPolicy,
+func getTagOwners(
+    pol *ACLPolicy,
     tag string,
     stripEmailDomain bool,
 ) ([]string, error) {
     var owners []string
-    ows, ok := aclPolicy.TagOwners[tag]
+    ows, ok := pol.TagOwners[tag]
     if !ok {
         return []string{}, fmt.Errorf(
             "%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
@@ -526,8 +668,8 @@ func expandTagOwners(
         )
     }
     for _, owner := range ows {
-        if strings.HasPrefix(owner, "group:") {
-            gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
+        if isGroup(owner) {
+            gs, err := pol.getUsersInGroup(owner, stripEmailDomain)
             if err != nil {
                 return []string{}, err
             }
@@ -540,15 +682,15 @@ func expandTagOwners(
|
|||||||
return owners, nil
|
return owners, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// expandGroup will return the list of namespace inside the group
|
// getUsersInGroup will return the list of user inside the group
|
||||||
// after some validation.
|
// after some validation.
|
||||||
func expandGroup(
|
func (pol *ACLPolicy) getUsersInGroup(
|
||||||
aclPolicy ACLPolicy,
|
|
||||||
group string,
|
group string,
|
||||||
stripEmailDomain bool,
|
stripEmailDomain bool,
|
||||||
) ([]string, error) {
|
) ([]string, error) {
|
||||||
outGroups := []string{}
|
users := []string{}
|
||||||
aclGroups, ok := aclPolicy.Groups[group]
|
log.Trace().Caller().Interface("pol", pol).Msg("test")
|
||||||
|
aclGroups, ok := pol.Groups[group]
|
||||||
if !ok {
|
if !ok {
|
||||||
return []string{}, fmt.Errorf(
|
return []string{}, fmt.Errorf(
|
||||||
"group %v isn't registered. %w",
|
"group %v isn't registered. %w",
|
||||||
@@ -557,7 +699,7 @@ func expandGroup(
|
|||||||
)
|
)
|
||||||
}
|
}
|
||||||
for _, group := range aclGroups {
|
for _, group := range aclGroups {
|
||||||
if strings.HasPrefix(group, "group:") {
|
if isGroup(group) {
|
||||||
return []string{}, fmt.Errorf(
|
return []string{}, fmt.Errorf(
|
||||||
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
|
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
|
||||||
errInvalidGroup,
|
errInvalidGroup,
|
||||||
@@ -571,8 +713,151 @@ func expandGroup(
|
|||||||
errInvalidGroup,
|
errInvalidGroup,
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
outGroups = append(outGroups, grp)
|
users = append(users, grp)
|
||||||
}
|
}
|
||||||
|
|
||||||
return outGroups, nil
|
return users, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (pol *ACLPolicy) getIPsFromGroup(
|
||||||
|
group string,
|
||||||
|
machines Machines,
|
||||||
|
stripEmailDomain bool,
|
||||||
|
) (*netipx.IPSet, error) {
|
||||||
|
build := netipx.IPSetBuilder{}
|
||||||
|
|
||||||
|
users, err := pol.getUsersInGroup(group, stripEmailDomain)
|
||||||
|
if err != nil {
|
||||||
|
return &netipx.IPSet{}, err
|
||||||
|
}
|
||||||
|
for _, user := range users {
|
||||||
|
filteredMachines := filterMachinesByUser(machines, user)
|
||||||
|
for _, machine := range filteredMachines {
|
||||||
|
machine.IPAddresses.AppendToIPSet(&build)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return build.IPSet()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (pol *ACLPolicy) getIPsFromTag(
|
||||||
|
alias string,
|
||||||
|
machines Machines,
|
||||||
|
stripEmailDomain bool,
|
||||||
|
) (*netipx.IPSet, error) {
|
||||||
|
build := netipx.IPSetBuilder{}
|
||||||
|
|
||||||
|
// check for forced tags
|
||||||
|
for _, machine := range machines {
|
||||||
|
if contains(machine.ForcedTags, alias) {
|
||||||
|
machine.IPAddresses.AppendToIPSet(&build)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// find tag owners
|
||||||
|
owners, err := getTagOwners(pol, alias, stripEmailDomain)
|
||||||
|
if err != nil {
|
||||||
|
if errors.Is(err, errInvalidTag) {
|
||||||
|
ipSet, _ := build.IPSet()
|
||||||
|
if len(ipSet.Prefixes()) == 0 {
|
||||||
|
return ipSet, fmt.Errorf(
|
||||||
|
"%w. %v isn't owned by a TagOwner and no forced tags are defined",
|
||||||
|
errInvalidTag,
|
||||||
|
alias,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
return build.IPSet()
|
||||||
|
} else {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// filter out machines per tag owner
|
||||||
|
for _, user := range owners {
|
||||||
|
machines := filterMachinesByUser(machines, user)
|
||||||
|
for _, machine := range machines {
|
||||||
|
hi := machine.GetHostInfo()
|
||||||
|
if contains(hi.RequestTags, alias) {
|
||||||
|
machine.IPAddresses.AppendToIPSet(&build)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return build.IPSet()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (pol *ACLPolicy) getIPsForUser(
|
||||||
|
user string,
|
||||||
|
machines Machines,
|
||||||
|
stripEmailDomain bool,
|
||||||
|
) (*netipx.IPSet, error) {
|
||||||
|
build := netipx.IPSetBuilder{}
|
||||||
|
|
||||||
|
filteredMachines := filterMachinesByUser(machines, user)
|
||||||
|
filteredMachines = excludeCorrectlyTaggedNodes(pol, filteredMachines, user, stripEmailDomain)
|
||||||
|
|
||||||
|
// shortcurcuit if we have no machines to get ips from.
|
||||||
|
if len(filteredMachines) == 0 {
|
||||||
|
return nil, nil //nolint
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, machine := range filteredMachines {
|
||||||
|
machine.IPAddresses.AppendToIPSet(&build)
|
||||||
|
}
|
||||||
|
|
||||||
|
return build.IPSet()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (pol *ACLPolicy) getIPsFromSingleIP(
|
||||||
|
ip netip.Addr,
|
||||||
|
machines Machines,
|
||||||
|
) (*netipx.IPSet, error) {
|
||||||
|
log.Trace().Str("ip", ip.String()).Msg("expandAlias got ip")
|
||||||
|
|
||||||
|
matches := machines.FilterByIP(ip)
|
||||||
|
|
||||||
|
build := netipx.IPSetBuilder{}
|
||||||
|
build.Add(ip)
|
||||||
|
|
||||||
|
for _, machine := range matches {
|
||||||
|
machine.IPAddresses.AppendToIPSet(&build)
|
||||||
|
}
|
||||||
|
|
||||||
|
return build.IPSet()
|
||||||
|
}
|
||||||
|
|
||||||
|
func (pol *ACLPolicy) getIPsFromIPPrefix(
|
||||||
|
prefix netip.Prefix,
|
||||||
|
machines Machines,
|
||||||
|
) (*netipx.IPSet, error) {
|
||||||
|
log.Trace().Str("prefix", prefix.String()).Msg("expandAlias got prefix")
|
||||||
|
build := netipx.IPSetBuilder{}
|
||||||
|
build.AddPrefix(prefix)
|
||||||
|
|
||||||
|
// This is suboptimal and quite expensive, but if we only add the prefix, we will miss all the relevant IPv6
|
||||||
|
// addresses for the hosts that belong to tailscale. This doesnt really affect stuff like subnet routers.
|
||||||
|
for _, machine := range machines {
|
||||||
|
for _, ip := range machine.IPAddresses {
|
||||||
|
// log.Trace().
|
||||||
|
// Msgf("checking if machine ip (%s) is part of prefix (%s): %v, is single ip prefix (%v), addr: %s", ip.String(), prefix.String(), prefix.Contains(ip), prefix.IsSingleIP(), prefix.Addr().String())
|
||||||
|
if prefix.Contains(ip) {
|
||||||
|
machine.IPAddresses.AppendToIPSet(&build)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return build.IPSet()
|
||||||
|
}
|
||||||
|
|
||||||
|
func isWildcard(str string) bool {
|
||||||
|
return str == "*"
|
||||||
|
}
|
||||||
|
|
||||||
|
func isGroup(str string) bool {
|
||||||
|
return strings.HasPrefix(str, "group:")
|
||||||
|
}
|
||||||
|
|
||||||
|
func isTag(str string) bool {
|
||||||
|
return strings.HasPrefix(str, "tag:")
|
||||||
}
|
}
|
||||||
|
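The hunk above ends by introducing three small predicates that later code uses to branch on the shape of an ACL alias. A standalone sketch of how they classify inputs follows; the helper bodies are taken from the diff, while the classify wrapper and its labels are illustrative only.

package main

import (
    "fmt"
    "strings"
)

func isWildcard(str string) bool { return str == "*" }
func isGroup(str string) bool    { return strings.HasPrefix(str, "group:") }
func isTag(str string) bool      { return strings.HasPrefix(str, "tag:") }

// classify reports which kind of ACL alias a string is, mirroring the
// branching that getTagOwners and the getIPsFrom* helpers perform.
func classify(alias string) string {
    switch {
    case isWildcard(alias):
        return "wildcard"
    case isGroup(alias):
        return "group"
    case isTag(alias):
        return "tag"
    default:
        return "user, host or IP"
    }
}

func main() {
    for _, alias := range []string{"*", "group:admins", "tag:prod", "alice"} {
        fmt.Printf("%-14s -> %s\n", alias, classify(alias))
    }
}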
acls_test.go: 817 changes (diff suppressed because it is too large)
@@ -17,6 +17,7 @@ type ACLPolicy struct {
     ACLs []ACL `json:"acls" yaml:"acls"`
     Tests []ACLTest `json:"tests" yaml:"tests"`
     AutoApprovers AutoApprovers `json:"autoApprovers" yaml:"autoApprovers"`
+    SSHs []SSH `json:"ssh" yaml:"ssh"`
 }
 
 // ACL is a basic rule for the ACL Policy.
@@ -33,7 +34,7 @@ type Groups map[string][]string
 // Hosts are alias for IP addresses or subnets.
 type Hosts map[string]netip.Prefix
 
-// TagOwners specify what users (namespaces?) are allow to use certain tags.
+// TagOwners specify what users (users?) are allow to use certain tags.
 type TagOwners map[string][]string
 
 // ACLTest is not implemented, but should be use to check if a certain rule is allowed.
@@ -43,13 +44,22 @@ type ACLTest struct {
     Deny []string `json:"deny,omitempty" yaml:"deny,omitempty"`
 }
 
-// AutoApprovers specify which users (namespaces?), groups or tags have their advertised routes
+// AutoApprovers specify which users (users?), groups or tags have their advertised routes
 // or exit node status automatically enabled.
 type AutoApprovers struct {
     Routes map[string][]string `json:"routes" yaml:"routes"`
     ExitNode []string `json:"exitNode" yaml:"exitNode"`
 }
 
+// SSH controls who can ssh into which machines.
+type SSH struct {
+    Action string `json:"action" yaml:"action"`
+    Sources []string `json:"src" yaml:"src"`
+    Destinations []string `json:"dst" yaml:"dst"`
+    Users []string `json:"users" yaml:"users"`
+    CheckPeriod string `json:"checkPeriod,omitempty" yaml:"checkPeriod,omitempty"`
+}
+
 // UnmarshalJSON allows to parse the Hosts directly into netip objects.
 func (hosts *Hosts) UnmarshalJSON(data []byte) error {
     newHosts := Hosts{}
@@ -101,15 +111,15 @@ func (hosts *Hosts) UnmarshalYAML(data []byte) error {
 }
 
 // IsZero is perhaps a bit naive here.
-func (policy ACLPolicy) IsZero() bool {
-    if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
+func (pol ACLPolicy) IsZero() bool {
+    if len(pol.Groups) == 0 && len(pol.Hosts) == 0 && len(pol.ACLs) == 0 {
         return true
     }
 
     return false
 }
 
-// Returns the list of autoApproving namespaces, groups or tags for a given IPPrefix.
+// Returns the list of autoApproving users, groups or tags for a given IPPrefix.
 func (autoApprovers *AutoApprovers) GetRouteApprovers(
     prefix netip.Prefix,
 ) ([]string, error) {
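The new SSH type gives the policy file an ssh section alongside the existing acls and tagOwners. Below is a hedged sketch of parsing such a section with gopkg.in/yaml.v3 into a local mirror of the struct above; the rule contents and the choice of YAML library are illustrative and not taken from headscale's own policy loader.

package main

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

// sshRule mirrors the SSH struct added above, using the same field tags.
type sshRule struct {
    Action       string   `yaml:"action"`
    Sources      []string `yaml:"src"`
    Destinations []string `yaml:"dst"`
    Users        []string `yaml:"users"`
    CheckPeriod  string   `yaml:"checkPeriod,omitempty"`
}

type policy struct {
    SSHs []sshRule `yaml:"ssh"`
}

func main() {
    // A hypothetical rule: group:admins may ssh to tag:prod machines as root.
    doc := []byte(`
ssh:
  - action: accept
    src: ["group:admins"]
    dst: ["tag:prod"]
    users: ["root"]
    checkPeriod: 24h
`)

    var pol policy
    if err := yaml.Unmarshal(doc, &pol); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", pol.SSHs[0])
}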
api.go: 2 changes
@@ -78,7 +78,7 @@ var registerWebAPITemplate = template.Must(
     <p>
         Run the command below in the headscale server to add this machine to your network:
     </p>
-    <pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
+    <pre><code>headscale nodes register --user USERNAME --key {{.Key}}</code></pre>
     </body>
     </html>
 `))
@@ -1,6 +1,8 @@
 package headscale
 
 import (
+    "time"
+
     "github.com/rs/zerolog/log"
     "tailscale.com/tailcfg"
 )
@@ -13,7 +15,7 @@ func (h *Headscale) generateMapResponse(
         Str("func", "generateMapResponse").
         Str("machine", mapRequest.Hostinfo.Hostname).
         Msg("Creating Map response")
-    node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig)
+    node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
     if err != nil {
         log.Error().
             Caller().
@@ -35,9 +37,9 @@ func (h *Headscale) generateMapResponse(
         return nil, err
     }
 
-    profiles := getMapResponseUserProfiles(*machine, peers)
+    profiles := h.getMapResponseUserProfiles(*machine, peers)
 
-    nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig)
+    nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
     if err != nil {
         log.Error().
             Caller().
@@ -55,15 +57,46 @@ func (h *Headscale) generateMapResponse(
         peers,
     )
 
+    now := time.Now()
+
     resp := tailcfg.MapResponse{
         KeepAlive: false,
         Node: node,
-        Peers: nodePeers,
-        DNSConfig: dnsConfig,
-        Domain: h.cfg.BaseDomain,
-        PacketFilter: h.aclRules,
+        // TODO: Only send if updated
         DERPMap: h.DERPMap,
+
+        // TODO: Only send if updated
+        Peers: nodePeers,
+
+        // TODO(kradalby): Implement:
+        // https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
+        // PeersChanged
+        // PeersRemoved
+        // PeersChangedPatch
+        // PeerSeenChange
+        // OnlineChange
+
+        // TODO: Only send if updated
+        DNSConfig: dnsConfig,
+
+        // TODO: Only send if updated
+        Domain: h.cfg.BaseDomain,
+
+        // Do not instruct clients to collect services, we do not
+        // support or do anything with them
+        CollectServices: "false",
+
+        // TODO: Only send if updated
+        PacketFilter: h.aclRules,
+
         UserProfiles: profiles,
+
+        // TODO: Only send if updated
+        SSHPolicy: h.sshPolicy,
+
+        ControlTime: &now,
+
         Debug: &tailcfg.Debug{
             DisableLogTail: !h.cfg.LogTail.Enabled,
             RandomizeClientPort: h.cfg.RandomizeClientPort,
@@ -29,7 +29,7 @@ type APIKey struct {
     LastSeen *time.Time
 }
 
-// CreateAPIKey creates a new ApiKey in a namespace, and returns it.
+// CreateAPIKey creates a new ApiKey in a user, and returns it.
 func (h *Headscale) CreateAPIKey(
     expiration *time.Time,
 ) (string, *APIKey, error) {
@@ -64,7 +64,7 @@ func (h *Headscale) CreateAPIKey(
     return keyStr, &key, nil
 }
 
-// ListAPIKeys returns the list of ApiKeys for a namespace.
+// ListAPIKeys returns the list of ApiKeys for a user.
 func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
     keys := []APIKey{}
     if err := h.db.Find(&keys).Error; err != nil {
app.go: 167 changes
@@ -11,6 +11,7 @@ import (
     "os"
     "os/signal"
     "sort"
+    "strconv"
     "strings"
     "sync"
     "syscall"
@@ -80,13 +81,12 @@ type Headscale struct {
     privateKey *key.MachinePrivate
     noisePrivateKey *key.MachinePrivate
 
-    noiseMux *mux.Router
-
     DERPMap *tailcfg.DERPMap
     DERPServer *DERPServer
 
     aclPolicy *ACLPolicy
     aclRules []tailcfg.FilterRule
+    sshPolicy *tailcfg.SSHPolicy
 
     lastStateChange *xsync.MapOf[string, time.Time]
 
@@ -101,27 +101,6 @@ type Headscale struct {
     pollNetMapStreamWG sync.WaitGroup
 }
 
-// Look up the TLS constant relative to user-supplied TLS client
-// authentication mode. If an unknown mode is supplied, the default
-// value, tls.RequireAnyClientCert, is returned. The returned boolean
-// indicates if the supplied mode was valid.
-func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
-    switch mode {
-    case DisabledClientAuth:
-        // Client cert is _not_ required.
-        return tls.NoClientCert, true
-    case RelaxedClientAuth:
-        // Client cert required, but _not verified_.
-        return tls.RequireAnyClientCert, true
-    case EnforcedClientAuth:
-        // Client cert is _required and verified_.
-        return tls.RequireAndVerifyClientCert, true
-    default:
-        // Return the default when an unknown value is supplied.
-        return tls.RequireAnyClientCert, false
-    }
-}
-
 func NewHeadscale(cfg *Config) (*Headscale, error) {
     privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
     if err != nil {
@@ -148,9 +127,13 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
             cfg.DBuser,
         )
 
-        if !cfg.DBssl {
+        if sslEnabled, err := strconv.ParseBool(cfg.DBssl); err == nil {
+            if !sslEnabled {
                 dbString += " sslmode=disable"
             }
+        } else {
+            dbString += fmt.Sprintf(" sslmode=%s", cfg.DBssl)
+        }
 
         if cfg.DBport != 0 {
             dbString += fmt.Sprintf(" port=%d", cfg.DBport)
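Since cfg.DBssl is now a string rather than a boolean, values that do not parse as a bool fall through to a literal sslmode. A minimal sketch of that decision on its own, outside of headscale itself; the function name and value list are illustrative.

package main

import (
    "fmt"
    "strconv"
)

// sslModeFragment mirrors the branch above: boolean-looking values keep the old
// behaviour ("false" disables SSL, "true" adds nothing), anything else is passed
// through verbatim as a Postgres sslmode such as "require" or "verify-full".
func sslModeFragment(dbssl string) string {
    if sslEnabled, err := strconv.ParseBool(dbssl); err == nil {
        if !sslEnabled {
            return " sslmode=disable"
        }
        return ""
    }
    return fmt.Sprintf(" sslmode=%s", dbssl)
}

func main() {
    for _, v := range []string{"false", "true", "require", "verify-full"} {
        fmt.Printf("%-12s -> %q\n", v, sslModeFragment(v))
    }
}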
@@ -179,6 +162,7 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
         aclRules: tailcfg.FilterAllowAll, // default allowall
         registrationCache: registrationCache,
         pollNetMapStreamWG: sync.WaitGroup{},
+        lastStateChange: xsync.NewMapOf[time.Time](),
     }
 
     err = app.initDB()
@@ -234,29 +218,47 @@ func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
     }
 }
 
-func (h *Headscale) expireEphemeralNodesWorker() {
-    namespaces, err := h.ListNamespaces()
-    if err != nil {
-        log.Error().Err(err).Msg("Error listing namespaces")
+// expireExpiredMachines expires machines that have an explicit expiry set
+// after that expiry time has passed.
+func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
+    ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
+    for range ticker.C {
+        h.expireExpiredMachinesWorker()
+    }
+}
+
+func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
+    ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
+    for range ticker.C {
+        err := h.handlePrimarySubnetFailover()
+        if err != nil {
+            log.Error().Err(err).Msg("failed to handle primary subnet failover")
+        }
+    }
+}
+
+func (h *Headscale) expireEphemeralNodesWorker() {
+    users, err := h.ListUsers()
+    if err != nil {
+        log.Error().Err(err).Msg("Error listing users")
 
         return
     }
 
-    for _, namespace := range namespaces {
-        machines, err := h.ListMachinesInNamespace(namespace.Name)
+    for _, user := range users {
+        machines, err := h.ListMachinesByUser(user.Name)
         if err != nil {
             log.Error().
                 Err(err).
-                Str("namespace", namespace.Name).
-                Msg("Error listing machines in namespace")
+                Str("user", user.Name).
+                Msg("Error listing machines in user")
 
             return
         }
 
         expiredFound := false
         for _, machine := range machines {
-            if machine.AuthKey != nil && machine.LastSeen != nil &&
-                machine.AuthKey.Ephemeral &&
+            if machine.isEphemeral() && machine.LastSeen != nil &&
                 time.Now().
                     After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
                 expiredFound = true
@@ -280,6 +282,53 @@ func (h *Headscale) expireEphemeralNodesWorker() {
         }
     }
 }
+
+func (h *Headscale) expireExpiredMachinesWorker() {
+    users, err := h.ListUsers()
+    if err != nil {
+        log.Error().Err(err).Msg("Error listing users")
+
+        return
+    }
+
+    for _, user := range users {
+        machines, err := h.ListMachinesByUser(user.Name)
+        if err != nil {
+            log.Error().
+                Err(err).
+                Str("user", user.Name).
+                Msg("Error listing machines in user")
+
+            return
+        }
+
+        expiredFound := false
+        for index, machine := range machines {
+            if machine.isExpired() &&
+                machine.Expiry.After(h.getLastStateChange(user)) {
+                expiredFound = true
+
+                err := h.ExpireMachine(&machines[index])
+                if err != nil {
+                    log.Error().
+                        Err(err).
+                        Str("machine", machine.Hostname).
+                        Str("name", machine.GivenName).
+                        Msg("🤮 Cannot expire machine")
+                } else {
+                    log.Info().
+                        Str("machine", machine.Hostname).
+                        Str("name", machine.GivenName).
+                        Msg("Machine successfully expired")
+                }
+            }
+        }
+
+        if expiredFound {
+            h.setLastStateChangeToNow()
+        }
+    }
+}
 
 func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
     req interface{},
     info *grpc.UnaryServerInfo,
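The new expiry and subnet-failover loops both follow the same ticker-driven worker shape. A standalone sketch of that pattern is below; the worker body is a placeholder, not headscale code.

package main

import (
    "fmt"
    "time"
)

// runEvery calls work on a fixed interval, the same shape as the
// expireExpiredMachines and failoverSubnetRoutes loops above.
func runEvery(milliSeconds int64, work func()) {
    ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
    defer ticker.Stop()
    for range ticker.C {
        work()
    }
}

func main() {
    go runEvery(1000, func() { fmt.Println("tick: run worker") })
    time.Sleep(3500 * time.Millisecond)
}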
@@ -472,17 +521,7 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
     apiRouter.Use(h.httpAuthenticationMiddleware)
     apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
 
-    router.PathPrefix("/").HandlerFunc(stdoutHandler)
+    router.PathPrefix("/").HandlerFunc(notFoundHandler)
 
-    return router
-}
-
-func (h *Headscale) createNoiseMux() *mux.Router {
-    router := mux.NewRouter()
-
-    router.HandleFunc("/machine/register", h.NoiseRegistrationHandler).
-        Methods(http.MethodPost)
-    router.HandleFunc("/machine/map", h.NoisePollNetMapHandler)
-
     return router
 }
@@ -511,6 +550,9 @@ func (h *Headscale) Serve() error {
     }
 
     go h.expireEphemeralNodes(updateInterval)
+    go h.expireExpiredMachines(updateInterval)
+
+    go h.failoverSubnetRoutes(updateInterval)
 
     if zl.GlobalLevel() == zl.TraceLevel {
         zerolog.RespLog = true
@@ -644,12 +686,6 @@ func (h *Headscale) Serve() error {
     // over our main Addr. It also serves the legacy Tailcale API
     router := h.createRouter(grpcGatewayMux)
 
-    // This router is served only over the Noise connection, and exposes only the new API.
-    //
-    // The HTTP2 server that exposes this router is created for
-    // a single hijacked connection from /ts2021, using netutil.NewOneConnListener
-    h.noiseMux = h.createNoiseMux()
-
     httpServer := &http.Server{
         Addr: h.cfg.Addr,
         Handler: router,
@@ -782,7 +818,6 @@ func (h *Headscale) Serve() error {
 
             // And we're done:
             cancel()
-            os.Exit(0)
         }
     }
 }
@@ -855,12 +890,7 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
         log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
     }
 
-    log.Info().Msg(fmt.Sprintf(
-        "Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
-        h.cfg.TLS.ClientAuthMode))
-
     tlsConfig := &tls.Config{
-        ClientAuth: h.cfg.TLS.ClientAuthMode,
         NextProtos: []string{"http/1.1"},
         Certificates: make([]tls.Certificate, 1),
         MinVersion: tls.VersionTLS12,
@@ -877,31 +907,31 @@ func (h *Headscale) setLastStateChangeToNow() {
 
     now := time.Now().UTC()
 
-    namespaces, err := h.ListNamespaces()
+    users, err := h.ListUsers()
     if err != nil {
         log.Error().
             Caller().
             Err(err).
-            Msg("failed to fetch all namespaces, failing to update last changed state.")
+            Msg("failed to fetch all users, failing to update last changed state.")
     }
 
-    for _, namespace := range namespaces {
-        lastStateUpdate.WithLabelValues(namespace.Name, "headscale").Set(float64(now.Unix()))
+    for _, user := range users {
+        lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix()))
         if h.lastStateChange == nil {
             h.lastStateChange = xsync.NewMapOf[time.Time]()
         }
-        h.lastStateChange.Store(namespace.Name, now)
+        h.lastStateChange.Store(user.Name, now)
     }
 }
 
-func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
+func (h *Headscale) getLastStateChange(users ...User) time.Time {
     times := []time.Time{}
 
-    // getLastStateChange takes a list of namespaces as a "filter", if no namespaces
-    // are past, then use the entier list of namespaces and look for the last update
-    if len(namespaces) > 0 {
-        for _, namespace := range namespaces {
-            if lastChange, ok := h.lastStateChange.Load(namespace.Name); ok {
+    // getLastStateChange takes a list of users as a "filter", if no users
+    // are past, then use the entier list of users and look for the last update
+    if len(users) > 0 {
+        for _, user := range users {
+            if lastChange, ok := h.lastStateChange.Load(user.Name); ok {
                 times = append(times, lastChange)
             }
         }
@@ -926,7 +956,7 @@ func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
     }
 }
 
-func stdoutHandler(
+func notFoundHandler(
     writer http.ResponseWriter,
     req *http.Request,
 ) {
@@ -938,6 +968,7 @@ func stdoutHandler(
         Interface("url", req.URL).
         Bytes("body", body).
         Msg("Request did not match")
+    writer.WriteHeader(http.StatusNotFound)
 }
 
 func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
app_test.go: 17 changes
@@ -59,20 +59,3 @@ func (s *Suite) ResetDB(c *check.C) {
     }
     app.db = db
 }
-
-// Enusre an error is returned when an invalid auth mode
-// is supplied.
-func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
-    _, isValid := LookupTLSClientAuthMode("invalid")
-    c.Assert(isValid, check.Equals, false)
-}
-
-// Ensure that all client auth modes return a nil error.
-func (s *Suite) TestAuthModes(c *check.C) {
-    modes := []string{"disabled", "relaxed", "enforced"}
-
-    for _, v := range modes {
-        _, isValid := LookupTLSClientAuthMode(v)
-        c.Assert(isValid, check.Equals, true)
-    }
-}
cmd/build-docker-img/main.go: 47 lines (new file)
@@ -0,0 +1,47 @@
+package main
+
+import (
+    "log"
+
+    "github.com/juanfont/headscale/integration"
+    "github.com/juanfont/headscale/integration/tsic"
+    "github.com/ory/dockertest/v3"
+)
+
+func main() {
+    log.Printf("creating docker pool")
+    pool, err := dockertest.NewPool("")
+    if err != nil {
+        log.Fatalf("could not connect to docker: %s", err)
+    }
+
+    log.Printf("creating docker network")
+    network, err := pool.CreateNetwork("docker-integration-net")
+    if err != nil {
+        log.Fatalf("failed to create or get network: %s", err)
+    }
+
+    for _, version := range integration.TailscaleVersions {
+        log.Printf("creating container image for Tailscale (%s)", version)
+
+        tsClient, err := tsic.New(
+            pool,
+            version,
+            network,
+        )
+        if err != nil {
+            log.Fatalf("failed to create tailscale node: %s", err)
+        }
+
+        err = tsClient.Shutdown()
+        if err != nil {
+            log.Fatalf("failed to shut down container: %s", err)
+        }
+    }
+
+    network.Close()
+    err = pool.RemoveNetwork(network)
+    if err != nil {
+        log.Fatalf("failed to remove network: %s", err)
+    }
+}
cmd/gh-action-integration-generator/main.go: 169 lines (new file)
@@ -0,0 +1,169 @@
+package main
+
+//go:generate go run ./main.go
+
+import (
+    "bytes"
+    "fmt"
+    "log"
+    "os"
+    "os/exec"
+    "path"
+    "path/filepath"
+    "strings"
+    "text/template"
+)
+
+var (
+    githubWorkflowPath = "../../.github/workflows/"
+    jobFileNameTemplate = `test-integration-v2-%s.yaml`
+    jobTemplate = template.Must(
+        template.New("jobTemplate").
+            Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
+# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
+
+name: Integration Test v2 - {{.Name}}
+
+on: [pull_request]
+
+concurrency:
+  group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
+  cancel-in-progress: true
+
+jobs:
+  test:
+    runs-on: ubuntu-latest
+
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          fetch-depth: 2
+
+      - name: Get changed files
+        id: changed-files
+        uses: tj-actions/changed-files@v34
+        with:
+          files: |
+            *.nix
+            go.*
+            **/*.go
+            integration_test/
+            config-example.yaml
+
+      - uses: cachix/install-nix-action@v18
+        if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
+
+      - name: Run general integration tests
+        if: steps.changed-files.outputs.any_changed == 'true'
+        run: |
+          nix develop --command -- docker run \
+            --tty --rm \
+            --volume ~/.cache/hs-integration-go:/go \
+            --name headscale-test-suite \
+            --volume $PWD:$PWD -w $PWD/integration \
+            --volume /var/run/docker.sock:/var/run/docker.sock \
+            --volume $PWD/control_logs:/tmp/control \
+            golang:1 \
+            go test ./... \
+            -tags ts2019 \
+            -failfast \
+            -timeout 120m \
+            -parallel 1 \
+            -run "^{{.Name}}$"
+
+      - uses: actions/upload-artifact@v3
+        if: always() && steps.changed-files.outputs.any_changed == 'true'
+        with:
+          name: logs
+          path: "control_logs/*.log"
+
+      - uses: actions/upload-artifact@v3
+        if: always() && steps.changed-files.outputs.any_changed == 'true'
+        with:
+          name: pprof
+          path: "control_logs/*.pprof.tar"
+`),
+    )
+)
+
+const workflowFilePerm = 0o600
+
+func removeTests() {
+    glob := fmt.Sprintf(jobFileNameTemplate, "*")
+
+    files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
+    if err != nil {
+        log.Fatalf("failed to find test files")
+    }
+
+    for _, file := range files {
+        err := os.Remove(file)
+        if err != nil {
+            log.Printf("failed to remove: %s", err)
+        }
+    }
+}
+
+func findTests() []string {
+    rgBin, err := exec.LookPath("rg")
+    if err != nil {
+        log.Fatalf("failed to find rg (ripgrep) binary")
+    }
+
+    args := []string{
+        "--regexp", "func (Test.+)\\(.*",
+        "../../integration/",
+        "--replace", "$1",
+        "--sort", "path",
+        "--no-line-number",
+        "--no-filename",
+        "--no-heading",
+    }
+
+    log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
+
+    ripgrep := exec.Command(
+        rgBin,
+        args...,
+    )
+
+    result, err := ripgrep.CombinedOutput()
+    if err != nil {
+        log.Printf("out: %s", result)
+        log.Fatalf("failed to run ripgrep: %s", err)
+    }
+
+    tests := strings.Split(string(result), "\n")
+    tests = tests[:len(tests)-1]
+
+    return tests
+}
+
+func main() {
+    type testConfig struct {
+        Name string
+    }
+
+    tests := findTests()
+
+    removeTests()
+
+    for _, test := range tests {
+        log.Printf("generating workflow for %s", test)
+
+        var content bytes.Buffer
+
+        if err := jobTemplate.Execute(&content, testConfig{
+            Name: test,
+        }); err != nil {
+            log.Fatalf("failed to render template: %s", err)
+        }
+
+        testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
+
+        err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
+        if err != nil {
+            log.Fatalf("failed to write github job: %s", err)
+        }
+    }
+}
cmd/headscale/cli/configtest.go: 22 lines (new file)
@@ -0,0 +1,22 @@
+package cli
+
+import (
+    "github.com/rs/zerolog/log"
+    "github.com/spf13/cobra"
+)
+
+func init() {
+    rootCmd.AddCommand(configTestCmd)
+}
+
+var configTestCmd = &cobra.Command{
+    Use: "configtest",
+    Short: "Test the configuration.",
+    Long: "Run a test of the configuration and exit.",
+    Run: func(cmd *cobra.Command, args []string) {
+        _, err := getHeadscaleApp()
+        if err != nil {
+            log.Fatal().Caller().Err(err).Msg("Error initializing")
+        }
+    },
+}
@@ -27,8 +27,14 @@ func init() {
     if err != nil {
         log.Fatal().Err(err).Msg("")
     }
-    createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
-    err = createNodeCmd.MarkFlagRequired("namespace")
+    createNodeCmd.Flags().StringP("user", "u", "", "User")
+
+    createNodeCmd.Flags().StringP("namespace", "n", "", "User")
+    createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
+    createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
+    createNodeNamespaceFlag.Hidden = true
+
+    err = createNodeCmd.MarkFlagRequired("user")
     if err != nil {
         log.Fatal().Err(err).Msg("")
     }
@@ -55,9 +61,9 @@ var createNodeCmd = &cobra.Command{
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
 
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
-            ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
+            ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 
             return
         }
@@ -112,7 +118,7 @@ var createNodeCmd = &cobra.Command{
         request := &v1.DebugCreateMachineRequest{
             Key: machineKey,
             Name: name,
-            Namespace: namespace,
+            User: user,
             Routes: routes,
         }
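Each command keeps the old --namespace flag alive but hides it and marks it deprecated, so existing scripts keep working while --user becomes the documented flag. A self-contained sketch of that pattern with spf13/cobra follows; the command name, message, and fallback logic are placeholders, not headscale's exact behaviour.

package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

func main() {
    cmd := &cobra.Command{
        Use: "demo",
        Run: func(cmd *cobra.Command, args []string) {
            user, _ := cmd.Flags().GetString("user")
            // Illustrative fallback: read the deprecated flag if --user was not set.
            if user == "" {
                user, _ = cmd.Flags().GetString("namespace")
            }
            fmt.Println("user:", user)
        },
    }

    cmd.Flags().StringP("user", "u", "", "User")

    // Keep the old spelling, but hide it and warn when it is used.
    cmd.Flags().StringP("namespace", "n", "", "User")
    nsFlag := cmd.Flags().Lookup("namespace")
    nsFlag.Deprecated = "use --user"
    nsFlag.Hidden = true

    cmd.SetArgs([]string{"--namespace", "alice"})
    _ = cmd.Execute()
}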
@@ -16,10 +16,11 @@ const (
     errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
     errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
     errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
-    accessTTL = 10 * time.Minute
     refreshTTL = 60 * time.Minute
 )
 
+var accessTTL = 2 * time.Minute
+
 func init() {
     rootCmd.AddCommand(mockOidcCmd)
 }
@@ -54,6 +55,16 @@ func mockOIDC() error {
     if portStr == "" {
         return errMockOidcPortNotDefined
     }
+    accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
+    if accessTTLOverride != "" {
+        newTTL, err := time.ParseDuration(accessTTLOverride)
+        if err != nil {
+            return err
+        }
+        accessTTL = newTTL
+    }
+
+    log.Info().Msgf("Access token TTL: %s", accessTTL)
+
     port, err := strconv.Atoi(portStr)
     if err != nil {
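The mock OIDC server now reads its access-token TTL from the MOCKOIDC_ACCESS_TTL environment variable. A minimal standalone sketch of the same override outside the headscale CLI; the default and the variable name mirror the diff, the rest is illustrative.

package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    // Default matches the new value in the diff; any valid time.Duration
    // string such as "30s" or "5m" overrides it.
    accessTTL := 2 * time.Minute

    if override := os.Getenv("MOCKOIDC_ACCESS_TTL"); override != "" {
        ttl, err := time.ParseDuration(override)
        if err != nil {
            fmt.Fprintf(os.Stderr, "invalid MOCKOIDC_ACCESS_TTL: %v\n", err)
            os.Exit(1)
        }
        accessTTL = ttl
    }

    fmt.Println("Access token TTL:", accessTTL)
}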
@@ -19,12 +19,24 @@ import (
 func init() {
     rootCmd.AddCommand(nodeCmd)
-    listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
+    listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
     listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
+
+    listNodesCmd.Flags().StringP("namespace", "n", "", "User")
+    listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
+    listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
+    listNodesNamespaceFlag.Hidden = true
+
     nodeCmd.AddCommand(listNodesCmd)
 
-    registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
-    err := registerNodeCmd.MarkFlagRequired("namespace")
+    registerNodeCmd.Flags().StringP("user", "u", "", "User")
+
+    registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
+    registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
+    registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
+    registerNodeNamespaceFlag.Hidden = true
+
+    err := registerNodeCmd.MarkFlagRequired("user")
     if err != nil {
         log.Fatalf(err.Error())
     }
@@ -63,9 +75,14 @@ func init() {
         log.Fatalf(err.Error())
     }
 
-    moveNodeCmd.Flags().StringP("namespace", "n", "", "New namespace")
+    moveNodeCmd.Flags().StringP("user", "u", "", "New user")
 
-    err = moveNodeCmd.MarkFlagRequired("namespace")
+    moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
+    moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
+    moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
+    moveNodeNamespaceFlag.Hidden = true
+
+    err = moveNodeCmd.MarkFlagRequired("user")
     if err != nil {
         log.Fatalf(err.Error())
     }
@@ -93,9 +110,9 @@ var registerNodeCmd = &cobra.Command{
     Short: "Registers a machine to your network",
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
-            ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
+            ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 
             return
         }
@@ -117,7 +134,7 @@ var registerNodeCmd = &cobra.Command{
 
         request := &v1.RegisterMachineRequest{
             Key: machineKey,
-            Namespace: namespace,
+            User: user,
         }
 
         response, err := client.RegisterMachine(ctx, request)
@@ -134,7 +151,9 @@ var registerNodeCmd = &cobra.Command{
             return
         }
 
-        SuccessOutput(response.Machine, fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
+        SuccessOutput(
+            response.Machine,
+            fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
     },
 }
 
@@ -144,9 +163,9 @@ var listNodesCmd = &cobra.Command{
     Aliases: []string{"ls", "show"},
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
-            ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
+            ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 
             return
         }
@@ -162,7 +181,7 @@ var listNodesCmd = &cobra.Command{
         defer conn.Close()
 
         request := &v1.ListMachinesRequest{
-            Namespace: namespace,
+            User: user,
         }
 
         response, err := client.ListMachines(ctx, request)
@@ -182,7 +201,7 @@ var listNodesCmd = &cobra.Command{
             return
         }
 
-        tableData, err := nodesToPtables(namespace, showTags, response.Machines)
+        tableData, err := nodesToPtables(user, showTags, response.Machines)
         if err != nil {
             ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
 
@@ -386,7 +405,7 @@ var deleteNodeCmd = &cobra.Command{
 
 var moveNodeCmd = &cobra.Command{
     Use: "move",
-    Short: "Move node to another namespace",
+    Short: "Move node to another user",
     Aliases: []string{"mv"},
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
@@ -402,11 +421,11 @@ var moveNodeCmd = &cobra.Command{
             return
         }
 
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
             ErrorOutput(
                 err,
-                fmt.Sprintf("Error getting namespace: %s", err),
+                fmt.Sprintf("Error getting user: %s", err),
                 output,
             )
 
@@ -437,7 +456,7 @@ var moveNodeCmd = &cobra.Command{
 
         moveRequest := &v1.MoveMachineRequest{
             MachineId: identifier,
-            Namespace: namespace,
+            User: user,
         }
 
         moveResponse, err := client.MoveMachine(ctx, moveRequest)
@@ -454,12 +473,12 @@ var moveNodeCmd = &cobra.Command{
             return
         }
 
-        SuccessOutput(moveResponse.Machine, "Node moved to another namespace", output)
+        SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
     },
 }
 
 func nodesToPtables(
-    currentNamespace string,
+    currentUser string,
     showTags bool,
     machines []*v1.Machine,
 ) (pterm.TableData, error) {
@@ -467,11 +486,13 @@ func nodesToPtables(
     "ID",
     "Hostname",
     "Name",
+    "MachineKey",
     "NodeKey",
-    "Namespace",
+    "User",
     "IP addresses",
     "Ephemeral",
     "Last seen",
+    "Expiration",
     "Online",
     "Expired",
     }
@@ -498,12 +519,24 @@ func nodesToPtables(
         }
 
         var expiry time.Time
+        var expiryTime string
         if machine.Expiry != nil {
             expiry = machine.Expiry.AsTime()
+            expiryTime = expiry.Format("2006-01-02 15:04:05")
+        } else {
+            expiryTime = "N/A"
+        }
+
+        var machineKey key.MachinePublic
+        err := machineKey.UnmarshalText(
+            []byte(headscale.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
+        )
+        if err != nil {
+            machineKey = key.MachinePublic{}
         }
 
         var nodeKey key.NodePublic
-        err := nodeKey.UnmarshalText(
+        err = nodeKey.UnmarshalText(
             []byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
         )
         if err != nil {
@@ -511,9 +544,7 @@ func nodesToPtables(
         }
 
         var online string
-        if lastSeen.After(
-            time.Now().Add(-5 * time.Minute),
-        ) { // TODO: Find a better way to reliably show if online
+        if machine.Online {
             online = pterm.LightGreen("online")
         } else {
             online = pterm.LightRed("offline")
@@ -546,12 +577,12 @@ func nodesToPtables(
         }
         validTags = strings.TrimLeft(validTags, ",")
 
-        var namespace string
-        if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
-            namespace = pterm.LightMagenta(machine.Namespace.Name)
+        var user string
+        if currentUser == "" || (currentUser == machine.User.Name) {
+            user = pterm.LightMagenta(machine.User.Name)
         } else {
-            // Shared into this namespace
-            namespace = pterm.LightYellow(machine.Namespace.Name)
+            // Shared into this user
+            user = pterm.LightYellow(machine.User.Name)
         }
 
         var IPV4Address string
@@ -568,11 +599,13 @@ func nodesToPtables(
             strconv.FormatUint(machine.Id, headscale.Base10),
             machine.Name,
             machine.GetGivenName(),
+            machineKey.ShortString(),
             nodeKey.ShortString(),
-            namespace,
+            user,
             strings.Join([]string{IPV4Address, IPV6Address}, ", "),
             strconv.FormatBool(ephemeral),
             lastSeenTime,
+            expiryTime,
             online,
             expired,
         }
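The node table now prints an explicit Expiration column, falling back to N/A when no expiry is set. A small sketch of that formatting decision in isolation; the timestamp layout comes from the diff, everything else is illustrative.

package main

import (
    "fmt"
    "time"
)

// formatExpiry mirrors the table column above: a nil expiry renders as "N/A",
// otherwise the time is printed with the "2006-01-02 15:04:05" layout.
func formatExpiry(expiry *time.Time) string {
    if expiry == nil {
        return "N/A"
    }
    return expiry.Format("2006-01-02 15:04:05")
}

func main() {
    fmt.Println(formatExpiry(nil))
    in30d := time.Now().Add(30 * 24 * time.Hour)
    fmt.Println(formatExpiry(&in30d))
}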
@@ -20,8 +20,14 @@ const (
 func init() {
     rootCmd.AddCommand(preauthkeysCmd)
-    preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
-    err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
+    preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
+
+    preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
+    pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
+    pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
+    pakNamespaceFlag.Hidden = true
+
+    err := preauthkeysCmd.MarkPersistentFlagRequired("user")
     if err != nil {
         log.Fatal().Err(err).Msg("")
     }
@@ -46,14 +52,14 @@ var preauthkeysCmd = &cobra.Command{
 
 var listPreAuthKeys = &cobra.Command{
     Use: "list",
-    Short: "List the preauthkeys for this namespace",
+    Short: "List the preauthkeys for this user",
     Aliases: []string{"ls", "show"},
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
 
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
-            ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
+            ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 
             return
         }
@@ -63,7 +69,7 @@ var listPreAuthKeys = &cobra.Command{
         defer conn.Close()
 
         request := &v1.ListPreAuthKeysRequest{
-            Namespace: namespace,
+            User: user,
         }
 
         response, err := client.ListPreAuthKeys(ctx, request)
@@ -143,14 +149,14 @@ var listPreAuthKeys = &cobra.Command{
 
 var createPreAuthKeyCmd = &cobra.Command{
     Use: "create",
-    Short: "Creates a new preauthkey in the specified namespace",
+    Short: "Creates a new preauthkey in the specified user",
     Aliases: []string{"c", "new"},
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
 
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
-            ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
+            ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 
             return
         }
@@ -162,11 +168,11 @@ var createPreAuthKeyCmd = &cobra.Command{
         log.Trace().
             Bool("reusable", reusable).
             Bool("ephemeral", ephemeral).
-            Str("namespace", namespace).
+            Str("user", user).
             Msg("Preparing to create preauthkey")
 
         request := &v1.CreatePreAuthKeyRequest{
-            Namespace: namespace,
+            User: user,
             Reusable: reusable,
             Ephemeral: ephemeral,
             AclTags: tags,
@@ -225,9 +231,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
     },
     Run: func(cmd *cobra.Command, args []string) {
         output, _ := cmd.Flags().GetString("output")
-        namespace, err := cmd.Flags().GetString("namespace")
+        user, err := cmd.Flags().GetString("user")
         if err != nil {
-            ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
+            ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 
             return
         }
@@ -237,7 +243,7 @@ var expirePreAuthKeyCmd = &cobra.Command{
         defer conn.Close()
 
         request := &v1.ExpirePreAuthKeyRequest{
-            Namespace: namespace,
+            User: user,
             Key: args[0],
         }
@@ -12,10 +12,15 @@ import (
 	"github.com/tcnksm/go-latest"
 )
 
+const (
+	deprecateNamespaceMessage = "use --user"
+)
+
 var cfgFile string = ""
 
 func init() {
-	if len(os.Args) > 1 && (os.Args[1] == "version" || os.Args[1] == "mockoidc") {
+	if len(os.Args) > 1 &&
+		(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
 		return
 	}
 
|
@@ -3,37 +3,45 @@ package cli
|
|||||||
import (
|
import (
|
||||||
"fmt"
|
"fmt"
|
||||||
"log"
|
"log"
|
||||||
|
"net/netip"
|
||||||
"strconv"
|
"strconv"
|
||||||
|
|
||||||
|
"github.com/juanfont/headscale"
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/pterm/pterm"
|
"github.com/pterm/pterm"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"google.golang.org/grpc/status"
|
"google.golang.org/grpc/status"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
Base10 = 10
|
||||||
|
)
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(routesCmd)
|
rootCmd.AddCommand(routesCmd)
|
||||||
|
|
||||||
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
err := listRoutesCmd.MarkFlagRequired("identifier")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf(err.Error())
|
|
||||||
}
|
|
||||||
routesCmd.AddCommand(listRoutesCmd)
|
routesCmd.AddCommand(listRoutesCmd)
|
||||||
|
|
||||||
enableRouteCmd.Flags().
|
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
|
||||||
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
|
err := enableRouteCmd.MarkFlagRequired("route")
|
||||||
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
|
||||||
enableRouteCmd.Flags().BoolP("all", "a", false, "All routes from host")
|
|
||||||
|
|
||||||
err = enableRouteCmd.MarkFlagRequired("identifier")
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf(err.Error())
|
log.Fatalf(err.Error())
|
||||||
}
|
}
|
||||||
|
|
||||||
routesCmd.AddCommand(enableRouteCmd)
|
routesCmd.AddCommand(enableRouteCmd)
|
||||||
|
|
||||||
nodeCmd.AddCommand(routesCmd)
|
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
|
||||||
|
err = disableRouteCmd.MarkFlagRequired("route")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf(err.Error())
|
||||||
|
}
|
||||||
|
routesCmd.AddCommand(disableRouteCmd)
|
||||||
|
|
||||||
|
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
|
||||||
|
err = deleteRouteCmd.MarkFlagRequired("route")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf(err.Error())
|
||||||
|
}
|
||||||
|
routesCmd.AddCommand(deleteRouteCmd)
|
||||||
}
|
}
|
||||||
|
|
||||||
var routesCmd = &cobra.Command{
|
var routesCmd = &cobra.Command{
|
||||||
@@ -44,7 +52,7 @@ var routesCmd = &cobra.Command{
 
 var listRoutesCmd = &cobra.Command{
 	Use:     "list",
-	Short:   "List routes advertised and enabled by a given node",
+	Short:   "List all routes",
 	Aliases: []string{"ls", "show"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
@@ -64,11 +72,10 @@ var listRoutesCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()
 
-		request := &v1.GetMachineRouteRequest{
-			MachineId: machineID,
-		}
+		var routes []*v1.Route
 
-		response, err := client.GetMachineRoute(ctx, request)
+		if machineID == 0 {
+			response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
 			if err != nil {
 				ErrorOutput(
 					err,
@@ -85,7 +92,31 @@ var listRoutesCmd = &cobra.Command{
 				return
 			}
 
-		tableData := routesToPtables(response.Routes)
+			routes = response.Routes
+		} else {
+			response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
+				MachineId: machineID,
+			})
+			if err != nil {
+				ErrorOutput(
+					err,
+					fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
+					output,
+				)
+
+				return
+			}
+
+			if output != "" {
+				SuccessOutput(response.Routes, "", output)
+
+				return
+			}
+
+			routes = response.Routes
+		}
+
+		tableData := routesToPtables(routes)
 		if err != nil {
 			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
 
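The new listing flow above picks between a global and a per-machine RPC depending on whether an identifier was supplied. A condensed sketch of the same branching against the generated client; the helper name and signature are illustrative, not part of the repository:

package main

import (
	"context"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
)

// listRoutes mirrors listRoutesCmd above: no machine ID means "all routes",
// otherwise only the routes of that machine are returned.
func listRoutes(ctx context.Context, client v1.HeadscaleServiceClient, machineID uint64) ([]*v1.Route, error) {
	if machineID == 0 {
		resp, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
		if err != nil {
			return nil, err
		}

		return resp.GetRoutes(), nil
	}

	resp, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{MachineId: machineID})
	if err != nil {
		return nil, err
	}

	return resp.GetRoutes(), nil
}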
@@ -107,16 +138,12 @@ var listRoutesCmd = &cobra.Command{
 
 var enableRouteCmd = &cobra.Command{
 	Use:   "enable",
-	Short: "Set the enabled routes for a given node",
-	Long: `This command will take a list of routes that will _replace_
-the current set of routes on a given node.
-If you would like to disable a route, simply run the command again, but
-omit the route you do not want to enable.
-`,
+	Short: "Set a route as enabled",
+	Long:  `This command will make as enabled a given route.`,
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
 
-		machineID, err := cmd.Flags().GetUint64("identifier")
+		routeID, err := cmd.Flags().GetUint64("route")
 		if err != nil {
 			ErrorOutput(
 				err,
@@ -131,52 +158,13 @@ omit the route you do not want to enable.
 		defer cancel()
 		defer conn.Close()
 
-		var routes []string
-
-		isAll, _ := cmd.Flags().GetBool("all")
-		if isAll {
-			response, err := client.GetMachineRoute(ctx, &v1.GetMachineRouteRequest{
-				MachineId: machineID,
+		response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
+			RouteId: routeID,
 		})
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf(
-					"Cannot get machine routes: %s\n",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
-
-			return
-		}
-		routes = response.GetRoutes().GetAdvertisedRoutes()
-		} else {
-			routes, err = cmd.Flags().GetStringSlice("route")
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf("Error getting routes from flag: %s", err),
-					output,
-				)
-
-				return
-			}
-		}
-
-		request := &v1.EnableMachineRoutesRequest{
-			MachineId: machineID,
-			Routes:    routes,
-		}
-
-		response, err := client.EnableMachineRoutes(ctx, request)
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot register machine: %s\n",
-					status.Convert(err).Message(),
-				),
+				fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
 				output,
 			)
 
@@ -184,50 +172,127 @@ omit the route you do not want to enable.
 		}
 
 		if output != "" {
-			SuccessOutput(response.Routes, "", output)
+			SuccessOutput(response, "", output)
 
 			return
 		}
 
-		tableData := routesToPtables(response.Routes)
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
-
-			return
-		}
-
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+	},
+}
+
+var disableRouteCmd = &cobra.Command{
+	Use:   "disable",
+	Short: "Set as disabled a given route",
+	Long:  `This command will make as disabled a given route.`,
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		routeID, err := cmd.Flags().GetUint64("route")
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
+				fmt.Sprintf("Error getting machine id from flag: %s", err),
 				output,
 			)
 
 			return
 		}
+
+		ctx, client, conn, cancel := getHeadscaleCLIClient()
+		defer cancel()
+		defer conn.Close()
+
+		response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
+			RouteId: routeID,
+		})
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
+				output,
+			)
+
+			return
+		}
+
+		if output != "" {
+			SuccessOutput(response, "", output)
+
+			return
+		}
+	},
+}
+
+var deleteRouteCmd = &cobra.Command{
+	Use:   "delete",
+	Short: "Delete a given route",
+	Long:  `This command will delete a given route.`,
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		routeID, err := cmd.Flags().GetUint64("route")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting machine id from flag: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		ctx, client, conn, cancel := getHeadscaleCLIClient()
+		defer cancel()
+		defer conn.Close()
+
+		response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
+			RouteId: routeID,
+		})
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
+				output,
+			)
+
+			return
+		}
+
+		if output != "" {
+			SuccessOutput(response, "", output)
+
+			return
+		}
 	},
 }
 
 // routesToPtables converts the list of routes to a nice table.
-func routesToPtables(routes *v1.Routes) pterm.TableData {
-	tableData := pterm.TableData{{"Route", "Enabled"}}
+func routesToPtables(routes []*v1.Route) pterm.TableData {
+	tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}
 
-	for _, route := range routes.GetAdvertisedRoutes() {
-		enabled := isStringInSlice(routes.EnabledRoutes, route)
-
-		tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
+	for _, route := range routes {
+		var isPrimaryStr string
+		prefix, err := netip.ParsePrefix(route.Prefix)
+		if err != nil {
+			log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
+
+			continue
+		}
+		if prefix == headscale.ExitRouteV4 || prefix == headscale.ExitRouteV6 {
+			isPrimaryStr = "-"
+		} else {
+			isPrimaryStr = strconv.FormatBool(route.IsPrimary)
+		}
+
+		tableData = append(tableData,
+			[]string{
+				strconv.FormatUint(route.Id, Base10),
+				route.Machine.GivenName,
+				route.Prefix,
+				strconv.FormatBool(route.Advertised),
+				strconv.FormatBool(route.Enabled),
+				isPrimaryStr,
+			})
 	}
 
 	return tableData
 }
-
-func isStringInSlice(strs []string, s string) bool {
-	for _, s2 := range strs {
-		if s == s2 {
-			return true
-		}
-	}
-
-	return false
-}
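Routes are now toggled one at a time by numeric ID rather than by replacing a prefix list per node. A hedged sketch of how a caller might combine the new RPCs shown above; the helper is hypothetical and assumes the generated HeadscaleService client:

package main

import (
	"context"
	"fmt"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
)

// enableAdvertisedRoutes enables every route a machine advertises but has not
// enabled yet, using the per-route EnableRoute RPC (which takes a route ID,
// not a prefix list). Hypothetical helper, not part of the headscale CLI.
func enableAdvertisedRoutes(ctx context.Context, client v1.HeadscaleServiceClient, machineID uint64) error {
	resp, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{MachineId: machineID})
	if err != nil {
		return err
	}

	for _, route := range resp.GetRoutes() {
		if route.GetAdvertised() && !route.GetEnabled() {
			if _, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{RouteId: route.GetId()}); err != nil {
				return fmt.Errorf("enable route %d: %w", route.GetId(), err)
			}
		}
	}

	return nil
}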
@@ -13,26 +13,26 @@ import (
 )
 
 func init() {
-	rootCmd.AddCommand(namespaceCmd)
-	namespaceCmd.AddCommand(createNamespaceCmd)
-	namespaceCmd.AddCommand(listNamespacesCmd)
-	namespaceCmd.AddCommand(destroyNamespaceCmd)
-	namespaceCmd.AddCommand(renameNamespaceCmd)
+	rootCmd.AddCommand(userCmd)
+	userCmd.AddCommand(createUserCmd)
+	userCmd.AddCommand(listUsersCmd)
+	userCmd.AddCommand(destroyUserCmd)
+	userCmd.AddCommand(renameUserCmd)
 }
 
 const (
 	errMissingParameter = headscale.Error("missing parameters")
 )
 
-var namespaceCmd = &cobra.Command{
-	Use:     "namespaces",
-	Short:   "Manage the namespaces of Headscale",
-	Aliases: []string{"namespace", "ns", "user", "users"},
+var userCmd = &cobra.Command{
+	Use:     "users",
+	Short:   "Manage the users of Headscale",
+	Aliases: []string{"user", "namespace", "namespaces", "ns"},
 }
 
-var createNamespaceCmd = &cobra.Command{
+var createUserCmd = &cobra.Command{
 	Use:     "create NAME",
-	Short:   "Creates a new namespace",
+	Short:   "Creates a new user",
 	Aliases: []string{"c", "new"},
 	Args: func(cmd *cobra.Command, args []string) error {
 		if len(args) < 1 {
@@ -44,7 +44,7 @@ var createNamespaceCmd = &cobra.Command{
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
 
-		namespaceName := args[0]
+		userName := args[0]
 
 		ctx, client, conn, cancel := getHeadscaleCLIClient()
 		defer cancel()
@@ -52,15 +52,15 @@ var createNamespaceCmd = &cobra.Command{
 
 		log.Trace().Interface("client", client).Msg("Obtained gRPC client")
 
-		request := &v1.CreateNamespaceRequest{Name: namespaceName}
+		request := &v1.CreateUserRequest{Name: userName}
 
-		log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
-		response, err := client.CreateNamespace(ctx, request)
+		log.Trace().Interface("request", request).Msg("Sending CreateUser request")
+		response, err := client.CreateUser(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
 				fmt.Sprintf(
-					"Cannot create namespace: %s",
+					"Cannot create user: %s",
 					status.Convert(err).Message(),
 				),
 				output,
@@ -69,13 +69,13 @@ var createNamespaceCmd = &cobra.Command{
 			return
 		}
 
-		SuccessOutput(response.Namespace, "Namespace created", output)
+		SuccessOutput(response.User, "User created", output)
 	},
 }
 
-var destroyNamespaceCmd = &cobra.Command{
+var destroyUserCmd = &cobra.Command{
 	Use:     "destroy NAME",
-	Short:   "Destroys a namespace",
+	Short:   "Destroys a user",
 	Aliases: []string{"delete"},
 	Args: func(cmd *cobra.Command, args []string) error {
 		if len(args) < 1 {
@@ -87,17 +87,17 @@ var destroyNamespaceCmd = &cobra.Command{
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
 
-		namespaceName := args[0]
+		userName := args[0]
 
-		request := &v1.GetNamespaceRequest{
-			Name: namespaceName,
+		request := &v1.GetUserRequest{
+			Name: userName,
 		}
 
 		ctx, client, conn, cancel := getHeadscaleCLIClient()
 		defer cancel()
 		defer conn.Close()
 
-		_, err := client.GetNamespace(ctx, request)
+		_, err := client.GetUser(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
@@ -113,8 +113,8 @@ var destroyNamespaceCmd = &cobra.Command{
 		if !force {
 			prompt := &survey.Confirm{
 				Message: fmt.Sprintf(
-					"Do you want to remove the namespace '%s' and any associated preauthkeys?",
-					namespaceName,
+					"Do you want to remove the user '%s' and any associated preauthkeys?",
+					userName,
 				),
 			}
 			err := survey.AskOne(prompt, &confirm)
@@ -124,14 +124,14 @@ var destroyNamespaceCmd = &cobra.Command{
 		}
 
 		if confirm || force {
-			request := &v1.DeleteNamespaceRequest{Name: namespaceName}
+			request := &v1.DeleteUserRequest{Name: userName}
 
-			response, err := client.DeleteNamespace(ctx, request)
+			response, err := client.DeleteUser(ctx, request)
 			if err != nil {
 				ErrorOutput(
 					err,
 					fmt.Sprintf(
-						"Cannot destroy namespace: %s",
+						"Cannot destroy user: %s",
 						status.Convert(err).Message(),
 					),
 					output,
@@ -139,16 +139,16 @@ var destroyNamespaceCmd = &cobra.Command{
 
 				return
 			}
-			SuccessOutput(response, "Namespace destroyed", output)
+			SuccessOutput(response, "User destroyed", output)
 		} else {
-			SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
+			SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
 		}
 	},
 }
 
-var listNamespacesCmd = &cobra.Command{
+var listUsersCmd = &cobra.Command{
 	Use:     "list",
-	Short:   "List all the namespaces",
+	Short:   "List all the users",
 	Aliases: []string{"ls", "show"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
@@ -157,13 +157,13 @@ var listNamespacesCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()
 
-		request := &v1.ListNamespacesRequest{}
+		request := &v1.ListUsersRequest{}
 
-		response, err := client.ListNamespaces(ctx, request)
+		response, err := client.ListUsers(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
+				fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
 				output,
 			)
 
@@ -171,19 +171,19 @@ var listNamespacesCmd = &cobra.Command{
 		}
 
 		if output != "" {
-			SuccessOutput(response.Namespaces, "", output)
+			SuccessOutput(response.Users, "", output)
 
 			return
 		}
 
 		tableData := pterm.TableData{{"ID", "Name", "Created"}}
-		for _, namespace := range response.GetNamespaces() {
+		for _, user := range response.GetUsers() {
 			tableData = append(
 				tableData,
 				[]string{
-					namespace.GetId(),
-					namespace.GetName(),
-					namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
+					user.GetId(),
+					user.GetName(),
+					user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
 				},
 			)
 		}
@@ -200,9 +200,9 @@ var listNamespacesCmd = &cobra.Command{
 	},
 }
 
-var renameNamespaceCmd = &cobra.Command{
+var renameUserCmd = &cobra.Command{
 	Use:     "rename OLD_NAME NEW_NAME",
-	Short:   "Renames a namespace",
+	Short:   "Renames a user",
 	Aliases: []string{"mv"},
 	Args: func(cmd *cobra.Command, args []string) error {
 		expectedArguments := 2
@@ -219,17 +219,17 @@ var renameNamespaceCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()
 
-		request := &v1.RenameNamespaceRequest{
+		request := &v1.RenameUserRequest{
 			OldName: args[0],
 			NewName: args[1],
 		}
 
-		response, err := client.RenameNamespace(ctx, request)
+		response, err := client.RenameUser(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
 				fmt.Sprintf(
-					"Cannot rename namespace: %s",
+					"Cannot rename user: %s",
 					status.Convert(err).Message(),
 				),
 				output,
@@ -238,6 +238,6 @@ var renameNamespaceCmd = &cobra.Command{
 			return
 		}
 
-		SuccessOutput(response.Namespace, "Namespace renamed", output)
+		SuccessOutput(response.User, "User renamed", output)
 	},
 }
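The same rename applies to every user RPC. A small, illustrative round trip over the calls shown above; the function and the user names are placeholders, not part of the repository:

package main

import (
	"context"
	"fmt"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
)

// createRenameList exercises the renamed user RPCs end to end.
// Error handling is kept minimal for brevity.
func createRenameList(ctx context.Context, client v1.HeadscaleServiceClient) error {
	if _, err := client.CreateUser(ctx, &v1.CreateUserRequest{Name: "old-name"}); err != nil {
		return err
	}

	if _, err := client.RenameUser(ctx, &v1.RenameUserRequest{OldName: "old-name", NewName: "new-name"}); err != nil {
		return err
	}

	users, err := client.ListUsers(ctx, &v1.ListUsersRequest{})
	if err != nil {
		return err
	}

	for _, user := range users.GetUsers() {
		fmt.Println(user.GetId(), user.GetName())
	}

	return nil
}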
@@ -14,7 +14,7 @@ import (
 	"google.golang.org/grpc"
 	"google.golang.org/grpc/credentials"
 	"google.golang.org/grpc/credentials/insecure"
-	"gopkg.in/yaml.v2"
+	"gopkg.in/yaml.v3"
 )
 
 const (
@@ -6,11 +6,25 @@ import (
 
 	"github.com/efekarakus/termcolor"
 	"github.com/juanfont/headscale/cmd/headscale/cli"
+	"github.com/pkg/profile"
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
 )
 
 func main() {
+	if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
+		if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
+			err := os.MkdirAll(profilePath, os.ModePerm)
+			if err != nil {
+				log.Fatal().Err(err).Msg("failed to create profiling directory")
+			}
+
+			defer profile.Start(profile.ProfilePath(profilePath)).Stop()
+		} else {
+			defer profile.Start().Stop()
+		}
+	}
+
 	var colors bool
 	switch l := termcolor.SupportLevel(os.Stderr); l {
 	case termcolor.Level16M:
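Profiling is wired up purely through environment variables. A standalone sketch of the same pattern, using github.com/pkg/profile exactly as the change above does; only the directory path is a placeholder:

package main

import (
	"os"

	"github.com/pkg/profile"
	"github.com/rs/zerolog/log"
)

func main() {
	// Same pattern as headscale's main(): profiling is opt-in via environment.
	//   HEADSCALE_PROFILING_ENABLED=1           profile into pkg/profile's default location
	//   HEADSCALE_PROFILING_PATH=/tmp/hsprofile profile into that directory instead
	if _, ok := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); ok {
		if path, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
			if err := os.MkdirAll(path, os.ModePerm); err != nil {
				log.Fatal().Err(err).Msg("failed to create profiling directory")
			}

			defer profile.Start(profile.ProfilePath(path)).Stop()
		} else {
			defer profile.Start().Stop()
		}
	}

	// ... the application would do its work here while the profiler is active.
}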
@@ -55,7 +55,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
 
 	// Test that config file was interpreted correctly
 	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
-	c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
+	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
 	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
 	c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
 	c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -98,7 +98,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
 
 	// Test that config file was interpreted correctly
 	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
-	c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
+	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
 	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
 	c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
 	c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -14,7 +14,9 @@ server_url: http://127.0.0.1:8080
 
 # Address to listen to / bind to on the server
 #
-listen_addr: 0.0.0.0:8080
+# For production:
+# listen_addr: 0.0.0.0:8080
+listen_addr: 127.0.0.1:8080
 
 # Address to listen to /metrics, you may want
 # to keep this endpoint private to your internal
@@ -27,7 +29,10 @@ metrics_listen_addr: 127.0.0.1:9090
 # remotely with the CLI
 # Note: Remote access _only_ works if you have
 # valid certificates.
-grpc_listen_addr: 0.0.0.0:50443
+#
+# For production:
+# grpc_listen_addr: 0.0.0.0:50443
+grpc_listen_addr: 127.0.0.1:50443
 
 # Allow the gRPC admin interface to run in INSECURE
 # mode. This is not recommended as the traffic will
@@ -38,6 +43,7 @@ grpc_allow_insecure: false
 # Private key used to encrypt the traffic between headscale
 # and Tailscale clients.
 # The private key file will be autogenerated if it's missing.
+#
 private_key_path: /var/lib/headscale/private.key
 
 # The Noise section includes specific configuration for the
@@ -52,6 +58,12 @@ noise:
 # List of IP prefixes to allocate tailaddresses from.
 # Each prefix consists of either an IPv4 or IPv6 address,
 # and the associated prefix length, delimited by a slash.
+# It must be within IP ranges supported by the Tailscale
+# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
+# See below:
+# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
+# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
+# Any other range is NOT supported, and it will cause unexpected issues.
 ip_prefixes:
   - fd7a:115c:a1e0::/48
   - 100.64.0.0/10
@@ -119,6 +131,8 @@ node_update_check_interval: 10s
 
 # SQLite config
 db_type: sqlite3
 
+# For production:
 db_path: /var/lib/headscale/db.sqlite
 
 # # Postgres config
@@ -129,6 +143,9 @@ db_path: /var/lib/headscale/db.sqlite
 # db_name: headscale
 # db_user: foo
 # db_pass: bar
+
+# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
+# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
 # db_ssl: false
 
 ### TLS configuration
@@ -147,15 +164,9 @@ acme_email: ""
 # Domain name to request a TLS certificate for:
 tls_letsencrypt_hostname: ""
 
-# Client (Tailscale/Browser) authentication mode (mTLS)
-# Acceptable values:
-# - disabled: client authentication disabled
-# - relaxed: client certificate is required but not verified
-# - enforced: client certificate is required and verified
-tls_client_auth_mode: relaxed
-
 # Path to store certificates and metadata needed by
 # letsencrypt
+# For production:
 tls_letsencrypt_cache_dir: /var/lib/headscale/cache
 
 # Type of ACME challenge to use, currently supported types:
@@ -198,6 +209,18 @@ dns_config:
   nameservers:
     - 1.1.1.1
 
+  # NextDNS (see https://tailscale.com/kb/1218/nextdns/).
+  # "abc123" is example NextDNS ID, replace with yours.
+  #
+  # With metadata sharing:
+  # nameservers:
+  #   - https://dns.nextdns.io/abc123
+  #
+  # Without metadata sharing:
+  # nameservers:
+  #   - 2a07:a8c0::ab:c123
+  #   - 2a07:a8c1::ab:c123
+
   # Split DNS (see https://tailscale.com/kb/1054/dns/),
   # list of search domains and the DNS to query for each one.
   #
@@ -211,6 +234,17 @@ dns_config:
   # Search domains to inject.
   domains: []
 
+  # Extra DNS records
+  # so far only A-records are supported (on the tailscale side)
+  # See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
+  # extra_records:
+  #   - name: "grafana.myvpn.example.com"
+  #     type: "A"
+  #     value: "100.64.0.3"
+  #
+  #   # you can also put it in one line
+  #   - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
+
   # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
   # Only works if there is at least a nameserver defined.
   magic_dns: true
@@ -218,13 +252,12 @@ dns_config:
   # Defines the base domain to create the hostnames for MagicDNS.
   # `base_domain` must be a FQDNs, without the trailing dot.
   # The FQDN of the hosts will be
-  # `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
+  # `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
   base_domain: example.com
 
 # Unix socket used for the CLI to connect without authentication
-# Note: for local development, you probably want to change this to:
-# unix_socket: ./headscale.sock
-unix_socket: /var/run/headscale.sock
+# Note: for production you will want to set this to something like:
+unix_socket: /var/run/headscale/headscale.sock
 unix_socket_permission: "0770"
 #
 # headscale supports experimental OpenID connect support,
@@ -236,26 +269,45 @@ unix_socket_permission: "0770"
 #   issuer: "https://your-oidc.issuer.com/path"
 #   client_id: "your-oidc-client-id"
 #   client_secret: "your-oidc-client-secret"
+#   # Alternatively, set `client_secret_path` to read the secret from the file.
+#   # It resolves environment variables, making integration to systemd's
+#   # `LoadCredential` straightforward:
+#   client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
+#   # client_secret and client_secret_path are mutually exclusive.
 #
-# Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
-# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
+#   # The amount of time from a node is authenticated with OpenID until it
+#   # expires and needs to reauthenticate.
+#   # Setting the value to "0" will mean no expiry.
+#   expiry: 180d
+#
+#   # Use the expiry from the token received from OpenID when the user logged
+#   # in, this will typically lead to frequent need to reauthenticate and should
+#   # only been enabled if you know what you are doing.
+#   # Note: enabling this will cause `oidc.expiry` to be ignored.
+#   use_expiry_from_token: false
+#
+#   # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
+#   # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
 #
 #   scope: ["openid", "profile", "email", "custom"]
 #   extra_params:
 #     domain_hint: example.com
 #
-# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
-# authentication request will be rejected.
+#   # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
+#   # authentication request will be rejected.
 #
 #   allowed_domains:
 #     - example.com
+#   # Note: Groups from keycloak have a leading '/'
+#   allowed_groups:
+#     - /headscale
 #   allowed_users:
 #     - alice@example.com
 #
-# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
-# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
-# If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
-# namespace: `first-name.last-name.example.com`
+#   # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
+#   # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
+#   # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
+#   # user: `first-name.last-name.example.com`
 #
 #   strip_email_domain: true
 
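The commented extra_records block above is read through viper and decoded into tailcfg DNS records (see the config.go change below). A rough sketch of that decoding in isolation; the config file name is an assumption:

package main

import (
	"fmt"

	"github.com/spf13/viper"
	"tailscale.com/tailcfg"
)

func main() {
	// Assumed file name; point this at a config with extra_records uncommented.
	viper.SetConfigFile("config.yaml")
	if err := viper.ReadInConfig(); err != nil {
		panic(err)
	}

	var extraRecords []tailcfg.DNSRecord
	if err := viper.UnmarshalKey("dns_config.extra_records", &extraRecords); err != nil {
		panic(err)
	}

	for _, record := range extraRecords {
		fmt.Printf("%s %s -> %s\n", record.Type, record.Name, record.Value)
	}
}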
169  config.go
@@ -1,20 +1,22 @@
 package headscale
 
 import (
-	"crypto/tls"
 	"errors"
 	"fmt"
 	"io/fs"
 	"net/netip"
 	"net/url"
+	"os"
 	"strings"
 	"time"
 
 	"github.com/coreos/go-oidc/v3/oidc"
+	"github.com/prometheus/common/model"
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/viper"
 	"go4.org/netipx"
+	"tailscale.com/net/tsaddr"
 	"tailscale.com/tailcfg"
 	"tailscale.com/types/dnstype"
 )
@@ -25,6 +27,13 @@ const (
 
 	JSONLogFormat = "json"
 	TextLogFormat = "text"
+
+	defaultOIDCExpiryTime               = 180 * 24 * time.Hour // 180 Days
+	maxDuration           time.Duration = 1<<63 - 1
+)
+
+var errOidcMutuallyExclusive = errors.New(
+	"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
 )
 
 // Config contains the initial Headscale configuration.
@@ -52,7 +61,7 @@ type Config struct {
 	DBname string
 	DBuser string
 	DBpass string
-	DBssl  bool
+	DBssl  string
 
 	TLS TLSConfig
 
@@ -77,7 +86,6 @@ type Config struct {
 type TLSConfig struct {
 	CertPath string
 	KeyPath  string
-	ClientAuthMode tls.ClientAuthType
 
 	LetsEncrypt LetsEncryptConfig
 }
@@ -98,7 +106,10 @@ type OIDCConfig struct {
 	ExtraParams      map[string]string
 	AllowedDomains   []string
 	AllowedUsers     []string
+	AllowedGroups    []string
 	StripEmaildomain bool
+	Expiry             time.Duration
+	UseExpiryFromToken bool
 }
 
 type DERPConfig struct {
@@ -154,7 +165,6 @@ func LoadConfig(path string, isFile bool) error {
 	viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
 	viper.SetDefault("tls_letsencrypt_challenge_type", http01ChallengeType)
-	viper.SetDefault("tls_client_auth_mode", "relaxed")
 
 	viper.SetDefault("log.level", "info")
 	viper.SetDefault("log.format", TextLogFormat)
@@ -165,7 +175,7 @@ func LoadConfig(path string, isFile bool) error {
 	viper.SetDefault("derp.server.enabled", false)
 	viper.SetDefault("derp.server.stun.enabled", true)
 
-	viper.SetDefault("unix_socket", "/var/run/headscale.sock")
+	viper.SetDefault("unix_socket", "/var/run/headscale/headscale.sock")
 	viper.SetDefault("unix_socket_permission", "0o770")
 
 	viper.SetDefault("grpc_listen_addr", ":50443")
@@ -174,9 +184,13 @@ func LoadConfig(path string, isFile bool) error {
 	viper.SetDefault("cli.timeout", "5s")
 	viper.SetDefault("cli.insecure", false)
 
+	viper.SetDefault("db_ssl", false)
+
 	viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
 	viper.SetDefault("oidc.strip_email_domain", true)
 	viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
+	viper.SetDefault("oidc.expiry", "180d")
+	viper.SetDefault("oidc.use_expiry_from_token", false)
 
 	viper.SetDefault("logtail.enabled", false)
 	viper.SetDefault("randomize_client_port", false)
@@ -185,6 +199,10 @@ func LoadConfig(path string, isFile bool) error {
 
 	viper.SetDefault("node_update_check_interval", "10s")
 
+	if IsCLIConfigured() {
+		return nil
+	}
+
 	if err := viper.ReadInConfig(); err != nil {
 		log.Warn().Err(err).Msg("Failed to read configuration from disk")
 
@@ -220,19 +238,6 @@ func LoadConfig(path string, isFile bool) error {
 		errorText += "Fatal config error: server_url must start with https:// or http://\n"
 	}
 
-	_, authModeValid := LookupTLSClientAuthMode(
-		viper.GetString("tls_client_auth_mode"),
-	)
-
-	if !authModeValid {
-		errorText += fmt.Sprintf(
-			"Invalid tls_client_auth_mode supplied: %s. Accepted values: %s, %s, %s.",
-			viper.GetString("tls_client_auth_mode"),
-			DisabledClientAuth,
-			RelaxedClientAuth,
-			EnforcedClientAuth)
-	}
-
 	// Minimum inactivity time out is keepalive timeout (60s) plus a few seconds
 	// to avoid races
 	minInactivityTimeout, _ := time.ParseDuration("65s")
@@ -262,10 +267,6 @@ func LoadConfig(path string, isFile bool) error {
 }
 
 func GetTLSConfig() TLSConfig {
-	tlsClientAuthMode, _ := LookupTLSClientAuthMode(
-		viper.GetString("tls_client_auth_mode"),
-	)
-
 	return TLSConfig{
 		LetsEncrypt: LetsEncryptConfig{
 			Hostname: viper.GetString("tls_letsencrypt_hostname"),
@@ -281,7 +282,6 @@ func GetTLSConfig() TLSConfig {
 		KeyPath: AbsolutePathFromConfigPath(
 			viper.GetString("tls_key_path"),
 		),
-		ClientAuthMode: tlsClientAuthMode,
 	}
 }
@@ -383,10 +383,21 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 	if viper.IsSet("dns_config.nameservers") {
 		nameserversStr := viper.GetStringSlice("dns_config.nameservers")
 
-		nameservers := make([]netip.Addr, len(nameserversStr))
-		resolvers := make([]*dnstype.Resolver, len(nameserversStr))
+		nameservers := []netip.Addr{}
+		resolvers := []*dnstype.Resolver{}
 
-		for index, nameserverStr := range nameserversStr {
+		for _, nameserverStr := range nameserversStr {
+			// Search for explicit DNS-over-HTTPS resolvers
+			if strings.HasPrefix(nameserverStr, "https://") {
+				resolvers = append(resolvers, &dnstype.Resolver{
+					Addr: nameserverStr,
+				})
+
+				// This nameserver can not be parsed as an IP address
+				continue
+			}
+
+			// Parse nameserver as a regular IP
 			nameserver, err := netip.ParseAddr(nameserverStr)
 			if err != nil {
 				log.Error().
@@ -395,10 +406,10 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 					Msgf("Could not parse nameserver IP: %s", nameserverStr)
 			}
 
-			nameservers[index] = nameserver
-			resolvers[index] = &dnstype.Resolver{
+			nameservers = append(nameservers, nameserver)
+			resolvers = append(resolvers, &dnstype.Resolver{
 				Addr: nameserver.String(),
-			}
+			})
 		}
 
 		dnsConfig.Nameservers = nameservers
@@ -411,8 +422,8 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 	}
 
 	if viper.IsSet("dns_config.restricted_nameservers") {
-		if len(dnsConfig.Nameservers) > 0 {
 		dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
+		domains := []string{}
 		restrictedDNS := viper.GetStringMapStringSlice(
 			"dns_config.restricted_nameservers",
 		)
@@ -434,16 +445,14 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 			}
 		}
 			dnsConfig.Routes[domain] = restrictedResolvers
+			domains = append(domains, domain)
 		}
-		} else {
-			log.Warn().
-				Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
-		}
+		dnsConfig.Domains = domains
 	}
 
 	if viper.IsSet("dns_config.domains") {
 		domains := viper.GetStringSlice("dns_config.domains")
-		if len(dnsConfig.Nameservers) > 0 {
+		if len(dnsConfig.Resolvers) > 0 {
 			dnsConfig.Domains = domains
 		} else if domains != nil {
 			log.Warn().
@@ -451,6 +460,20 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 		}
 	}
 
+	if viper.IsSet("dns_config.extra_records") {
+		var extraRecords []tailcfg.DNSRecord
+
+		err := viper.UnmarshalKey("dns_config.extra_records", &extraRecords)
+		if err != nil {
+			log.Error().
+				Str("func", "getDNSConfig").
+				Err(err).
+				Msgf("Could not parse dns_config.extra_records")
+		}
+
+		dnsConfig.ExtraRecords = extraRecords
+	}
+
 	if viper.IsSet("dns_config.magic_dns") {
 		dnsConfig.Proxied = viper.GetBool("dns_config.magic_dns")
 	}
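The nameserver loop above now treats any https:// entry as a DNS-over-HTTPS resolver and everything else as a plain IP. The same classification pulled out into a small standalone helper for clarity; the function name is illustrative:

package main

import (
	"net/netip"
	"strings"

	"tailscale.com/types/dnstype"
)

// splitNameservers mirrors the loop above: "https://" entries become
// DNS-over-HTTPS resolvers verbatim, everything else must parse as an IP.
func splitNameservers(nameserversStr []string) ([]netip.Addr, []*dnstype.Resolver) {
	nameservers := []netip.Addr{}
	resolvers := []*dnstype.Resolver{}

	for _, nameserverStr := range nameserversStr {
		if strings.HasPrefix(nameserverStr, "https://") {
			resolvers = append(resolvers, &dnstype.Resolver{Addr: nameserverStr})

			continue
		}

		nameserver, err := netip.ParseAddr(nameserverStr)
		if err != nil {
			// headscale logs the error; this sketch simply skips the entry.
			continue
		}

		nameservers = append(nameservers, nameserver)
		resolvers = append(resolvers, &dnstype.Resolver{Addr: nameserver.String()})
	}

	return nameservers, resolvers
}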
@@ -469,6 +492,17 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 }
 
 func GetHeadscaleConfig() (*Config, error) {
+	if IsCLIConfigured() {
+		return &Config{
+			CLI: CLIConfig{
+				Address:  viper.GetString("cli.address"),
+				APIKey:   viper.GetString("cli.api_key"),
+				Timeout:  viper.GetDuration("cli.timeout"),
+				Insecure: viper.GetBool("cli.insecure"),
+			},
+		}, nil
+	}
+
 	dnsConfig, baseDomain := GetDNSConfig()
 	derpConfig := GetDERPConfig()
 	logConfig := GetLogTailConfig()
@@ -482,6 +516,29 @@ func GetHeadscaleConfig() (*Config, error) {
 		if err != nil {
 			panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
 		}
+
+		if prefix.Addr().Is4() {
+			builder := netipx.IPSetBuilder{}
+			builder.AddPrefix(tsaddr.CGNATRange())
+			ipSet, _ := builder.IPSet()
+			if !ipSet.ContainsPrefix(prefix) {
+				log.Warn().
+					Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
+						prefixInConfig, tsaddr.CGNATRange())
+			}
+		}
+
+		if prefix.Addr().Is6() {
+			builder := netipx.IPSetBuilder{}
+			builder.AddPrefix(tsaddr.TailscaleULARange())
+			ipSet, _ := builder.IPSet()
+			if !ipSet.ContainsPrefix(prefix) {
+				log.Warn().
+					Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
+						prefixInConfig, tsaddr.TailscaleULARange())
+			}
+		}
+
 		parsedPrefixes = append(parsedPrefixes, prefix)
 	}
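The warning above boils down to a containment check of each configured prefix against Tailscale's CGNAT (IPv4) or ULA (IPv6) range. A compact restatement of that check as a predicate; the helper name is not from the repository:

package main

import (
	"net/netip"

	"go4.org/netipx"
	"tailscale.com/net/tsaddr"
)

// prefixSupported reports whether an ip_prefixes entry sits inside the ranges
// Tailscale clients accept, i.e. the check behind the warning above.
func prefixSupported(prefix netip.Prefix) bool {
	allowed := tsaddr.CGNATRange()
	if prefix.Addr().Is6() {
		allowed = tsaddr.TailscaleULARange()
	}

	builder := netipx.IPSetBuilder{}
	builder.AddPrefix(allowed)

	ipSet, err := builder.IPSet()
	if err != nil {
		return false
	}

	return ipSet.ContainsPrefix(prefix)
}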
@@ -506,6 +563,19 @@ func GetHeadscaleConfig() (*Config, error) {
 			Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
 	}
 
+	oidcClientSecret := viper.GetString("oidc.client_secret")
+	oidcClientSecretPath := viper.GetString("oidc.client_secret_path")
+	if oidcClientSecretPath != "" && oidcClientSecret != "" {
+		return nil, errOidcMutuallyExclusive
+	}
+	if oidcClientSecretPath != "" {
+		secretBytes, err := os.ReadFile(os.ExpandEnv(oidcClientSecretPath))
+		if err != nil {
+			return nil, err
+		}
+		oidcClientSecret = string(secretBytes)
+	}
+
 	return &Config{
 		ServerURL: viper.GetString("server_url"),
 		Addr:      viper.GetString("listen_addr"),
@@ -540,7 +610,7 @@ func GetHeadscaleConfig() (*Config, error) {
 		DBname: viper.GetString("db_name"),
 		DBuser: viper.GetString("db_user"),
 		DBpass: viper.GetString("db_pass"),
-		DBssl:  viper.GetBool("db_ssl"),
+		DBssl:  viper.GetString("db_ssl"),
 
 		TLS: GetTLSConfig(),
 
@@ -558,17 +628,36 @@ func GetHeadscaleConfig() (*Config, error) {
 			),
 			Issuer:           viper.GetString("oidc.issuer"),
 			ClientID:         viper.GetString("oidc.client_id"),
-			ClientSecret:     viper.GetString("oidc.client_secret"),
+			ClientSecret:     oidcClientSecret,
 			Scope:            viper.GetStringSlice("oidc.scope"),
 			ExtraParams:      viper.GetStringMapString("oidc.extra_params"),
 			AllowedDomains:   viper.GetStringSlice("oidc.allowed_domains"),
 			AllowedUsers:     viper.GetStringSlice("oidc.allowed_users"),
+			AllowedGroups:    viper.GetStringSlice("oidc.allowed_groups"),
 			StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
+			Expiry: func() time.Duration {
+				// if set to 0, we assume no expiry
+				if value := viper.GetString("oidc.expiry"); value == "0" {
+					return maxDuration
+				} else {
+					expiry, err := model.ParseDuration(value)
+					if err != nil {
+						log.Warn().Msg("failed to parse oidc.expiry, defaulting back to 180 days")
+
+						return defaultOIDCExpiryTime
+					}
+
+					return time.Duration(expiry)
+				}
+			}(),
+			UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
 		},
 
 		LogTail:             logConfig,
 		RandomizeClientPort: randomizeClientPort,
+
+		ACL: GetACLConfig(),
+
 		CLI: CLIConfig{
 			Address: viper.GetString("cli.address"),
 			APIKey:  viper.GetString("cli.api_key"),
@@ -576,8 +665,10 @@ func GetHeadscaleConfig() (*Config, error) {
 			Insecure: viper.GetBool("cli.insecure"),
 		},
 
-		ACL: GetACLConfig(),
-
 		Log: GetLogConfig(),
 	}, nil
 }
+
+func IsCLIConfigured() bool {
+	return viper.GetString("cli.address") != "" && viper.GetString("cli.api_key") != ""
+}
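The Expiry closure above accepts Prometheus-style durations such as "180d", with "0" meaning no expiry and a parse failure falling back to 180 days. The same logic as a plain function, for reference; the function name is illustrative:

package main

import (
	"time"

	"github.com/prometheus/common/model"
)

// oidcExpiry mirrors the closure above: "0" means no expiry, otherwise the
// value is parsed with Prometheus duration syntax (e.g. "180d"); a parse
// failure falls back to the 180 day default.
func oidcExpiry(value string) time.Duration {
	const defaultOIDCExpiryTime = 180 * 24 * time.Hour
	const maxDuration time.Duration = 1<<63 - 1

	if value == "0" {
		return maxDuration
	}

	expiry, err := model.ParseDuration(value)
	if err != nil {
		return defaultOIDCExpiryTime
	}

	return time.Duration(expiry)
}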
db.go (105 changes)

@@ -19,7 +19,9 @@ import (

 const (
 	dbVersion = "1"

 	errValueNotFound = Error("not found")
+	ErrCannotParsePrefix = Error("cannot parse prefix")
 )

 // KV is a key-value store in a psql table. For future use...
@@ -39,6 +41,16 @@ func (h *Headscale) initDB() error {
 		db.Exec(`create extension if not exists "uuid-ossp";`)
 	}

+	_ = db.Migrator().RenameTable("namespaces", "users")
+
+	err = db.AutoMigrate(&User{})
+	if err != nil {
+		return err
+	}
+
+	_ = db.Migrator().RenameColumn(&Machine{}, "namespace_id", "user_id")
+	_ = db.Migrator().RenameColumn(&PreAuthKey{}, "namespace_id", "user_id")
+
 	_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
 	_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")

@@ -79,6 +91,70 @@ func (h *Headscale) initDB() error {
 		}
 	}

+	err = db.AutoMigrate(&Route{})
+	if err != nil {
+		return err
+	}
+
+	if db.Migrator().HasColumn(&Machine{}, "enabled_routes") {
+		log.Info().Msgf("Database has legacy enabled_routes column in machine, migrating...")
+
+		type MachineAux struct {
+			ID            uint64
+			EnabledRoutes IPPrefixes
+		}
+
+		machinesAux := []MachineAux{}
+		err := db.Table("machines").Select("id, enabled_routes").Scan(&machinesAux).Error
+		if err != nil {
+			log.Fatal().Err(err).Msg("Error accessing db")
+		}
+		for _, machine := range machinesAux {
+			for _, prefix := range machine.EnabledRoutes {
+				if err != nil {
+					log.Error().
+						Err(err).
+						Str("enabled_route", prefix.String()).
+						Msg("Error parsing enabled_route")
+
+					continue
+				}
+
+				err = db.Preload("Machine").
+					Where("machine_id = ? AND prefix = ?", machine.ID, IPPrefix(prefix)).
+					First(&Route{}).
+					Error
+				if err == nil {
+					log.Info().
+						Str("enabled_route", prefix.String()).
+						Msg("Route already migrated to new table, skipping")
+
+					continue
+				}
+
+				route := Route{
+					MachineID:  machine.ID,
+					Advertised: true,
+					Enabled:    true,
+					Prefix:     IPPrefix(prefix),
+				}
+				if err := h.db.Create(&route).Error; err != nil {
+					log.Error().Err(err).Msg("Error creating route")
+				} else {
+					log.Info().
+						Uint64("machine_id", route.MachineID).
+						Str("prefix", prefix.String()).
+						Msg("Route migrated")
+				}
+			}
+		}
+
+		err = db.Migrator().DropColumn(&Machine{}, "enabled_routes")
+		if err != nil {
+			log.Error().Err(err).Msg("Error dropping enabled_routes column")
+		}
+	}
+
 	err = db.AutoMigrate(&Machine{})
 	if err != nil {
 		return err
@@ -121,11 +197,6 @@ func (h *Headscale) initDB() error {
 		return err
 	}

-	err = db.AutoMigrate(&Namespace{})
-	if err != nil {
-		return err
-	}
-
 	err = db.AutoMigrate(&PreAuthKey{})
 	if err != nil {
 		return err
@@ -264,6 +335,30 @@ func (hi HostInfo) Value() (driver.Value, error) {
 	return string(bytes), err
 }

+type IPPrefix netip.Prefix
+
+func (i *IPPrefix) Scan(destination interface{}) error {
+	switch value := destination.(type) {
+	case string:
+		prefix, err := netip.ParsePrefix(value)
+		if err != nil {
+			return err
+		}
+		*i = IPPrefix(prefix)
+
+		return nil
+	default:
+		return fmt.Errorf("%w: unexpected data type %T", ErrCannotParsePrefix, destination)
+	}
+}
+
+// Value return json value, implement driver.Valuer interface.
+func (i IPPrefix) Value() (driver.Value, error) {
+	prefixStr := netip.Prefix(i).String()
+
+	return prefixStr, nil
+}
+
 type IPPrefixes []netip.Prefix

 func (i *IPPrefixes) Scan(destination interface{}) error {
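The `IPPrefix` Scan/Value pair added above stores each route prefix as its string form and parses it back with `netip.ParsePrefix` when reading from the database. A self-contained sketch of that round trip follows; the error handling is simplified here (the real code wraps the new `ErrCannotParsePrefix` error shown in the diff).

```go
// Sketch of the IPPrefix driver.Valuer / sql.Scanner round trip.
package main

import (
	"database/sql/driver"
	"fmt"
	"net/netip"
)

type IPPrefix netip.Prefix

// Scan parses the string form produced by Value back into a prefix.
func (i *IPPrefix) Scan(destination interface{}) error {
	switch value := destination.(type) {
	case string:
		prefix, err := netip.ParsePrefix(value)
		if err != nil {
			return err
		}
		*i = IPPrefix(prefix)

		return nil
	default:
		return fmt.Errorf("unexpected data type %T", destination)
	}
}

// Value stores the prefix as its canonical string form.
func (i IPPrefix) Value() (driver.Value, error) {
	return netip.Prefix(i).String(), nil
}

func main() {
	var p IPPrefix
	if err := p.Scan("10.0.0.0/24"); err != nil { // as read back from the routes table
		panic(err)
	}

	v, _ := p.Value()
	fmt.Println(v) // 10.0.0.0/24
}
```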
derp.go (2 changes)

@@ -10,7 +10,7 @@ import (
 	"time"

 	"github.com/rs/zerolog/log"
-	"gopkg.in/yaml.v2"
+	"gopkg.in/yaml.v3"
 	"tailscale.com/tailcfg"
 )

@@ -157,14 +157,14 @@ func (h *Headscale) DERPHandler(

 	if !fastStart {
 		pubKey := h.privateKey.Public()
-		pubKeyStr := pubKey.UntypedHexString() //nolint
+		pubKeyStr, _ := pubKey.MarshalText() //nolint
 		fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
 			"Upgrade: DERP\r\n"+
 			"Connection: Upgrade\r\n"+
 			"Derp-Version: %v\r\n"+
 			"Derp-Public-Key: %s\r\n\r\n",
 			derp.ProtocolVersion,
-			pubKeyStr)
+			string(pubKeyStr))
 	}

 	h.DERPServer.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())
dns.go (49 changes)

@@ -3,11 +3,13 @@ package headscale
 import (
 	"fmt"
 	"net/netip"
+	"net/url"
 	"strings"

 	mapset "github.com/deckarep/golang-set/v2"
 	"go4.org/netipx"
 	"tailscale.com/tailcfg"
+	"tailscale.com/types/dnstype"
 	"tailscale.com/util/dnsname"
 )

@@ -20,6 +22,10 @@ const (
 	ipv6AddressLength = 128
 )

+const (
+	nextDNSDoHPrefix = "https://dns.nextdns.io"
+)
+
 // generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
 // This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
 // server (listening in 100.100.100.100 udp/53) should be used for.
@@ -152,37 +158,62 @@ func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
 	return fqdns
 }

+// If any nextdns DoH resolvers are present in the list of resolvers it will
+// take metadata from the machine metadata and instruct tailscale to add it
+// to the requests. This makes it possible to identify from which device the
+// requests come in the NextDNS dashboard.
+//
+// This will produce a resolver like:
+// `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
+func addNextDNSMetadata(resolvers []*dnstype.Resolver, machine Machine) {
+	for _, resolver := range resolvers {
+		if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
+			attrs := url.Values{
+				"device_name":  []string{machine.Hostname},
+				"device_model": []string{machine.HostInfo.OS},
+			}
+
+			if len(machine.IPAddresses) > 0 {
+				attrs.Add("device_ip", machine.IPAddresses[0].String())
+			}
+
+			resolver.Addr = fmt.Sprintf("%s?%s", resolver.Addr, attrs.Encode())
+		}
+	}
+}
+
 func getMapResponseDNSConfig(
 	dnsConfigOrig *tailcfg.DNSConfig,
 	baseDomain string,
 	machine Machine,
 	peers Machines,
 ) *tailcfg.DNSConfig {
-	var dnsConfig *tailcfg.DNSConfig
+	var dnsConfig *tailcfg.DNSConfig = dnsConfigOrig.Clone()
 	if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
-		// Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
-		dnsConfig = dnsConfigOrig.Clone()
+		// Only inject the Search Domain of the current user - shared nodes should use their full FQDN
 		dnsConfig.Domains = append(
 			dnsConfig.Domains,
 			fmt.Sprintf(
 				"%s.%s",
-				machine.Namespace.Name,
+				machine.User.Name,
 				baseDomain,
 			),
 		)

-		namespaceSet := mapset.NewSet[Namespace]()
-		namespaceSet.Add(machine.Namespace)
+		userSet := mapset.NewSet[User]()
+		userSet.Add(machine.User)
 		for _, p := range peers {
-			namespaceSet.Add(p.Namespace)
+			userSet.Add(p.User)
 		}
-		for _, namespace := range namespaceSet.ToSlice() {
-			dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
+		for _, user := range userSet.ToSlice() {
+			dnsRoute := fmt.Sprintf("%v.%v", user.Name, baseDomain)
 			dnsConfig.Routes[dnsRoute] = nil
 		}
 	} else {
 		dnsConfig = dnsConfigOrig
 	}

+	addNextDNSMetadata(dnsConfig.Resolvers, machine)
+
 	return dnsConfig
 }
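The `addNextDNSMetadata` helper added above only rewrites resolvers whose address starts with the NextDNS DoH prefix, appending device metadata as URL query parameters. A standalone sketch of that URL construction follows; the profile ID and device values below are hypothetical examples, mirroring the doc comment in the diff.

```go
// Sketch of the NextDNS resolver URL rewriting done by addNextDNSMetadata.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

const nextDNSDoHPrefix = "https://dns.nextdns.io"

func main() {
	resolverAddr := nextDNSDoHPrefix + "/abc123" // hypothetical NextDNS profile ID

	// Only NextDNS DoH resolvers get the extra metadata appended.
	if strings.HasPrefix(resolverAddr, nextDNSDoHPrefix) {
		attrs := url.Values{
			"device_name":  []string{"node-name"}, // machine.Hostname in the real code
			"device_model": []string{"linux"},     // machine.HostInfo.OS in the real code
		}
		attrs.Add("device_ip", "100.64.0.1") // first machine IP, if any

		resolverAddr = fmt.Sprintf("%s?%s", resolverAddr, attrs.Encode())
	}

	fmt.Println(resolverAddr)
	// https://dns.nextdns.io/abc123?device_ip=100.64.0.1&device_model=linux&device_name=node-name
}
```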
dns_test.go (82 changes)

@@ -112,17 +112,17 @@ func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
 }

 func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
-	namespaceShared1, err := app.CreateNamespace("shared1")
+	userShared1, err := app.CreateUser("shared1")
 	c.Assert(err, check.IsNil)

-	namespaceShared2, err := app.CreateNamespace("shared2")
+	userShared2, err := app.CreateUser("shared2")
 	c.Assert(err, check.IsNil)

-	namespaceShared3, err := app.CreateNamespace("shared3")
+	userShared3, err := app.CreateUser("shared3")
 	c.Assert(err, check.IsNil)

 	preAuthKeyInShared1, err := app.CreatePreAuthKey(
-		namespaceShared1.Name,
+		userShared1.Name,
 		false,
 		false,
 		nil,
@@ -131,7 +131,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 	c.Assert(err, check.IsNil)

 	preAuthKeyInShared2, err := app.CreatePreAuthKey(
-		namespaceShared2.Name,
+		userShared2.Name,
 		false,
 		false,
 		nil,
@@ -140,7 +140,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 	c.Assert(err, check.IsNil)

 	preAuthKeyInShared3, err := app.CreatePreAuthKey(
-		namespaceShared3.Name,
+		userShared3.Name,
 		false,
 		false,
 		nil,
@@ -149,7 +149,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 	c.Assert(err, check.IsNil)

 	PreAuthKey2InShared1, err := app.CreatePreAuthKey(
-		namespaceShared1.Name,
+		userShared1.Name,
 		false,
 		false,
 		nil,
@@ -157,7 +157,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 	)
 	c.Assert(err, check.IsNil)

-	_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
+	_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
 	c.Assert(err, check.NotNil)

 	machineInShared1 := &Machine{
@@ -166,15 +166,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 		NodeKey:  "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
 		DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
 		Hostname: "test_get_shared_nodes_1",
-		NamespaceID:    namespaceShared1.ID,
-		Namespace:      *namespaceShared1,
+		UserID:         userShared1.ID,
+		User:           *userShared1,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.1")},
 		AuthKeyID:      uint(preAuthKeyInShared1.ID),
 	}
 	app.db.Save(machineInShared1)

-	_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
+	_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
 	c.Assert(err, check.IsNil)

 	machineInShared2 := &Machine{
@@ -183,15 +183,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 		NodeKey:  "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		Hostname: "test_get_shared_nodes_2",
-		NamespaceID:    namespaceShared2.ID,
-		Namespace:      *namespaceShared2,
+		UserID:         userShared2.ID,
+		User:           *userShared2,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.2")},
 		AuthKeyID:      uint(preAuthKeyInShared2.ID),
 	}
 	app.db.Save(machineInShared2)

-	_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
+	_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
 	c.Assert(err, check.IsNil)

 	machineInShared3 := &Machine{
@@ -200,15 +200,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 		NodeKey:  "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		Hostname: "test_get_shared_nodes_3",
-		NamespaceID:    namespaceShared3.ID,
-		Namespace:      *namespaceShared3,
+		UserID:         userShared3.ID,
+		User:           *userShared3,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.3")},
 		AuthKeyID:      uint(preAuthKeyInShared3.ID),
 	}
 	app.db.Save(machineInShared3)

-	_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
+	_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
 	c.Assert(err, check.IsNil)

 	machine2InShared1 := &Machine{
@@ -217,8 +217,8 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 		NodeKey:  "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		Hostname: "test_get_shared_nodes_4",
-		NamespaceID:    namespaceShared1.ID,
-		Namespace:      *namespaceShared1,
+		UserID:         userShared1.ID,
+		User:           *userShared1,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.4")},
 		AuthKeyID:      uint(PreAuthKey2InShared1.ID),
@@ -245,31 +245,31 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {

 	c.Assert(len(dnsConfig.Routes), check.Equals, 3)

-	domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
+	domainRouteShared1 := fmt.Sprintf("%s.%s", userShared1.Name, baseDomain)
 	_, ok := dnsConfig.Routes[domainRouteShared1]
 	c.Assert(ok, check.Equals, true)

-	domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
+	domainRouteShared2 := fmt.Sprintf("%s.%s", userShared2.Name, baseDomain)
 	_, ok = dnsConfig.Routes[domainRouteShared2]
 	c.Assert(ok, check.Equals, true)

-	domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
+	domainRouteShared3 := fmt.Sprintf("%s.%s", userShared3.Name, baseDomain)
 	_, ok = dnsConfig.Routes[domainRouteShared3]
 	c.Assert(ok, check.Equals, true)
 }

 func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
-	namespaceShared1, err := app.CreateNamespace("shared1")
+	userShared1, err := app.CreateUser("shared1")
 	c.Assert(err, check.IsNil)

-	namespaceShared2, err := app.CreateNamespace("shared2")
+	userShared2, err := app.CreateUser("shared2")
 	c.Assert(err, check.IsNil)

-	namespaceShared3, err := app.CreateNamespace("shared3")
+	userShared3, err := app.CreateUser("shared3")
 	c.Assert(err, check.IsNil)

 	preAuthKeyInShared1, err := app.CreatePreAuthKey(
-		namespaceShared1.Name,
+		userShared1.Name,
 		false,
 		false,
 		nil,
@@ -278,7 +278,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 	c.Assert(err, check.IsNil)

 	preAuthKeyInShared2, err := app.CreatePreAuthKey(
-		namespaceShared2.Name,
+		userShared2.Name,
 		false,
 		false,
 		nil,
@@ -287,7 +287,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 	c.Assert(err, check.IsNil)

 	preAuthKeyInShared3, err := app.CreatePreAuthKey(
-		namespaceShared3.Name,
+		userShared3.Name,
 		false,
 		false,
 		nil,
@@ -296,7 +296,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 	c.Assert(err, check.IsNil)

 	preAuthKey2InShared1, err := app.CreatePreAuthKey(
-		namespaceShared1.Name,
+		userShared1.Name,
 		false,
 		false,
 		nil,
@@ -304,7 +304,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 	)
 	c.Assert(err, check.IsNil)

-	_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
+	_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
 	c.Assert(err, check.NotNil)

 	machineInShared1 := &Machine{
@@ -313,15 +313,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 		NodeKey:  "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
 		DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
 		Hostname: "test_get_shared_nodes_1",
-		NamespaceID:    namespaceShared1.ID,
-		Namespace:      *namespaceShared1,
+		UserID:         userShared1.ID,
+		User:           *userShared1,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.1")},
 		AuthKeyID:      uint(preAuthKeyInShared1.ID),
 	}
 	app.db.Save(machineInShared1)

-	_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
+	_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
 	c.Assert(err, check.IsNil)

 	machineInShared2 := &Machine{
@@ -330,15 +330,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 		NodeKey:  "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		Hostname: "test_get_shared_nodes_2",
-		NamespaceID:    namespaceShared2.ID,
-		Namespace:      *namespaceShared2,
+		UserID:         userShared2.ID,
+		User:           *userShared2,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.2")},
 		AuthKeyID:      uint(preAuthKeyInShared2.ID),
 	}
 	app.db.Save(machineInShared2)

-	_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
+	_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
 	c.Assert(err, check.IsNil)

 	machineInShared3 := &Machine{
@@ -347,15 +347,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 		NodeKey:  "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		Hostname: "test_get_shared_nodes_3",
-		NamespaceID:    namespaceShared3.ID,
-		Namespace:      *namespaceShared3,
+		UserID:         userShared3.ID,
+		User:           *userShared3,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.3")},
 		AuthKeyID:      uint(preAuthKeyInShared3.ID),
 	}
 	app.db.Save(machineInShared3)

-	_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
+	_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
 	c.Assert(err, check.IsNil)

 	machine2InShared1 := &Machine{
@@ -364,8 +364,8 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
 		NodeKey:  "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
 		Hostname: "test_get_shared_nodes_4",
-		NamespaceID:    namespaceShared1.ID,
-		Namespace:      *namespaceShared1,
+		UserID:         userShared1.ID,
+		User:           *userShared1,
 		RegisterMethod: RegisterMethodAuthKey,
 		IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.4")},
 		AuthKeyID:      uint(preAuthKey2InShared1.ID),
(documentation index file deleted)

@@ -1,54 +0,0 @@
-# headscale documentation
-
-This page contains the official and community contributed documentation for `headscale`.
-
-If you are having trouble with following the documentation or get unexpected results,
-please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
-
-## Official documentation
-
-### How-to
-
-- [Running headscale on Linux](running-headscale-linux.md)
-- [Control headscale remotely](remote-cli.md)
-- [Using a Windows client with headscale](windows-client.md)
-
-### References
-
-- [Configuration](../config-example.yaml)
-- [Glossary](glossary.md)
-- [TLS](tls.md)
-
-## Community documentation
-
-Community documentation is not actively maintained by the headscale authors and is
-written by community members. It is _not_ verified by `headscale` developers.
-
-**It might be outdated and it might miss necessary steps**.
-
-- [Running headscale in a container](running-headscale-container.md)
-- [Running headscale on OpenBSD](running-headscale-openbsd.md)
-- [Running headscale behind a reverse proxy](reverse-proxy.md)
-
-## Misc
-
-### Policy ACLs
-
-Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
-
-For instance, instead of referring to users when defining groups you must
-use namespaces (which are the equivalent to user/logins in Tailscale.com).
-
-Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
-
-When using ACL's the Namespace borders are no longer applied. All machines
-whichever the Namespace have the ability to communicate with other hosts as
-long as the ACL's permits this exchange.
-
-The [ACLs](acls.md) document should help understand a fictional case of setting
-up ACLs in a small company. All concepts presented in this document could be
-applied outside of business oriented usage.
-
-### Apple devices
-
-An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.
docs/acls.md (25 changes)

@@ -1,4 +1,15 @@
-# ACLs use case example
+Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
+
+For instance, instead of referring to users when defining groups you must
+use users (which are the equivalent to user/logins in Tailscale.com).
+
+Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
+
+When using ACL's the User borders are no longer applied. All machines
+whichever the User have the ability to communicate with other hosts as
+long as the ACL's permits this exchange.
+
+## ACLs use case example

 Let's build an example use case for a small business (It may be the place where
 ACL's are the most useful).
@@ -29,19 +40,21 @@ servers.

 ## ACL setup

-Note: Namespaces will be created automatically when users authenticate with the
+Note: Users will be created automatically when users authenticate with the
 Headscale server.

 ACLs could be written either on [huJSON](https://github.com/tailscale/hujson)
 or YAML. Check the [test ACLs](../tests/acls) for further information.

 When registering the servers we will need to add the flag
-`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
+`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
 registering the server should be allowed to do it. Since anyone can add tags to
 a server they can register, the check of the tags is done on headscale server
-and only valid tags are applied. A tag is valid if the namespace that is
+and only valid tags are applied. A tag is valid if the user that is
 registering it is allowed to do it.

+To use ACLs in headscale, you must edit your config.yaml file. In there you will find a `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
+
 Here are the ACL's to implement the same permissions as above:

 ```json
@@ -164,8 +177,8 @@ Here are the ACL's to implement the same permissions as above:
     "dst": ["tag:dev-app-servers:80,443"]
   },

-  // We still have to allow internal namespaces communications since nothing guarantees that each user have
-  // their own namespaces.
+  // We still have to allow internal users communications since nothing guarantees that each user have
+  // their own users.
   { "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
   { "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
   { "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },
docs/dns-records.md (new file, 90 lines)

@@ -0,0 +1,90 @@
# Setting custom DNS records

!!! warning "Community documentation"

    This page is not actively maintained by the headscale authors and is
    written by community members. It is _not_ verified by `headscale` developers.

    **It might be outdated and it might miss necessary steps**.

## Goal

This documentation has the goal of showing how a user can set custom DNS records with `headscale`s magic dns.
An example use case is to serve apps on the same host via a reverse proxy like NGINX, in this case a Prometheus monitoring stack. This allows to nicely access the service with "http://grafana.myvpn.example.com" instead of the hostname and portnum combination "http://hostname-in-magic-dns.myvpn.example.com:3000".

## Setup

### 1. Change the configuration

1. Change the `config.yaml` to contain the desired records like so:

   ```yaml
   dns_config:
     ...
     extra_records:
       - name: "prometheus.myvpn.example.com"
         type: "A"
         value: "100.64.0.3"

       - name: "grafana.myvpn.example.com"
         type: "A"
         value: "100.64.0.3"
     ...
   ```

2. Restart your headscale instance.

Beware of the limitations listed later on!

### 2. Verify that the records are set

You can use a DNS querying tool of your choice on one of your hosts to verify that your newly set records are actually available in MagicDNS, here we used [`dig`](https://man.archlinux.org/man/dig.1.en):

```
$ dig grafana.myvpn.example.com

; <<>> DiG 9.18.10 <<>> grafana.myvpn.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44054
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;grafana.myvpn.example.com. IN A

;; ANSWER SECTION:
grafana.myvpn.example.com. 593 IN A 100.64.0.3

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sat Dec 31 11:46:55 CET 2022
;; MSG SIZE rcvd: 66
```

### 3. Optional: Setup the reverse proxy

The motivating example here was to be able to access internal monitoring services on the same host without specifying a port:

```
server {
    listen 80;
    listen [::]:80;

    server_name grafana.myvpn.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

}
```

## Limitations

[Not all types of records are supported](https://github.com/tailscale/tailscale/blob/6edf357b96b28ee1be659a70232c0135b2ffedfd/ipn/ipnlocal/local.go#L2989-L3007), especially no CNAME records.
docs/exit-node.md (new file, 49 lines)

@@ -0,0 +1,49 @@
# Exit Nodes

## On the node

Register the node and make it advertise itself as an exit node:

```console
$ sudo tailscale up --login-server https://my-server.com --advertise-exit-node
```

If the node is already registered, it can advertise exit capabilities like this:

```console
$ sudo tailscale set --advertise-exit-node
```

To use a node as an exit node, IP forwarding must be enabled on the node. Check the official [Tailscale documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to enable IP fowarding.

## On the control server

```console
$ # list nodes
$ headscale routes list
ID | Machine | Prefix    | Advertised | Enabled | Primary
1  |         | 0.0.0.0/0 | false      | false   | -
2  |         | ::/0      | false      | false   | -
3  | phobos  | 0.0.0.0/0 | true       | false   | -
4  | phobos  | ::/0      | true       | false   | -
$ # enable routes for phobos
$ headscale routes enable -r 3
$ headscale routes enable -r 4
$ # Check node list again. The routes are now enabled.
$ headscale routes list
ID | Machine | Prefix    | Advertised | Enabled | Primary
1  |         | 0.0.0.0/0 | false      | false   | -
2  |         | ::/0      | false      | false   | -
3  | phobos  | 0.0.0.0/0 | true       | true    | -
4  | phobos  | ::/0      | true       | true    | -
```

## On the client

The exit node can now be used with:

```console
$ sudo tailscale set --exit-node phobos
```

Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes/?q=exit#step-3-use-the-exit-node) for how to do it on your device.
docs/faq.md (new file, 53 lines)

@@ -0,0 +1,53 @@
---
hide:
  - navigation
---

# Frequently Asked Questions

## What is the design goal of headscale?

`headscale` aims to implement a self-hosted, open source alternative to the [Tailscale](https://tailscale.com/)
control server.
`headscale`'s goal is to provide self-hosters and hobbyists with an open-source
server they can use for their projects and labs.
It implements a narrow scope, a _single_ Tailnet, suitable for a personal use, or a small
open-source organisation.

## How can I contribute?

Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.

Headscale is open to code contributions for bug fixes without discussion.

If you find mistakes in the documentation, please also submit a fix to the documentation.

## Why is 'acknowledged contribution' the chosen model?

Both maintainers have full-time jobs and families, and we want to avoid burnout. We also want to avoid frustration from contributors when their PRs are not accepted.

We are more than happy to exchange emails, or to have dedicated calls before a PR is submitted.

## When/Why is Feature X going to be implemented?

We don't know. We might be working on it. If you want to help, please send us a PR.

Please be aware that there are a number of reasons why we might not accept specific contributions:

- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
- Given that we are reverse-engineering Tailscale to satify our own curiosity, we might be interested in implementing the feature ourselves.
- You are not sending unit and integration tests with it.

## Do you support Y method of deploying Headscale?

We currently support deploying `headscale` using our binaries and the DEB packages. Both can be found in the
[GitHub releases page](https://github.com/juanfont/headscale/releases).

In addition to that, there are semi-official RPM packages by the Fedora infra team https://copr.fedorainfracloud.org/coprs/jonathanspw/headscale/

For convenience, we also build Docker images with `headscale`. But **please be aware that we don't officially support deploying `headscale` using Docker**. We have a [Discord channel](https://discord.com/channels/896711691637780480/1070619770942148618) where you can ask for Docker-specific help to the community.

## Why is my reverse proxy not working with Headscale?

We don't know. We don't use reverse proxies with `headscale` ourselves, so we don't have any experience with them. We have [community documentation](https://headscale.net/reverse-proxy/) on how to configure various reverse proxies, and a dedicated [Discord channel](https://discord.com/channels/896711691637780480/1070619818346164324) where you can ask for help to the community.
@@ -1,6 +1,6 @@
 # Glossary

 | Term      | Description |
-| --------- | --------------------------------------------------------------------------------------------------------------------- |
+| --------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
 | Machine   | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
-| Namespace | A namespace is a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User |
+| Namespace | A namespace was a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User (This is now called user) |
docs/iOS-client.md (new file, 30 lines)

@@ -0,0 +1,30 @@
# Connecting an iOS client

## Goal

This documentation has the goal of showing how a user can use the official iOS [Tailscale](https://tailscale.com) client with `headscale`.

## Installation

Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).

Ensure that the installed version is at least 1.38.1, as that is the first release to support alternate control servers.

## Configuring the headscale URL

!!! info "Apple devices"

    An endpoint with information on how to connect your Apple devices
    (currently macOS only) is available at `/apple` on your running instance.

Ensure that the tailscale app is logged out before proceeding.

Go to iOS settings, scroll down past game center and tv provider to the tailscale app and select it. The headscale URL can be entered into the _"ALTERNATE COORDINATION SERVER URL"_ box.

> **Note**
>
> If the app was previously logged into tailscale, toggle on the _Reset Keychain_ switch.

Restart the app by closing it from the iOS app switcher, open the app and select the regular _Sign in_ option (non-SSO), and it should open up to the headscale authentication page.

Enter your credentials and log in. Headscale should now be working on your iOS device.
docs/index.md (new file, 43 lines)

@@ -0,0 +1,43 @@
---
hide:
  - navigation
  - toc
---

# headscale

`headscale` is an open source, self-hosted implementation of the Tailscale control server.

This page contains the documentation for the latest version of headscale. Please also check our [FAQ](/faq/).

Join our [Discord](https://discord.gg/c84AZQhmpx) server for a chat and community support.

## Design goal

Headscale aims to implement a self-hosted, open source alternative to the Tailscale
control server.
Headscale's goal is to provide self-hosters and hobbyists with an open-source
server they can use for their projects and labs.
It implements a narrower scope, a single Tailnet, suitable for a personal use, or a small
open-source organisation.

## Supporting headscale

If you like `headscale` and find it useful, there is a sponsorship and donation
buttons available in the repo.

## Contributing

Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.

This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.

Headscale is open to code contributions for bug fixes without discussion.

If you find mistakes in the documentation, please submit a fix to the documentation.

## About

`headscale` is maintained by [Kristoffer Dalby](https://kradalby.no/) and [Juan Font](https://font.eu).
docs/logo/headscale3-dots.svg (new file, 1 line)

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>
Some files were not shown because too many files have changed in this diff.