Mirror of https://github.com/juanfont/headscale.git, synced 2025-08-15 08:47:54 +00:00

Compare commits: kradalby-g ... v0.22.2
356 Commits
SHA1:
22e397e0b6 c7db99d6ca f73354b4f4 4c8f8c6a1c 997e93455d 9f381256c4 f60c5a1398 5706f84cb0
9478c288f6 6043ec87cf dcf2439c61 ba45d7dbd3 bab4e14828 526e568e1e 02ab0df2de 7338775de7
00c514608e 6c5723a463 57fd5cf310 f113cc7846 ca54fb9f56 735b185e7f 1a7ae11697 644be822d5
56b63c6e10 ccedf276ab 10320a5f1f ecd62fb785 0d24e878d0 889d5a1b29 1700a747f6 200e3b88cc
5bbbe437df 6de53e2f8d b23a9153df 80772033ee a2b760834f 493bcfcf18 df72508089 0f8d8fc2d8
744e5a11b6 3ea1750ea0 a45777d22e 56dd734300 d0113732fe 6215eb6471 1d2b4bca8a 96f9680afd
b465592c07 991ff25362 eacd687dbf 549f5a164d bb07aec82c a5afe4bd06 a71cc81fe7 679305c3e4
c0680f34f1 64ebe6b0c8 e6b26499f7 977eb1dee3 b2e2b02210 2abff4bb08 54c00645d1 cad5ce0ebd
b12a167fa2 667295e15e bea52678e3 307cfc3304 5e74ca9414 9836b097a4 d0b3b1bfc4 6eea96eabc
d08fee78c3 bb5f0d456c c186c49e25 4ec6894773 dd9b4b1cb7 a43bb9c958 ba905ff6fc 99bd09f688
a6bc792a61 6381d3660a 66c5f74d78 1723a6bf40 353f191e4f 8d865bb61b c6815c5334 b684ac0668
dfc5d861c7 50b706eeed 036ff1cbb9 ceeef40cdf 681c86cc95 c7b459b615 56a7b1e349 f1eee841cb
45fbd34480 248abcf353 2560c32378 e38efd3cfa d12f247490 003036a779 ed79f977a7 8012e1cbd2
a5562850a7 bb786ac8e4 ea82035222 c9ecdd6ef1 54f5c249f1 a82a603db6 f49930c514 2baeb79aa0
b3f78a209a 5e6868a858 5caf848f94 3e097123bf 74447b02e8 20e96de963 7c765fb3dc dcc246c869
cf7767d8f9 61c578f82b 6950ff7841 e65ce17f7b b190ec8edc c39085911f 3c20d2a178 9187e4287c
2b7bcb77a5 97a909866d feeb5d334b a840a2e6ee 4183345020 50fb7ad6ce 88a9f4b44c 00fbd8dd93
ce587d2421 e1eb30084d 673638afe7 da48cf64b3 385fd93e73 26edf24477 83a538cc95 cffa040474
727d95b477 640bb94119 0f65918a25 3ac2e0b253 b322cdf251 e128796b59 6d669c6b9c 8dadb045cf
9f6e546522 9714900db9 cb25f0d650 9c2e580ab5 0ffff2c994 c720af66d6 86a7129027 9eaa8dd049
81441afe70 f19e8aa7f0 90287a6735 fb3e2dcf10 bf0b85f382 5da0963aac da5c051d73 b98bf199dd
428d7c86ce af1ec5a593 e3a2593344 bafb6791d3 6edac4863a e27e01c09f dd173ecc1f 8ca0fb7ed0
6c714e88ee a6c8718a97 26282b7a54 93aca81c1c 81254cdf7a b3a0c4a63b 376235c9de 7274fdacc6
91c1f54b49 efd0f79fbc 2084464225 66ebbf3ecb 55a3885614 afae1ff7b6 4de49f5f49 6db9656008
fecb13b24b 23a595c26f 085912cfb4 7157e14aff 4e2c4f92d3 893b0de8fa 9b98c3b79f 6de26b1d7c
1f1931fb00 1f4efbcd3b 711fe1d806 e2c62a7b0c ab6565723e 7bb6f1a7eb 549b82df11 036cdf922f
b4ff22935c 5feadbf3fc 3e9ee816f9 2494e27a73 8e8b65bb84 b7d7fc57c4 b54c0e3d22 593040b73d
6e890afc5f 2afba0233b 91900b7310 55b198a16a ca37dc6268 000c02dad9 4532915be1 4b8d6e7c64
579c5827b3 01628f76ff 53858a32f1 2bf576ea8a 1faac0b3d7 134c72f4fb 70f2f5d750 4453728614
34107f9a0f 52862b8a22 946d38e5d7 78819be03c 34631dfcf5 8fa9755b55 1b557ac1ea 8170f5e693
a506d0fcc8 6718ff71d3 b62acff2e3 ac8bff716d fba77de4eb d1bca105ef 6c2d6fa302 1015bc3e02
68c72d03b5 bd4b2da06e 7b8cf5ef1a 638a3d48ec 4de676c64e a58a552f0e 0db16c7bbe 06f7e7cfd8
19f12f94c0 95d3062c21 86fa136a63 89c12072ba 54f701ff92 5a70ea7326 63cd3122e6 6f4c6c1876
eb072a1a74 36b8862e7c d4e3bf184b c28ca27133 c02e105065 22da5bfc1d c6d31747f7 91ed6e2197
d71aef3b98 8a79c2e7ed f34e7c341b e28d308796 f610be632e fd6d25b5c1 3695284286 cfaa36e51a
d207c30949 519f22f9bf 52a323b90d 91559d0558 25195b8d73 e69176e200 d29d0222af 72b9803a08
99e33181b2 e7f322b9b6 1d36e1775f 0525bea593 2770c7cc07 1b0e80bb10 4ccc528d96 6a311f4ab6
a49a405413 24f946e2e9 c3cdb340de 935319a218 4c7e15a7ce d461097247 f90a3c196c 751cc173d4
ff134f2b8e 6d3ede1367 c0884f94b8 3d8dd68b14 b02e88364e 9790831afb 2d79179141 275cc28193
c5ba7552c5 8909f801bb 3d4af52b3a 6391555dab 8cc5b2174b 9269dd01f5 ef68f17a96 f74266f8f8
46df219ed3 835288d864 93d56362af 4799859be0 8e44596171 d479234058 3fc5866de0 f3c40086ac
09ed21edd8 456479eaa1 cb87852825 69440058bb 9bc6ac0f35 89ff5c83d2 0a47d694be 73c84d4f6a
a9251d6652 f9c44f11d6 1f8bd24a0d 7bf2eb3d71 f5a5437917 9989657c0f cb2790984f 18c0009a51
d038df2a88 d8e9d95a3b 0e405c7ce0 21f0e089b6 cfda804726 d6b383dd2f 07f92e647c bf87b33292
527b580f5e c31328a54a b2c0e37122 889223e35f
3 .github/FUNDING.yml vendored Normal file
@@ -0,0 +1,3 @@
# These are supported funding model platforms

ko_fi: headscale
36 .github/ISSUE_TEMPLATE/bug_report.md vendored
@@ -6,19 +6,24 @@ labels: ["bug"]
assignees: ""
---

<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
<!--
Before posting a bug report, discuss the behaviour you are expecting with the Discord community
to make sure that it is truly a bug.
The issue tracker is not the place to ask for support or how to set up Headscale.

**Bug description**
Bug reports without the sufficient information will be closed.

Headscale is a multinational community across the globe. Our language is English.
All bug reports needs to be in English.
-->

## Bug description

<!-- A clear and concise description of what the bug is. Describe the expected bahavior
and how it is currently different. If you are unsure if it is a bug, consider discussing
it on our Discord server first. -->

**To Reproduce**

<!-- Steps to reproduce the behavior. -->

**Context info**
## Environment

<!-- Please add relevant information about your system. For example:
- Version of headscale used
@@ -28,3 +33,20 @@ assignees: ""
- The relevant config parameters you used
- Log output
-->

- OS:
- Headscale version:
- Tailscale version:

<!--
We do not support running Headscale in a container nor behind a (reverse) proxy.
If either of these are true for your environment, ask the community in Discord
instead of filing a bug report.
-->

- [ ] Headscale is behind a (reverse) proxy
- [ ] Headscale runs in a container

## To Reproduce

<!-- Steps to reproduce the behavior. -->
17 .github/ISSUE_TEMPLATE/feature_request.md vendored
@@ -6,12 +6,21 @@ labels: ["enhancement"]
assignees: ""
---

<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
<!--
We typically have a clear roadmap for what we want to improve and reserve the right
to close feature requests that does not fit in the roadmap, or fit with the scope
of the project, or we actually want to implement ourselves.

**Feature request**
Headscale is a multinational community across the globe. Our language is English.
All bug reports needs to be in English.
-->

<!-- A clear and precise description of what new or changed feature you want. -->
## Why

<!-- Please include the reason, why you would need the feature. E.g. what problem
<!-- Include the reason, why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->

## Description

<!-- A clear and precise description of what new or changed feature you want. -->
30 .github/ISSUE_TEMPLATE/other_issue.md vendored
@@ -1,30 +0,0 @@
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---

<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->

<!-- If you have a question, please consider using our Discord for asking questions -->

**Issue description**

<!-- Please add your issue description. -->

**To Reproduce**

<!-- Steps to reproduce the behavior. -->

**Context info**

<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->
12 .github/pull_request_template.md vendored
@@ -1,3 +1,15 @@
<!--
Headscale is "Open Source, acknowledged contribution", this means that any
contribution will have to be discussed with the Maintainers before being submitted.

This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.

Headscale is open to code contributions for bug fixes without discussion.

If you find mistakes in the documentation, please submit a fix to the documentation.
-->

<!-- Please tick if the following things apply. You… -->

- [ ] read the [CONTRIBUTING guidelines](README.md#contributing)
26 .github/renovate.json vendored
@@ -6,31 +6,27 @@
"onboarding": false,
"extends": ["config:base", ":rebaseStalePrs"],
"ignorePresets": [":prHourlyLimit2"],
"enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
"enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
"includeForks": true,
"repositories": ["juanfont/headscale"],
"platform": "github",
"packageRules": [
{
"matchDatasources": ["go"],
"groupName": "Go modules",
"groupSlug": "gomod",
"separateMajorMinor": false
"matchDatasources": ["go"],
"groupName": "Go modules",
"groupSlug": "gomod",
"separateMajorMinor": false
},
{
"matchDatasources": ["docker"],
"groupName": "Dockerfiles",
"groupSlug": "dockerfiles"
}
"matchDatasources": ["docker"],
"groupName": "Dockerfiles",
"groupSlug": "dockerfiles"
}
],
"regexManagers": [
{
"fileMatch": [
".github/workflows/.*.yml$"
],
"matchStrings": [
"\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
],
"fileMatch": [".github/workflows/.*.yml$"],
"matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
"datasourceTemplate": "golang-version",
"depNameTemplate": "actions/go-version"
}
37 .github/workflows/build.yml vendored
@@ -8,18 +8,23 @@ on:
branches:
- main

concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true

jobs:
build:
runs-on: ubuntu-latest
permissions: write-all

steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
@@ -32,10 +37,34 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'

- name: Run build
id: build
if: steps.changed-files.outputs.any_changed == 'true'
run: nix build
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"

- uses: actions/upload-artifact@v2
OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')

echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT

exit $BUILD_STATUS

- name: Nix gosum diverging
uses: actions/github-script@v6
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
script: |
github.rest.pulls.createReviewComment({
pull_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})

- uses: actions/upload-artifact@v3
if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux
2 .github/workflows/contributors.yml vendored
@@ -9,7 +9,7 @@ jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Delete upstream contributor branch
# Allow continue on failure to account for when the
# upstream branch is deleted or does not exist.
45 .github/workflows/docs.yml vendored Normal file
@@ -0,0 +1,45 @@
name: Build documentation
on:
  push:
    branches:
      - main
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - name: Setup cache
        uses: actions/cache@v2
        with:
          key: ${{ github.ref }}
          path: .cache
      - name: Setup dependencies
        run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
      - name: Build docs
        run: mkdocs build --strict
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: ./site
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
23 .github/workflows/gh-actions-updater.yaml vendored Normal file
@@ -0,0 +1,23 @@
name: GitHub Actions Version Updater

# Controls when the action will run.
on:
  schedule:
    # Automatically run on every Sunday
    - cron: "0 0 * * 0"

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          # [Required] Access token with `workflow` scope.
          token: ${{ secrets.WORKFLOW_SECRET }}

      - name: Run GitHub Actions Version Updater
        uses: saadmk11/github-actions-version-updater@v0.7.1
        with:
          # [Required] Access token with `workflow` scope.
          token: ${{ secrets.WORKFLOW_SECRET }}
12 .github/workflows/lint.yml vendored
@@ -3,17 +3,21 @@ name: Lint

on: [push, pull_request]

concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true

jobs:
golangci-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 2

- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
@@ -26,7 +30,7 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
uses: golangci/golangci-lint-action@v2
with:
version: v1.49.0
version: v1.51.2

# Only block PRs on new problems.
# If this is not enabled, we will end up having PRs
@@ -59,7 +63,7 @@ jobs:

- name: Prettify code
if: steps.changed-files.outputs.any_changed == 'true'
uses: creyD/prettier_action@v4.0
uses: creyD/prettier_action@v4.3
with:
prettier_options: >-
--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
138 .github/workflows/release-docker.yml vendored Normal file
@@ -0,0 +1,138 @@
---
name: Release Docker

on:
  push:
    tags:
      - "*" # triggers only if push new tag version
  workflow_dispatch:

jobs:
  docker-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Set up QEMU for multiple platforms
        uses: docker/setup-qemu-action@master
        with:
          platforms: arm64,amd64
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          # list of Docker images to use as base name for tags
          images: |
            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
            ghcr.io/${{ github.repository_owner }}/headscale
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
            type=raw,value=develop
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to GHCR
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          context: .
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
          build-args: |
            VERSION=${{ steps.meta.outputs.version }}
      - name: Prepare cache for next build
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

  docker-debug-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Set up QEMU for multiple platforms
        uses: docker/setup-qemu-action@master
        with:
          platforms: arm64,amd64
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache-debug
          key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-debug-
      - name: Docker meta
        id: meta-debug
        uses: docker/metadata-action@v3
        with:
          # list of Docker images to use as base name for tags
          images: |
            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
            ghcr.io/${{ github.repository_owner }}/headscale
          flavor: |
            suffix=-debug,onlatest=true
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
            type=raw,value=develop
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to GHCR
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          context: .
          file: Dockerfile.debug
          tags: ${{ steps.meta-debug.outputs.tags }}
          labels: ${{ steps.meta-debug.outputs.labels }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=local,src=/tmp/.buildx-cache-debug
          cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
          build-args: |
            VERSION=${{ steps.meta-debug.outputs.version }}
      - name: Prepare cache for next build
        run: |
          rm -rf /tmp/.buildx-cache-debug
          mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug
217 .github/workflows/release.yml vendored
@@ -9,221 +9,16 @@ on:

jobs:
goreleaser:
runs-on: ubuntu-18.04 # due to CGO we need to user an older version
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.19.0

- name: Install dependencies
run: |
sudo apt update
sudo apt install -y gcc-aarch64-linux-gnu
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v2
with:
distribution: goreleaser
version: latest
args: release --rm-dist
- uses: cachix/install-nix-action@v16

- name: Run goreleaser
run: nix develop --command -- goreleaser release --clean
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

docker-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Docker meta
id: meta
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=raw,value=latest
type=sha
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache-new
build-args: |
VERSION=${{ steps.meta.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache

docker-debug-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-debug
key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-debug-
- name: Docker meta
id: meta-debug
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
latest=false
tags: |
type=semver,pattern={{version}}-debug
type=semver,pattern={{major}}.{{minor}}-debug
type=semver,pattern={{major}}-debug
type=raw,value=latest-debug
type=sha,suffix=-debug
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.debug
tags: ${{ steps.meta-debug.outputs.tags }}
labels: ${{ steps.meta-debug.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-debug
cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
build-args: |
VERSION=${{ steps.meta-debug.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-debug
mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug

docker-alpine-release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v1
- name: Set up QEMU for multiple platforms
uses: docker/setup-qemu-action@master
with:
platforms: arm64,amd64
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache-alpine
key: ${{ runner.os }}-buildx-alpine-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-alpine-
- name: Docker meta
id: meta-alpine
uses: docker/metadata-action@v3
with:
# list of Docker images to use as base name for tags
images: |
${{ secrets.DOCKERHUB_USERNAME }}/headscale
ghcr.io/${{ github.repository_owner }}/headscale
flavor: |
latest=false
tags: |
type=semver,pattern={{version}}-alpine
type=semver,pattern={{major}}.{{minor}}-alpine
type=semver,pattern={{major}}-alpine
type=raw,value=latest-alpine
type=sha,suffix=-alpine
- name: Login to DockerHub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
push: true
context: .
file: Dockerfile.alpine
tags: ${{ steps.meta-alpine.outputs.tags }}
labels: ${{ steps.meta-alpine.outputs.labels }}
platforms: linux/amd64,linux/arm64
cache-from: type=local,src=/tmp/.buildx-cache-alpine
cache-to: type=local,dest=/tmp/.buildx-cache-alpine-new
build-args: |
VERSION=${{ steps.meta-alpine.outputs.version }}
- name: Prepare cache for next build
run: |
rm -rf /tmp/.buildx-cache-alpine
mv /tmp/.buildx-cache-alpine-new /tmp/.buildx-cache-alpine
27 .github/workflows/renovatebot.yml vendored
@@ -1,27 +0,0 @@
---
name: Renovate
on:
  schedule:
    - cron: "* * 5,20 * *" # Every 5th and 20th of the month
  workflow_dispatch:
jobs:
  renovate:
    runs-on: ubuntu-latest
    steps:
      - name: Get token
        id: get_token
        uses: machine-learning-apps/actions-app-token@master
        with:
          APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
          APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}

      - name: Checkout
        uses: actions/checkout@v2.0.0

      - name: Self-hosted Renovate
        uses: renovatebot/github-action@v31.81.3
        with:
          configurationFile: .github/renovate.json
          token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
          # env:
          # LOG_LEVEL: "debug"
4 .github/workflows/test-integration-cli.yml vendored
@@ -7,7 +7,7 @@ jobs:
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 2

@@ -18,7 +18,7 @@ jobs:

- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v14.1
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
35 .github/workflows/test-integration-derp.yml vendored
@@ -1,35 +0,0 @@
name: Integration Test DERP

on: [pull_request]

jobs:
  integration-test-derp:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v14.1
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run Embedded DERP server integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_derp
35 .github/workflows/test-integration-oidc.yml vendored
@@ -1,35 +0,0 @@
name: Integration Test OIDC

on: [pull_request]

jobs:
  integration-test-oidc:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v14.1
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run OIDC integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_oidc
63 .github/workflows/test-integration-v2-TestACLAllowStarDst.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLAllowStarDst

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLAllowStarDst$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLAllowUser80Dst.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLAllowUser80Dst

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLAllowUser80Dst$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLAllowUserDst.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLAllowUserDst

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLAllowUserDst$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLDenyAllPort80.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLDenyAllPort80

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLDenyAllPort80$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLDevice1CanAccessDevice2.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLDevice1CanAccessDevice2

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLDevice1CanAccessDevice2$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLHostsInNetMapTable.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLHostsInNetMapTable

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLHostsInNetMapTable$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLNamedHostsCanReach.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLNamedHostsCanReach

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLNamedHostsCanReach$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestACLNamedHostsCanReachBySubnet.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestACLNamedHostsCanReachBySubnet$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
63 .github/workflows/test-integration-v2-TestAuthKeyLogoutAndRelogin.yaml vendored Normal file
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestAuthKeyLogoutAndRelogin

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestAuthKeyLogoutAndRelogin$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestAuthWebFlowAuthenticationPingAll.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestAuthWebFlowAuthenticationPingAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
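Each of these generated jobs passes `-tags ts2019` to `go test`. That flag selects Go files guarded by the matching build constraint; purely to illustrate the mechanism (the snippet is hypothetical, not a file taken from the repository), such a file begins with:

//go:build ts2019
// +build ts2019

// Compiled only when the build runs with `-tags ts2019`,
// as the `go test ./... -tags ts2019` invocation above does.
package integration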
.github/workflows/test-integration-v2-TestAuthWebFlowLogoutAndRelogin.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestAuthWebFlowLogoutAndRelogin$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestCreateTailscale.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestCreateTailscale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestCreateTailscale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestDERPServerScenario.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestDERPServerScenario

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestDERPServerScenario$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestEnablingRoutes.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestEnablingRoutes

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestEnablingRoutes$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestEphemeral.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestEphemeral

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestEphemeral$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestExpireNode.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestExpireNode

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestExpireNode$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestHeadscale.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestHeadscale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestHeadscale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestOIDCAuthenticationPingAll.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestOIDCAuthenticationPingAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestOIDCAuthenticationPingAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestOIDCExpireNodesBasedOnTokenExpiry.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestOIDCExpireNodesBasedOnTokenExpiry$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPingAllByHostname.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPingAllByHostname

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPingAllByHostname$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPingAllByIP.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPingAllByIP

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPingAllByIP$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPreAuthKeyCommand.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommand

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPreAuthKeyCommand$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPreAuthKeyCommandReusableEphemeral.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPreAuthKeyCommandReusableEphemeral$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestPreAuthKeyCommandWithoutExpiry.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestPreAuthKeyCommandWithoutExpiry$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestResolveMagicDNS.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestResolveMagicDNS

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestResolveMagicDNS$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHIsBlockedInACL.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHIsBlockedInACL

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHIsBlockedInACL$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHMultipleUsersAllToAll.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHMultipleUsersAllToAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHMultipleUsersAllToAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHNoSSHConfigured.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHNoSSHConfigured

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHNoSSHConfigured$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSHOneUserAllToAll.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHOneUserAllToAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSHOneUserAllToAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestSSUserOnlyIsolation.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSUserOnlyIsolation

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestSSUserOnlyIsolation$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestTaildrop.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestTaildrop

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestTaildrop$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestTailscaleNodesJoiningHeadcale.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestTailscaleNodesJoiningHeadcale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
.github/workflows/test-integration-v2-TestUserCommand.yaml (new file, vendored, 63 lines)
@@ -0,0 +1,63 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestUserCommand

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^TestUserCommand$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,27 +0,0 @@
name: Integration Test v2 - kradalby

on: [pull_request]

jobs:
  integration-test-v2-kradalby:
    runs-on: [self-hosted, linux, x64, nixos, docker]

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v14.1
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_v2_general
.github/workflows/test.yml (vendored, 8 changes)
@@ -2,18 +2,22 @@ name: Tests

on: [push, pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v14.1
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
.gitignore (vendored, 12 changes)
@@ -1,3 +1,5 @@
ignored/

# Binaries for programs and plugins
*.exe
*.exe~
@@ -12,8 +14,9 @@
*.out

# Dependency directories (remove the comment below to include it)
# vendor/
vendor/

dist/
/headscale
config.json
config.yaml
@@ -26,10 +29,15 @@ derp.yaml
# Exclude Jetbrains Editors
.idea

test_output/
test_output/
control_logs/

# Nix build output
result
.direnv/

integration_test/etc/config.dump.yaml

# MkDocs
.cache
/site
@@ -1,6 +1,8 @@
---
run:
  timeout: 10m
  build-tags:
    - ts2019

issues:
  skip-dirs:
@@ -26,6 +28,15 @@ linters:
    - ireturn
    - execinquery
    - exhaustruct
    - nolintlint
    - musttag # causes issues with imported libs

    # deprecated
    - structcheck # replaced by unused
    - ifshort # deprecated by the owner
    - varcheck # replaced by unused
    - nosnakecase # replaced by revive
    - deadcode # replaced by unused

    # We should strive to enable these:
    - wrapcheck
@@ -33,6 +44,9 @@ linters:
    - makezero
    - maintidx

    # Limits the methods of an interface to 10. We have more in integration tests
    - interfacebloat

    # We might want to enable this, but it might be a lot of work
    - cyclop
    - nestif
.goreleaser.yml (111 changes)
@@ -1,74 +1,85 @@
---
before:
  hooks:
    - go mod tidy -compat=1.19
    - go mod tidy -compat=1.20
    - go mod vendor

release:
  prerelease: auto

builds:
  - id: darwin-amd64
  - id: headscale
    main: ./cmd/headscale/headscale.go
    mod_timestamp: "{{ .CommitTimestamp }}"
    env:
      - CGO_ENABLED=0
    goos:
      - darwin
    goarch:
      - amd64
    targets:
      - darwin_amd64
      - darwin_arm64
      - freebsd_amd64
      - linux_386
      - linux_amd64
      - linux_arm64
      - linux_arm_5
      - linux_arm_6
      - linux_arm_7
    flags:
      - -mod=readonly
    ldflags:
      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}

  - id: darwin-arm64
    main: ./cmd/headscale/headscale.go
    mod_timestamp: "{{ .CommitTimestamp }}"
    env:
      - CGO_ENABLED=0
    goos:
      - darwin
    goarch:
      - arm64
    flags:
      - -mod=readonly
    ldflags:
      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}

  - id: linux-amd64
    mod_timestamp: "{{ .CommitTimestamp }}"
    env:
      - CGO_ENABLED=0
    goos:
      - linux
    goarch:
      - amd64
    main: ./cmd/headscale/headscale.go
    ldflags:
      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}

  - id: linux-arm64
    mod_timestamp: "{{ .CommitTimestamp }}"
    env:
      - CGO_ENABLED=0
    goos:
      - linux
    goarch:
      - arm64
    main: ./cmd/headscale/headscale.go
    ldflags:
      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
    tags:
      - ts2019

archives:
  - id: golang-cross
    builds:
      - darwin-amd64
      - darwin-arm64
      - linux-amd64
      - linux-arm64
    name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
    name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
    format: binary

source:
  enabled: true
  name_template: "{{ .ProjectName }}_{{ .Version }}"
  format: tar.gz
  files:
    - "vendor/"

nfpms:
  # Configure nFPM for .deb and .rpm releases
  #
  # See https://nfpm.goreleaser.com/configuration/
  # and https://goreleaser.com/customization/nfpm/
  #
  # Useful tools for debugging .debs:
  # List file contents: dpkg -c dist/headscale...deb
  # Package metadata: dpkg --info dist/headscale....deb
  #
  - builds:
      - headscale
    package_name: headscale
    priority: optional
    vendor: headscale
    maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
    homepage: https://github.com/juanfont/headscale
    license: BSD
    bindir: /usr/bin
    formats:
      - deb
      # - rpm
    contents:
      - src: ./config-example.yaml
        dst: /etc/headscale/config.yaml
        type: config|noreplace
        file_info:
          mode: 0644
      - src: ./docs/packaging/headscale.systemd.service
        dst: /usr/lib/systemd/system/headscale.service
      - dst: /var/lib/headscale
        type: dir
      - dst: /var/run/headscale
        type: dir
    scripts:
      postinstall: ./docs/packaging/postinstall.sh
      postremove: ./docs/packaging/postremove.sh

checksum:
  name_template: "checksums.txt"
snapshot:
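Every build id above injects the release version through `-X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}`. The receiving side is not part of this diff, so the following is only a minimal sketch of how such an ldflags-injected variable is typically declared and read; the surrounding function is illustrative.

package cli

import "fmt"

// Version is overwritten at link time by goreleaser via
//   -ldflags "-X github.com/juanfont/headscale/cmd/headscale/cli.Version=v0.22.2"
// and falls back to "dev" for a plain `go build`.
var Version = "dev"

func versionString() string {
	return fmt.Sprintf("headscale %s", Version)
}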
.prettierignore (new file, 1 line)
@@ -0,0 +1 @@
.github/workflows/test-integration-v2*
111
CHANGELOG.md
111
CHANGELOG.md
@@ -1,14 +1,111 @@
|
||||
# CHANGELOG

## 0.17.0 (2022-XX-XX)

### BREAKING

- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)

## 0.23.0 (2023-XX-XX)

### Changes

## 0.22.2 (2023-05-10)

### Changes

- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382) (see the sketch after this list)
  - Profiles are continuously generated in our integration tests.
- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
- Replace node filter logic, ensuring nodes with access can see each other [#1381](https://github.com/juanfont/headscale/pull/1381)
- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
- Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)

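For the pprof entry above, the usual Go pattern is to import `net/http/pprof` and expose its handlers only when a flag is set. This is a rough sketch under assumptions (the environment variable name and listen address are illustrative, not headscale's actual wiring):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
	"os"
)

func main() {
	// HEADSCALE_DEBUG_PROFILING is a hypothetical flag name used only for this
	// sketch; consult the headscale docs for the real variable.
	if os.Getenv("HEADSCALE_DEBUG_PROFILING") != "" {
		go func() {
			// Profiles can then be pulled with, for example,
			// `go tool pprof http://localhost:6060/debug/pprof/profile`.
			log.Println(http.ListenAndServe("localhost:6060", nil))
		}()
	}

	select {} // stand-in for the real server loop
}
```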
## 0.22.1 (2023-04-20)

### Changes

- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)

## 0.22.0 (2023-04-20)

### Changes

- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
- Fix issue where IPv6 could not be used in, or while using, ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)

## 0.21.0 (2023-03-20)

### Changes

- Add "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
- Update iOS compatibility and add documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
- Allow deleting routes [#1244](https://github.com/juanfont/headscale/pull/1244)

## 0.20.0 (2023-02-03)

### Changes

- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
- Align behaviour of `dns_config.restricted_nameservers` with Tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
  - defaults to 180 days like Tailscale SaaS
  - adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)

## 0.19.0 (2023-01-29)

### BREAKING

- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
  - **BACKUP your database before upgrading**
  - Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`

## 0.18.0 (2023-01-14)

### Changes

- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
- Added an OIDC AllowGroups configuration option and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
- Report more accurately in the CLI whether a machine is online [#1062](https://github.com/juanfont/headscale/pull/1062)
- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)

## 0.17.1 (2022-12-05)

### Changes

- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)

## 0.17.0 (2022-11-26)

### BREAKING

- `noise.private_key_path` has been added and is required for the new Noise protocol.
- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)

### Important Changes

- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
- Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847)
  - Please note that this support should be considered _partially_ implemented
  - SSH ACLs status:
    - Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
    - Rejecting connections **is not supported**, meaning that if you enable SSH, you should assume that _all_ `ssh` connections **will be allowed**.
    - If you decide to try this feature, please manage permissions carefully by blocking port `22` with regular ACLs, or do _not_ set `--ssh` on your clients.
    - We are currently improving our testing of the SSH ACLs; help us get an overview by testing and giving feedback.
  - This feature should be considered dangerous and is disabled by default. Enable it by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.

### Changes

- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674) (see the sketch below)
- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)

@@ -24,6 +121,10 @@

- Remove `ip_prefix` configuration option and warning [#899](https://github.com/juanfont/headscale/pull/899)
- Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
- Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
- Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971)
- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
- Make more `sslmode` options available for the PostgreSQL connection [#927](https://github.com/juanfont/headscale/pull/927)

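As a rough illustration of the `HEADSCALE_CONFIG` entry in the list above (only the environment variable name comes from the changelog; the fallback paths are assumptions, not headscale's documented search order), a config lookup that honours the variable could look like:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// configPath prefers HEADSCALE_CONFIG when it is set and otherwise falls back
// to conventional locations chosen for this sketch.
func configPath() string {
	if path := os.Getenv("HEADSCALE_CONFIG"); path != "" {
		return path
	}
	if home, err := os.UserHomeDir(); err == nil {
		return filepath.Join(home, ".headscale", "config.yaml")
	}
	return "/etc/headscale/config.yaml"
}

func main() {
	fmt.Println("loading configuration from:", configPath())
}
```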
## 0.16.4 (2022-08-21)

@@ -1,5 +1,5 @@
# Builder image
FROM docker.io/golang:1.19.0-bullseye AS build
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
@@ -9,15 +9,17 @@ RUN go mod download

COPY . .

RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale

# Production image
FROM gcr.io/distroless/base-debian11
FROM docker.io/debian:bullseye-slim

COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC

RUN mkdir -p /var/run/headscale

EXPOSE 8080/tcp
CMD ["headscale"]
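The `-tags ts2019` added to the builder's `go install` above corresponds to the changelog item about making the legacy TS2019 protocol a build-time option. A minimal sketch of how a build tag gates code (the file layout, package and function names here are hypothetical, not headscale's real ones):

```go
//go:build ts2019

// This file only takes part in the build when `-tags ts2019` is passed, so a
// binary built without the tag simply does not contain the legacy handlers.
package protocol

import "net/http"

// RegisterTS2019 is a hypothetical hook for mounting legacy /machine endpoints
// when the tag is enabled.
func RegisterTS2019(mux *http.ServeMux) {
	mux.HandleFunc("/machine/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNotImplemented) // placeholder body for the sketch
	})
}
```

A sibling file guarded by `//go:build !ts2019` would typically provide a no-op `RegisterTS2019` so callers compile either way.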
@@ -1,24 +0,0 @@
# Builder image
FROM docker.io/golang:1.19.0-alpine AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale

COPY go.mod go.sum /go/src/headscale/
RUN apk add gcc musl-dev
RUN go mod download

COPY . .

RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN strip /go/bin/headscale
RUN test -e /go/bin/headscale

# Production image
FROM docker.io/alpine:latest

COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC

EXPOSE 8080/tcp
CMD ["headscale"]
@@ -1,5 +1,5 @@
# Builder image
FROM docker.io/golang:1.19.0-bullseye AS build
FROM docker.io/golang:1.20-bullseye AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
@@ -9,15 +9,17 @@ RUN go mod download

COPY . .

RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN test -e /go/bin/headscale

# Debug image
FROM docker.io/golang:1.19.0-bullseye
FROM docker.io/golang:1.20.0-bullseye

COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC

RUN mkdir -p /var/run/headscale

# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp
@@ -1,17 +1,16 @@
FROM ubuntu:latest
FROM ubuntu:22.04

ARG TAILSCALE_VERSION=*
ARG TAILSCALE_CHANNEL=stable

RUN apt-get update \
    && apt-get install -y gnupg curl \
    && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
    && apt-get install -y gnupg curl ssh dnsutils ca-certificates \
    && adduser --shell=/bin/bash ssh-it-user

# Tailscale is deliberately split into a second stage so we can cache utils as a separate layer.
RUN curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
    && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
    && apt-get update \
    && apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
    && apt-get install -y tailscale=${TAILSCALE_VERSION} \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt

RUN update-ca-certificates
@@ -1,23 +1,17 @@
FROM golang:latest

RUN apt-get update \
    && apt-get install -y ca-certificates dnsutils git iptables \
    && apt-get install -y dnsutils git iptables ssh ca-certificates \
    && rm -rf /var/lib/apt/lists/*

RUN useradd --shell=/bin/bash --create-home ssh-it-user

RUN git clone https://github.com/tailscale/tailscale.git

WORKDIR /go/tailscale

RUN git checkout main

RUN sh build_dist.sh tailscale.com/cmd/tailscale
RUN sh build_dist.sh tailscale.com/cmd/tailscaled

RUN cp tailscale /usr/local/bin/
RUN cp tailscaled /usr/local/bin/

ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
RUN chmod 644 /usr/local/share/ca-certificates/server.crt

RUN update-ca-certificates
RUN git checkout main \
    && sh build_dist.sh tailscale.com/cmd/tailscale \
    && sh build_dist.sh tailscale.com/cmd/tailscaled \
    && cp tailscale /usr/local/bin/ \
    && cp tailscaled /usr/local/bin/
47 Makefile
@@ -10,6 +10,8 @@ ifeq ($(filter $(GOOS), openbsd netbsd soloaris plan9), )
else
endif

TAGS = -tags ts2019

# GO_SOURCES = $(wildcard *.go)
# PROTO_SOURCES = $(wildcard **/*.proto)
GO_SOURCES = $(call rwildcard,,*.go)
@@ -17,14 +19,14 @@ PROTO_SOURCES = $(call rwildcard,,*.proto)

build:
    GOOS=$(GOOS) CGO_ENABLED=0 go build -trimpath $(pieflags) -mod=readonly -ldflags "-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$(version)" cmd/headscale/headscale.go
    nix build

dev: lint test build

test:
    @go test -short -coverprofile=coverage.out ./...
    @go test $(TAGS) -short -coverprofile=coverage.out ./...

test_integration: test_integration_cli test_integration_derp test_integration_oidc test_integration_v2_general
test_integration: test_integration_cli test_integration_derp test_integration_v2_general

test_integration_cli:
    docker network rm $$(docker network ls --filter name=headscale --quiet) || true
@@ -34,27 +36,7 @@ test_integration_cli:
    -v ~/.cache/hs-integration-go:/go \
    -v $$PWD:$$PWD -w $$PWD \
    -v /var/run/docker.sock:/var/run/docker.sock golang:1 \
    go test -failfast -timeout 30m -count=1 -run IntegrationCLI ./...

test_integration_derp:
    docker network rm $$(docker network ls --filter name=headscale --quiet) || true
    docker network create headscale-test || true
    docker run -t --rm \
        --network headscale-test \
        -v ~/.cache/hs-integration-go:/go \
        -v $$PWD:$$PWD -w $$PWD \
        -v /var/run/docker.sock:/var/run/docker.sock golang:1 \
        go test -failfast -timeout 30m -count=1 -run IntegrationDERP ./...

test_integration_oidc:
    docker network rm $$(docker network ls --filter name=headscale --quiet) || true
    docker network create headscale-test || true
    docker run -t --rm \
        --network headscale-test \
        -v ~/.cache/hs-integration-go:/go \
        -v $$PWD:$$PWD -w $$PWD \
        -v /var/run/docker.sock:/var/run/docker.sock golang:1 \
        go test -failfast -timeout 30m -count=1 -run IntegrationOIDC ./...
    go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...

test_integration_v2_general:
    docker run \
@@ -64,13 +46,7 @@ test_integration_v2_general:
        -v $$PWD:$$PWD -w $$PWD/integration \
        -v /var/run/docker.sock:/var/run/docker.sock \
        golang:1 \
        go test ./... -timeout 60m -parallel 6

coverprofile_func:
    go tool cover -func=coverage.out

coverprofile_html:
    go tool cover -html=coverage.out
    go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8

lint:
    golangci-lint run --fix --timeout 10m
@@ -88,11 +64,4 @@ compress: build

generate:
    rm -rf gen
    go run github.com/bufbuild/buf/cmd/buf generate proto

install-protobuf-plugins:
    go install \
        github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
        github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
        google.golang.org/protobuf/cmd/protoc-gen-go \
        google.golang.org/grpc/cmd/protoc-gen-go-grpc
    buf generate proto
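The `-run IntegrationCLI` selector used by the targets above works because `go test -run` treats its argument as an unanchored regular expression over test names. A small illustration (the test below is a made-up stub, not one of headscale's integration tests):

```go
package integration

import "testing"

// TestIntegrationCLIVersion is selected by `go test -run IntegrationCLI ./...`
// because the -run pattern matches anywhere in the test name.
func TestIntegrationCLIVersion(t *testing.T) {
	if testing.Short() {
		t.Skip("integration test, skipped with -short")
	}
	// The real targets run headscale and tailscale clients in Docker; this
	// stub only demonstrates the naming convention the Makefile relies on.
}
```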
431 README.md
@@ -32,22 +32,18 @@ organisation.
## Design goal

`headscale` aims to implement a self-hosted, open source alternative to the Tailscale
control server. `headscale` has a narrower scope and an instance of `headscale`
implements a _single_ Tailnet, which is typically what a single organisation, or
home/personal setup would use.
Headscale aims to implement a self-hosted, open source alternative to the Tailscale
control server.
Headscale's goal is to provide self-hosters and hobbyists with an open-source
server they can use for their projects and labs.
It implements a narrow scope, a single Tailnet, suitable for personal use or a small
open-source organisation.

`headscale` uses terms that map to Tailscale's control server; consult the
[glossary](./docs/glossary.md) for explanations.

## Support
## Supporting Headscale

If you like `headscale` and find it useful, there are sponsorship and donation
buttons available in the repo.

If you would like to sponsor features, bugs or prioritisation, reach out to
one of the maintainers.

## Features

- Full "base" support of Tailscale's features
@@ -75,19 +71,39 @@ one of the maintainers.
| macOS   | Yes (see `/apple` on your headscale for more information) |
| Windows | Yes [docs](./docs/windows-client.md) |
| Android | Yes [docs](./docs/android-client.md) |
| iOS     | Not yet |
| iOS     | Yes [docs](./docs/iOS-client.md) |

## Running headscale

Please have a look at the documentation under [`docs/`](docs/).
**Please note that we do not support nor encourage the use of reverse proxies
and containers to run Headscale.**

Please have a look at the [`documentation`](https://headscale.net/).

## Talks

- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
  - presented by Juan Font Alonso and Kristoffer Dalby

## Disclaimer

1. We have nothing to do with Tailscale, or Tailscale Inc.
1. This project is not associated with Tailscale Inc.
2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.

## Contributing

Headscale is "Open Source, acknowledged contribution"; this means that any
contribution will have to be discussed with the Maintainers before being submitted.

This model has been chosen to reduce the risk of burnout by limiting the
maintenance overhead of reviewing and validating third-party code.

Headscale is open to code contributions for bug fixes without discussion.

If you find mistakes in the documentation, please submit a fix to the documentation.

### Requirements

To contribute to headscale you would need the latest version of [Go](https://golang.org)
and [Buf](https://buf.build) (Protobuf generator).

@@ -95,8 +111,6 @@ We recommend using [Nix](https://nixos.org/) to setup a development environment.
be done with `nix develop`, which will install the tools and give you a shell.
This guarantees that you will have the same dev env as `headscale` maintainers.

PRs and suggestions are welcome.

### Code style

To ensure we have some consistency with a growing number of contributions,
[README contributors section: the remainder of this file's diff touches the auto-generated HTML avatar grid ("make build" contributors table). Cells are re-ordered and additional contributor entries appear; the affected cells include, among others, Christian Heusel, Anton Schubert, Moritz Poldrack, Adrien Raffin-Caboisse, Steven Honson, Sean Reifschneider, Josh Taylor, LiuHanCheng, Motiejus Jakštys, dbevacqua, Arnar, Julien Zweverink, Kurnia D Win, Marc, Maxim Gajdaj, Michael Savage, Gabe Cook, hrtkpf, Johan Siebens, Albert Copeland, Andrei Pechkurov, jimyag, Zachary Newell, caelansar, dnaq, Tommi Pernila, manju-rn, nicholas-yap and Àlex Torregrosa, and the "WhiteSource Renovate" bot entry is renamed to "Mend Renovate". The raw `<td>`/`<img>` markup is not reproduced here.]
585 acls.go
@@ -10,10 +10,13 @@ import (
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/rs/zerolog/log"
|
||||
"github.com/tailscale/hujson"
|
||||
"go4.org/netipx"
|
||||
"gopkg.in/yaml.v3"
|
||||
"tailscale.com/envknob"
|
||||
"tailscale.com/tailcfg"
|
||||
)
|
||||
|
||||
@@ -54,6 +57,8 @@ const (
|
||||
ProtocolFC = 133 // Fibre Channel
|
||||
)
|
||||
|
||||
var featureEnableSSH = envknob.RegisterBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")
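The SSH policy path is gated behind this environment knob. A minimal sketch of how such a flag is registered and checked with tailscale.com/envknob, assuming only that RegisterBool returns a func() bool reporting whether the variable was set truthy:

package main

import (
	"fmt"

	"tailscale.com/envknob"
)

// Register the same knob name as acls.go; the returned function reports
// whether HEADSCALE_EXPERIMENTAL_FEATURE_SSH was set to a true value.
var featureEnableSSH = envknob.RegisterBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")

func main() {
	if featureEnableSSH() {
		fmt.Println("experimental SSH policy rules will be generated")
	} else {
		fmt.Println("SSH policy rules are skipped")
	}
}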
|
||||
|
||||
// LoadACLPolicy loads the ACL policy from the specified path, and generates the ACL rules.
|
||||
func (h *Headscale) LoadACLPolicy(path string) error {
|
||||
log.Debug().
|
||||
@@ -113,39 +118,62 @@ func (h *Headscale) LoadACLPolicy(path string) error {
|
||||
}
|
||||
|
||||
func (h *Headscale) UpdateACLRules() error {
|
||||
rules, err := h.generateACLRules()
|
||||
machines, err := h.ListMachines()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if h.aclPolicy == nil {
|
||||
return errEmptyPolicy
|
||||
}
|
||||
|
||||
rules, err := h.aclPolicy.generateFilterRules(machines, h.cfg.OIDC.StripEmaildomain)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
|
||||
h.aclRules = rules
|
||||
|
||||
if featureEnableSSH() {
|
||||
sshRules, err := h.generateSSHRules()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
log.Trace().Interface("SSH", sshRules).Msg("SSH rules generated")
|
||||
if h.sshPolicy == nil {
|
||||
h.sshPolicy = &tailcfg.SSHPolicy{}
|
||||
}
|
||||
h.sshPolicy.Rules = sshRules
|
||||
} else if h.aclPolicy != nil && len(h.aclPolicy.SSHs) > 0 {
|
||||
log.Info().Msg("SSH ACLs has been defined, but HEADSCALE_EXPERIMENTAL_FEATURE_SSH is not enabled, this is a unstable feature, check docs before activating")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
|
||||
// generateFilterRules takes a set of machines and an ACLPolicy and generates a
|
||||
// set of Tailscale compatible FilterRules used to allow traffic on clients.
|
||||
func (pol *ACLPolicy) generateFilterRules(
|
||||
machines []Machine,
|
||||
stripEmailDomain bool,
|
||||
) ([]tailcfg.FilterRule, error) {
|
||||
rules := []tailcfg.FilterRule{}
|
||||
|
||||
if h.aclPolicy == nil {
|
||||
return nil, errEmptyPolicy
|
||||
}
|
||||
|
||||
machines, err := h.ListMachines()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for index, acl := range h.aclPolicy.ACLs {
|
||||
for index, acl := range pol.ACLs {
|
||||
if acl.Action != "accept" {
|
||||
return nil, errInvalidAction
|
||||
}
|
||||
|
||||
srcIPs := []string{}
|
||||
for innerIndex, src := range acl.Sources {
|
||||
srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, src)
|
||||
for srcIndex, src := range acl.Sources {
|
||||
srcs, err := pol.getIPsFromSource(src, machines, stripEmailDomain)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
|
||||
Interface("src", src).
|
||||
Int("ACL index", index).
|
||||
Int("Src index", srcIndex).
|
||||
Msgf("Error parsing ACL")
|
||||
|
||||
return nil, err
|
||||
}
|
||||
@@ -161,16 +189,19 @@ func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
|
||||
}
|
||||
|
||||
destPorts := []tailcfg.NetPortRange{}
|
||||
for innerIndex, dest := range acl.Destinations {
|
||||
dests, err := h.generateACLPolicyDest(
|
||||
machines,
|
||||
*h.aclPolicy,
|
||||
for destIndex, dest := range acl.Destinations {
|
||||
dests, err := pol.getNetPortRangeFromDestination(
|
||||
dest,
|
||||
machines,
|
||||
needsWildcard,
|
||||
stripEmailDomain,
|
||||
)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Msgf("Error parsing ACL %d, Destination %d", index, innerIndex)
|
||||
Interface("dest", dest).
|
||||
Int("ACL index", index).
|
||||
Int("dest index", destIndex).
|
||||
Msgf("Error parsing ACL")
|
||||
|
||||
return nil, err
|
||||
}
|
||||
@@ -187,29 +218,192 @@ func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
|
||||
return rules, nil
|
||||
}
|
||||
|
||||
func (h *Headscale) generateACLPolicySrcIP(
|
||||
machines []Machine,
|
||||
aclPolicy ACLPolicy,
|
||||
src string,
|
||||
) ([]string, error) {
|
||||
return expandAlias(machines, aclPolicy, src, h.cfg.OIDC.StripEmaildomain)
|
||||
func (h *Headscale) generateSSHRules() ([]*tailcfg.SSHRule, error) {
|
||||
rules := []*tailcfg.SSHRule{}
|
||||
|
||||
if h.aclPolicy == nil {
|
||||
return nil, errEmptyPolicy
|
||||
}
|
||||
|
||||
machines, err := h.ListMachines()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
acceptAction := tailcfg.SSHAction{
|
||||
Message: "",
|
||||
Reject: false,
|
||||
Accept: true,
|
||||
SessionDuration: 0,
|
||||
AllowAgentForwarding: false,
|
||||
HoldAndDelegate: "",
|
||||
AllowLocalPortForwarding: true,
|
||||
}
|
||||
|
||||
rejectAction := tailcfg.SSHAction{
|
||||
Message: "",
|
||||
Reject: true,
|
||||
Accept: false,
|
||||
SessionDuration: 0,
|
||||
AllowAgentForwarding: false,
|
||||
HoldAndDelegate: "",
|
||||
AllowLocalPortForwarding: false,
|
||||
}
|
||||
|
||||
for index, sshACL := range h.aclPolicy.SSHs {
|
||||
action := rejectAction
|
||||
switch sshACL.Action {
|
||||
case "accept":
|
||||
action = acceptAction
|
||||
case "check":
|
||||
checkAction, err := sshCheckAction(sshACL.CheckPeriod)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Msgf("Error parsing SSH %d, check action with unparsable duration '%s'", index, sshACL.CheckPeriod)
|
||||
} else {
|
||||
action = *checkAction
|
||||
}
|
||||
default:
|
||||
log.Error().
|
||||
Msgf("Error parsing SSH %d, unknown action '%s'", index, sshACL.Action)
|
||||
|
||||
return nil, err
|
||||
}
|
||||
|
||||
principals := make([]*tailcfg.SSHPrincipal, 0, len(sshACL.Sources))
|
||||
for innerIndex, rawSrc := range sshACL.Sources {
|
||||
if isWildcard(rawSrc) {
|
||||
principals = append(principals, &tailcfg.SSHPrincipal{
|
||||
Any: true,
|
||||
})
|
||||
} else if isGroup(rawSrc) {
|
||||
users, err := h.aclPolicy.getUsersInGroup(rawSrc, h.cfg.OIDC.StripEmaildomain)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
|
||||
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, user := range users {
|
||||
principals = append(principals, &tailcfg.SSHPrincipal{
|
||||
UserLogin: user,
|
||||
})
|
||||
}
|
||||
} else {
|
||||
expandedSrcs, err := h.aclPolicy.expandAlias(
|
||||
machines,
|
||||
rawSrc,
|
||||
h.cfg.OIDC.StripEmaildomain,
|
||||
)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
|
||||
|
||||
return nil, err
|
||||
}
|
||||
for _, expandedSrc := range expandedSrcs.Prefixes() {
|
||||
principals = append(principals, &tailcfg.SSHPrincipal{
|
||||
NodeIP: expandedSrc.Addr().String(),
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
userMap := make(map[string]string, len(sshACL.Users))
|
||||
for _, user := range sshACL.Users {
|
||||
userMap[user] = "="
|
||||
}
|
||||
rules = append(rules, &tailcfg.SSHRule{
|
||||
Principals: principals,
|
||||
SSHUsers: userMap,
|
||||
Action: &action,
|
||||
})
|
||||
}
|
||||
|
||||
return rules, nil
|
||||
}
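A rule produced by this function has the shape below; a minimal, self-contained sketch using the tailcfg types referenced above (the principal, user map and action values mirror the accept case, and the "root" user is only an illustration):

package main

import (
	"fmt"

	"tailscale.com/tailcfg"
)

func main() {
	// One SSH rule: any principal may log in as "root", with the accept
	// action used above (local port forwarding allowed, no agent forwarding).
	rule := &tailcfg.SSHRule{
		Principals: []*tailcfg.SSHPrincipal{{Any: true}},
		SSHUsers:   map[string]string{"root": "="},
		Action: &tailcfg.SSHAction{
			Accept:                   true,
			AllowLocalPortForwarding: true,
		},
	}
	fmt.Printf("%+v\n", rule)
}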
|
||||
|
||||
func (h *Headscale) generateACLPolicyDest(
|
||||
machines []Machine,
|
||||
aclPolicy ACLPolicy,
|
||||
dest string,
|
||||
needsWildcard bool,
|
||||
) ([]tailcfg.NetPortRange, error) {
|
||||
tokens := strings.Split(dest, ":")
|
||||
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
|
||||
return nil, errInvalidPortFormat
|
||||
func sshCheckAction(duration string) (*tailcfg.SSHAction, error) {
|
||||
sessionLength, err := time.ParseDuration(duration)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &tailcfg.SSHAction{
|
||||
Message: "",
|
||||
Reject: false,
|
||||
Accept: true,
|
||||
SessionDuration: sessionLength,
|
||||
AllowAgentForwarding: false,
|
||||
HoldAndDelegate: "",
|
||||
AllowLocalPortForwarding: true,
|
||||
}, nil
|
||||
}
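For a "check" rule the CheckPeriod string is parsed into the session duration; a small sketch of the same conversion (the "24h" period is just an example value):

package main

import (
	"fmt"
	"time"

	"tailscale.com/tailcfg"
)

// checkAction mirrors sshCheckAction above: accept the session, but bound it
// to the parsed check period.
func checkAction(period string) (*tailcfg.SSHAction, error) {
	sessionLength, err := time.ParseDuration(period)
	if err != nil {
		return nil, err
	}

	return &tailcfg.SSHAction{
		Accept:                   true,
		SessionDuration:          sessionLength,
		AllowLocalPortForwarding: true,
	}, nil
}

func main() {
	action, err := checkAction("24h")
	if err != nil {
		panic(err)
	}
	fmt.Println(action.SessionDuration) // 24h0m0s
}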
|
||||
|
||||
// getIPsFromSource returns a set of Source IPs that would be associated
|
||||
// with the given src alias.
|
||||
func (pol *ACLPolicy) getIPsFromSource(
|
||||
src string,
|
||||
machines []Machine,
|
||||
stripEmaildomain bool,
|
||||
) ([]string, error) {
|
||||
ipSet, err := pol.expandAlias(machines, src, stripEmaildomain)
|
||||
if err != nil {
|
||||
return []string{}, err
|
||||
}
|
||||
|
||||
prefixes := []string{}
|
||||
|
||||
for _, prefix := range ipSet.Prefixes() {
|
||||
prefixes = append(prefixes, prefix.String())
|
||||
}
|
||||
|
||||
return prefixes, nil
|
||||
}
|
||||
|
||||
// getNetPortRangeFromDestination returns a set of tailcfg.NetPortRange
|
||||
// which are associated with the dest alias.
|
||||
func (pol *ACLPolicy) getNetPortRangeFromDestination(
|
||||
dest string,
|
||||
machines []Machine,
|
||||
needsWildcard bool,
|
||||
stripEmaildomain bool,
|
||||
) ([]tailcfg.NetPortRange, error) {
|
||||
var tokens []string
|
||||
|
||||
log.Trace().Str("destination", dest).Msg("generating policy destination")
|
||||
|
||||
// Check if there is a IPv4/6:Port combination, IPv6 has more than
|
||||
// three ":".
|
||||
tokens = strings.Split(dest, ":")
|
||||
if len(tokens) < expectedTokenItems || len(tokens) > 3 {
|
||||
port := tokens[len(tokens)-1]
|
||||
|
||||
maybeIPv6Str := strings.TrimSuffix(dest, ":"+port)
|
||||
log.Trace().Str("maybeIPv6Str", maybeIPv6Str).Msg("")
|
||||
|
||||
if maybeIPv6, err := netip.ParseAddr(maybeIPv6Str); err != nil && !maybeIPv6.Is6() {
|
||||
log.Trace().Err(err).Msg("trying to parse as IPv6")
|
||||
|
||||
return nil, fmt.Errorf(
|
||||
"failed to parse destination, tokens %v: %w",
|
||||
tokens,
|
||||
errInvalidPortFormat,
|
||||
)
|
||||
} else {
|
||||
tokens = []string{maybeIPv6Str, port}
|
||||
}
|
||||
}
|
||||
|
||||
log.Trace().Strs("tokens", tokens).Msg("generating policy destination")
|
||||
|
||||
var alias string
|
||||
// We can have here stuff like:
|
||||
// git-server:*
|
||||
// 192.168.1.0/24:22
|
||||
// fd7a:115c:a1e0::2:22
|
||||
// fd7a:115c:a1e0::2/128:22
|
||||
// tag:montreal-webserver:80,443
|
||||
// tag:api-server:443
|
||||
// example-host-1:*
|
||||
@@ -219,11 +413,10 @@ func (h *Headscale) generateACLPolicyDest(
|
||||
alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
|
||||
}
|
||||
|
||||
expanded, err := expandAlias(
|
||||
expanded, err := pol.expandAlias(
|
||||
machines,
|
||||
aclPolicy,
|
||||
alias,
|
||||
h.cfg.OIDC.StripEmaildomain,
|
||||
stripEmaildomain,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -234,11 +427,11 @@ func (h *Headscale) generateACLPolicyDest(
|
||||
}
|
||||
|
||||
dests := []tailcfg.NetPortRange{}
|
||||
for _, d := range expanded {
|
||||
for _, p := range *ports {
|
||||
for _, dest := range expanded.Prefixes() {
|
||||
for _, port := range *ports {
|
||||
pr := tailcfg.NetPortRange{
|
||||
IP: d,
|
||||
Ports: p,
|
||||
IP: dest.String(),
|
||||
Ports: port,
|
||||
}
|
||||
dests = append(dests, pr)
|
||||
}
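The destination parsing above peels the port off the last ":" and retries the remainder as an IPv6 address; a standalone sketch of that split, using example aliases from the comment above:

package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// splitDest separates "host:port" destinations, treating anything with more
// than two ":" tokens as a possible bare IPv6 address followed by a port.
func splitDest(dest string) (host, port string, err error) {
	tokens := strings.Split(dest, ":")
	if len(tokens) == 2 {
		return tokens[0], tokens[1], nil
	}

	port = tokens[len(tokens)-1]
	host = strings.TrimSuffix(dest, ":"+port)
	if _, err := netip.ParseAddr(host); err != nil {
		return "", "", fmt.Errorf("cannot parse destination %q: %w", dest, err)
	}

	return host, port, nil
}

func main() {
	fmt.Println(splitDest("git-server:*"))
	fmt.Println(splitDest("fd7a:115c:a1e0::2:22"))
}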
|
||||
@@ -260,12 +453,7 @@ func (h *Headscale) generateACLPolicyDest(
|
||||
func parseProtocol(protocol string) ([]int, bool, error) {
|
||||
switch protocol {
|
||||
case "":
|
||||
return []int{
|
||||
protocolICMP,
|
||||
protocolIPv6ICMP,
|
||||
protocolTCP,
|
||||
protocolUDP,
|
||||
}, false, nil
|
||||
return nil, false, nil
|
||||
case "igmp":
|
||||
return []int{protocolIGMP}, true, nil
|
||||
case "ipv4", "ip-in-ip":
|
||||
@@ -303,128 +491,81 @@ func parseProtocol(protocol string) ([]int, bool, error) {
|
||||
}
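A trimmed-down stand-in for parseProtocol, showing the new behaviour where an empty protocol yields no filter (nil) rather than the default set; the IANA numbers (TCP 6, UDP 17, IGMP 2) are standard, and the remaining cases are omitted here:

package main

import (
	"errors"
	"fmt"
)

// parseProto returns the protocol numbers to filter on and whether the
// protocol only supports wildcard ports (as IGMP does above).
func parseProto(protocol string) ([]int, bool, error) {
	switch protocol {
	case "":
		return nil, false, nil // no protocol filter
	case "tcp":
		return []int{6}, false, nil
	case "udp":
		return []int{17}, false, nil
	case "igmp":
		return []int{2}, true, nil
	default:
		return nil, false, errors.New("unknown protocol: " + protocol)
	}
}

func main() {
	protos, needsWildcard, err := parseProto("igmp")
	fmt.Println(protos, needsWildcard, err) // [2] true <nil>
}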
|
||||
|
||||
// expandAlias has an input of either
|
||||
// - a namespace
|
||||
// - a user
|
||||
// - a group
|
||||
// - a tag
|
||||
// - a host
|
||||
// - an ip
|
||||
// - a cidr
|
||||
// and transforms these into IP addresses.
|
||||
func expandAlias(
|
||||
machines []Machine,
|
||||
aclPolicy ACLPolicy,
|
||||
func (pol *ACLPolicy) expandAlias(
|
||||
machines Machines,
|
||||
alias string,
|
||||
stripEmailDomain bool,
|
||||
) ([]string, error) {
|
||||
ips := []string{}
|
||||
if alias == "*" {
|
||||
return []string{"*"}, nil
|
||||
) (*netipx.IPSet, error) {
|
||||
if isWildcard(alias) {
|
||||
return parseIPSet("*", nil)
|
||||
}
|
||||
|
||||
build := netipx.IPSetBuilder{}
|
||||
|
||||
log.Debug().
|
||||
Str("alias", alias).
|
||||
Msg("Expanding")
|
||||
|
||||
if strings.HasPrefix(alias, "group:") {
|
||||
namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
|
||||
if err != nil {
|
||||
return ips, err
|
||||
}
|
||||
for _, n := range namespaces {
|
||||
nodes := filterMachinesByNamespace(machines, n)
|
||||
for _, node := range nodes {
|
||||
ips = append(ips, node.IPAddresses.ToStringSlice()...)
|
||||
}
|
||||
}
|
||||
|
||||
return ips, nil
|
||||
// if alias is a group
|
||||
if isGroup(alias) {
|
||||
return pol.getIPsFromGroup(alias, machines, stripEmailDomain)
|
||||
}
|
||||
|
||||
if strings.HasPrefix(alias, "tag:") {
|
||||
// check for forced tags
|
||||
for _, machine := range machines {
|
||||
if contains(machine.ForcedTags, alias) {
|
||||
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
|
||||
}
|
||||
}
|
||||
|
||||
// find tag owners
|
||||
owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
|
||||
if err != nil {
|
||||
if errors.Is(err, errInvalidTag) {
|
||||
if len(ips) == 0 {
|
||||
return ips, fmt.Errorf(
|
||||
"%w. %v isn't owned by a TagOwner and no forced tags are defined",
|
||||
errInvalidTag,
|
||||
alias,
|
||||
)
|
||||
}
|
||||
|
||||
return ips, nil
|
||||
} else {
|
||||
return ips, err
|
||||
}
|
||||
}
|
||||
|
||||
// filter out machines per tag owner
|
||||
for _, namespace := range owners {
|
||||
machines := filterMachinesByNamespace(machines, namespace)
|
||||
for _, machine := range machines {
|
||||
hi := machine.GetHostInfo()
|
||||
if contains(hi.RequestTags, alias) {
|
||||
ips = append(ips, machine.IPAddresses.ToStringSlice()...)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return ips, nil
|
||||
// if alias is a tag
|
||||
if isTag(alias) {
|
||||
return pol.getIPsFromTag(alias, machines, stripEmailDomain)
|
||||
}
|
||||
|
||||
// if alias is a namespace
|
||||
nodes := filterMachinesByNamespace(machines, alias)
|
||||
nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)
|
||||
|
||||
for _, n := range nodes {
|
||||
ips = append(ips, n.IPAddresses.ToStringSlice()...)
|
||||
}
|
||||
if len(ips) > 0 {
|
||||
return ips, nil
|
||||
// if alias is a user
|
||||
if ips, err := pol.getIPsForUser(alias, machines, stripEmailDomain); ips != nil {
|
||||
return ips, err
|
||||
}
|
||||
|
||||
// if alias is a host
|
||||
if h, ok := aclPolicy.Hosts[alias]; ok {
|
||||
return []string{h.String()}, nil
|
||||
// Note, this is recursive.
|
||||
if h, ok := pol.Hosts[alias]; ok {
|
||||
log.Trace().Str("host", h.String()).Msg("expandAlias got hosts entry")
|
||||
|
||||
return pol.expandAlias(machines, h.String(), stripEmailDomain)
|
||||
}
|
||||
|
||||
// if alias is an IP
|
||||
ip, err := netip.ParseAddr(alias)
|
||||
if err == nil {
|
||||
return []string{ip.String()}, nil
|
||||
if ip, err := netip.ParseAddr(alias); err == nil {
|
||||
return pol.getIPsFromSingleIP(ip, machines)
|
||||
}
|
||||
|
||||
// if alias is an CIDR
|
||||
cidr, err := netip.ParsePrefix(alias)
|
||||
if err == nil {
|
||||
return []string{cidr.String()}, nil
|
||||
// if alias is an IP Prefix (CIDR)
|
||||
if prefix, err := netip.ParsePrefix(alias); err == nil {
|
||||
return pol.getIPsFromIPPrefix(prefix, machines)
|
||||
}
|
||||
|
||||
log.Warn().Msgf("No IPs found with the alias %v", alias)
|
||||
|
||||
return ips, nil
|
||||
return build.IPSet()
|
||||
}
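expandAlias now returns a *netipx.IPSet instead of a string slice; a minimal sketch of building and reading such a set with go4.org/netipx (the addresses are arbitrary examples):

package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
)

func main() {
	var build netipx.IPSetBuilder
	build.Add(netip.MustParseAddr("100.64.0.1"))             // a single node address
	build.AddPrefix(netip.MustParsePrefix("192.168.1.0/24")) // a CIDR alias

	ipSet, err := build.IPSet()
	if err != nil {
		panic(err)
	}

	for _, prefix := range ipSet.Prefixes() {
		fmt.Println(prefix) // 100.64.0.1/32, then 192.168.1.0/24
	}
}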
|
||||
|
||||
// excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
|
||||
// that are correctly tagged since they should not be listed as being in the namespace
|
||||
// we assume in this function that we only have nodes from 1 namespace.
|
||||
// that are correctly tagged since they should not be listed as being in the user
|
||||
// we assume in this function that we only have nodes from 1 user.
|
||||
func excludeCorrectlyTaggedNodes(
|
||||
aclPolicy ACLPolicy,
|
||||
aclPolicy *ACLPolicy,
|
||||
nodes []Machine,
|
||||
namespace string,
|
||||
user string,
|
||||
stripEmailDomain bool,
|
||||
) []Machine {
|
||||
out := []Machine{}
|
||||
tags := []string{}
|
||||
for tag := range aclPolicy.TagOwners {
|
||||
owners, _ := expandTagOwners(aclPolicy, namespace, stripEmailDomain)
|
||||
ns := append(owners, namespace)
|
||||
if contains(ns, namespace) {
|
||||
owners, _ := getTagOwners(aclPolicy, user, stripEmailDomain)
|
||||
ns := append(owners, user)
|
||||
if contains(ns, user) {
|
||||
tags = append(tags, tag)
|
||||
}
|
||||
}
|
||||
@@ -452,7 +593,7 @@ func excludeCorrectlyTaggedNodes(
|
||||
}
|
||||
|
||||
func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
|
||||
if portsStr == "*" {
|
||||
if isWildcard(portsStr) {
|
||||
return &[]tailcfg.PortRange{
|
||||
{First: portRangeBegin, Last: portRangeEnd},
|
||||
}, nil
|
||||
@@ -464,6 +605,7 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
|
||||
|
||||
ports := []tailcfg.PortRange{}
|
||||
for _, portStr := range strings.Split(portsStr, ",") {
|
||||
log.Trace().Msgf("parsing portstring: %s", portStr)
|
||||
rang := strings.Split(portStr, "-")
|
||||
switch len(rang) {
|
||||
case 1:
|
||||
@@ -498,10 +640,10 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, err
|
||||
return &ports, nil
|
||||
}
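A simplified stand-in for expandPorts: "*" expands to the full range, otherwise comma-separated ports and "low-high" ranges become tailcfg.PortRange values (the needsWildcard check and stricter validation are omitted in this sketch):

package main

import (
	"fmt"
	"strconv"
	"strings"

	"tailscale.com/tailcfg"
)

func parsePorts(portsStr string) ([]tailcfg.PortRange, error) {
	if portsStr == "*" {
		return []tailcfg.PortRange{{First: 0, Last: 65535}}, nil
	}

	ports := []tailcfg.PortRange{}
	for _, portStr := range strings.Split(portsStr, ",") {
		bounds := strings.Split(portStr, "-")
		first, err := strconv.ParseUint(bounds[0], 10, 16)
		if err != nil {
			return nil, err
		}
		last := first
		if len(bounds) == 2 {
			if last, err = strconv.ParseUint(bounds[1], 10, 16); err != nil {
				return nil, err
			}
		}
		ports = append(ports, tailcfg.PortRange{
			First: uint16(first),
			Last:  uint16(last),
		})
	}

	return ports, nil
}

func main() {
	ports, _ := parsePorts("80,443,8000-8010")
	fmt.Println(ports)
}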
|
||||
|
||||
func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
|
||||
func filterMachinesByUser(machines []Machine, user string) []Machine {
|
||||
out := []Machine{}
|
||||
for _, machine := range machines {
|
||||
if machine.Namespace.Name == namespace {
|
||||
if machine.User.Name == user {
|
||||
out = append(out, machine)
|
||||
}
|
||||
}
|
||||
@@ -509,15 +651,15 @@ func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
|
||||
return out
|
||||
}
|
||||
|
||||
// expandTagOwners will return a list of namespace. An owner can be either a namespace or a group
|
||||
// getTagOwners will return a list of users. An owner can be either a user or a group
|
||||
// a group cannot be composed of groups.
|
||||
func expandTagOwners(
|
||||
aclPolicy ACLPolicy,
|
||||
func getTagOwners(
|
||||
pol *ACLPolicy,
|
||||
tag string,
|
||||
stripEmailDomain bool,
|
||||
) ([]string, error) {
|
||||
var owners []string
|
||||
ows, ok := aclPolicy.TagOwners[tag]
|
||||
ows, ok := pol.TagOwners[tag]
|
||||
if !ok {
|
||||
return []string{}, fmt.Errorf(
|
||||
"%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
|
||||
@@ -526,8 +668,8 @@ func expandTagOwners(
|
||||
)
|
||||
}
|
||||
for _, owner := range ows {
|
||||
if strings.HasPrefix(owner, "group:") {
|
||||
gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
|
||||
if isGroup(owner) {
|
||||
gs, err := pol.getUsersInGroup(owner, stripEmailDomain)
|
||||
if err != nil {
|
||||
return []string{}, err
|
||||
}
|
||||
@@ -540,15 +682,15 @@ func expandTagOwners(
|
||||
return owners, nil
|
||||
}
|
||||
|
||||
// expandGroup will return the list of namespace inside the group
|
||||
// getUsersInGroup will return the list of users inside the group
|
||||
// after some validation.
|
||||
func expandGroup(
|
||||
aclPolicy ACLPolicy,
|
||||
func (pol *ACLPolicy) getUsersInGroup(
|
||||
group string,
|
||||
stripEmailDomain bool,
|
||||
) ([]string, error) {
|
||||
outGroups := []string{}
|
||||
aclGroups, ok := aclPolicy.Groups[group]
|
||||
users := []string{}
|
||||
log.Trace().Caller().Interface("pol", pol).Msg("test")
|
||||
aclGroups, ok := pol.Groups[group]
|
||||
if !ok {
|
||||
return []string{}, fmt.Errorf(
|
||||
"group %v isn't registered. %w",
|
||||
@@ -557,7 +699,7 @@ func expandGroup(
|
||||
)
|
||||
}
|
||||
for _, group := range aclGroups {
|
||||
if strings.HasPrefix(group, "group:") {
|
||||
if isGroup(group) {
|
||||
return []string{}, fmt.Errorf(
|
||||
"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
|
||||
errInvalidGroup,
|
||||
@@ -571,8 +713,151 @@ func expandGroup(
|
||||
errInvalidGroup,
|
||||
)
|
||||
}
|
||||
outGroups = append(outGroups, grp)
|
||||
users = append(users, grp)
|
||||
}
|
||||
|
||||
return outGroups, nil
|
||||
return users, nil
|
||||
}
|
||||
|
||||
func (pol *ACLPolicy) getIPsFromGroup(
|
||||
group string,
|
||||
machines Machines,
|
||||
stripEmailDomain bool,
|
||||
) (*netipx.IPSet, error) {
|
||||
build := netipx.IPSetBuilder{}
|
||||
|
||||
users, err := pol.getUsersInGroup(group, stripEmailDomain)
|
||||
if err != nil {
|
||||
return &netipx.IPSet{}, err
|
||||
}
|
||||
for _, user := range users {
|
||||
filteredMachines := filterMachinesByUser(machines, user)
|
||||
for _, machine := range filteredMachines {
|
||||
machine.IPAddresses.AppendToIPSet(&build)
|
||||
}
|
||||
}
|
||||
|
||||
return build.IPSet()
|
||||
}
|
||||
|
||||
func (pol *ACLPolicy) getIPsFromTag(
|
||||
alias string,
|
||||
machines Machines,
|
||||
stripEmailDomain bool,
|
||||
) (*netipx.IPSet, error) {
|
||||
build := netipx.IPSetBuilder{}
|
||||
|
||||
// check for forced tags
|
||||
for _, machine := range machines {
|
||||
if contains(machine.ForcedTags, alias) {
|
||||
machine.IPAddresses.AppendToIPSet(&build)
|
||||
}
|
||||
}
|
||||
|
||||
// find tag owners
|
||||
owners, err := getTagOwners(pol, alias, stripEmailDomain)
|
||||
if err != nil {
|
||||
if errors.Is(err, errInvalidTag) {
|
||||
ipSet, _ := build.IPSet()
|
||||
if len(ipSet.Prefixes()) == 0 {
|
||||
return ipSet, fmt.Errorf(
|
||||
"%w. %v isn't owned by a TagOwner and no forced tags are defined",
|
||||
errInvalidTag,
|
||||
alias,
|
||||
)
|
||||
}
|
||||
|
||||
return build.IPSet()
|
||||
} else {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
// filter out machines per tag owner
|
||||
for _, user := range owners {
|
||||
machines := filterMachinesByUser(machines, user)
|
||||
for _, machine := range machines {
|
||||
hi := machine.GetHostInfo()
|
||||
if contains(hi.RequestTags, alias) {
|
||||
machine.IPAddresses.AppendToIPSet(&build)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return build.IPSet()
|
||||
}
|
||||
|
||||
func (pol *ACLPolicy) getIPsForUser(
|
||||
user string,
|
||||
machines Machines,
|
||||
stripEmailDomain bool,
|
||||
) (*netipx.IPSet, error) {
|
||||
build := netipx.IPSetBuilder{}
|
||||
|
||||
filteredMachines := filterMachinesByUser(machines, user)
|
||||
filteredMachines = excludeCorrectlyTaggedNodes(pol, filteredMachines, user, stripEmailDomain)
|
||||
|
||||
// short-circuit if we have no machines to get IPs from.
|
||||
if len(filteredMachines) == 0 {
|
||||
return nil, nil //nolint
|
||||
}
|
||||
|
||||
for _, machine := range filteredMachines {
|
||||
machine.IPAddresses.AppendToIPSet(&build)
|
||||
}
|
||||
|
||||
return build.IPSet()
|
||||
}
|
||||
|
||||
func (pol *ACLPolicy) getIPsFromSingleIP(
|
||||
ip netip.Addr,
|
||||
machines Machines,
|
||||
) (*netipx.IPSet, error) {
|
||||
log.Trace().Str("ip", ip.String()).Msg("expandAlias got ip")
|
||||
|
||||
matches := machines.FilterByIP(ip)
|
||||
|
||||
build := netipx.IPSetBuilder{}
|
||||
build.Add(ip)
|
||||
|
||||
for _, machine := range matches {
|
||||
machine.IPAddresses.AppendToIPSet(&build)
|
||||
}
|
||||
|
||||
return build.IPSet()
|
||||
}
|
||||
|
||||
func (pol *ACLPolicy) getIPsFromIPPrefix(
|
||||
prefix netip.Prefix,
|
||||
machines Machines,
|
||||
) (*netipx.IPSet, error) {
|
||||
log.Trace().Str("prefix", prefix.String()).Msg("expandAlias got prefix")
|
||||
build := netipx.IPSetBuilder{}
|
||||
build.AddPrefix(prefix)
|
||||
|
||||
// This is suboptimal and quite expensive, but if we only add the prefix, we will miss all the relevant IPv6
|
||||
// addresses for the hosts that belong to tailscale. This doesn't really affect stuff like subnet routers.
|
||||
for _, machine := range machines {
|
||||
for _, ip := range machine.IPAddresses {
|
||||
// log.Trace().
|
||||
// Msgf("checking if machine ip (%s) is part of prefix (%s): %v, is single ip prefix (%v), addr: %s", ip.String(), prefix.String(), prefix.Contains(ip), prefix.IsSingleIP(), prefix.Addr().String())
|
||||
if prefix.Contains(ip) {
|
||||
machine.IPAddresses.AppendToIPSet(&build)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return build.IPSet()
|
||||
}
|
||||
|
||||
func isWildcard(str string) bool {
|
||||
return str == "*"
|
||||
}
|
||||
|
||||
func isGroup(str string) bool {
|
||||
return strings.HasPrefix(str, "group:")
|
||||
}
|
||||
|
||||
func isTag(str string) bool {
|
||||
return strings.HasPrefix(str, "tag:")
|
||||
}
|
||||
|
817 acls_test.go (file diff suppressed because it is too large)
@@ -17,6 +17,7 @@ type ACLPolicy struct {
|
||||
ACLs []ACL `json:"acls" yaml:"acls"`
|
||||
Tests []ACLTest `json:"tests" yaml:"tests"`
|
||||
AutoApprovers AutoApprovers `json:"autoApprovers" yaml:"autoApprovers"`
|
||||
SSHs []SSH `json:"ssh" yaml:"ssh"`
|
||||
}
|
||||
|
||||
// ACL is a basic rule for the ACL Policy.
|
||||
@@ -33,7 +34,7 @@ type Groups map[string][]string
|
||||
// Hosts are alias for IP addresses or subnets.
|
||||
type Hosts map[string]netip.Prefix
|
||||
|
||||
// TagOwners specify what users (namespaces?) are allow to use certain tags.
|
||||
// TagOwners specify what users (users?) are allowed to use certain tags.
|
||||
type TagOwners map[string][]string
|
||||
|
||||
// ACLTest is not implemented, but should be use to check if a certain rule is allowed.
|
||||
@@ -43,13 +44,22 @@ type ACLTest struct {
|
||||
Deny []string `json:"deny,omitempty" yaml:"deny,omitempty"`
|
||||
}
|
||||
|
||||
// AutoApprovers specify which users (namespaces?), groups or tags have their advertised routes
|
||||
// AutoApprovers specify which users (users?), groups or tags have their advertised routes
|
||||
// or exit node status automatically enabled.
|
||||
type AutoApprovers struct {
|
||||
Routes map[string][]string `json:"routes" yaml:"routes"`
|
||||
ExitNode []string `json:"exitNode" yaml:"exitNode"`
|
||||
}
|
||||
|
||||
// SSH controls who can ssh into which machines.
|
||||
type SSH struct {
|
||||
Action string `json:"action" yaml:"action"`
|
||||
Sources []string `json:"src" yaml:"src"`
|
||||
Destinations []string `json:"dst" yaml:"dst"`
|
||||
Users []string `json:"users" yaml:"users"`
|
||||
CheckPeriod string `json:"checkPeriod,omitempty" yaml:"checkPeriod,omitempty"`
|
||||
}
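The struct tags above let an SSH rule be read straight from the policy file; a small sketch unmarshalling one JSON rule into a locally declared mirror of the type (the group, tag and user names are made up for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// sshRule mirrors the SSH struct above, with the same JSON tags.
type sshRule struct {
	Action       string   `json:"action"`
	Sources      []string `json:"src"`
	Destinations []string `json:"dst"`
	Users        []string `json:"users"`
	CheckPeriod  string   `json:"checkPeriod,omitempty"`
}

func main() {
	raw := `{
		"action": "check",
		"src": ["group:admins"],
		"dst": ["tag:prod"],
		"users": ["root"],
		"checkPeriod": "24h"
	}`

	var rule sshRule
	if err := json.Unmarshal([]byte(raw), &rule); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", rule)
}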
|
||||
|
||||
// UnmarshalJSON allows to parse the Hosts directly into netip objects.
|
||||
func (hosts *Hosts) UnmarshalJSON(data []byte) error {
|
||||
newHosts := Hosts{}
|
||||
@@ -101,15 +111,15 @@ func (hosts *Hosts) UnmarshalYAML(data []byte) error {
|
||||
}
|
||||
|
||||
// IsZero is perhaps a bit naive here.
|
||||
func (policy ACLPolicy) IsZero() bool {
|
||||
if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
|
||||
func (pol ACLPolicy) IsZero() bool {
|
||||
if len(pol.Groups) == 0 && len(pol.Hosts) == 0 && len(pol.ACLs) == 0 {
|
||||
return true
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// Returns the list of autoApproving namespaces, groups or tags for a given IPPrefix.
|
||||
// Returns the list of autoApproving users, groups or tags for a given IPPrefix.
|
||||
func (autoApprovers *AutoApprovers) GetRouteApprovers(
|
||||
prefix netip.Prefix,
|
||||
) ([]string, error) {
|
||||
|
2 api.go
@@ -78,7 +78,7 @@ var registerWebAPITemplate = template.Must(
|
||||
<p>
|
||||
Run the command below in the headscale server to add this machine to your network:
|
||||
</p>
|
||||
<pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
|
||||
<pre><code>headscale nodes register --user USERNAME --key {{.Key}}</code></pre>
|
||||
</body>
|
||||
</html>
|
||||
`))
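The register page is a template.Must-wrapped template keyed on {{.Key}}; the same pattern reduced to just the command line (the node key value is a placeholder):

package main

import (
	"os"
	"text/template"
)

var registerCmdTemplate = template.Must(template.New("cmd").Parse(
	"headscale nodes register --user USERNAME --key {{.Key}}\n"))

func main() {
	// Renders: headscale nodes register --user USERNAME --key nodekey:abc123
	_ = registerCmdTemplate.Execute(os.Stdout, struct{ Key string }{Key: "nodekey:abc123"})
}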
|
||||
|
@@ -1,6 +1,8 @@
|
||||
package headscale
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/rs/zerolog/log"
|
||||
"tailscale.com/tailcfg"
|
||||
)
|
||||
@@ -13,7 +15,7 @@ func (h *Headscale) generateMapResponse(
|
||||
Str("func", "generateMapResponse").
|
||||
Str("machine", mapRequest.Hostinfo.Hostname).
|
||||
Msg("Creating Map response")
|
||||
node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig)
|
||||
node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Caller().
|
||||
@@ -35,9 +37,9 @@ func (h *Headscale) generateMapResponse(
|
||||
return nil, err
|
||||
}
|
||||
|
||||
profiles := getMapResponseUserProfiles(*machine, peers)
|
||||
profiles := h.getMapResponseUserProfiles(*machine, peers)
|
||||
|
||||
nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig)
|
||||
nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Caller().
|
||||
@@ -55,15 +57,46 @@ func (h *Headscale) generateMapResponse(
|
||||
peers,
|
||||
)
|
||||
|
||||
now := time.Now()
|
||||
|
||||
resp := tailcfg.MapResponse{
|
||||
KeepAlive: false,
|
||||
Node: node,
|
||||
Peers: nodePeers,
|
||||
DNSConfig: dnsConfig,
|
||||
Domain: h.cfg.BaseDomain,
|
||||
KeepAlive: false,
|
||||
Node: node,
|
||||
|
||||
// TODO: Only send if updated
|
||||
DERPMap: h.DERPMap,
|
||||
|
||||
// TODO: Only send if updated
|
||||
Peers: nodePeers,
|
||||
|
||||
// TODO(kradalby): Implement:
|
||||
// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
|
||||
// PeersChanged
|
||||
// PeersRemoved
|
||||
// PeersChangedPatch
|
||||
// PeerSeenChange
|
||||
// OnlineChange
|
||||
|
||||
// TODO: Only send if updated
|
||||
DNSConfig: dnsConfig,
|
||||
|
||||
// TODO: Only send if updated
|
||||
Domain: h.cfg.BaseDomain,
|
||||
|
||||
// Do not instruct clients to collect services, we do not
|
||||
// support or do anything with them
|
||||
CollectServices: "false",
|
||||
|
||||
// TODO: Only send if updated
|
||||
PacketFilter: h.aclRules,
|
||||
DERPMap: h.DERPMap,
|
||||
|
||||
UserProfiles: profiles,
|
||||
|
||||
// TODO: Only send if updated
|
||||
SSHPolicy: h.sshPolicy,
|
||||
|
||||
ControlTime: &now,
|
||||
|
||||
Debug: &tailcfg.Debug{
|
||||
DisableLogTail: !h.cfg.LogTail.Enabled,
|
||||
RandomizeClientPort: h.cfg.RandomizeClientPort,
|
||||
|
@@ -29,7 +29,7 @@ type APIKey struct {
|
||||
LastSeen *time.Time
|
||||
}
|
||||
|
||||
// CreateAPIKey creates a new ApiKey in a namespace, and returns it.
|
||||
// CreateAPIKey creates a new ApiKey in a user, and returns it.
|
||||
func (h *Headscale) CreateAPIKey(
|
||||
expiration *time.Time,
|
||||
) (string, *APIKey, error) {
|
||||
@@ -64,7 +64,7 @@ func (h *Headscale) CreateAPIKey(
|
||||
return keyStr, &key, nil
|
||||
}
|
||||
|
||||
// ListAPIKeys returns the list of ApiKeys for a namespace.
|
||||
// ListAPIKeys returns the list of ApiKeys for a user.
|
||||
func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
|
||||
keys := []APIKey{}
|
||||
if err := h.db.Find(&keys).Error; err != nil {
|
||||
|
184 app.go
@@ -11,6 +11,7 @@ import (
|
||||
"os"
|
||||
"os/signal"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"syscall"
|
||||
@@ -51,12 +52,6 @@ const (
|
||||
errUnsupportedLetsEncryptChallengeType = Error(
|
||||
"unknown value for Lets Encrypt challenge type",
|
||||
)
|
||||
|
||||
ErrFailedPrivateKey = Error("failed to read or create private key")
|
||||
ErrFailedNoisePrivateKey = Error(
|
||||
"failed to read or create Noise protocol private key",
|
||||
)
|
||||
ErrSamePrivateKeys = Error("private key and noise private key are the same")
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -86,13 +81,12 @@ type Headscale struct {
|
||||
privateKey *key.MachinePrivate
|
||||
noisePrivateKey *key.MachinePrivate
|
||||
|
||||
noiseMux *mux.Router
|
||||
|
||||
DERPMap *tailcfg.DERPMap
|
||||
DERPServer *DERPServer
|
||||
|
||||
aclPolicy *ACLPolicy
|
||||
aclRules []tailcfg.FilterRule
|
||||
sshPolicy *tailcfg.SSHPolicy
|
||||
|
||||
lastStateChange *xsync.MapOf[string, time.Time]
|
||||
|
||||
@@ -107,41 +101,20 @@ type Headscale struct {
|
||||
pollNetMapStreamWG sync.WaitGroup
|
||||
}
|
||||
|
||||
// Look up the TLS constant relative to user-supplied TLS client
|
||||
// authentication mode. If an unknown mode is supplied, the default
|
||||
// value, tls.RequireAnyClientCert, is returned. The returned boolean
|
||||
// indicates if the supplied mode was valid.
|
||||
func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
|
||||
switch mode {
|
||||
case DisabledClientAuth:
|
||||
// Client cert is _not_ required.
|
||||
return tls.NoClientCert, true
|
||||
case RelaxedClientAuth:
|
||||
// Client cert required, but _not verified_.
|
||||
return tls.RequireAnyClientCert, true
|
||||
case EnforcedClientAuth:
|
||||
// Client cert is _required and verified_.
|
||||
return tls.RequireAndVerifyClientCert, true
|
||||
default:
|
||||
// Return the default when an unknown value is supplied.
|
||||
return tls.RequireAnyClientCert, false
|
||||
}
|
||||
}
|
||||
|
||||
func NewHeadscale(cfg *Config) (*Headscale, error) {
|
||||
privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
|
||||
if err != nil {
|
||||
return nil, ErrFailedPrivateKey
|
||||
return nil, fmt.Errorf("failed to read or create private key: %w", err)
|
||||
}
|
||||
|
||||
// TS2021 requires a different key from the legacy protocol.
|
||||
noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
|
||||
if err != nil {
|
||||
return nil, ErrFailedNoisePrivateKey
|
||||
return nil, fmt.Errorf("failed to read or create Noise protocol private key: %w", err)
|
||||
}
|
||||
|
||||
if privateKey.Equal(*noisePrivateKey) {
|
||||
return nil, ErrSamePrivateKeys
|
||||
return nil, fmt.Errorf("private key and noise private key are the same: %w", err)
|
||||
}
|
||||
|
||||
var dbString string
|
||||
@@ -154,8 +127,12 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
|
||||
cfg.DBuser,
|
||||
)
|
||||
|
||||
if !cfg.DBssl {
|
||||
dbString += " sslmode=disable"
|
||||
if sslEnabled, err := strconv.ParseBool(cfg.DBssl); err == nil {
|
||||
if !sslEnabled {
|
||||
dbString += " sslmode=disable"
|
||||
}
|
||||
} else {
|
||||
dbString += fmt.Sprintf(" sslmode=%s", cfg.DBssl)
|
||||
}
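db_ssl now accepts either a boolean or a libpq sslmode string; a sketch of the same decision in isolation (the value list in main is illustrative):

package main

import (
	"fmt"
	"strconv"
)

// sslModeFragment returns the DSN fragment appended for a given db_ssl value:
// boolean values keep the old behaviour, anything else is passed through as an
// sslmode such as "require" or "verify-full".
func sslModeFragment(dbssl string) string {
	if sslEnabled, err := strconv.ParseBool(dbssl); err == nil {
		if !sslEnabled {
			return " sslmode=disable"
		}
		return "" // "true": leave the driver default
	}

	return fmt.Sprintf(" sslmode=%s", dbssl)
}

func main() {
	for _, value := range []string{"false", "true", "require", "verify-full"} {
		fmt.Printf("%q -> %q\n", value, sslModeFragment(value))
	}
}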
|
||||
|
||||
if cfg.DBport != 0 {
|
||||
@@ -185,6 +162,7 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
|
||||
aclRules: tailcfg.FilterAllowAll, // default allowall
|
||||
registrationCache: registrationCache,
|
||||
pollNetMapStreamWG: sync.WaitGroup{},
|
||||
lastStateChange: xsync.NewMapOf[time.Time](),
|
||||
}
|
||||
|
||||
err = app.initDB()
|
||||
@@ -240,29 +218,47 @@ func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
|
||||
}
|
||||
}
|
||||
|
||||
// expireExpiredMachines expires machines that have an explicit expiry set
|
||||
// after that expiry time has passed.
|
||||
func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
|
||||
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
|
||||
for range ticker.C {
|
||||
h.expireExpiredMachinesWorker()
|
||||
}
|
||||
}
|
||||
|
||||
func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
|
||||
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
|
||||
for range ticker.C {
|
||||
err := h.handlePrimarySubnetFailover()
|
||||
if err != nil {
|
||||
log.Error().Err(err).Msg("failed to handle primary subnet failover")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (h *Headscale) expireEphemeralNodesWorker() {
|
||||
namespaces, err := h.ListNamespaces()
|
||||
users, err := h.ListUsers()
|
||||
if err != nil {
|
||||
log.Error().Err(err).Msg("Error listing namespaces")
|
||||
log.Error().Err(err).Msg("Error listing users")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
for _, namespace := range namespaces {
|
||||
machines, err := h.ListMachinesInNamespace(namespace.Name)
|
||||
for _, user := range users {
|
||||
machines, err := h.ListMachinesByUser(user.Name)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Err(err).
|
||||
Str("namespace", namespace.Name).
|
||||
Msg("Error listing machines in namespace")
|
||||
Str("user", user.Name).
|
||||
Msg("Error listing machines in user")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
expiredFound := false
|
||||
for _, machine := range machines {
|
||||
if machine.AuthKey != nil && machine.LastSeen != nil &&
|
||||
machine.AuthKey.Ephemeral &&
|
||||
if machine.isEphemeral() && machine.LastSeen != nil &&
|
||||
time.Now().
|
||||
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
|
||||
expiredFound = true
|
||||
@@ -286,6 +282,53 @@ func (h *Headscale) expireEphemeralNodesWorker() {
|
||||
}
|
||||
}
|
||||
|
||||
func (h *Headscale) expireExpiredMachinesWorker() {
|
||||
users, err := h.ListUsers()
|
||||
if err != nil {
|
||||
log.Error().Err(err).Msg("Error listing users")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
for _, user := range users {
|
||||
machines, err := h.ListMachinesByUser(user.Name)
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Err(err).
|
||||
Str("user", user.Name).
|
||||
Msg("Error listing machines in user")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
expiredFound := false
|
||||
for index, machine := range machines {
|
||||
if machine.isExpired() &&
|
||||
machine.Expiry.After(h.getLastStateChange(user)) {
|
||||
expiredFound = true
|
||||
|
||||
err := h.ExpireMachine(&machines[index])
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Err(err).
|
||||
Str("machine", machine.Hostname).
|
||||
Str("name", machine.GivenName).
|
||||
Msg("🤮 Cannot expire machine")
|
||||
} else {
|
||||
log.Info().
|
||||
Str("machine", machine.Hostname).
|
||||
Str("name", machine.GivenName).
|
||||
Msg("Machine successfully expired")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if expiredFound {
|
||||
h.setLastStateChangeToNow()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
|
||||
req interface{},
|
||||
info *grpc.UnaryServerInfo,
|
||||
@@ -454,9 +497,8 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
|
||||
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet)
|
||||
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet)
|
||||
router.HandleFunc("/register/{nkey}", h.RegisterWebAPI).Methods(http.MethodGet)
|
||||
router.HandleFunc("/machine/{mkey}/map", h.PollNetMapHandler).
|
||||
Methods(http.MethodPost)
|
||||
router.HandleFunc("/machine/{mkey}", h.RegistrationHandler).Methods(http.MethodPost)
|
||||
h.addLegacyHandlers(router)
|
||||
|
||||
router.HandleFunc("/oidc/register/{nkey}", h.RegisterOIDC).Methods(http.MethodGet)
|
||||
router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet)
|
||||
router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet)
|
||||
@@ -479,17 +521,7 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
|
||||
apiRouter.Use(h.httpAuthenticationMiddleware)
|
||||
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
|
||||
|
||||
router.PathPrefix("/").HandlerFunc(stdoutHandler)
|
||||
|
||||
return router
|
||||
}
|
||||
|
||||
func (h *Headscale) createNoiseMux() *mux.Router {
|
||||
router := mux.NewRouter()
|
||||
|
||||
router.HandleFunc("/machine/register", h.NoiseRegistrationHandler).
|
||||
Methods(http.MethodPost)
|
||||
router.HandleFunc("/machine/map", h.NoisePollNetMapHandler)
|
||||
router.PathPrefix("/").HandlerFunc(notFoundHandler)
|
||||
|
||||
return router
|
||||
}
|
||||
@@ -518,6 +550,9 @@ func (h *Headscale) Serve() error {
|
||||
}
|
||||
|
||||
go h.expireEphemeralNodes(updateInterval)
|
||||
go h.expireExpiredMachines(updateInterval)
|
||||
|
||||
go h.failoverSubnetRoutes(updateInterval)
|
||||
|
||||
if zl.GlobalLevel() == zl.TraceLevel {
|
||||
zerolog.RespLog = true
|
||||
@@ -651,12 +686,6 @@ func (h *Headscale) Serve() error {
|
||||
// over our main Addr. It also serves the legacy Tailscale API
|
||||
router := h.createRouter(grpcGatewayMux)
|
||||
|
||||
// This router is served only over the Noise connection, and exposes only the new API.
|
||||
//
|
||||
// The HTTP2 server that exposes this router is created for
|
||||
// a single hijacked connection from /ts2021, using netutil.NewOneConnListener
|
||||
h.noiseMux = h.createNoiseMux()
|
||||
|
||||
httpServer := &http.Server{
|
||||
Addr: h.cfg.Addr,
|
||||
Handler: router,
|
||||
@@ -789,7 +818,6 @@ func (h *Headscale) Serve() error {
|
||||
|
||||
// And we're done:
|
||||
cancel()
|
||||
os.Exit(0)
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -862,12 +890,7 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
|
||||
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
|
||||
}
|
||||
|
||||
log.Info().Msg(fmt.Sprintf(
|
||||
"Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
|
||||
h.cfg.TLS.ClientAuthMode))
|
||||
|
||||
tlsConfig := &tls.Config{
|
||||
ClientAuth: h.cfg.TLS.ClientAuthMode,
|
||||
NextProtos: []string{"http/1.1"},
|
||||
Certificates: make([]tls.Certificate, 1),
|
||||
MinVersion: tls.VersionTLS12,
|
||||
@@ -884,31 +907,31 @@ func (h *Headscale) setLastStateChangeToNow() {
|
||||
|
||||
now := time.Now().UTC()
|
||||
|
||||
namespaces, err := h.ListNamespaces()
|
||||
users, err := h.ListUsers()
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Caller().
|
||||
Err(err).
|
||||
Msg("failed to fetch all namespaces, failing to update last changed state.")
|
||||
Msg("failed to fetch all users, failing to update last changed state.")
|
||||
}
|
||||
|
||||
for _, namespace := range namespaces {
|
||||
lastStateUpdate.WithLabelValues(namespace.Name, "headscale").Set(float64(now.Unix()))
|
||||
for _, user := range users {
|
||||
lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix()))
|
||||
if h.lastStateChange == nil {
|
||||
h.lastStateChange = xsync.NewMapOf[time.Time]()
|
||||
}
|
||||
h.lastStateChange.Store(namespace.Name, now)
|
||||
h.lastStateChange.Store(user.Name, now)
|
||||
}
|
||||
}
|
||||
|
||||
func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
|
||||
func (h *Headscale) getLastStateChange(users ...User) time.Time {
|
||||
times := []time.Time{}
|
||||
|
||||
// getLastStateChange takes a list of namespaces as a "filter", if no namespaces
|
||||
// are past, then use the entier list of namespaces and look for the last update
|
||||
if len(namespaces) > 0 {
|
||||
for _, namespace := range namespaces {
|
||||
if lastChange, ok := h.lastStateChange.Load(namespace.Name); ok {
|
||||
// getLastStateChange takes a list of users as a "filter", if no users
|
||||
// are passed, then use the entire list of users and look for the last update
|
||||
if len(users) > 0 {
|
||||
for _, user := range users {
|
||||
if lastChange, ok := h.lastStateChange.Load(user.Name); ok {
|
||||
times = append(times, lastChange)
|
||||
}
|
||||
}
|
||||
@@ -933,7 +956,7 @@ func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
|
||||
}
|
||||
}
|
||||
|
||||
func stdoutHandler(
|
||||
func notFoundHandler(
|
||||
writer http.ResponseWriter,
|
||||
req *http.Request,
|
||||
) {
|
||||
@@ -945,6 +968,7 @@ func stdoutHandler(
|
||||
Interface("url", req.URL).
|
||||
Bytes("body", body).
|
||||
Msg("Request did not match")
|
||||
writer.WriteHeader(http.StatusNotFound)
|
||||
}
|
||||
|
||||
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
|
||||
|
17 app_test.go
@@ -59,20 +59,3 @@ func (s *Suite) ResetDB(c *check.C) {
|
||||
}
|
||||
app.db = db
|
||||
}
|
||||
|
||||
// Enusre an error is returned when an invalid auth mode
|
||||
// is supplied.
|
||||
func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
|
||||
_, isValid := LookupTLSClientAuthMode("invalid")
|
||||
c.Assert(isValid, check.Equals, false)
|
||||
}
|
||||
|
||||
// Ensure that all client auth modes return a nil error.
|
||||
func (s *Suite) TestAuthModes(c *check.C) {
|
||||
modes := []string{"disabled", "relaxed", "enforced"}
|
||||
|
||||
for _, v := range modes {
|
||||
_, isValid := LookupTLSClientAuthMode(v)
|
||||
c.Assert(isValid, check.Equals, true)
|
||||
}
|
||||
}
|
||||
|
47 cmd/build-docker-img/main.go (Normal file)
@@ -0,0 +1,47 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"log"
|
||||
|
||||
"github.com/juanfont/headscale/integration"
|
||||
"github.com/juanfont/headscale/integration/tsic"
|
||||
"github.com/ory/dockertest/v3"
|
||||
)
|
||||
|
||||
func main() {
|
||||
log.Printf("creating docker pool")
|
||||
pool, err := dockertest.NewPool("")
|
||||
if err != nil {
|
||||
log.Fatalf("could not connect to docker: %s", err)
|
||||
}
|
||||
|
||||
log.Printf("creating docker network")
|
||||
network, err := pool.CreateNetwork("docker-integration-net")
|
||||
if err != nil {
|
||||
log.Fatalf("failed to create or get network: %s", err)
|
||||
}
|
||||
|
||||
for _, version := range integration.TailscaleVersions {
|
||||
log.Printf("creating container image for Tailscale (%s)", version)
|
||||
|
||||
tsClient, err := tsic.New(
|
||||
pool,
|
||||
version,
|
||||
network,
|
||||
)
|
||||
if err != nil {
|
||||
log.Fatalf("failed to create tailscale node: %s", err)
|
||||
}
|
||||
|
||||
err = tsClient.Shutdown()
|
||||
if err != nil {
|
||||
log.Fatalf("failed to shut down container: %s", err)
|
||||
}
|
||||
}
|
||||
|
||||
network.Close()
|
||||
err = pool.RemoveNetwork(network)
|
||||
if err != nil {
|
||||
log.Fatalf("failed to remove network: %s", err)
|
||||
}
|
||||
}
|
169 cmd/gh-action-integration-generator/main.go (Normal file)
@@ -0,0 +1,169 @@
|
||||
package main
|
||||
|
||||
//go:generate go run ./main.go
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"text/template"
|
||||
)
|
||||
|
||||
var (
|
||||
githubWorkflowPath = "../../.github/workflows/"
|
||||
jobFileNameTemplate = `test-integration-v2-%s.yaml`
|
||||
jobTemplate = template.Must(
|
||||
template.New("jobTemplate").
|
||||
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
||||
|
||||
name: Integration Test v2 - {{.Name}}
|
||||
|
||||
on: [pull_request]
|
||||
|
||||
concurrency:
|
||||
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
test:
|
||||
runs-on: ubuntu-latest
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
with:
|
||||
fetch-depth: 2
|
||||
|
||||
- name: Get changed files
|
||||
id: changed-files
|
||||
uses: tj-actions/changed-files@v34
|
||||
with:
|
||||
files: |
|
||||
*.nix
|
||||
go.*
|
||||
**/*.go
|
||||
integration_test/
|
||||
config-example.yaml
|
||||
|
||||
- uses: cachix/install-nix-action@v18
|
||||
if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
|
||||
|
||||
- name: Run general integration tests
|
||||
if: steps.changed-files.outputs.any_changed == 'true'
|
||||
run: |
|
||||
nix develop --command -- docker run \
|
||||
--tty --rm \
|
||||
--volume ~/.cache/hs-integration-go:/go \
|
||||
--name headscale-test-suite \
|
||||
--volume $PWD:$PWD -w $PWD/integration \
|
||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
||||
--volume $PWD/control_logs:/tmp/control \
|
||||
golang:1 \
|
||||
go test ./... \
|
||||
-tags ts2019 \
|
||||
-failfast \
|
||||
-timeout 120m \
|
||||
-parallel 1 \
|
||||
-run "^{{.Name}}$"
|
||||
|
||||
- uses: actions/upload-artifact@v3
|
||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||
with:
|
||||
name: logs
|
||||
path: "control_logs/*.log"
|
||||
|
||||
- uses: actions/upload-artifact@v3
|
||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
||||
with:
|
||||
name: pprof
|
||||
path: "control_logs/*.pprof.tar"
|
||||
`),
|
||||
)
|
||||
)
|
||||
|
||||
const workflowFilePerm = 0o600
|
||||
|
||||
func removeTests() {
|
||||
glob := fmt.Sprintf(jobFileNameTemplate, "*")
|
||||
|
||||
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
|
||||
if err != nil {
|
||||
log.Fatalf("failed to find test files")
|
||||
}
|
||||
|
||||
for _, file := range files {
|
||||
err := os.Remove(file)
|
||||
if err != nil {
|
||||
log.Printf("failed to remove: %s", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func findTests() []string {
|
||||
rgBin, err := exec.LookPath("rg")
|
||||
if err != nil {
|
||||
log.Fatalf("failed to find rg (ripgrep) binary")
|
||||
}
|
||||
|
||||
args := []string{
|
||||
"--regexp", "func (Test.+)\\(.*",
|
||||
"../../integration/",
|
||||
"--replace", "$1",
|
||||
"--sort", "path",
|
||||
"--no-line-number",
|
||||
"--no-filename",
|
||||
"--no-heading",
|
||||
}
|
||||
|
||||
log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
|
||||
|
||||
ripgrep := exec.Command(
|
||||
rgBin,
|
||||
args...,
|
||||
)
|
||||
|
||||
result, err := ripgrep.CombinedOutput()
|
||||
if err != nil {
|
||||
log.Printf("out: %s", result)
|
||||
log.Fatalf("failed to run ripgrep: %s", err)
|
||||
}
|
||||
|
||||
tests := strings.Split(string(result), "\n")
|
||||
tests = tests[:len(tests)-1]
|
||||
|
||||
return tests
|
||||
}
|
||||
|
||||
func main() {
|
||||
type testConfig struct {
|
||||
Name string
|
||||
}
|
||||
|
||||
tests := findTests()
|
||||
|
||||
removeTests()
|
||||
|
||||
for _, test := range tests {
|
||||
log.Printf("generating workflow for %s", test)
|
||||
|
||||
var content bytes.Buffer
|
||||
|
||||
if err := jobTemplate.Execute(&content, testConfig{
|
||||
Name: test,
|
||||
}); err != nil {
|
||||
log.Fatalf("failed to render template: %s", err)
|
||||
}
|
||||
|
||||
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
|
||||
|
||||
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
|
||||
if err != nil {
|
||||
log.Fatalf("failed to write github job: %s", err)
|
||||
}
|
||||
}
|
||||
}
|
22 cmd/headscale/cli/configtest.go (Normal file)
@@ -0,0 +1,22 @@
|
||||
package cli
|
||||
|
||||
import (
|
||||
"github.com/rs/zerolog/log"
|
||||
"github.com/spf13/cobra"
|
||||
)
|
||||
|
||||
func init() {
|
||||
rootCmd.AddCommand(configTestCmd)
|
||||
}
|
||||
|
||||
var configTestCmd = &cobra.Command{
|
||||
Use: "configtest",
|
||||
Short: "Test the configuration.",
|
||||
Long: "Run a test of the configuration and exit.",
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
_, err := getHeadscaleApp()
|
||||
if err != nil {
|
||||
log.Fatal().Caller().Err(err).Msg("Error initializing")
|
||||
}
|
||||
},
|
||||
}
|
@@ -3,6 +3,7 @@ package cli
import (
"fmt"

"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
@@ -10,8 +11,7 @@ import (
)

const (
keyLength = 64
errPreAuthKeyTooShort = Error("key too short, must be 64 hexadecimal characters")
errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix")
)

// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
@@ -27,8 +27,14 @@ func init() {
if err != nil {
log.Fatal().Err(err).Msg("")
}
createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err = createNodeCmd.MarkFlagRequired("namespace")
createNodeCmd.Flags().StringP("user", "u", "", "User")

createNodeCmd.Flags().StringP("namespace", "n", "", "User")
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
createNodeNamespaceFlag.Hidden = true

err = createNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
}
@@ -55,9 +61,9 @@ var createNodeCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)

return
}
@@ -87,8 +93,8 @@ var createNodeCmd = &cobra.Command{

return
}
if len(machineKey) != keyLength {
err = errPreAuthKeyTooShort
if !headscale.NodePublicKeyRegex.Match([]byte(machineKey)) {
err = errPreAuthKeyMalformed
ErrorOutput(
err,
fmt.Sprintf("Error: %s", err),
@@ -110,10 +116,10 @@ var createNodeCmd = &cobra.Command{
}

request := &v1.DebugCreateMachineRequest{
Key: machineKey,
Name: name,
Namespace: namespace,
Routes: routes,
Key: machineKey,
Name: name,
User: user,
Routes: routes,
}

response, err := client.DebugCreateMachine(ctx, request)
@@ -16,10 +16,11 @@ const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
accessTTL = 10 * time.Minute
refreshTTL = 60 * time.Minute
)

var accessTTL = 2 * time.Minute

func init() {
rootCmd.AddCommand(mockOidcCmd)
}
@@ -54,6 +55,16 @@ func mockOIDC() error {
if portStr == "" {
return errMockOidcPortNotDefined
}
accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
if accessTTLOverride != "" {
newTTL, err := time.ParseDuration(accessTTLOverride)
if err != nil {
return err
}
accessTTL = newTTL
}

log.Info().Msgf("Access token TTL: %s", accessTTL)

port, err := strconv.Atoi(portStr)
if err != nil {
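The mock OIDC server's access-token TTL can now be overridden at runtime. A hedged example, assuming Go's time.ParseDuration syntax for the value:

    # shorten the mock access token lifetime for integration tests
    MOCKOIDC_ACCESS_TTL=30s MOCKOIDC_CLIENT_ID=client MOCKOIDC_CLIENT_SECRET=secret MOCKOIDC_PORT=9000 headscale mockoidc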
@@ -19,12 +19,24 @@ import (

func init() {
rootCmd.AddCommand(nodeCmd)
listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")

listNodesCmd.Flags().StringP("namespace", "n", "", "User")
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
listNodesNamespaceFlag.Hidden = true

nodeCmd.AddCommand(listNodesCmd)

registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
err := registerNodeCmd.MarkFlagRequired("namespace")
registerNodeCmd.Flags().StringP("user", "u", "", "User")

registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
registerNodeNamespaceFlag.Hidden = true

err := registerNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatalf(err.Error())
}
@@ -63,9 +75,14 @@ func init() {
log.Fatalf(err.Error())
}

moveNodeCmd.Flags().StringP("namespace", "n", "", "New namespace")
moveNodeCmd.Flags().StringP("user", "u", "", "New user")

err = moveNodeCmd.MarkFlagRequired("namespace")
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
moveNodeNamespaceFlag.Hidden = true

err = moveNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatalf(err.Error())
}
@@ -93,9 +110,9 @@ var registerNodeCmd = &cobra.Command{
Short: "Registers a machine to your network",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)

return
}
@@ -116,8 +133,8 @@ var registerNodeCmd = &cobra.Command{
}

request := &v1.RegisterMachineRequest{
Key: machineKey,
Namespace: namespace,
Key: machineKey,
User: user,
}

response, err := client.RegisterMachine(ctx, request)
@@ -134,7 +151,9 @@ var registerNodeCmd = &cobra.Command{
return
}

SuccessOutput(response.Machine, "Machine register", output)
SuccessOutput(
response.Machine,
fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
},
}

@@ -144,9 +163,9 @@ var listNodesCmd = &cobra.Command{
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)

return
}
@@ -162,7 +181,7 @@ var listNodesCmd = &cobra.Command{
defer conn.Close()

request := &v1.ListMachinesRequest{
Namespace: namespace,
User: user,
}

response, err := client.ListMachines(ctx, request)
@@ -182,7 +201,7 @@ var listNodesCmd = &cobra.Command{
return
}

tableData, err := nodesToPtables(namespace, showTags, response.Machines)
tableData, err := nodesToPtables(user, showTags, response.Machines)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)

@@ -386,7 +405,7 @@ var deleteNodeCmd = &cobra.Command{

var moveNodeCmd = &cobra.Command{
Use: "move",
Short: "Move node to another namespace",
Short: "Move node to another user",
Aliases: []string{"mv"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -402,11 +421,11 @@ var moveNodeCmd = &cobra.Command{
return
}

namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting namespace: %s", err),
fmt.Sprintf("Error getting user: %s", err),
output,
)

@@ -437,7 +456,7 @@ var moveNodeCmd = &cobra.Command{

moveRequest := &v1.MoveMachineRequest{
MachineId: identifier,
Namespace: namespace,
User: user,
}

moveResponse, err := client.MoveMachine(ctx, moveRequest)
@@ -454,12 +473,12 @@ var moveNodeCmd = &cobra.Command{
return
}

SuccessOutput(moveResponse.Machine, "Node moved to another namespace", output)
SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
},
}

func nodesToPtables(
currentNamespace string,
currentUser string,
showTags bool,
machines []*v1.Machine,
) (pterm.TableData, error) {
@@ -467,11 +486,13 @@ func nodesToPtables(
"ID",
"Hostname",
"Name",
"MachineKey",
"NodeKey",
"Namespace",
"User",
"IP addresses",
"Ephemeral",
"Last seen",
"Expiration",
"Online",
"Expired",
}
@@ -498,12 +519,24 @@ func nodesToPtables(
}

var expiry time.Time
var expiryTime string
if machine.Expiry != nil {
expiry = machine.Expiry.AsTime()
expiryTime = expiry.Format("2006-01-02 15:04:05")
} else {
expiryTime = "N/A"
}

var machineKey key.MachinePublic
err := machineKey.UnmarshalText(
[]byte(headscale.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
)
if err != nil {
machineKey = key.MachinePublic{}
}

var nodeKey key.NodePublic
err := nodeKey.UnmarshalText(
err = nodeKey.UnmarshalText(
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
)
if err != nil {
@@ -511,9 +544,7 @@ func nodesToPtables(
}

var online string
if lastSeen.After(
time.Now().Add(-5 * time.Minute),
) { // TODO: Find a better way to reliably show if online
if machine.Online {
online = pterm.LightGreen("online")
} else {
online = pterm.LightRed("offline")
@@ -546,12 +577,12 @@ func nodesToPtables(
}
validTags = strings.TrimLeft(validTags, ",")

var namespace string
if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
namespace = pterm.LightMagenta(machine.Namespace.Name)
var user string
if currentUser == "" || (currentUser == machine.User.Name) {
user = pterm.LightMagenta(machine.User.Name)
} else {
// Shared into this namespace
namespace = pterm.LightYellow(machine.Namespace.Name)
// Shared into this user
user = pterm.LightYellow(machine.User.Name)
}

var IPV4Address string
@@ -568,11 +599,13 @@ func nodesToPtables(
strconv.FormatUint(machine.Id, headscale.Base10),
machine.Name,
machine.GetGivenName(),
machineKey.ShortString(),
nodeKey.ShortString(),
namespace,
user,
strings.Join([]string{IPV4Address, IPV6Address}, ", "),
strconv.FormatBool(ephemeral),
lastSeenTime,
expiryTime,
online,
expired,
}
@@ -20,8 +20,14 @@ const (

func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")

preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
pakNamespaceFlag.Hidden = true

err := preauthkeysCmd.MarkPersistentFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
}
@@ -46,14 +52,14 @@ var preauthkeysCmd = &cobra.Command{

var listPreAuthKeys = &cobra.Command{
Use: "list",
Short: "List the preauthkeys for this namespace",
Short: "List the preauthkeys for this user",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)

return
}
@@ -63,7 +69,7 @@ var listPreAuthKeys = &cobra.Command{
defer conn.Close()

request := &v1.ListPreAuthKeysRequest{
Namespace: namespace,
User: user,
}

response, err := client.ListPreAuthKeys(ctx, request)
@@ -143,14 +149,14 @@ var listPreAuthKeys = &cobra.Command{

var createPreAuthKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new preauthkey in the specified namespace",
Short: "Creates a new preauthkey in the specified user",
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)

return
}
@@ -162,11 +168,11 @@ var createPreAuthKeyCmd = &cobra.Command{
log.Trace().
Bool("reusable", reusable).
Bool("ephemeral", ephemeral).
Str("namespace", namespace).
Str("user", user).
Msg("Preparing to create preauthkey")

request := &v1.CreatePreAuthKeyRequest{
Namespace: namespace,
User: user,
Reusable: reusable,
Ephemeral: ephemeral,
AclTags: tags,
@@ -225,9 +231,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
namespace, err := cmd.Flags().GetString("namespace")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)

return
}
@@ -237,8 +243,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
defer conn.Close()

request := &v1.ExpirePreAuthKeyRequest{
Namespace: namespace,
Key: args[0],
User: user,
Key: args[0],
}

response, err := client.ExpirePreAuthKey(ctx, request)
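Taken together, these hunks move the preauthkey commands to a --user flag while keeping --namespace as a hidden, deprecated alias. A sketch of the new invocations (flag names other than --user/--namespace are assumptions based on the hunk above):

    headscale preauthkeys list --user alice
    headscale preauthkeys create --user alice --reusable --ephemeral
    # still accepted, but deprecated in favour of --user:
    headscale preauthkeys list --namespace alice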
@@ -12,10 +12,15 @@ import (
"github.com/tcnksm/go-latest"
)

const (
deprecateNamespaceMessage = "use --user"
)

var cfgFile string = ""

func init() {
if len(os.Args) > 1 && (os.Args[1] == "version" || os.Args[1] == "mockoidc") {
if len(os.Args) > 1 &&
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
return
}
@@ -3,37 +3,45 @@ package cli
import (
"fmt"
"log"
"net/netip"
"strconv"

"github.com/juanfont/headscale"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)

const (
Base10 = 10
)

func init() {
rootCmd.AddCommand(routesCmd)

listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err := listRoutesCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(listRoutesCmd)

enableRouteCmd.Flags().
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
enableRouteCmd.Flags().BoolP("all", "a", false, "All routes from host")

err = enableRouteCmd.MarkFlagRequired("identifier")
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err := enableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}

routesCmd.AddCommand(enableRouteCmd)

nodeCmd.AddCommand(routesCmd)
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = disableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(disableRouteCmd)

deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = deleteRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(deleteRouteCmd)
}

var routesCmd = &cobra.Command{
@@ -44,7 +52,7 @@ var routesCmd = &cobra.Command{

var listRoutesCmd = &cobra.Command{
Use: "list",
Short: "List routes advertised and enabled by a given node",
Short: "List all routes",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -64,28 +72,51 @@ var listRoutesCmd = &cobra.Command{
defer cancel()
defer conn.Close()

request := &v1.GetMachineRouteRequest{
MachineId: machineID,
var routes []*v1.Route

if machineID == 0 {
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)

return
}

if output != "" {
SuccessOutput(response.Routes, "", output)

return
}

routes = response.Routes
} else {
response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
MachineId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
output,
)

return
}

if output != "" {
SuccessOutput(response.Routes, "", output)

return
}

routes = response.Routes
}

response, err := client.GetMachineRoute(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)

return
}

if output != "" {
SuccessOutput(response.Routes, "", output)

return
}

tableData := routesToPtables(response.Routes)
tableData := routesToPtables(routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)

@@ -107,16 +138,12 @@ var listRoutesCmd = &cobra.Command{

var enableRouteCmd = &cobra.Command{
Use: "enable",
Short: "Set the enabled routes for a given node",
Long: `This command will take a list of routes that will _replace_
the current set of routes on a given node.
If you would like to disable a route, simply run the command again, but
omit the route you do not want to enable.
`,
Short: "Set a route as enabled",
Long: `This command will make as enabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

machineID, err := cmd.Flags().GetUint64("identifier")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
@@ -131,52 +158,13 @@ omit the route you do not want to enable.
defer cancel()
defer conn.Close()

var routes []string

isAll, _ := cmd.Flags().GetBool("all")
if isAll {
response, err := client.GetMachineRoute(ctx, &v1.GetMachineRouteRequest{
MachineId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot get machine routes: %s\n",
status.Convert(err).Message(),
),
output,
)

return
}
routes = response.GetRoutes().GetAdvertisedRoutes()
} else {
routes, err = cmd.Flags().GetStringSlice("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)

return
}
}

request := &v1.EnableMachineRoutesRequest{
MachineId: machineID,
Routes: routes,
}

response, err := client.EnableMachineRoutes(ctx, request)
response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot register machine: %s\n",
status.Convert(err).Message(),
),
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
output,
)

@@ -184,50 +172,127 @@ omit the route you do not want to enable.
}

if output != "" {
SuccessOutput(response.Routes, "", output)
SuccessOutput(response, "", output)

return
}
},
}

tableData := routesToPtables(response.Routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
var disableRouteCmd = &cobra.Command{
Use: "disable",
Short: "Set as disabled a given route",
Long: `This command will make as disabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

return
}

err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)

return
}

ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()

response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
output,
)

return
}

if output != "" {
SuccessOutput(response, "", output)

return
}
},
}

var deleteRouteCmd = &cobra.Command{
Use: "delete",
Short: "Delete a given route",
Long: `This command will delete a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)

return
}

ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()

response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output,
)

return
}

if output != "" {
SuccessOutput(response, "", output)

return
}
},
}

// routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes *v1.Routes) pterm.TableData {
tableData := pterm.TableData{{"Route", "Enabled"}}
func routesToPtables(routes []*v1.Route) pterm.TableData {
tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}

for _, route := range routes.GetAdvertisedRoutes() {
enabled := isStringInSlice(routes.EnabledRoutes, route)
for _, route := range routes {
var isPrimaryStr string
prefix, err := netip.ParsePrefix(route.Prefix)
if err != nil {
log.Printf("Error parsing prefix %s: %s", route.Prefix, err)

tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
continue
}
if prefix == headscale.ExitRouteV4 || prefix == headscale.ExitRouteV6 {
isPrimaryStr = "-"
} else {
isPrimaryStr = strconv.FormatBool(route.IsPrimary)
}

tableData = append(tableData,
[]string{
strconv.FormatUint(route.Id, Base10),
route.Machine.GivenName,
route.Prefix,
strconv.FormatBool(route.Advertised),
strconv.FormatBool(route.Enabled),
isPrimaryStr,
})
}

return tableData
}

func isStringInSlice(strs []string, s string) bool {
for _, s2 := range strs {
if s == s2 {
return true
}
}

return false
}
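The route commands switch from replacing a node's whole enabled-route list to toggling individual routes by their ID. A hedged sketch of the new flow, assuming the command stays reachable as `headscale routes` and that IDs come from the list output:

    headscale routes list                 # all routes
    headscale routes list --identifier 1  # routes of machine 1
    headscale routes enable --route 4
    headscale routes disable --route 4
    headscale routes delete --route 4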
@@ -13,26 +13,26 @@ import (
)

func init() {
rootCmd.AddCommand(namespaceCmd)
namespaceCmd.AddCommand(createNamespaceCmd)
namespaceCmd.AddCommand(listNamespacesCmd)
namespaceCmd.AddCommand(destroyNamespaceCmd)
namespaceCmd.AddCommand(renameNamespaceCmd)
rootCmd.AddCommand(userCmd)
userCmd.AddCommand(createUserCmd)
userCmd.AddCommand(listUsersCmd)
userCmd.AddCommand(destroyUserCmd)
userCmd.AddCommand(renameUserCmd)
}

const (
errMissingParameter = headscale.Error("missing parameters")
)

var namespaceCmd = &cobra.Command{
Use: "namespaces",
Short: "Manage the namespaces of Headscale",
Aliases: []string{"namespace", "ns", "user", "users"},
var userCmd = &cobra.Command{
Use: "users",
Short: "Manage the users of Headscale",
Aliases: []string{"user", "namespace", "namespaces", "ns"},
}

var createNamespaceCmd = &cobra.Command{
var createUserCmd = &cobra.Command{
Use: "create NAME",
Short: "Creates a new namespace",
Short: "Creates a new user",
Aliases: []string{"c", "new"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
@@ -44,7 +44,7 @@ var createNamespaceCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

namespaceName := args[0]
userName := args[0]

ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
@@ -52,15 +52,15 @@ var createNamespaceCmd = &cobra.Command{

log.Trace().Interface("client", client).Msg("Obtained gRPC client")

request := &v1.CreateNamespaceRequest{Name: namespaceName}
request := &v1.CreateUserRequest{Name: userName}

log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
response, err := client.CreateNamespace(ctx, request)
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
response, err := client.CreateUser(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot create namespace: %s",
"Cannot create user: %s",
status.Convert(err).Message(),
),
output,
@@ -69,13 +69,13 @@ var createNamespaceCmd = &cobra.Command{
return
}

SuccessOutput(response.Namespace, "Namespace created", output)
SuccessOutput(response.User, "User created", output)
},
}

var destroyNamespaceCmd = &cobra.Command{
var destroyUserCmd = &cobra.Command{
Use: "destroy NAME",
Short: "Destroys a namespace",
Short: "Destroys a user",
Aliases: []string{"delete"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
@@ -87,17 +87,17 @@ var destroyNamespaceCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")

namespaceName := args[0]
userName := args[0]

request := &v1.GetNamespaceRequest{
Name: namespaceName,
request := &v1.GetUserRequest{
Name: userName,
}

ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()

_, err := client.GetNamespace(ctx, request)
_, err := client.GetUser(ctx, request)
if err != nil {
ErrorOutput(
err,
@@ -113,8 +113,8 @@ var destroyNamespaceCmd = &cobra.Command{
if !force {
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the namespace '%s' and any associated preauthkeys?",
namespaceName,
"Do you want to remove the user '%s' and any associated preauthkeys?",
userName,
),
}
err := survey.AskOne(prompt, &confirm)
@@ -124,14 +124,14 @@ var destroyNamespaceCmd = &cobra.Command{
}

if confirm || force {
request := &v1.DeleteNamespaceRequest{Name: namespaceName}
request := &v1.DeleteUserRequest{Name: userName}

response, err := client.DeleteNamespace(ctx, request)
response, err := client.DeleteUser(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot destroy namespace: %s",
"Cannot destroy user: %s",
status.Convert(err).Message(),
),
output,
@@ -139,16 +139,16 @@ var destroyNamespaceCmd = &cobra.Command{

return
}
SuccessOutput(response, "Namespace destroyed", output)
SuccessOutput(response, "User destroyed", output)
} else {
SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
}
},
}

var listNamespacesCmd = &cobra.Command{
var listUsersCmd = &cobra.Command{
Use: "list",
Short: "List all the namespaces",
Short: "List all the users",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
@@ -157,13 +157,13 @@ var listNamespacesCmd = &cobra.Command{
defer cancel()
defer conn.Close()

request := &v1.ListNamespacesRequest{}
request := &v1.ListUsersRequest{}

response, err := client.ListNamespaces(ctx, request)
response, err := client.ListUsers(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
output,
)

@@ -171,19 +171,19 @@ var listNamespacesCmd = &cobra.Command{
}

if output != "" {
SuccessOutput(response.Namespaces, "", output)
SuccessOutput(response.Users, "", output)

return
}

tableData := pterm.TableData{{"ID", "Name", "Created"}}
for _, namespace := range response.GetNamespaces() {
for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
namespace.GetId(),
namespace.GetName(),
namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
user.GetId(),
user.GetName(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
}
@@ -200,9 +200,9 @@ var listNamespacesCmd = &cobra.Command{
},
}

var renameNamespaceCmd = &cobra.Command{
var renameUserCmd = &cobra.Command{
Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a namespace",
Short: "Renames a user",
Aliases: []string{"mv"},
Args: func(cmd *cobra.Command, args []string) error {
expectedArguments := 2
@@ -219,17 +219,17 @@ var renameNamespaceCmd = &cobra.Command{
defer cancel()
defer conn.Close()

request := &v1.RenameNamespaceRequest{
request := &v1.RenameUserRequest{
OldName: args[0],
NewName: args[1],
}

response, err := client.RenameNamespace(ctx, request)
response, err := client.RenameUser(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Cannot rename namespace: %s",
"Cannot rename user: %s",
status.Convert(err).Message(),
),
output,
@@ -238,6 +238,6 @@ var renameNamespaceCmd = &cobra.Command{
return
}

SuccessOutput(response.Namespace, "Namespace renamed", output)
SuccessOutput(response.User, "User renamed", output)
},
}
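With the rename in place, `users` becomes the primary command while the old `namespaces` spelling survives through the aliases above. For example:

    headscale users create alice
    headscale users list
    headscale users rename alice alice-work
    headscale namespaces list   # alias, same as `users list`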
@@ -14,7 +14,7 @@ import (
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v2"
"gopkg.in/yaml.v3"
)

const (
@@ -6,11 +6,25 @@ import (

"github.com/efekarakus/termcolor"
"github.com/juanfont/headscale/cmd/headscale/cli"
"github.com/pkg/profile"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
)

func main() {
if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
err := os.MkdirAll(profilePath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}

defer profile.Start(profile.ProfilePath(profilePath)).Stop()
} else {
defer profile.Start().Stop()
}
}

var colors bool
switch l := termcolor.SupportLevel(os.Stderr); l {
case termcolor.Level16M:
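Profiling is opt-in via environment variables. A minimal sketch, assuming pkg/profile's default profile type when only the enable flag is set:

    # write profiles into ./profiles instead of a temporary directory
    HEADSCALE_PROFILING_ENABLED=1 HEADSCALE_PROFILING_PATH=./profiles headscale serve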
@@ -55,7 +55,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {

// Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -98,7 +98,7 @@ func (*Suite) TestConfigLoading(c *check.C) {

// Test that config file was interpreted correctly
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -14,7 +14,9 @@ server_url: http://127.0.0.1:8080

# Address to listen to / bind to on the server
#
listen_addr: 0.0.0.0:8080
# For production:
# listen_addr: 0.0.0.0:8080
listen_addr: 127.0.0.1:8080

# Address to listen to /metrics, you may want
# to keep this endpoint private to your internal
@@ -27,7 +29,10 @@ metrics_listen_addr: 127.0.0.1:9090
# remotely with the CLI
# Note: Remote access _only_ works if you have
# valid certificates.
grpc_listen_addr: 0.0.0.0:50443
#
# For production:
# grpc_listen_addr: 0.0.0.0:50443
grpc_listen_addr: 127.0.0.1:50443

# Allow the gRPC admin interface to run in INSECURE
# mode. This is not recommended as the traffic will
@@ -35,14 +40,14 @@ grpc_listen_addr: 0.0.0.0:50443
# are doing.
grpc_allow_insecure: false

# Private key used encrypt the traffic between headscale
# Private key used to encrypt the traffic between headscale
# and Tailscale clients.
# The private key file which will be
# autogenerated if it's missing
# The private key file will be autogenerated if it's missing.
#
private_key_path: /var/lib/headscale/private.key

# The Noise section includes specific configuration for the
# TS2021 Noise procotol
# TS2021 Noise protocol
noise:
# The Noise private key is used to encrypt the
# traffic between headscale and Tailscale clients when
@@ -53,6 +58,12 @@ noise:
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
# It must be within IP ranges supported by the Tailscale
# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
# See below:
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# Any other range is NOT supported, and it will cause unexpected issues.
ip_prefixes:
- fd7a:115c:a1e0::/48
- 100.64.0.0/10
@@ -78,7 +89,7 @@ derp:
region_code: "headscale"
region_name: "Headscale Embedded DERP"

# Listens in UDP at the configured address for STUN connections to help on NAT traversal.
# Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
# When the embedded DERP server is enabled stun_listen_addr MUST be defined.
#
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
@@ -112,14 +123,16 @@ disable_check_updates: false
# Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m

# Period to check for node updates in the tailnet. A value too low will severily affect
# Period to check for node updates within the tailnet. A value too low will severely affect
# CPU consumption of Headscale. A value too high (over 60s) will cause problems
# to the nodes, as they won't get updates or keep alive messages in time.
# for the nodes, as they won't get updates or keep alive messages frequently enough.
# In case of doubts, do not touch the default 10s.
node_update_check_interval: 10s

# SQLite config
db_type: sqlite3

# For production:
db_path: /var/lib/headscale/db.sqlite

# # Postgres config
@@ -130,6 +143,9 @@ db_path: /var/lib/headscale/db.sqlite
# db_name: headscale
# db_user: foo
# db_pass: bar

# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
# db_ssl: false

### TLS configuration
@@ -148,15 +164,9 @@ acme_email: ""
# Domain name to request a TLS certificate for:
tls_letsencrypt_hostname: ""

# Client (Tailscale/Browser) authentication mode (mTLS)
# Acceptable values:
# - disabled: client authentication disabled
# - relaxed: client certificate is required but not verified
# - enforced: client certificate is required and verified
tls_client_auth_mode: relaxed

# Path to store certificates and metadata needed by
# letsencrypt
# For production:
tls_letsencrypt_cache_dir: /var/lib/headscale/cache

# Type of ACME challenge to use, currently supported types:
@@ -199,6 +209,18 @@ dns_config:
nameservers:
- 1.1.1.1

# NextDNS (see https://tailscale.com/kb/1218/nextdns/).
# "abc123" is example NextDNS ID, replace with yours.
#
# With metadata sharing:
# nameservers:
# - https://dns.nextdns.io/abc123
#
# Without metadata sharing:
# nameservers:
# - 2a07:a8c0::ab:c123
# - 2a07:a8c1::ab:c123

# Split DNS (see https://tailscale.com/kb/1054/dns/),
# list of search domains and the DNS to query for each one.
#
@@ -212,6 +234,17 @@ dns_config:
# Search domains to inject.
domains: []

# Extra DNS records
# so far only A-records are supported (on the tailscale side)
# See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
# extra_records:
# - name: "grafana.myvpn.example.com"
# type: "A"
# value: "100.64.0.3"
#
# # you can also put it in one line
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }

# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
# Only works if there is at least a nameserver defined.
magic_dns: true
@@ -219,13 +252,12 @@ dns_config:
# Defines the base domain to create the hostnames for MagicDNS.
# `base_domain` must be a FQDNs, without the trailing dot.
# The FQDN of the hosts will be
# `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
# `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
base_domain: example.com

# Unix socket used for the CLI to connect without authentication
# Note: for local development, you probably want to change this to:
# unix_socket: ./headscale.sock
unix_socket: /var/run/headscale.sock
# Note: for production you will want to set this to something like:
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"
#
# headscale supports experimental OpenID connect support,
@@ -237,26 +269,45 @@ unix_socket_permission: "0770"
# issuer: "https://your-oidc.issuer.com/path"
# client_id: "your-oidc-client-id"
# client_secret: "your-oidc-client-secret"
# # Alternatively, set `client_secret_path` to read the secret from the file.
# # It resolves environment variables, making integration to systemd's
# # `LoadCredential` straightforward:
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
# # client_secret and client_secret_path are mutually exclusive.
#
# Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
# # The amount of time from a node is authenticated with OpenID until it
# # expires and needs to reauthenticate.
# # Setting the value to "0" will mean no expiry.
# expiry: 180d
#
# # Use the expiry from the token received from OpenID when the user logged
# # in, this will typically lead to frequent need to reauthenticate and should
# # only been enabled if you know what you are doing.
# # Note: enabling this will cause `oidc.expiry` to be ignored.
# use_expiry_from_token: false
#
# # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
# # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
#
# scope: ["openid", "profile", "email", "custom"]
# extra_params:
# domain_hint: example.com
#
# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# authentication request will be rejected.
# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
# # authentication request will be rejected.
#
# allowed_domains:
# - example.com
# # Note: Groups from keycloak have a leading '/'
# allowed_groups:
# - /headscale
# allowed_users:
# - alice@example.com
#
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
# If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
# namespace: `first-name.last-name.example.com`
# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
# # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
# user: `first-name.last-name.example.com`
#
# strip_email_domain: true
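Pulling the new OIDC keys together, an uncommented block might look like the following sketch (values are placeholders):

    oidc:
      issuer: "https://your-oidc.issuer.com/path"
      client_id: "your-oidc-client-id"
      client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
      expiry: 180d
      use_expiry_from_token: false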
211  config.go
@@ -1,20 +1,22 @@
package headscale

import (
"crypto/tls"
"errors"
"fmt"
"io/fs"
"net/netip"
"net/url"
"os"
"strings"
"time"

"github.com/coreos/go-oidc/v3/oidc"
"github.com/prometheus/common/model"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/viper"
"go4.org/netipx"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
)
@@ -25,6 +27,13 @@ const (

JSONLogFormat = "json"
TextLogFormat = "text"

defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days
maxDuration time.Duration = 1<<63 - 1
)

var errOidcMutuallyExclusive = errors.New(
"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
)

// Config contains the initial Headscale configuration.
@@ -52,7 +61,7 @@ type Config struct {
DBname string
DBuser string
DBpass string
DBssl bool
DBssl string

TLS TLSConfig

@@ -75,9 +84,8 @@ type Config struct {
}

type TLSConfig struct {
CertPath string
KeyPath string
ClientAuthMode tls.ClientAuthType
CertPath string
KeyPath string

LetsEncrypt LetsEncryptConfig
}
@@ -98,7 +106,10 @@ type OIDCConfig struct {
ExtraParams map[string]string
AllowedDomains []string
AllowedUsers []string
AllowedGroups []string
StripEmaildomain bool
Expiry time.Duration
UseExpiryFromToken bool
}

type DERPConfig struct {
@@ -154,7 +165,6 @@ func LoadConfig(path string, isFile bool) error {

viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
viper.SetDefault("tls_letsencrypt_challenge_type", http01ChallengeType)
viper.SetDefault("tls_client_auth_mode", "relaxed")

viper.SetDefault("log.level", "info")
viper.SetDefault("log.format", TextLogFormat)
@@ -165,7 +175,7 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("derp.server.enabled", false)
viper.SetDefault("derp.server.stun.enabled", true)

viper.SetDefault("unix_socket", "/var/run/headscale.sock")
viper.SetDefault("unix_socket", "/var/run/headscale/headscale.sock")
viper.SetDefault("unix_socket_permission", "0o770")

viper.SetDefault("grpc_listen_addr", ":50443")
@@ -174,9 +184,13 @@ func LoadConfig(path string, isFile bool) error {
viper.SetDefault("cli.timeout", "5s")
viper.SetDefault("cli.insecure", false)

viper.SetDefault("db_ssl", false)

viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
viper.SetDefault("oidc.strip_email_domain", true)
viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
viper.SetDefault("oidc.expiry", "180d")
viper.SetDefault("oidc.use_expiry_from_token", false)

viper.SetDefault("logtail.enabled", false)
viper.SetDefault("randomize_client_port", false)
@@ -185,6 +199,10 @@ func LoadConfig(path string, isFile bool) error {

viper.SetDefault("node_update_check_interval", "10s")

if IsCLIConfigured() {
return nil
}

if err := viper.ReadInConfig(); err != nil {
log.Warn().Err(err).Msg("Failed to read configuration from disk")

@@ -220,19 +238,6 @@ func LoadConfig(path string, isFile bool) error {
errorText += "Fatal config error: server_url must start with https:// or http://\n"
}

_, authModeValid := LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)

if !authModeValid {
errorText += fmt.Sprintf(
"Invalid tls_client_auth_mode supplied: %s. Accepted values: %s, %s, %s.",
viper.GetString("tls_client_auth_mode"),
DisabledClientAuth,
RelaxedClientAuth,
EnforcedClientAuth)
}

// Minimum inactivity time out is keepalive timeout (60s) plus a few seconds
// to avoid races
minInactivityTimeout, _ := time.ParseDuration("65s")
@@ -262,10 +267,6 @@ func LoadConfig(path string, isFile bool) error {
}

func GetTLSConfig() TLSConfig {
tlsClientAuthMode, _ := LookupTLSClientAuthMode(
viper.GetString("tls_client_auth_mode"),
)

return TLSConfig{
LetsEncrypt: LetsEncryptConfig{
Hostname: viper.GetString("tls_letsencrypt_hostname"),
@@ -281,7 +282,6 @@ func GetTLSConfig() TLSConfig {
KeyPath: AbsolutePathFromConfigPath(
viper.GetString("tls_key_path"),
),
ClientAuthMode: tlsClientAuthMode,
}
}

@@ -383,10 +383,21 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
if viper.IsSet("dns_config.nameservers") {
nameserversStr := viper.GetStringSlice("dns_config.nameservers")

nameservers := make([]netip.Addr, len(nameserversStr))
resolvers := make([]*dnstype.Resolver, len(nameserversStr))
nameservers := []netip.Addr{}
resolvers := []*dnstype.Resolver{}

for index, nameserverStr := range nameserversStr {
for _, nameserverStr := range nameserversStr {
// Search for explicit DNS-over-HTTPS resolvers
if strings.HasPrefix(nameserverStr, "https://") {
resolvers = append(resolvers, &dnstype.Resolver{
Addr: nameserverStr,
})

// This nameserver can not be parsed as an IP address
continue
}

// Parse nameserver as a regular IP
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
@@ -395,10 +406,10 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
Msgf("Could not parse nameserver IP: %s", nameserverStr)
}

nameservers[index] = nameserver
resolvers[index] = &dnstype.Resolver{
nameservers = append(nameservers, nameserver)
resolvers = append(resolvers, &dnstype.Resolver{
Addr: nameserver.String(),
}
})
}

dnsConfig.Nameservers = nameservers
@@ -411,39 +422,37 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
}

if viper.IsSet("dns_config.restricted_nameservers") {
if len(dnsConfig.Nameservers) > 0 {
dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
domains := []string{}
restrictedDNS := viper.GetStringMapStringSlice(
"dns_config.restricted_nameservers",
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
)
for domain, restrictedNameservers := range restrictedDNS {
restrictedResolvers := make(
[]*dnstype.Resolver,
len(restrictedNameservers),
)
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
for index, nameserverStr := range restrictedNameservers {
nameserver, err := netip.ParseAddr(nameserverStr)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
}
restrictedResolvers[index] = &dnstype.Resolver{
Addr: nameserver.String(),
}
dnsConfig.Routes[domain] = restrictedResolvers
}
} else {
log.Warn().
Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
dnsConfig.Routes[domain] = restrictedResolvers
domains = append(domains, domain)
}
dnsConfig.Domains = domains
}

if viper.IsSet("dns_config.domains") {
domains := viper.GetStringSlice("dns_config.domains")
if len(dnsConfig.Nameservers) > 0 {
if len(dnsConfig.Resolvers) > 0 {
dnsConfig.Domains = domains
} else if domains != nil {
log.Warn().
@@ -451,6 +460,20 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
}
}

if viper.IsSet("dns_config.extra_records") {
var extraRecords []tailcfg.DNSRecord

err := viper.UnmarshalKey("dns_config.extra_records", &extraRecords)
if err != nil {
log.Error().
Str("func", "getDNSConfig").
Err(err).
Msgf("Could not parse dns_config.extra_records")
}

dnsConfig.ExtraRecords = extraRecords
}

if viper.IsSet("dns_config.magic_dns") {
dnsConfig.Proxied = viper.GetBool("dns_config.magic_dns")
}
@@ -469,6 +492,17 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
}

func GetHeadscaleConfig() (*Config, error) {
if IsCLIConfigured() {
return &Config{
CLI: CLIConfig{
Address: viper.GetString("cli.address"),
APIKey: viper.GetString("cli.api_key"),
Timeout: viper.GetDuration("cli.timeout"),
Insecure: viper.GetBool("cli.insecure"),
},
}, nil
}

dnsConfig, baseDomain := GetDNSConfig()
derpConfig := GetDERPConfig()
logConfig := GetLogTailConfig()
@@ -482,6 +516,29 @@ func GetHeadscaleConfig() (*Config, error) {
if err != nil {
panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
}

if prefix.Addr().Is4() {
builder := netipx.IPSetBuilder{}
builder.AddPrefix(tsaddr.CGNATRange())
ipSet, _ := builder.IPSet()
if !ipSet.ContainsPrefix(prefix) {
log.Warn().
Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
prefixInConfig, tsaddr.CGNATRange())
}
}

if prefix.Addr().Is6() {
builder := netipx.IPSetBuilder{}
builder.AddPrefix(tsaddr.TailscaleULARange())
ipSet, _ := builder.IPSet()
if !ipSet.ContainsPrefix(prefix) {
log.Warn().
Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
prefixInConfig, tsaddr.TailscaleULARange())
}
}

parsedPrefixes = append(parsedPrefixes, prefix)
}

@@ -506,6 +563,19 @@ func GetHeadscaleConfig() (*Config, error) {
Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
}

oidcClientSecret := viper.GetString("oidc.client_secret")
oidcClientSecretPath := viper.GetString("oidc.client_secret_path")
if oidcClientSecretPath != "" && oidcClientSecret != "" {
return nil, errOidcMutuallyExclusive
}
if oidcClientSecretPath != "" {
secretBytes, err := os.ReadFile(os.ExpandEnv(oidcClientSecretPath))
if err != nil {
return nil, err
}
oidcClientSecret = string(secretBytes)
}

return &Config{
ServerURL: viper.GetString("server_url"),
Addr: viper.GetString("listen_addr"),
@@ -540,7 +610,7 @@ func GetHeadscaleConfig() (*Config, error) {
DBname: viper.GetString("db_name"),
DBuser: viper.GetString("db_user"),
DBpass: viper.GetString("db_pass"),
DBssl: viper.GetBool("db_ssl"),
DBssl: viper.GetString("db_ssl"),

TLS: GetTLSConfig(),

@@ -558,17 +628,36 @@ func GetHeadscaleConfig() (*Config, error) {
),
Issuer: viper.GetString("oidc.issuer"),
ClientID: viper.GetString("oidc.client_id"),
ClientSecret: viper.GetString("oidc.client_secret"),
ClientSecret: oidcClientSecret,
Scope: viper.GetStringSlice("oidc.scope"),
ExtraParams: viper.GetStringMapString("oidc.extra_params"),
AllowedDomains: viper.GetStringSlice("oidc.allowed_domains"),
AllowedUsers: viper.GetStringSlice("oidc.allowed_users"),
AllowedGroups: viper.GetStringSlice("oidc.allowed_groups"),
StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
Expiry: func() time.Duration {
// if set to 0, we assume no expiry
if value := viper.GetString("oidc.expiry"); value == "0" {
return maxDuration
} else {
expiry, err := model.ParseDuration(value)
if err != nil {
log.Warn().Msg("failed to parse oidc.expiry, defaulting back to 180 days")

return defaultOIDCExpiryTime
}

return time.Duration(expiry)
}
}(),
UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
},

LogTail: logConfig,
RandomizeClientPort: randomizeClientPort,

ACL: GetACLConfig(),

CLI: CLIConfig{
Address: viper.GetString("cli.address"),
APIKey: viper.GetString("cli.api_key"),
@@ -576,8 +665,10 @@ func GetHeadscaleConfig() (*Config, error) {
Insecure: viper.GetBool("cli.insecure"),
},

ACL: GetACLConfig(),

Log: GetLogConfig(),
}, nil
}

func IsCLIConfigured() bool {
return viper.GetString("cli.address") != "" && viper.GetString("cli.api_key") != ""
}
109
db.go
109
db.go
@@ -18,8 +18,10 @@ import (
|
||||
)
|
||||
|
||||
const (
|
||||
dbVersion = "1"
|
||||
errValueNotFound = Error("not found")
|
||||
dbVersion = "1"
|
||||
|
||||
errValueNotFound = Error("not found")
|
||||
ErrCannotParsePrefix = Error("cannot parse prefix")
|
||||
)
|
||||
|
||||
// KV is a key-value store in a psql table. For future use...
|
||||
@@ -39,6 +41,16 @@ func (h *Headscale) initDB() error {
|
||||
db.Exec(`create extension if not exists "uuid-ossp";`)
|
||||
}
|
||||
|
||||
_ = db.Migrator().RenameTable("namespaces", "users")
|
||||
|
||||
err = db.AutoMigrate(&User{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
_ = db.Migrator().RenameColumn(&Machine{}, "namespace_id", "user_id")
|
||||
_ = db.Migrator().RenameColumn(&PreAuthKey{}, "namespace_id", "user_id")
|
||||
|
||||
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
|
||||
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
|
||||
|
||||
@@ -79,6 +91,70 @@ func (h *Headscale) initDB() error {
|
||||
}
|
||||
}
|
||||
|
||||
err = db.AutoMigrate(&Route{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if db.Migrator().HasColumn(&Machine{}, "enabled_routes") {
|
||||
log.Info().Msgf("Database has legacy enabled_routes column in machine, migrating...")
|
||||
|
||||
type MachineAux struct {
|
||||
ID uint64
|
||||
EnabledRoutes IPPrefixes
|
||||
}
|
||||
|
||||
machinesAux := []MachineAux{}
|
||||
err := db.Table("machines").Select("id, enabled_routes").Scan(&machinesAux).Error
|
||||
if err != nil {
|
||||
log.Fatal().Err(err).Msg("Error accessing db")
|
||||
}
|
||||
for _, machine := range machinesAux {
|
||||
for _, prefix := range machine.EnabledRoutes {
|
||||
if err != nil {
|
||||
log.Error().
|
||||
Err(err).
|
||||
Str("enabled_route", prefix.String()).
|
||||
Msg("Error parsing enabled_route")
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
err = db.Preload("Machine").
|
||||
Where("machine_id = ? AND prefix = ?", machine.ID, IPPrefix(prefix)).
|
||||
First(&Route{}).
|
||||
Error
|
||||
if err == nil {
|
||||
log.Info().
|
||||
Str("enabled_route", prefix.String()).
|
||||
Msg("Route already migrated to new table, skipping")
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
route := Route{
|
||||
MachineID: machine.ID,
|
||||
Advertised: true,
|
||||
Enabled: true,
|
||||
Prefix: IPPrefix(prefix),
|
||||
}
|
||||
if err := h.db.Create(&route).Error; err != nil {
|
||||
log.Error().Err(err).Msg("Error creating route")
|
||||
} else {
|
||||
log.Info().
|
||||
Uint64("machine_id", route.MachineID).
|
||||
Str("prefix", prefix.String()).
|
||||
Msg("Route migrated")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
err = db.Migrator().DropColumn(&Machine{}, "enabled_routes")
|
||||
if err != nil {
|
||||
log.Error().Err(err).Msg("Error dropping enabled_routes column")
|
||||
}
|
||||
}
|
||||
|
||||
err = db.AutoMigrate(&Machine{})
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -121,11 +197,6 @@ func (h *Headscale) initDB() error {
|
||||
return err
|
||||
}
|
||||
|
||||
err = db.AutoMigrate(&Namespace{})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = db.AutoMigrate(&PreAuthKey{})
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -264,6 +335,30 @@ func (hi HostInfo) Value() (driver.Value, error) {
|
||||
return string(bytes), err
|
||||
}
|
||||
|
||||
type IPPrefix netip.Prefix
|
||||
|
||||
func (i *IPPrefix) Scan(destination interface{}) error {
|
||||
switch value := destination.(type) {
|
||||
case string:
|
||||
prefix, err := netip.ParsePrefix(value)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
*i = IPPrefix(prefix)
|
||||
|
||||
return nil
|
||||
default:
|
||||
return fmt.Errorf("%w: unexpected data type %T", ErrCannotParsePrefix, destination)
|
||||
}
|
||||
}
|
||||
|
||||
// Value return json value, implement driver.Valuer interface.
|
||||
func (i IPPrefix) Value() (driver.Value, error) {
|
||||
prefixStr := netip.Prefix(i).String()
|
||||
|
||||
return prefixStr, nil
|
||||
}
|
||||
|
||||
type IPPrefixes []netip.Prefix
|
||||
|
||||
func (i *IPPrefixes) Scan(destination interface{}) error {
|
||||
|
2
derp.go
2
derp.go
@@ -10,7 +10,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/rs/zerolog/log"
|
||||
"gopkg.in/yaml.v2"
|
||||
"gopkg.in/yaml.v3"
|
||||
"tailscale.com/tailcfg"
|
||||
)
|
||||
|
||||
|
@@ -157,14 +157,14 @@ func (h *Headscale) DERPHandler(
|
||||
|
||||
if !fastStart {
|
||||
pubKey := h.privateKey.Public()
|
||||
pubKeyStr := pubKey.UntypedHexString() //nolint
|
||||
pubKeyStr, _ := pubKey.MarshalText() //nolint
|
||||
fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
|
||||
"Upgrade: DERP\r\n"+
|
||||
"Connection: Upgrade\r\n"+
|
||||
"Derp-Version: %v\r\n"+
|
||||
"Derp-Public-Key: %s\r\n\r\n",
|
||||
derp.ProtocolVersion,
|
||||
pubKeyStr)
|
||||
string(pubKeyStr))
|
||||
}
|
||||
|
||||
h.DERPServer.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())
|
||||
|
49
dns.go
49
dns.go
@@ -3,11 +3,13 @@ package headscale
|
||||
import (
|
||||
"fmt"
|
||||
"net/netip"
|
||||
"net/url"
|
||||
"strings"
|
||||
|
||||
mapset "github.com/deckarep/golang-set/v2"
|
||||
"go4.org/netipx"
|
||||
"tailscale.com/tailcfg"
|
||||
"tailscale.com/types/dnstype"
|
||||
"tailscale.com/util/dnsname"
|
||||
)
|
||||
|
||||
@@ -20,6 +22,10 @@ const (
|
||||
ipv6AddressLength = 128
|
||||
)
|
||||
|
||||
const (
|
||||
nextDNSDoHPrefix = "https://dns.nextdns.io"
|
||||
)
|
||||
|
||||
// generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
|
||||
// This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
|
||||
// server (listening in 100.100.100.100 udp/53) should be used for.
|
||||
@@ -152,37 +158,62 @@ func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
|
||||
return fqdns
|
||||
}
|
||||
|
||||
// If any nextdns DoH resolvers are present in the list of resolvers it will
|
||||
// take metadata from the machine metadata and instruct tailscale to add it
|
||||
// to the requests. This makes it possible to identify from which device the
|
||||
// requests come in the NextDNS dashboard.
|
||||
//
|
||||
// This will produce a resolver like:
|
||||
// `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
|
||||
func addNextDNSMetadata(resolvers []*dnstype.Resolver, machine Machine) {
|
||||
for _, resolver := range resolvers {
|
||||
if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
|
||||
attrs := url.Values{
|
||||
"device_name": []string{machine.Hostname},
|
||||
"device_model": []string{machine.HostInfo.OS},
|
||||
}
|
||||
|
||||
if len(machine.IPAddresses) > 0 {
|
||||
attrs.Add("device_ip", machine.IPAddresses[0].String())
|
||||
}
|
||||
|
||||
resolver.Addr = fmt.Sprintf("%s?%s", resolver.Addr, attrs.Encode())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func getMapResponseDNSConfig(
|
||||
dnsConfigOrig *tailcfg.DNSConfig,
|
||||
baseDomain string,
|
||||
machine Machine,
|
||||
peers Machines,
|
||||
) *tailcfg.DNSConfig {
|
||||
var dnsConfig *tailcfg.DNSConfig
|
||||
var dnsConfig *tailcfg.DNSConfig = dnsConfigOrig.Clone()
|
||||
if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
|
||||
// Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
|
||||
dnsConfig = dnsConfigOrig.Clone()
|
||||
// Only inject the Search Domain of the current user - shared nodes should use their full FQDN
|
||||
dnsConfig.Domains = append(
|
||||
dnsConfig.Domains,
|
||||
fmt.Sprintf(
|
||||
"%s.%s",
|
||||
machine.Namespace.Name,
|
||||
machine.User.Name,
|
||||
baseDomain,
|
||||
),
|
||||
)
|
||||
|
||||
namespaceSet := mapset.NewSet[Namespace]()
|
||||
namespaceSet.Add(machine.Namespace)
|
||||
userSet := mapset.NewSet[User]()
|
||||
userSet.Add(machine.User)
|
||||
for _, p := range peers {
|
||||
namespaceSet.Add(p.Namespace)
|
||||
userSet.Add(p.User)
|
||||
}
|
||||
for _, namespace := range namespaceSet.ToSlice() {
|
||||
dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
|
||||
for _, user := range userSet.ToSlice() {
|
||||
dnsRoute := fmt.Sprintf("%v.%v", user.Name, baseDomain)
|
||||
dnsConfig.Routes[dnsRoute] = nil
|
||||
}
|
||||
} else {
|
||||
dnsConfig = dnsConfigOrig
|
||||
}
|
||||
|
||||
addNextDNSMetadata(dnsConfig.Resolvers, machine)
|
||||
|
||||
return dnsConfig
|
||||
}
|
||||
|
82
dns_test.go
82
dns_test.go
@@ -112,17 +112,17 @@ func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
|
||||
}
|
||||
|
||||
func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
namespaceShared1, err := app.CreateNamespace("shared1")
|
||||
userShared1, err := app.CreateUser("shared1")
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
namespaceShared2, err := app.CreateNamespace("shared2")
|
||||
userShared2, err := app.CreateUser("shared2")
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
namespaceShared3, err := app.CreateNamespace("shared3")
|
||||
userShared3, err := app.CreateUser("shared3")
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKeyInShared1, err := app.CreatePreAuthKey(
|
||||
namespaceShared1.Name,
|
||||
userShared1.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -131,7 +131,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKeyInShared2, err := app.CreatePreAuthKey(
|
||||
namespaceShared2.Name,
|
||||
userShared2.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -140,7 +140,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKeyInShared3, err := app.CreatePreAuthKey(
|
||||
namespaceShared3.Name,
|
||||
userShared3.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -149,7 +149,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
PreAuthKey2InShared1, err := app.CreatePreAuthKey(
|
||||
namespaceShared1.Name,
|
||||
userShared1.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -157,7 +157,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
|
||||
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
|
||||
c.Assert(err, check.NotNil)
|
||||
|
||||
machineInShared1 := &Machine{
|
||||
@@ -166,15 +166,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
|
||||
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
|
||||
Hostname: "test_get_shared_nodes_1",
|
||||
NamespaceID: namespaceShared1.ID,
|
||||
Namespace: *namespaceShared1,
|
||||
UserID: userShared1.ID,
|
||||
User: *userShared1,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
|
||||
AuthKeyID: uint(preAuthKeyInShared1.ID),
|
||||
}
|
||||
app.db.Save(machineInShared1)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
|
||||
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
machineInShared2 := &Machine{
|
||||
@@ -183,15 +183,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
Hostname: "test_get_shared_nodes_2",
|
||||
NamespaceID: namespaceShared2.ID,
|
||||
Namespace: *namespaceShared2,
|
||||
UserID: userShared2.ID,
|
||||
User: *userShared2,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
|
||||
AuthKeyID: uint(preAuthKeyInShared2.ID),
|
||||
}
|
||||
app.db.Save(machineInShared2)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
|
||||
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
machineInShared3 := &Machine{
|
||||
@@ -200,15 +200,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
Hostname: "test_get_shared_nodes_3",
|
||||
NamespaceID: namespaceShared3.ID,
|
||||
Namespace: *namespaceShared3,
|
||||
UserID: userShared3.ID,
|
||||
User: *userShared3,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
|
||||
AuthKeyID: uint(preAuthKeyInShared3.ID),
|
||||
}
|
||||
app.db.Save(machineInShared3)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
|
||||
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
machine2InShared1 := &Machine{
|
||||
@@ -217,8 +217,8 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
Hostname: "test_get_shared_nodes_4",
|
||||
NamespaceID: namespaceShared1.ID,
|
||||
Namespace: *namespaceShared1,
|
||||
UserID: userShared1.ID,
|
||||
User: *userShared1,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
|
||||
AuthKeyID: uint(PreAuthKey2InShared1.ID),
|
||||
@@ -245,31 +245,31 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
|
||||
|
||||
c.Assert(len(dnsConfig.Routes), check.Equals, 3)
|
||||
|
||||
domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
|
||||
domainRouteShared1 := fmt.Sprintf("%s.%s", userShared1.Name, baseDomain)
|
||||
_, ok := dnsConfig.Routes[domainRouteShared1]
|
||||
c.Assert(ok, check.Equals, true)
|
||||
|
||||
domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
|
||||
domainRouteShared2 := fmt.Sprintf("%s.%s", userShared2.Name, baseDomain)
|
||||
_, ok = dnsConfig.Routes[domainRouteShared2]
|
||||
c.Assert(ok, check.Equals, true)
|
||||
|
||||
domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
|
||||
domainRouteShared3 := fmt.Sprintf("%s.%s", userShared3.Name, baseDomain)
|
||||
_, ok = dnsConfig.Routes[domainRouteShared3]
|
||||
c.Assert(ok, check.Equals, true)
|
||||
}
|
||||
|
||||
func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
namespaceShared1, err := app.CreateNamespace("shared1")
|
||||
userShared1, err := app.CreateUser("shared1")
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
namespaceShared2, err := app.CreateNamespace("shared2")
|
||||
userShared2, err := app.CreateUser("shared2")
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
namespaceShared3, err := app.CreateNamespace("shared3")
|
||||
userShared3, err := app.CreateUser("shared3")
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKeyInShared1, err := app.CreatePreAuthKey(
|
||||
namespaceShared1.Name,
|
||||
userShared1.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -278,7 +278,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKeyInShared2, err := app.CreatePreAuthKey(
|
||||
namespaceShared2.Name,
|
||||
userShared2.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -287,7 +287,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKeyInShared3, err := app.CreatePreAuthKey(
|
||||
namespaceShared3.Name,
|
||||
userShared3.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -296,7 +296,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
preAuthKey2InShared1, err := app.CreatePreAuthKey(
|
||||
namespaceShared1.Name,
|
||||
userShared1.Name,
|
||||
false,
|
||||
false,
|
||||
nil,
|
||||
@@ -304,7 +304,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
|
||||
_, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
|
||||
c.Assert(err, check.NotNil)
|
||||
|
||||
machineInShared1 := &Machine{
|
||||
@@ -313,15 +313,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
NodeKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
|
||||
DiscoKey: "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
|
||||
Hostname: "test_get_shared_nodes_1",
|
||||
NamespaceID: namespaceShared1.ID,
|
||||
Namespace: *namespaceShared1,
|
||||
UserID: userShared1.ID,
|
||||
User: *userShared1,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.1")},
|
||||
AuthKeyID: uint(preAuthKeyInShared1.ID),
|
||||
}
|
||||
app.db.Save(machineInShared1)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
|
||||
_, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
machineInShared2 := &Machine{
|
||||
@@ -330,15 +330,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
Hostname: "test_get_shared_nodes_2",
|
||||
NamespaceID: namespaceShared2.ID,
|
||||
Namespace: *namespaceShared2,
|
||||
UserID: userShared2.ID,
|
||||
User: *userShared2,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.2")},
|
||||
AuthKeyID: uint(preAuthKeyInShared2.ID),
|
||||
}
|
||||
app.db.Save(machineInShared2)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
|
||||
_, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
machineInShared3 := &Machine{
|
||||
@@ -347,15 +347,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
Hostname: "test_get_shared_nodes_3",
|
||||
NamespaceID: namespaceShared3.ID,
|
||||
Namespace: *namespaceShared3,
|
||||
UserID: userShared3.ID,
|
||||
User: *userShared3,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.3")},
|
||||
AuthKeyID: uint(preAuthKeyInShared3.ID),
|
||||
}
|
||||
app.db.Save(machineInShared3)
|
||||
|
||||
_, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
|
||||
_, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
|
||||
c.Assert(err, check.IsNil)
|
||||
|
||||
machine2InShared1 := &Machine{
|
||||
@@ -364,8 +364,8 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
|
||||
NodeKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
DiscoKey: "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
|
||||
Hostname: "test_get_shared_nodes_4",
|
||||
NamespaceID: namespaceShared1.ID,
|
||||
Namespace: *namespaceShared1,
|
||||
UserID: userShared1.ID,
|
||||
User: *userShared1,
|
||||
RegisterMethod: RegisterMethodAuthKey,
|
||||
IPAddresses: []netip.Addr{netip.MustParseAddr("100.64.0.4")},
|
||||
AuthKeyID: uint(preAuthKey2InShared1.ID),
|
||||
|
@@ -1,54 +0,0 @@
|
||||
# headscale documentation
|
||||
|
||||
This page contains the official and community contributed documentation for `headscale`.
|
||||
|
||||
If you are having trouble with following the documentation or get unexpected results,
|
||||
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
|
||||
|
||||
## Official documentation
|
||||
|
||||
### How-to
|
||||
|
||||
- [Running headscale on Linux](running-headscale-linux.md)
|
||||
- [Control headscale remotely](remote-cli.md)
|
||||
- [Using a Windows client with headscale](windows-client.md)
|
||||
|
||||
### References
|
||||
|
||||
- [Configuration](../config-example.yaml)
|
||||
- [Glossary](glossary.md)
|
||||
- [TLS](tls.md)
|
||||
|
||||
## Community documentation
|
||||
|
||||
Community documentation is not actively maintained by the headscale authors and is
|
||||
written by community members. It is _not_ verified by `headscale` developers.
|
||||
|
||||
**It might be outdated and it might miss necessary steps**.
|
||||
|
||||
- [Running headscale in a container](running-headscale-container.md)
|
||||
- [Running headscale on OpenBSD](running-headscale-openbsd.md)
|
||||
- [Running headscale behind a reverse proxy](reverse-proxy.md)
|
||||
|
||||
## Misc
|
||||
|
||||
### Policy ACLs
|
||||
|
||||
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
|
||||
|
||||
For instance, instead of referring to users when defining groups you must
|
||||
use namespaces (which are the equivalent to user/logins in Tailscale.com).
|
||||
|
||||
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
|
||||
|
||||
When using ACL's the Namespace borders are no longer applied. All machines
|
||||
whichever the Namespace have the ability to communicate with other hosts as
|
||||
long as the ACL's permits this exchange.
|
||||
|
||||
The [ACLs](acls.md) document should help understand a fictional case of setting
|
||||
up ACLs in a small company. All concepts presented in this document could be
|
||||
applied outside of business oriented usage.
|
||||
|
||||
### Apple devices
|
||||
|
||||
An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.
|
25
docs/acls.md
25
docs/acls.md
@@ -1,4 +1,15 @@
|
||||
# ACLs use case example
|
||||
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
|
||||
|
||||
For instance, instead of referring to users when defining groups you must
|
||||
use users (which are the equivalent to user/logins in Tailscale.com).
|
||||
|
||||
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
|
||||
|
||||
When using ACL's the User borders are no longer applied. All machines
|
||||
whichever the User have the ability to communicate with other hosts as
|
||||
long as the ACL's permits this exchange.
|
||||
|
||||
## ACLs use case example
|
||||
|
||||
Let's build an example use case for a small business (It may be the place where
|
||||
ACL's are the most useful).
|
||||
@@ -29,19 +40,21 @@ servers.
|
||||
|
||||
## ACL setup
|
||||
|
||||
Note: Namespaces will be created automatically when users authenticate with the
|
||||
Note: Users will be created automatically when users authenticate with the
|
||||
Headscale server.
|
||||
|
||||
ACLs could be written either on [huJSON](https://github.com/tailscale/hujson)
|
||||
or YAML. Check the [test ACLs](../tests/acls) for further information.
|
||||
|
||||
When registering the servers we will need to add the flag
|
||||
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
|
||||
`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
|
||||
registering the server should be allowed to do it. Since anyone can add tags to
|
||||
a server they can register, the check of the tags is done on headscale server
|
||||
and only valid tags are applied. A tag is valid if the namespace that is
|
||||
and only valid tags are applied. A tag is valid if the user that is
|
||||
registering it is allowed to do it.
|
||||
|
||||
To use ACLs in headscale, you must edit your config.yaml file. In there you will find a `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
|
||||
|
||||
Here are the ACL's to implement the same permissions as above:
|
||||
|
||||
```json
|
||||
@@ -164,8 +177,8 @@ Here are the ACL's to implement the same permissions as above:
|
||||
"dst": ["tag:dev-app-servers:80,443"]
|
||||
},
|
||||
|
||||
// We still have to allow internal namespaces communications since nothing guarantees that each user have
|
||||
// their own namespaces.
|
||||
// We still have to allow internal users communications since nothing guarantees that each user have
|
||||
// their own users.
|
||||
{ "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
|
||||
{ "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
|
||||
{ "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },
|
||||
|
90
docs/dns-records.md
Normal file
90
docs/dns-records.md
Normal file
@@ -0,0 +1,90 @@
|
||||
# Setting custom DNS records
|
||||
|
||||
!!! warning "Community documentation"
|
||||
|
||||
This page is not actively maintained by the headscale authors and is
|
||||
written by community members. It is _not_ verified by `headscale` developers.
|
||||
|
||||
**It might be outdated and it might miss necessary steps**.
|
||||
|
||||
## Goal
|
||||
|
||||
This documentation has the goal of showing how a user can set custom DNS records with `headscale`s magic dns.
|
||||
An example use case is to serve apps on the same host via a reverse proxy like NGINX, in this case a Prometheus monitoring stack. This allows to nicely access the service with "http://grafana.myvpn.example.com" instead of the hostname and portnum combination "http://hostname-in-magic-dns.myvpn.example.com:3000".
|
||||
|
||||
## Setup
|
||||
|
||||
### 1. Change the configuration
|
||||
|
||||
1. Change the `config.yaml` to contain the desired records like so:
|
||||
|
||||
```yaml
|
||||
dns_config:
|
||||
...
|
||||
extra_records:
|
||||
- name: "prometheus.myvpn.example.com"
|
||||
type: "A"
|
||||
value: "100.64.0.3"
|
||||
|
||||
- name: "grafana.myvpn.example.com"
|
||||
type: "A"
|
||||
value: "100.64.0.3"
|
||||
...
|
||||
```
|
||||
|
||||
2. Restart your headscale instance.
|
||||
|
||||
Beware of the limitations listed later on!
|
||||
|
||||
### 2. Verify that the records are set
|
||||
|
||||
You can use a DNS querying tool of your choice on one of your hosts to verify that your newly set records are actually available in MagicDNS, here we used [`dig`](https://man.archlinux.org/man/dig.1.en):
|
||||
|
||||
```
|
||||
$ dig grafana.myvpn.example.com
|
||||
|
||||
; <<>> DiG 9.18.10 <<>> grafana.myvpn.example.com
|
||||
;; global options: +cmd
|
||||
;; Got answer:
|
||||
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44054
|
||||
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
|
||||
|
||||
;; OPT PSEUDOSECTION:
|
||||
; EDNS: version: 0, flags:; udp: 65494
|
||||
;; QUESTION SECTION:
|
||||
;grafana.myvpn.example.com. IN A
|
||||
|
||||
;; ANSWER SECTION:
|
||||
grafana.myvpn.example.com. 593 IN A 100.64.0.3
|
||||
|
||||
;; Query time: 0 msec
|
||||
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
|
||||
;; WHEN: Sat Dec 31 11:46:55 CET 2022
|
||||
;; MSG SIZE rcvd: 66
|
||||
```
|
||||
|
||||
### 3. Optional: Setup the reverse proxy
|
||||
|
||||
The motivating example here was to be able to access internal monitoring services on the same host without specifying a port:
|
||||
|
||||
```
|
||||
server {
|
||||
listen 80;
|
||||
listen [::]:80;
|
||||
|
||||
server_name grafana.myvpn.example.com;
|
||||
|
||||
location / {
|
||||
proxy_pass http://localhost:3000;
|
||||
proxy_set_header Host $http_host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
|
||||
}
|
||||
```
|
||||
|
||||
## Limitations
|
||||
|
||||
[Not all types of records are supported](https://github.com/tailscale/tailscale/blob/6edf357b96b28ee1be659a70232c0135b2ffedfd/ipn/ipnlocal/local.go#L2989-L3007), especially no CNAME records.
|
49
docs/exit-node.md
Normal file
49
docs/exit-node.md
Normal file
@@ -0,0 +1,49 @@
|
||||
# Exit Nodes
|
||||
|
||||
## On the node
|
||||
|
||||
Register the node and make it advertise itself as an exit node:
|
||||
|
||||
```console
|
||||
$ sudo tailscale up --login-server https://my-server.com --advertise-exit-node
|
||||
```
|
||||
|
||||
If the node is already registered, it can advertise exit capabilities like this:
|
||||
|
||||
```console
|
||||
$ sudo tailscale set --advertise-exit-node
|
||||
```
|
||||
|
||||
To use a node as an exit node, IP forwarding must be enabled on the node. Check the official [Tailscale documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to enable IP fowarding.
|
||||
|
||||
## On the control server
|
||||
|
||||
```console
|
||||
$ # list nodes
|
||||
$ headscale routes list
|
||||
ID | Machine | Prefix | Advertised | Enabled | Primary
|
||||
1 | | 0.0.0.0/0 | false | false | -
|
||||
2 | | ::/0 | false | false | -
|
||||
3 | phobos | 0.0.0.0/0 | true | false | -
|
||||
4 | phobos | ::/0 | true | false | -
|
||||
$ # enable routes for phobos
|
||||
$ headscale routes enable -r 3
|
||||
$ headscale routes enable -r 4
|
||||
$ # Check node list again. The routes are now enabled.
|
||||
$ headscale routes list
|
||||
ID | Machine | Prefix | Advertised | Enabled | Primary
|
||||
1 | | 0.0.0.0/0 | false | false | -
|
||||
2 | | ::/0 | false | false | -
|
||||
3 | phobos | 0.0.0.0/0 | true | true | -
|
||||
4 | phobos | ::/0 | true | true | -
|
||||
```
|
||||
|
||||
## On the client
|
||||
|
||||
The exit node can now be used with:
|
||||
|
||||
```console
|
||||
$ sudo tailscale set --exit-node phobos
|
||||
```
|
||||
|
||||
Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes/?q=exit#step-3-use-the-exit-node) for how to do it on your device.
|
53
docs/faq.md
Normal file
53
docs/faq.md
Normal file
@@ -0,0 +1,53 @@
|
||||
---
|
||||
hide:
|
||||
- navigation
|
||||
---
|
||||
|
||||
# Frequently Asked Questions
|
||||
|
||||
## What is the design goal of headscale?
|
||||
|
||||
`headscale` aims to implement a self-hosted, open source alternative to the [Tailscale](https://tailscale.com/)
|
||||
control server.
|
||||
`headscale`'s goal is to provide self-hosters and hobbyists with an open-source
|
||||
server they can use for their projects and labs.
|
||||
It implements a narrow scope, a _single_ Tailnet, suitable for a personal use, or a small
|
||||
open-source organisation.
|
||||
|
||||
## How can I contribute?
|
||||
|
||||
Headscale is "Open Source, acknowledged contribution", this means that any
|
||||
contribution will have to be discussed with the Maintainers before being submitted.
|
||||
|
||||
Headscale is open to code contributions for bug fixes without discussion.
|
||||
|
||||
If you find mistakes in the documentation, please also submit a fix to the documentation.
|
||||
|
||||
## Why is 'acknowledged contribution' the chosen model?
|
||||
|
||||
Both maintainers have full-time jobs and families, and we want to avoid burnout. We also want to avoid frustration from contributors when their PRs are not accepted.
|
||||
|
||||
We are more than happy to exchange emails, or to have dedicated calls before a PR is submitted.
|
||||
|
||||
## When/Why is Feature X going to be implemented?
|
||||
|
||||
We don't know. We might be working on it. If you want to help, please send us a PR.
|
||||
|
||||
Please be aware that there are a number of reasons why we might not accept specific contributions:
|
||||
|
||||
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
|
||||
- Given that we are reverse-engineering Tailscale to satify our own curiosity, we might be interested in implementing the feature ourselves.
|
||||
- You are not sending unit and integration tests with it.
|
||||
|
||||
## Do you support Y method of deploying Headscale?
|
||||
|
||||
We currently support deploying `headscale` using our binaries and the DEB packages. Both can be found in the
|
||||
[GitHub releases page](https://github.com/juanfont/headscale/releases).
|
||||
|
||||
In addition to that, there are semi-official RPM packages by the Fedora infra team https://copr.fedorainfracloud.org/coprs/jonathanspw/headscale/
|
||||
|
||||
For convenience, we also build Docker images with `headscale`. But **please be aware that we don't officially support deploying `headscale` using Docker**. We have a [Discord channel](https://discord.com/channels/896711691637780480/1070619770942148618) where you can ask for Docker-specific help to the community.
|
||||
|
||||
## Why is my reverse proxy not working with Headscale?
|
||||
|
||||
We don't know. We don't use reverse proxies with `headscale` ourselves, so we don't have any experience with them. We have [community documentation](https://headscale.net/reverse-proxy/) on how to configure various reverse proxies, and a dedicated [Discord channel](https://discord.com/channels/896711691637780480/1070619818346164324) where you can ask for help to the community.
|
@@ -1,6 +1,6 @@
|
||||
# Glossary
|
||||
|
||||
| Term | Description |
|
||||
| --------- | --------------------------------------------------------------------------------------------------------------------- |
|
||||
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
|
||||
| Namespace | A namespace is a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User |
|
||||
| Term | Description |
|
||||
| --------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| Machine | A machine is a single entity connected to `headscale`, typically an installation of Tailscale. Also known as **Node** |
|
||||
| Namespace | A namespace was a logical grouping of machines "owned" by the same entity, in Tailscale, this is typically a User (This is now called user) |
|
||||
|
30
docs/iOS-client.md
Normal file
30
docs/iOS-client.md
Normal file
@@ -0,0 +1,30 @@
|
||||
# Connecting an iOS client
|
||||
|
||||
## Goal
|
||||
|
||||
This documentation has the goal of showing how a user can use the official iOS [Tailscale](https://tailscale.com) client with `headscale`.
|
||||
|
||||
## Installation
|
||||
|
||||
Install the official Tailscale iOS client from the [App Store](https://apps.apple.com/app/tailscale/id1470499037).
|
||||
|
||||
Ensure that the installed version is at least 1.38.1, as that is the first release to support alternate control servers.
|
||||
|
||||
## Configuring the headscale URL
|
||||
|
||||
!!! info "Apple devices"
|
||||
|
||||
An endpoint with information on how to connect your Apple devices
|
||||
(currently macOS only) is available at `/apple` on your running instance.
|
||||
|
||||
Ensure that the tailscale app is logged out before proceeding.
|
||||
|
||||
Go to iOS settings, scroll down past game center and tv provider to the tailscale app and select it. The headscale URL can be entered into the _"ALTERNATE COORDINATION SERVER URL"_ box.
|
||||
|
||||
> **Note**
|
||||
>
|
||||
> If the app was previously logged into tailscale, toggle on the _Reset Keychain_ switch.
|
||||
|
||||
Restart the app by closing it from the iOS app switcher, open the app and select the regular _Sign in_ option (non-SSO), and it should open up to the headscale authentication page.
|
||||
|
||||
Enter your credentials and log in. Headscale should now be working on your iOS device.
|
43
docs/index.md
Normal file
43
docs/index.md
Normal file
@@ -0,0 +1,43 @@
|
||||
---
|
||||
hide:
|
||||
- navigation
|
||||
- toc
|
||||
---
|
||||
|
||||
# headscale
|
||||
|
||||
`headscale` is an open source, self-hosted implementation of the Tailscale control server.
|
||||
|
||||
This page contains the documentation for the latest version of headscale. Please also check our [FAQ](/faq/).
|
||||
|
||||
Join our [Discord](https://discord.gg/c84AZQhmpx) server for a chat and community support.
|
||||
|
||||
## Design goal
|
||||
|
||||
Headscale aims to implement a self-hosted, open source alternative to the Tailscale
|
||||
control server.
|
||||
Headscale's goal is to provide self-hosters and hobbyists with an open-source
|
||||
server they can use for their projects and labs.
|
||||
It implements a narrower scope, a single Tailnet, suitable for a personal use, or a small
|
||||
open-source organisation.
|
||||
|
||||
## Supporting headscale
|
||||
|
||||
If you like `headscale` and find it useful, there is a sponsorship and donation
|
||||
buttons available in the repo.
|
||||
|
||||
## Contributing
|
||||
|
||||
Headscale is "Open Source, acknowledged contribution", this means that any
|
||||
contribution will have to be discussed with the Maintainers before being submitted.
|
||||
|
||||
This model has been chosen to reduce the risk of burnout by limiting the
|
||||
maintenance overhead of reviewing and validating third-party code.
|
||||
|
||||
Headscale is open to code contributions for bug fixes without discussion.
|
||||
|
||||
If you find mistakes in the documentation, please submit a fix to the documentation.
|
||||
|
||||
## About
|
||||
|
||||
`headscale` is maintained by [Kristoffer Dalby](https://kradalby.no/) and [Juan Font](https://font.eu).
|
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user