Mirror of https://github.com/juanfont/headscale.git (synced 2025-08-17 22:37:30 +00:00)

Compare commits: v0.22.3...web-auth-f (29 commits)
| SHA1 |
| --- |
| 70e08462b3 |
| a231ece825 |
| ae43d82a33 |
| ad3c36fd07 |
| f176503448 |
| f7ad88aa08 |
| f63d22655c |
| 89c468fc43 |
| b0fda6b216 |
| 154fb59bdb |
| d3e9703fb5 |
| 7ce3f8c7d1 |
| 58c8633cc1 |
| b3f5af30a4 |
| 9f64ac8a33 |
| aa1cc05cfb |
| 670ef9a93e |
| 987abcfdce |
| c70f5696dc |
| 825e88311e |
| bbc8cb11da |
| 3a6ef6bece |
| b2dc480f22 |
| 5d7eae46f8 |
| 45cb0f3fa3 |
| 658478cba3 |
| ec90e9d716 |
| 181f1eeb4f |
| e270cf6d20 |
.github/FUNDING.yml (vendored, 3 lines removed)

@@ -1,3 +0,0 @@
-# These are supported funding model platforms
-
-ko_fi: headscale
.github/ISSUE_TEMPLATE/bug_report.md (vendored, 36 lines changed)

@@ -6,24 +6,19 @@ labels: ["bug"]
Removed:
  <!--
  Before posting a bug report, discuss the behaviour you are expecting with the Discord community
  to make sure that it is truly a bug.
  The issue tracker is not the place to ask for support or how to set up Headscale.
  Bug reports without the sufficient information will be closed.
  Headscale is a multinational community across the globe. Our language is English.
  All bug reports needs to be in English.
  -->
  ## Bug description
  ## Environment
Added:
  <!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
  **Bug description**
  **To Reproduce**
  <!-- Steps to reproduce the behavior. -->
  **Context info**
Unchanged context:
  assignees: ""
  ---
  <!-- A clear and concise description of what the bug is. Describe the expected bahavior
  and how it is currently different. If you are unsure if it is a bug, consider discussing
  it on our Discord server first. -->
  <!-- Please add relevant information about your system. For example:
  - Version of headscale used

@@ -33,20 +28,3 @@ All bug reports needs to be in English.
Removed:
  - OS:
  - Headscale version:
  - Tailscale version:
  <!--
  We do not support running Headscale in a container nor behind a (reverse) proxy.
  If either of these are true for your environment, ask the community in Discord
  instead of filing a bug report.
  -->
  - [ ] Headscale is behind a (reverse) proxy
  - [ ] Headscale runs in a container
  ## To Reproduce
  <!-- Steps to reproduce the behavior. -->
Unchanged context:
  - The relevant config parameters you used
  - Log output
  -->
.github/ISSUE_TEMPLATE/feature_request.md (vendored, 21 lines changed)

@@ -6,21 +6,12 @@ labels: ["enhancement"]
Removed:
  <!--
  We typically have a clear roadmap for what we want to improve and reserve the right
  to close feature requests that does not fit in the roadmap, or fit with the scope
  of the project, or we actually want to implement ourselves.
  Headscale is a multinational community across the globe. Our language is English.
  All bug reports needs to be in English.
  -->
  ## Why
  <!-- Include the reason, why you would need the feature. E.g. what problem
  does it solve? Or which workflow is currently frustrating and will be improved by
  this? -->
  ## Description
Added:
  <!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
  **Feature request**
  <!-- Please include the reason, why you would need the feature. E.g. what problem
  does it solve? Or which workflow is currently frustrating and will be improved by
  this? -->
Unchanged context:
  assignees: ""
  ---
  <!-- A clear and precise description of what new or changed feature you want. -->
.github/ISSUE_TEMPLATE/other_issue.md (vendored, new file, 30 lines)

@@ -0,0 +1,30 @@
New file contents:
---
name: "Other issue"
about: "Report a different issue"
title: ""
labels: ["bug"]
assignees: ""
---

<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->

<!-- If you have a question, please consider using our Discord for asking questions -->

**Issue description**

<!-- Please add your issue description. -->

**To Reproduce**

<!-- Steps to reproduce the behavior. -->

**Context info**

<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->
.github/pull_request_template.md (vendored, 12 lines removed)

@@ -1,15 +1,3 @@
-<!--
-Headscale is "Open Source, acknowledged contribution", this means that any
-contribution will have to be discussed with the Maintainers before being submitted.
-
-This model has been chosen to reduce the risk of burnout by limiting the
-maintenance overhead of reviewing and validating third-party code.
-
-Headscale is open to code contributions for bug fixes without discussion.
-
-If you find mistakes in the documentation, please submit a fix to the documentation.
--->
-
 <!-- Please tick if the following things apply. You… -->

 - [ ] read the [CONTRIBUTING guidelines](README.md#contributing)
.github/renovate.json (vendored, 26 lines changed)

@@ -6,27 +6,31 @@
The configuration is identical on both sides; only the JSON formatting of two entries differs:
-  "enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
+  "enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
and, inside the "regexManagers" entry:
-      "fileMatch": [".github/workflows/.*.yml$"],
-      "matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
+      "fileMatch": [
+        ".github/workflows/.*.yml$"
+      ],
+      "matchStrings": [
+        "\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
+      ],
Unchanged context: "onboarding": false, "extends": ["config:base", ":rebaseStalePrs"], "ignorePresets": [":prHourlyLimit2"], "includeForks": true, "repositories": ["juanfont/headscale"], "platform": "github", the "packageRules" grouping Go modules (groupSlug "gomod", "separateMajorMinor": false) and Dockerfiles (groupSlug "dockerfiles"), and the regex manager's "datasourceTemplate": "golang-version" and "depNameTemplate": "actions/go-version".
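For readers unfamiliar with Renovate regex managers, the single matchStrings pattern above is what ties a workflow's go-version field to the golang-version datasource. The sketch below is not part of the diff; it only demonstrates what that pattern captures as currentValue, using a hypothetical workflow snippet and Go's (?P<name>...) named-group syntax in place of the (?<name>...) form used in the Renovate config.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as renovate.json's matchStrings, transcribed with
	// Go's (?P<name>...) named-group syntax.
	re := regexp.MustCompile(`\s*go-version:\s*"?(?P<currentValue>.*?)"?\n`)

	// Hypothetical excerpt of a workflow file that the fileMatch glob
	// (.github/workflows/*.yml) would select; the version is made up.
	workflow := "      - uses: actions/setup-go@v3\n        with:\n          go-version: \"1.20\"\n"

	if m := re.FindStringSubmatch(workflow); m != nil {
		// Renovate treats this capture as the current Go version and
		// proposes updates from the golang-version datasource.
		fmt.Println("currentValue =", m[re.SubexpIndex("currentValue")])
	}
}
```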
.github/workflows/build.yml (vendored, 33 lines changed)

@@ -8,14 +8,9 @@ on:
Removed:
  concurrency:
    group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
    cancel-in-progress: true
  permissions: write-all
Unchanged context: the push trigger on branches: - main and the build job (runs-on: ubuntu-latest) starting with - uses: actions/checkout@v3.

@@ -37,34 +32,10 @@ jobs:
The "Run build" step loses its hash-extraction script in favour of a plain nix build, the "Nix gosum diverging" PR-comment step is dropped, and upload-artifact moves from v3 to v2.
Removed:
        id: build
        run: |
          nix build |& tee build-result
          BUILD_STATUS="${PIPESTATUS[0]}"

          OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
          NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')

          echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
          echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT

          exit $BUILD_STATUS

      - name: Nix gosum diverging
        uses: actions/github-script@v6
        if: failure() && steps.build.outcome == 'failure'
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          script: |
            github.rest.pulls.createReviewComment({
              pull_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
            })

      - uses: actions/upload-artifact@v3
Added:
        run: nix build

      - uses: actions/upload-artifact@v2
Unchanged context: the "Run build" step name, the steps.changed-files.outputs.any_changed == 'true' guards, and the artifact upload of headscale-linux.
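The removed script above grepped the nix build log for the "specified:" and "got:" lines that Nix prints when the vendorSha256 in flake.nix no longer matches go.sum, then exposed both hashes as step outputs for the follow-up PR comment. As a rough illustration only, the sketch below reproduces that extraction in Go; the build-result file name and the exact shape of Nix's output are taken on trust from the shell shown in the diff.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// The workflow tee'd the `nix build` output into this file.
	f, err := os.Open("build-result")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var oldHash, newHash string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		// Mirrors: grep specified: | awk -F ':' '{print $2}' | sed 's/ //g'
		if strings.Contains(line, "specified:") {
			oldHash = strings.ReplaceAll(strings.SplitN(line, ":", 2)[1], " ", "")
		}
		// Mirrors: grep got: | awk -F ':' '{print $2}' | sed 's/ //g'
		if strings.Contains(line, "got:") {
			newHash = strings.ReplaceAll(strings.SplitN(line, ":", 2)[1], " ", "")
		}
	}

	// The workflow wrote these to $GITHUB_OUTPUT so a later step could
	// post a review comment asking for vendorSha256 to be updated.
	fmt.Printf("OLD_HASH=%s\nNEW_HASH=%s\n", oldHash, newHash)
}
```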
.github/workflows/docs.yml (vendored, 45 lines removed)

@@ -1,45 +0,0 @@
Removed file contents ("Build documentation" workflow):
name: Build documentation

on:
  push:
    branches:
      - main
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install python
        uses: actions/setup-python@v4
        with:
          python-version: 3.x
      - name: Setup cache
        uses: actions/cache@v2
        with:
          key: ${{ github.ref }}
          path: .cache
      - name: Setup dependencies
        run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
      - name: Build docs
        run: mkdocs build --strict
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: ./site
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
.github/workflows/gh-actions-updater.yaml (vendored, 23 lines removed)

@@ -1,23 +0,0 @@
Removed file contents:
name: GitHub Actions Version Updater

# Controls when the action will run.
on:
  schedule:
    # Automatically run on every Sunday
    - cron: "0 0 * * 0"

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          # [Required] Access token with `workflow` scope.
          token: ${{ secrets.WORKFLOW_SECRET }}

      - name: Run GitHub Actions Version Updater
        uses: saadmk11/github-actions-version-updater@v0.7.1
        with:
          # [Required] Access token with `workflow` scope.
          token: ${{ secrets.WORKFLOW_SECRET }}
.github/workflows/lint.yml (vendored, 8 lines changed)

@@ -3,10 +3,6 @@ name: Lint
Removed:
  concurrency:
    group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
    cancel-in-progress: true
Unchanged context: on: [push, pull_request] and the golangci-lint job (runs-on: ubuntu-latest).

@@ -30,7 +26,7 @@ jobs:
In the golangci/golangci-lint-action@v2 step:
-          version: v1.51.2
+          version: v1.49.0
Unchanged context: the changed-files guard and the comments "# Only block PRs on new problems." and "# If this is not enabled, we will end up having PRs".

@@ -63,7 +59,7 @@ jobs:
In the "Prettify code" step:
-        uses: creyD/prettier_action@v4.3
+        uses: creyD/prettier_action@v4.0
Unchanged context: prettier_options: >- --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
.github/workflows/release-docker.yml (vendored, 138 lines removed)

@@ -1,138 +0,0 @@
Removed file contents (the docker-release job in full; the docker-debug-release job is summarised after it):
---
name: Release Docker

on:
  push:
    tags:
      - "*" # triggers only if push new tag version
  workflow_dispatch:

jobs:
  docker-release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Set up QEMU for multiple platforms
        uses: docker/setup-qemu-action@master
        with:
          platforms: arm64,amd64
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          # list of Docker images to use as base name for tags
          images: |
            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
            ghcr.io/${{ github.repository_owner }}/headscale
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern={{major}}
            type=sha
            type=raw,value=develop
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to GHCR
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          push: true
          context: .
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          platforms: linux/amd64,linux/arm64
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
          build-args: |
            VERSION=${{ steps.meta.outputs.version }}
      - name: Prepare cache for next build
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache

The second removed job, docker-debug-release, repeats the same step sequence with /tmp/.buildx-cache-debug as the layer-cache path, ${{ runner.os }}-buildx-debug-* cache keys, a Docker meta step with id meta-debug that adds flavor: suffix=-debug,onlatest=true (same tag patterns), and a Build and push step using file: Dockerfile.debug with the debug caches and the meta-debug outputs.
.github/workflows/release.yml (vendored, 215 lines changed)

@@ -9,16 +9,221 @@ on:
Removed (the goreleaser job previously released through Nix):
      - uses: cachix/install-nix-action@v16

      - name: Run goreleaser
        run: nix develop --command -- goreleaser release --clean
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Added (the goreleaser job instead cross-compiles with CGO, and three Docker release jobs are appended):
  goreleaser:
    runs-on: ubuntu-18.04 # due to CGO we need to user an older version
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.19.0

      - name: Install dependencies
        run: |
          sudo apt update
          sudo apt install -y gcc-aarch64-linux-gnu
      - name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v2
        with:
          distribution: goreleaser
          version: latest
          args: release --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

The three appended jobs (docker-release, docker-debug-release, docker-alpine-release) each run on ubuntu-latest and repeat the checkout, Buildx, QEMU, layer-cache, metadata, DockerHub/GHCR login, build-and-push and cache-rotation steps shown above for release-docker.yml, differing only in the following:
  docker-release: tags type=semver,pattern={{version}}, {{major}}.{{minor}}, {{major}}, type=raw,value=latest and type=sha; builds context: . with the /tmp/.buildx-cache caches.
  docker-debug-release: metadata id meta-debug with flavor latest=false and tags {{version}}-debug, {{major}}.{{minor}}-debug, {{major}}-debug, latest-debug and type=sha,suffix=-debug; builds file: Dockerfile.debug with the /tmp/.buildx-cache-debug caches.
  docker-alpine-release: metadata id meta-alpine with flavor latest=false and tags {{version}}-alpine, {{major}}.{{minor}}-alpine, {{major}}-alpine, latest-alpine and type=sha,suffix=-alpine; builds file: Dockerfile.alpine with the /tmp/.buildx-cache-alpine caches.
.github/workflows/renovatebot.yml (vendored, new file, 27 lines)

@@ -0,0 +1,27 @@
New file contents:
---
name: Renovate
on:
  schedule:
    - cron: "* * 5,20 * *" # Every 5th and 20th of the month
  workflow_dispatch:
jobs:
  renovate:
    runs-on: ubuntu-latest
    steps:
      - name: Get token
        id: get_token
        uses: machine-learning-apps/actions-app-token@master
        with:
          APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
          APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}

      - name: Checkout
        uses: actions/checkout@v3

      - name: Self-hosted Renovate
        uses: renovatebot/github-action@v31.81.3
        with:
          configurationFile: .github/renovate.json
          token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
        # env:
        #   LOG_LEVEL: "debug"
.github/workflows/test-integration-cli.yml (vendored, new file, 35 lines)

@@ -0,0 +1,35 @@
New file contents:
name: Integration Test CLI

on: [pull_request]

jobs:
  integration-test-cli:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run CLI integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: nix develop --command -- make test_integration_cli
.github/workflows/test-integration-derp.yml (vendored, new file, 35 lines)

@@ -0,0 +1,35 @@
Identical to test-integration-cli.yml above, except for name: Integration Test DERP, the job id integration-test-derp, and the final step "Run Embedded DERP server integration tests" running nix develop --command -- make test_integration_derp.
.github/workflows/test-integration-oidc.yml (vendored, new file, 35 lines)

@@ -0,0 +1,35 @@
Identical to test-integration-cli.yml above, except for name: Integration Test OIDC, the job id integration-test-oidc, and the final step "Run OIDC integration tests" running nix develop --command -- make test_integration_oidc.
Thirteen further workflow files under .github/workflows are removed, each a 63-line generated per-test workflow (@@ -1,63 +0,0 @@), named "Integration Test v2 - <TestName>" for: TestACLAllowStarDst, TestACLAllowUser80Dst, TestACLAllowUserDst, TestACLDenyAllPort80, TestACLDevice1CanAccessDevice2, TestACLHostsInNetMapTable, TestACLNamedHostsCanReach, TestACLNamedHostsCanReachBySubnet, TestAuthKeyLogoutAndRelogin, TestAuthWebFlowAuthenticationPingAll, TestAuthWebFlowLogoutAndRelogin, TestCreateTailscale and TestDERPServerScenario, at which point the capture ends.

All of them share the same removed contents, differing only in the test name (shown below as <TestName>):

# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - <TestName>

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go run gotest.tools/gotestsum@latest -- ./... \
            -tags ts2019 \
            -failfast \
            -timeout 120m \
            -parallel 1 \
            -run "^<TestName>$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestDERPServerScenario$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestEnablingRoutes
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestEnablingRoutes$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestEphemeral
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestEphemeral$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestExpireNode
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestExpireNode$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestHeadscale
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestHeadscale$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestOIDCAuthenticationPingAll
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestOIDCAuthenticationPingAll$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestPingAllByHostname
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestPingAllByHostname$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestPingAllByIP
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestPingAllByIP$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestPreAuthKeyCommand
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestPreAuthKeyCommand$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestPreAuthKeyCommandReusableEphemeral$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestPreAuthKeyCommandWithoutExpiry$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestResolveMagicDNS
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestResolveMagicDNS$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestSSHIsBlockedInACL
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestSSHIsBlockedInACL$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestSSHMultipleUsersAllToAll$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestSSHNoSSHConfigured
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestSSHNoSSHConfigured$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestSSHOneUserAllToAll
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestSSHOneUserAllToAll$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestSSUserOnlyIsolation
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestSSUserOnlyIsolation$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestTaildrop
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestTaildrop$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestTailscaleNodesJoiningHeadcale$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
@@ -1,63 +0,0 @@
|
|||||||
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - TestUserCommand
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go run gotest.tools/gotestsum@latest -- ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^TestUserCommand$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
35
.github/workflows/test-integration-v2-general-auth.yml
vendored
Normal file
35
.github/workflows/test-integration-v2-general-auth.yml
vendored
Normal file
@@ -0,0 +1,35 @@
|
|||||||
|
name: Integration Test v2
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
integration-test-v2:
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v2
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Set Swap Space
|
||||||
|
uses: pierotofy/set-swap-space@master
|
||||||
|
with:
|
||||||
|
swap-size-gb: 10
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v14.1
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- uses: cachix/install-nix-action@v16
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: nix develop --command -- make test_integration_v2_auth_web_flow
|
27
.github/workflows/test-integration-v2-kradalby.yml
vendored
Normal file
27
.github/workflows/test-integration-v2-kradalby.yml
vendored
Normal file
@@ -0,0 +1,27 @@
|
|||||||
|
name: Integration Test v2 - kradalby
|
||||||
|
|
||||||
|
on: [pull_request]
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
integration-test-v2-kradalby:
|
||||||
|
runs-on: [self-hosted, linux, x64, nixos, docker]
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- uses: actions/checkout@v3
|
||||||
|
with:
|
||||||
|
fetch-depth: 2
|
||||||
|
|
||||||
|
- name: Get changed files
|
||||||
|
id: changed-files
|
||||||
|
uses: tj-actions/changed-files@v34
|
||||||
|
with:
|
||||||
|
files: |
|
||||||
|
*.nix
|
||||||
|
go.*
|
||||||
|
**/*.go
|
||||||
|
integration_test/
|
||||||
|
config-example.yaml
|
||||||
|
|
||||||
|
- name: Run general integration tests
|
||||||
|
if: steps.changed-files.outputs.any_changed == 'true'
|
||||||
|
run: nix develop --command -- make test_integration_v2_general
|
4
.github/workflows/test.yml
vendored
4
.github/workflows/test.yml
vendored
@@ -2,10 +2,6 @@ name: Tests
|
|||||||
|
|
||||||
on: [push, pull_request]
|
on: [push, pull_request]
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
jobs:
|
||||||
test:
|
test:
|
||||||
runs-on: ubuntu-latest
|
runs-on: ubuntu-latest
|
||||||
|
12
.gitignore
vendored
12
.gitignore
vendored
@@ -1,5 +1,3 @@
|
|||||||
ignored/
|
|
||||||
|
|
||||||
# Binaries for programs and plugins
|
# Binaries for programs and plugins
|
||||||
*.exe
|
*.exe
|
||||||
*.exe~
|
*.exe~
|
||||||
@@ -14,9 +12,8 @@ ignored/
|
|||||||
*.out
|
*.out
|
||||||
|
|
||||||
# Dependency directories (remove the comment below to include it)
|
# Dependency directories (remove the comment below to include it)
|
||||||
vendor/
|
# vendor/
|
||||||
|
|
||||||
dist/
|
|
||||||
/headscale
|
/headscale
|
||||||
config.json
|
config.json
|
||||||
config.yaml
|
config.yaml
|
||||||
@@ -29,15 +26,10 @@ derp.yaml
|
|||||||
# Exclude Jetbrains Editors
|
# Exclude Jetbrains Editors
|
||||||
.idea
|
.idea
|
||||||
|
|
||||||
test_output/
|
test_output/
|
||||||
control_logs/
|
|
||||||
|
|
||||||
# Nix build output
|
# Nix build output
|
||||||
result
|
result
|
||||||
.direnv/
|
.direnv/
|
||||||
|
|
||||||
integration_test/etc/config.dump.yaml
|
integration_test/etc/config.dump.yaml
|
||||||
|
|
||||||
# MkDocs
|
|
||||||
.cache
|
|
||||||
/site
|
|
||||||
|
@@ -29,14 +29,6 @@ linters:
|
|||||||
- execinquery
|
- execinquery
|
||||||
- exhaustruct
|
- exhaustruct
|
||||||
- nolintlint
|
- nolintlint
|
||||||
- musttag # causes issues with imported libs
|
|
||||||
|
|
||||||
# deprecated
|
|
||||||
- structcheck # replaced by unused
|
|
||||||
- ifshort # deprecated by the owner
|
|
||||||
- varcheck # replaced by unused
|
|
||||||
- nosnakecase # replaced by revive
|
|
||||||
- deadcode # replaced by unused
|
|
||||||
|
|
||||||
# We should strive to enable these:
|
# We should strive to enable these:
|
||||||
- wrapcheck
|
- wrapcheck
|
||||||
|
115
.goreleaser.yml
115
.goreleaser.yml
@@ -1,28 +1,21 @@
|
|||||||
---
|
---
|
||||||
before:
|
before:
|
||||||
hooks:
|
hooks:
|
||||||
- go mod tidy -compat=1.20
|
- go mod tidy -compat=1.19
|
||||||
- go mod vendor
|
|
||||||
|
|
||||||
release:
|
release:
|
||||||
prerelease: auto
|
prerelease: auto
|
||||||
|
|
||||||
builds:
|
builds:
|
||||||
- id: headscale
|
- id: darwin-amd64
|
||||||
main: ./cmd/headscale/headscale.go
|
main: ./cmd/headscale/headscale.go
|
||||||
mod_timestamp: "{{ .CommitTimestamp }}"
|
mod_timestamp: "{{ .CommitTimestamp }}"
|
||||||
env:
|
env:
|
||||||
- CGO_ENABLED=0
|
- CGO_ENABLED=0
|
||||||
targets:
|
goos:
|
||||||
- darwin_amd64
|
- darwin
|
||||||
- darwin_arm64
|
goarch:
|
||||||
- freebsd_amd64
|
- amd64
|
||||||
- linux_386
|
|
||||||
- linux_amd64
|
|
||||||
- linux_arm64
|
|
||||||
- linux_arm_5
|
|
||||||
- linux_arm_6
|
|
||||||
- linux_arm_7
|
|
||||||
flags:
|
flags:
|
||||||
- -mod=readonly
|
- -mod=readonly
|
||||||
ldflags:
|
ldflags:
|
||||||
@@ -30,56 +23,60 @@ builds:
     tags:
       - ts2019

+  - id: darwin-arm64
+    main: ./cmd/headscale/headscale.go
+    mod_timestamp: "{{ .CommitTimestamp }}"
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - darwin
+    goarch:
+      - arm64
+    flags:
+      - -mod=readonly
+    ldflags:
+      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+    tags:
+      - ts2019
+
+  - id: linux-amd64
+    mod_timestamp: "{{ .CommitTimestamp }}"
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - amd64
+    main: ./cmd/headscale/headscale.go
+    ldflags:
+      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+    tags:
+      - ts2019
+
+  - id: linux-arm64
+    mod_timestamp: "{{ .CommitTimestamp }}"
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - arm64
+    main: ./cmd/headscale/headscale.go
+    ldflags:
+      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+    tags:
+      - ts2019

 archives:
   - id: golang-cross
-    name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
+    builds:
+      - darwin-amd64
+      - darwin-arm64
+      - linux-amd64
+      - linux-arm64
+    name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
     format: binary

-source:
-  enabled: true
-  name_template: "{{ .ProjectName }}_{{ .Version }}"
-  format: tar.gz
-  files:
-    - "vendor/"
-
-nfpms:
-  # Configure nFPM for .deb and .rpm releases
-  #
-  # See https://nfpm.goreleaser.com/configuration/
-  # and https://goreleaser.com/customization/nfpm/
-  #
-  # Useful tools for debugging .debs:
-  # List file contents: dpkg -c dist/headscale...deb
-  # Package metadata: dpkg --info dist/headscale....deb
-  #
-  - builds:
-      - headscale
-    package_name: headscale
-    priority: optional
-    vendor: headscale
-    maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
-    homepage: https://github.com/juanfont/headscale
-    license: BSD
-    bindir: /usr/bin
-    formats:
-      - deb
-      # - rpm
-    contents:
-      - src: ./config-example.yaml
-        dst: /etc/headscale/config.yaml
-        type: config|noreplace
-        file_info:
-          mode: 0644
-      - src: ./docs/packaging/headscale.systemd.service
-        dst: /usr/lib/systemd/system/headscale.service
-      - dst: /var/lib/headscale
-        type: dir
-      - dst: /var/run/headscale
-        type: dir
-    scripts:
-      postinstall: ./docs/packaging/postinstall.sh
-      postremove: ./docs/packaging/postremove.sh

 checksum:
   name_template: "checksums.txt"
 snapshot:
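For anyone reviewing this `.goreleaser.yml` change, a quick local sanity check is possible without publishing anything. This is only a sketch: it assumes the goreleaser CLI is installed locally and that your installed version accepts these flags (recent releases use `--clean`; older ones used `--rm-dist`).

```sh
# Validate the configuration file syntax (no artifacts are produced).
goreleaser check

# Build a local snapshot for the current GOOS/GOARCH only, writing binaries to dist/.
# Swap --clean for --rm-dist on older goreleaser versions.
goreleaser build --snapshot --single-target --clean
```

The removed nfpm section also documented how to inspect the resulting `.deb` packages with `dpkg -c` and `dpkg --info`; those commands still apply to any packages produced on the v0.22.3 side of this diff.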
@@ -1 +0,0 @@
-.github/workflows/test-integration-v2*
114 CHANGELOG.md

@@ -1,121 +1,14 @@
 # CHANGELOG

-## 0.23.0 (2023-XX-XX)
+## 0.17.0 (2022-XX-XX)

 ### BREAKING

-- Code reorganisation, a lot of code has moved, please review the following PRs accordingly [#1444](https://github.com/juanfont/headscale/pull/1444)

-### Changes

-## 0.22.3 (2023-05-12)
-
-### Changes
-
-- Added missing ca-certificates in Docker image [#1463](https://github.com/juanfont/headscale/pull/1463)
-
-## 0.22.2 (2023-05-10)
-
-### Changes
-
-- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
-  - Profiles are continously generated in our integration tests.
-- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
-- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
-- Replace node filter logic, ensuring nodes with access can see eachother [#1381](https://github.com/juanfont/headscale/pull/1381)
-- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
-- Ditch distroless for Docker image, create default socket dir in `/var/run/headscale` [#1450](https://github.com/juanfont/headscale/pull/1450)
-
-## 0.22.1 (2023-04-20)
-
-### Changes
-
-- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
-
-## 0.22.0 (2023-04-20)
-
-### Changes
-
-- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
-- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
-- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
-- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
-- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
-- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
-
-## 0.21.0 (2023-03-20)
-
-### Changes
-
-- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
-- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
-- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
-- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
-
-## 0.20.0 (2023-02-03)
-
-### Changes
-
-- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
-- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
-- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
-  - defaults to 180 days like Tailscale SaaS
-  - adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
-- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
-- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
-
-## 0.19.0 (2023-01-29)
-
-### BREAKING
-
-- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
-  - **BACKUP your database before upgrading**
-  - Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
-
-## 0.18.0 (2023-01-14)
-
-### Changes
-
-- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
-- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
-- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
-- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
-- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
-- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
-- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
-- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
-- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
-- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
-
-## 0.17.1 (2022-12-05)
-
-### Changes
-
-- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
-- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
-
-## 0.17.0 (2022-11-26)
-
-### BREAKING
-
-- `noise.private_key_path` has been added and is required for the new noise protocol.
 - Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
-- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)

-### Important Changes
+### Changes

 - Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
-- Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847)
-  - Please note that this support should be considered _partially_ implemented
-  - SSH ACLs status:
-    - Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
-    - Rejecting connections **are not supported**, meaning that if you enable SSH, then assume that _all_ `ssh` connections **will be allowed**.
-    - If you decied to try this feature, please carefully managed permissions by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your clients.
-  - We are currently improving our testing of the SSH ACLs, help us get an overview by testing and giving feedback.
-  - This feature should be considered dangerous and it is disabled by default. Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
-
-### Changes
-
 - Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
 - Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
 - Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)

@@ -132,9 +25,6 @@
 - Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
 - Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
 - Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
-- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971)
-- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
-- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927)

 ## 0.16.4 (2022-08-21)

11 Dockerfile

@@ -1,5 +1,5 @@
 # Builder image
-FROM docker.io/golang:1.20-bullseye AS build
+FROM docker.io/golang:1.19.0-bullseye AS build
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale

@@ -14,17 +14,10 @@ RUN strip /go/bin/headscale
 RUN test -e /go/bin/headscale

 # Production image
-FROM docker.io/debian:bullseye-slim
+FROM gcr.io/distroless/base-debian11

-RUN apt-get update \
-    && apt-get install -y ca-certificates \
-    && rm -rf /var/lib/apt/lists/* \
-    && apt-get clean

 COPY --from=build /go/bin/headscale /bin/headscale
 ENV TZ UTC

-RUN mkdir -p /var/run/headscale

 EXPOSE 8080/tcp
 CMD ["headscale"]
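Either side of this Dockerfile change can be exercised locally with a plain Docker build; this is a sketch only, and the image tag and the `headscale version` invocation are illustrative rather than part of the diff. Note that the distroless base on the web-auth-f side has no shell, so debugging inside the container is more limited than with the Debian base.

```sh
# Build the image from the repository root.
docker build -t headscale:local --build-arg VERSION=dev .

# The image's CMD is ["headscale"], so pass the full command explicitly.
docker run --rm headscale:local headscale version
```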
24 Dockerfile.alpine (new file)

@@ -0,0 +1,24 @@
+# Builder image
+FROM docker.io/golang:1.19.0-alpine AS build
+ARG VERSION=dev
+ENV GOPATH /go
+WORKDIR /go/src/headscale
+
+COPY go.mod go.sum /go/src/headscale/
+RUN apk add gcc musl-dev
+RUN go mod download
+
+COPY . .
+
+RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
+RUN strip /go/bin/headscale
+RUN test -e /go/bin/headscale
+
+# Production image
+FROM docker.io/alpine:latest
+
+COPY --from=build /go/bin/headscale /bin/headscale
+ENV TZ UTC
+
+EXPOSE 8080/tcp
+CMD ["headscale"]
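Because this Alpine variant sits next to the main Dockerfile, it has to be selected explicitly with `-f`. A minimal sketch (the tag name is arbitrary):

```sh
# Build the Alpine-based image from the repository root.
docker build -f Dockerfile.alpine --build-arg VERSION=dev -t headscale:alpine .
```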
@@ -1,5 +1,5 @@
 # Builder image
-FROM docker.io/golang:1.20-bullseye AS build
+FROM docker.io/golang:1.19.0-bullseye AS build
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale

@@ -13,13 +13,11 @@ RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.c
 RUN test -e /go/bin/headscale

 # Debug image
-FROM docker.io/golang:1.20.0-bullseye
+FROM docker.io/golang:1.19.0-bullseye

 COPY --from=build /go/bin/headscale /bin/headscale
 ENV TZ UTC

-RUN mkdir -p /var/run/headscale

 # Need to reset the entrypoint or everything will run as a busybox script
 ENTRYPOINT []
 EXPOSE 8080/tcp
@@ -1,16 +1,17 @@
-FROM ubuntu:22.04
+FROM ubuntu:latest

 ARG TAILSCALE_VERSION=*
 ARG TAILSCALE_CHANNEL=stable

 RUN apt-get update \
-    && apt-get install -y gnupg curl ssh dnsutils ca-certificates \
-    && adduser --shell=/bin/bash ssh-it-user
+    && apt-get install -y gnupg curl \
+    && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \

-# Tailscale is deliberately split into a second stage so we can cash utils as a seperate layer.
-RUN curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
     && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
     && apt-get update \
-    && apt-get install -y tailscale=${TAILSCALE_VERSION} \
-    && apt-get clean \
+    && apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
     && rm -rf /var/lib/apt/lists/*

+ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
+RUN chmod 644 /usr/local/share/ca-certificates/server.crt
+
+RUN update-ca-certificates
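This hunk is the Tailscale client image used by the integration tests; the file name is not visible in this capture and is assumed here to be `Dockerfile.tailscale`. The two build ARGs it keeps can be overridden at build time; the version and tag below are purely illustrative:

```sh
# Pin a specific client version from the stable channel (both values are examples).
docker build -f Dockerfile.tailscale \
  --build-arg TAILSCALE_CHANNEL=stable \
  --build-arg TAILSCALE_VERSION=1.30.0 \
  -t tailscale-client:1.30.0 .
```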
@@ -1,17 +1,23 @@
 FROM golang:latest

 RUN apt-get update \
-    && apt-get install -y dnsutils git iptables ssh ca-certificates \
+    && apt-get install -y ca-certificates dnsutils git iptables \
     && rm -rf /var/lib/apt/lists/*

-RUN useradd --shell=/bin/bash --create-home ssh-it-user

 RUN git clone https://github.com/tailscale/tailscale.git

 WORKDIR /go/tailscale

-RUN git checkout main \
-    && sh build_dist.sh tailscale.com/cmd/tailscale \
-    && sh build_dist.sh tailscale.com/cmd/tailscaled \
-    && cp tailscale /usr/local/bin/ \
-    && cp tailscaled /usr/local/bin/
+RUN git checkout main
+
+RUN sh build_dist.sh tailscale.com/cmd/tailscale
+RUN sh build_dist.sh tailscale.com/cmd/tailscaled
+
+RUN cp tailscale /usr/local/bin/
+RUN cp tailscaled /usr/local/bin/
+
+ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
+RUN chmod 644 /usr/local/share/ca-certificates/server.crt
+
+RUN update-ca-certificates
64 Makefile

@@ -24,9 +24,41 @@ build:
 dev: lint test build

 test:
-	gotestsum -- $(TAGS) -short -coverprofile=coverage.out ./...
+	@go test $(TAGS) -short -coverprofile=coverage.out ./...

-test_integration:
+test_integration: test_integration_cli test_integration_derp test_integration_oidc test_integration_v2_general

+test_integration_cli:
+	docker network rm $$(docker network ls --filter name=headscale --quiet) || true
+	docker network create headscale-test || true
+	docker run -t --rm \
+		--network headscale-test \
+		-v ~/.cache/hs-integration-go:/go \
+		-v $$PWD:$$PWD -w $$PWD \
+		-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
+		go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
+
+test_integration_derp:
+	docker network rm $$(docker network ls --filter name=headscale --quiet) || true
+	docker network create headscale-test || true
+	docker run -t --rm \
+		--network headscale-test \
+		-v ~/.cache/hs-integration-go:/go \
+		-v $$PWD:$$PWD -w $$PWD \
+		-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
+		go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationDERP ./...
+
+test_integration_oidc:
+	docker network rm $$(docker network ls --filter name=headscale --quiet) || true
+	docker network create headscale-test || true
+	docker run -t --rm \
+		--network headscale-test \
+		-v ~/.cache/hs-integration-go:/go \
+		-v $$PWD:$$PWD -w $$PWD \
+		-v /var/run/docker.sock:/var/run/docker.sock golang:1 \
+		go test $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationOIDC ./...
+
+test_integration_v2_general:
 	docker run \
 		-t --rm \
 		-v ~/.cache/hs-integration-go:/go \

@@ -34,7 +66,24 @@ test_integration:
 		-v $$PWD:$$PWD -w $$PWD/integration \
 		-v /var/run/docker.sock:/var/run/docker.sock \
 		golang:1 \
-		go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8
+		go test $(TAGS) -failfast ./... -timeout 60m -parallel 6

+test_integration_v2_auth_web_flow:
+	docker run \
+		-t --rm \
+		-v ~/.cache/hs-integration-go:/go \
+		--name headscale-test-suite \
+		-v $$PWD:$$PWD -w $$PWD/integration \
+		-v /var/run/docker.sock:/var/run/docker.sock \
+		golang:1 \
+		go test ./... -timeout 60m -parallel 6 -run TestAuthWebFlow
+
+coverprofile_func:
+	go tool cover -func=coverage.out
+
+coverprofile_html:
+	go tool cover -html=coverage.out

 lint:
 	golangci-lint run --fix --timeout 10m

@@ -52,4 +101,11 @@ compress: build

 generate:
 	rm -rf gen
-	buf generate proto
+	go run github.com/bufbuild/buf/cmd/buf generate proto

+install-protobuf-plugins:
+	go install \
+		github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
+		github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
+		google.golang.org/protobuf/cmd/protoc-gen-go \
+		google.golang.org/grpc/cmd/protoc-gen-go-grpc
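The web-auth-f side splits the integration suite into separate Make targets, so individual flows can be run on their own. A sketch of typical invocations, assuming Docker is available locally (the targets themselves spin up the golang:1 container shown above):

```sh
# Fast unit tests with coverage, as wired into the `dev` target.
make test

# Run only the web-flow authentication integration tests added in this branch.
make test_integration_v2_auth_web_flow

# Or run the whole aggregate integration target.
make test_integration
```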
407 README.md

@@ -32,18 +32,22 @@ organisation.

 ## Design goal

-Headscale aims to implement a self-hosted, open source alternative to the Tailscale
-control server.
-Headscale's goal is to provide self-hosters and hobbyists with an open-source
-server they can use for their projects and labs.
-It implements a narrow scope, a single Tailnet, suitable for a personal use, or a small
-open-source organisation.
+`headscale` aims to implement a self-hosted, open source alternative to the Tailscale
+control server. `headscale` has a narrower scope and an instance of `headscale`
+implements a _single_ Tailnet, which is typically what a single organisation, or
+home/personal setup would use.

-## Supporting Headscale
+`headscale` uses terms that maps to Tailscale's control server, consult the
+[glossary](./docs/glossary.md) for explainations.
+
+## Support

 If you like `headscale` and find it useful, there is a sponsorship and donation
 buttons available in the repo.

+If you would like to sponsor features, bugs or prioritisation, reach out to
+one of the maintainers.

 ## Features

 - Full "base" support of Tailscale's features

@@ -71,39 +75,19 @@ buttons available in the repo.
 | macOS   | Yes (see `/apple` on your headscale for more information) |
 | Windows | Yes [docs](./docs/windows-client.md) |
 | Android | Yes [docs](./docs/android-client.md) |
-| iOS     | Yes [docs](./docs/iOS-client.md) |
+| iOS     | Not yet |

 ## Running headscale

-**Please note that we do not support nor encourage the use of reverse proxies
-and container to run Headscale.**
+Please have a look at the documentation under [`docs/`](docs/).

-Please have a look at the [`documentation`](https://headscale.net/).
-
-## Talks
-
-- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
-  - presented by Juan Font Alonso and Kristoffer Dalby

 ## Disclaimer

-1. This project is not associated with Tailscale Inc.
+1. We have nothing to do with Tailscale, or Tailscale Inc.
 2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.

 ## Contributing

-Headscale is "Open Source, acknowledged contribution", this means that any
-contribution will have to be discussed with the Maintainers before being submitted.
-
-This model has been chosen to reduce the risk of burnout by limiting the
-maintenance overhead of reviewing and validating third-party code.
-
-Headscale is open to code contributions for bug fixes without discussion.
-
-If you find mistakes in the documentation, please submit a fix to the documentation.
-
-### Requirements
-
 To contribute to headscale you would need the lastest version of [Go](https://golang.org)
 and [Buf](https://buf.build)(Protobuf generator).

@@ -111,6 +95,8 @@ We recommend using [Nix](https://nixos.org/) to setup a development environment.
 be done with `nix develop`, which will install the tools and give you a shell.
 This guarantees that you will have the same dev env as `headscale` maintainers.

+PRs and suggestions are welcome.
+
 ### Code style

 To ensure we have some consistency with a growing number of contributions,
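For contributors following the README's development-environment hunks above, the loop described there boils down to a few commands; this is a sketch assuming Nix and Docker are installed locally:

```sh
# Enter a shell with the project's pinned tooling (Go, Buf, linters).
nix develop

# Lint, test and build, mirroring the `dev` target in the Makefile.
make lint
make test
make build
```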
@@ -188,6 +174,13 @@ make build … @@ -656,15 +540,6 @@ make build

(The remaining README.md hunks touch only the auto-generated contributors section: an HTML table of avatar cells. Between v0.22.3 and web-auth-f the grid is reshuffled — cells for contributors such as Adrien Raffin-Caboisse, Even Holthe, Moritz Poldrack, Christian Heusel, Anton Schubert, Stefan Majer, Orville Q. Song, bravechamp, Snack, dbevacqua, Josh Taylor, Motiejus Jakštys, Steven Honson, Sean Reifschneider, Albert Copeland, Anoop Sundaresh, Antonio Fernandez, Avirut Mehta, fatih-acar and lachy2849 are added, removed or moved, the cell for Arnar is replaced by one for Arthur Woimbée, and the surrounding table row breaks are rebalanced to match. Nothing outside the contributors table changes in these hunks.)
|
|||||||
<sub style="font-size:14px"><b>Felix Yan</b></sub>
|
<sub style="font-size:14px"><b>Felix Yan</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/gabe565>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/7717888?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Gabe Cook/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Gabe Cook</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/JJGadgets>
|
<a href=https://github.com/JJGadgets>
|
||||||
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
|
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
|
||||||
@@ -672,13 +547,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>JJGadgets</b></sub>
|
<sub style="font-size:14px"><b>JJGadgets</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/hrtkpf>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>hrtkpf</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/jimt>
|
<a href=https://github.com/jimt>
|
||||||
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
|
<img src=https://avatars.githubusercontent.com/u/180326?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jim Tittsler/>
|
||||||
@@ -686,20 +554,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
|
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/jsiebens>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/johnae>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -709,43 +563,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
|
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/JulienFloris>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/20380255?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Julien Zweverink/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Julien Zweverink</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/win-t>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/1589120?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kurnia D Win/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Kurnia D Win</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/foxtrot>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Marc</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/magf>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/mikejsavage>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Michael Savage</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/piec>
|
<a href=https://github.com/piec>
|
||||||
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
|
<img src=https://avatars.githubusercontent.com/u/781471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pierre Carru/>
|
||||||
@@ -781,6 +598,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Mend Renovate</b></sub>
|
<sub style="font-size:14px"><b>Mend Renovate</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/ryanfowler>
|
<a href=https://github.com/ryanfowler>
|
||||||
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
|
<img src=https://avatars.githubusercontent.com/u/2668821?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ryan Fowler/>
|
||||||
@@ -788,8 +607,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
|
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/shaananc>
|
<a href=https://github.com/shaananc>
|
||||||
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
|
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
|
||||||
@@ -825,13 +642,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Teteros</b></sub>
|
<sub style="font-size:14px"><b>Teteros</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/Teteros>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Teteros</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -878,13 +688,6 @@ make build
|
|||||||
</td>
|
</td>
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/newellz2>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/52436542?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zachary Newell/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Zachary Newell</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/zekker6>
|
<a href=https://github.com/zekker6>
|
||||||
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
|
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
|
||||||
@@ -906,13 +709,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
|
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/caelansar>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/31852257?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=caelansar/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>caelansar</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/derelm>
|
<a href=https://github.com/derelm>
|
||||||
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
|
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
|
||||||
@@ -920,15 +716,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>derelm</b></sub>
|
<sub style="font-size:14px"><b>derelm</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/dnaq>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/1299717?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dnaq/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>dnaq</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/nning>
|
<a href=https://github.com/nning>
|
||||||
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
|
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
|
||||||
@@ -943,20 +730,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>ignoramous</b></sub>
|
<sub style="font-size:14px"><b>ignoramous</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
</tr>
|
||||||
<a href=https://github.com/jimyag>
|
<tr>
|
||||||
<img src=https://avatars.githubusercontent.com/u/69233189?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=jimyag/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>jimyag</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/magichuihui>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/10866198?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=suhelen/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>suhelen</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/lion24>
|
<a href=https://github.com/lion24>
|
||||||
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/>
|
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/>
|
||||||
@@ -964,34 +739,11 @@ make build
|
|||||||
<sub style="font-size:14px"><b>sharkonet</b></sub>
|
<sub style="font-size:14px"><b>sharkonet</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/ma6174>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/1449133?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ma6174/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>ma6174</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/manju-rn>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/26291847?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=manju-rn/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>manju-rn</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/nicholas-yap>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/38109533?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=nicholas-yap/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>nicholas-yap</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/pernila>
|
<a href=https://github.com/pernila>
|
||||||
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tommi Pernila/>
|
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>Tommi Pernila</b></sub>
|
<sub style="font-size:14px"><b>pernila</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -1008,8 +760,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
|
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/xpzouying>
|
<a href=https://github.com/xpzouying>
|
||||||
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
|
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
|
||||||
@@ -1017,12 +767,5 @@ make build
|
|||||||
<sub style="font-size:14px"><b>zy</b></sub>
|
<sub style="font-size:14px"><b>zy</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/atorregrosa-smd>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/78434679?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Àlex Torregrosa/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Àlex Torregrosa</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
</tr>
|
||||||
</table>
|
</table>
|
||||||
578  acls.go  Normal file
@@ -0,0 +1,578 @@
package headscale

import (
    "encoding/json"
    "errors"
    "fmt"
    "io"
    "net/netip"
    "os"
    "path/filepath"
    "strconv"
    "strings"

    "github.com/rs/zerolog/log"
    "github.com/tailscale/hujson"
    "gopkg.in/yaml.v3"
    "tailscale.com/tailcfg"
)

const (
    errEmptyPolicy       = Error("empty policy")
    errInvalidAction     = Error("invalid action")
    errInvalidGroup      = Error("invalid group")
    errInvalidTag        = Error("invalid tag")
    errInvalidPortFormat = Error("invalid port format")
    errWildcardIsNeeded  = Error("wildcard as port is required for the protocol")
)

const (
    Base8              = 8
    Base10             = 10
    BitSize16          = 16
    BitSize32          = 32
    BitSize64          = 64
    portRangeBegin     = 0
    portRangeEnd       = 65535
    expectedTokenItems = 2
)

// For some reason golang.org/x/net/internal/iana is an internal package.
const (
    protocolICMP     = 1   // Internet Control Message
    protocolIGMP     = 2   // Internet Group Management
    protocolIPv4     = 4   // IPv4 encapsulation
    protocolTCP      = 6   // Transmission Control
    protocolEGP      = 8   // Exterior Gateway Protocol
    protocolIGP      = 9   // any private interior gateway (used by Cisco for their IGRP)
    protocolUDP      = 17  // User Datagram
    protocolGRE      = 47  // Generic Routing Encapsulation
    protocolESP      = 50  // Encap Security Payload
    protocolAH       = 51  // Authentication Header
    protocolIPv6ICMP = 58  // ICMP for IPv6
    protocolSCTP     = 132 // Stream Control Transmission Protocol
    ProtocolFC       = 133 // Fibre Channel
)

// LoadACLPolicy loads the ACL policy from the specify path, and generates the ACL rules.
func (h *Headscale) LoadACLPolicy(path string) error {
    log.Debug().
        Str("func", "LoadACLPolicy").
        Str("path", path).
        Msg("Loading ACL policy from path")

    policyFile, err := os.Open(path)
    if err != nil {
        return err
    }
    defer policyFile.Close()

    var policy ACLPolicy
    policyBytes, err := io.ReadAll(policyFile)
    if err != nil {
        return err
    }

    switch filepath.Ext(path) {
    case ".yml", ".yaml":
        log.Debug().
            Str("path", path).
            Bytes("file", policyBytes).
            Msg("Loading ACLs from YAML")

        err := yaml.Unmarshal(policyBytes, &policy)
        if err != nil {
            return err
        }

        log.Trace().
            Interface("policy", policy).
            Msg("Loaded policy from YAML")

    default:
        ast, err := hujson.Parse(policyBytes)
        if err != nil {
            return err
        }

        ast.Standardize()
        policyBytes = ast.Pack()
        err = json.Unmarshal(policyBytes, &policy)
        if err != nil {
            return err
        }
    }

    if policy.IsZero() {
        return errEmptyPolicy
    }

    h.aclPolicy = &policy

    return h.UpdateACLRules()
}

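As a side note (not part of the diff): the default branch above is simply hujson.Parse → Standardize → Pack → json.Unmarshal. A minimal, self-contained sketch of that path, assuming a trimmed-down illustrative policy struct rather than headscale's ACLPolicy:

package main

import (
    "encoding/json"
    "fmt"

    "github.com/tailscale/hujson"
)

// miniPolicy is a hypothetical stand-in for ACLPolicy, used only for illustration.
type miniPolicy struct {
    ACLs []struct {
        Action string   `json:"action"`
        Src    []string `json:"src"`
        Dst    []string `json:"dst"`
    } `json:"acls"`
}

func main() {
    // HuJSON allows comments and trailing commas, as headscale ACL files do.
    raw := []byte(`{
        // allow everyone to reach port 443 on the "internal" host
        "acls": [
            {"action": "accept", "src": ["*"], "dst": ["internal:443"],},
        ],
    }`)

    ast, err := hujson.Parse(raw)
    if err != nil {
        panic(err)
    }
    ast.Standardize()      // strip comments and trailing commas in place
    plain := ast.Pack()    // serialize back to standard JSON

    var policy miniPolicy
    if err := json.Unmarshal(plain, &policy); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", policy)
}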
func (h *Headscale) UpdateACLRules() error {
    rules, err := h.generateACLRules()
    if err != nil {
        return err
    }
    log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
    h.aclRules = rules

    return nil
}

func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
    rules := []tailcfg.FilterRule{}

    if h.aclPolicy == nil {
        return nil, errEmptyPolicy
    }

    machines, err := h.ListMachines()
    if err != nil {
        return nil, err
    }

    for index, acl := range h.aclPolicy.ACLs {
        if acl.Action != "accept" {
            return nil, errInvalidAction
        }

        srcIPs := []string{}
        for innerIndex, src := range acl.Sources {
            srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, src)
            if err != nil {
                log.Error().
                    Msgf("Error parsing ACL %d, Source %d", index, innerIndex)

                return nil, err
            }
            srcIPs = append(srcIPs, srcs...)
        }

        protocols, needsWildcard, err := parseProtocol(acl.Protocol)
        if err != nil {
            log.Error().
                Msgf("Error parsing ACL %d. protocol unknown %s", index, acl.Protocol)

            return nil, err
        }

        destPorts := []tailcfg.NetPortRange{}
        for innerIndex, dest := range acl.Destinations {
            dests, err := h.generateACLPolicyDest(
                machines,
                *h.aclPolicy,
                dest,
                needsWildcard,
            )
            if err != nil {
                log.Error().
                    Msgf("Error parsing ACL %d, Destination %d", index, innerIndex)

                return nil, err
            }
            destPorts = append(destPorts, dests...)
        }

        rules = append(rules, tailcfg.FilterRule{
            SrcIPs:   srcIPs,
            DstPorts: destPorts,
            IPProto:  protocols,
        })
    }

    return rules, nil
}

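For intuition (illustrative only, not output captured from headscale): a single accept rule roughly expands into one tailcfg.FilterRule such as the literal below, with made-up Tailscale addresses and a TCP-only destination on port 22.

// Hypothetical result of expanding one accept rule with proto "tcp" and dst "...:22".
var exampleRule = tailcfg.FilterRule{
    SrcIPs: []string{"100.64.0.1", "100.64.0.2"}, // expanded "src" aliases
    DstPorts: []tailcfg.NetPortRange{
        {IP: "100.64.0.3", Ports: tailcfg.PortRange{First: 22, Last: 22}},
    },
    IPProto: []int{protocolTCP},
}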
func (h *Headscale) generateACLPolicySrcIP(
    machines []Machine,
    aclPolicy ACLPolicy,
    src string,
) ([]string, error) {
    return expandAlias(machines, aclPolicy, src, h.cfg.OIDC.StripEmaildomain)
}

func (h *Headscale) generateACLPolicyDest(
    machines []Machine,
    aclPolicy ACLPolicy,
    dest string,
    needsWildcard bool,
) ([]tailcfg.NetPortRange, error) {
    tokens := strings.Split(dest, ":")
    if len(tokens) < expectedTokenItems || len(tokens) > 3 {
        return nil, errInvalidPortFormat
    }

    var alias string
    // We can have here stuff like:
    // git-server:*
    // 192.168.1.0/24:22
    // tag:montreal-webserver:80,443
    // tag:api-server:443
    // example-host-1:*
    if len(tokens) == expectedTokenItems {
        alias = tokens[0]
    } else {
        alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
    }

    expanded, err := expandAlias(
        machines,
        aclPolicy,
        alias,
        h.cfg.OIDC.StripEmaildomain,
    )
    if err != nil {
        return nil, err
    }
    ports, err := expandPorts(tokens[len(tokens)-1], needsWildcard)
    if err != nil {
        return nil, err
    }

    dests := []tailcfg.NetPortRange{}
    for _, d := range expanded {
        for _, p := range *ports {
            pr := tailcfg.NetPortRange{
                IP:    d,
                Ports: p,
            }
            dests = append(dests, pr)
        }
    }

    return dests, nil
}

// parseProtocol reads the proto field of the ACL and generates a list of
// protocols that will be allowed, following the IANA IP protocol number
// https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
//
// If the ACL proto field is empty, it allows ICMPv4, ICMPv6, TCP, and UDP,
// as per Tailscale behaviour (see tailcfg.FilterRule).
//
// Also returns a boolean indicating if the protocol
// requires all the destinations to use wildcard as port number (only TCP,
// UDP and SCTP support specifying ports).
func parseProtocol(protocol string) ([]int, bool, error) {
    switch protocol {
    case "":
        return []int{
            protocolICMP,
            protocolIPv6ICMP,
            protocolTCP,
            protocolUDP,
        }, false, nil
    case "igmp":
        return []int{protocolIGMP}, true, nil
    case "ipv4", "ip-in-ip":
        return []int{protocolIPv4}, true, nil
    case "tcp":
        return []int{protocolTCP}, false, nil
    case "egp":
        return []int{protocolEGP}, true, nil
    case "igp":
        return []int{protocolIGP}, true, nil
    case "udp":
        return []int{protocolUDP}, false, nil
    case "gre":
        return []int{protocolGRE}, true, nil
    case "esp":
        return []int{protocolESP}, true, nil
    case "ah":
        return []int{protocolAH}, true, nil
    case "sctp":
        return []int{protocolSCTP}, false, nil
    case "icmp":
        return []int{protocolICMP, protocolIPv6ICMP}, true, nil

    default:
        protocolNumber, err := strconv.Atoi(protocol)
        if err != nil {
            return nil, false, err
        }
        needsWildcard := protocolNumber != protocolTCP &&
            protocolNumber != protocolUDP &&
            protocolNumber != protocolSCTP

        return []int{protocolNumber}, needsWildcard, nil
    }
}

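A hedged, example-style sketch of the mapping parseProtocol implements, assuming it runs in the same package (with fmt imported); the expected values in the comments simply mirror the switch above.

func exampleParseProtocolSketch() {
    // "" (empty) allows ICMPv4, ICMPv6, TCP and UDP, and ports may be specified.
    protos, wildcard, _ := parseProtocol("")
    fmt.Println(protos, wildcard) // [1 58 6 17] false

    // "tcp" allows TCP only; ports may be specified.
    protos, wildcard, _ = parseProtocol("tcp")
    fmt.Println(protos, wildcard) // [6] false

    // "gre" has no port concept, so destinations must use the "*" port.
    protos, wildcard, _ = parseProtocol("gre")
    fmt.Println(protos, wildcard) // [47] true

    // Numeric protocol values are passed through unchanged.
    protos, wildcard, _ = parseProtocol("94")
    fmt.Println(protos, wildcard) // [94] true
}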
// expandalias has an input of either
// - a namespace
// - a group
// - a tag
// and transform these in IPAddresses.
func expandAlias(
    machines []Machine,
    aclPolicy ACLPolicy,
    alias string,
    stripEmailDomain bool,
) ([]string, error) {
    ips := []string{}
    if alias == "*" {
        return []string{"*"}, nil
    }

    log.Debug().
        Str("alias", alias).
        Msg("Expanding")

    if strings.HasPrefix(alias, "group:") {
        namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
        if err != nil {
            return ips, err
        }
        for _, n := range namespaces {
            nodes := filterMachinesByNamespace(machines, n)
            for _, node := range nodes {
                ips = append(ips, node.IPAddresses.ToStringSlice()...)
            }
        }

        return ips, nil
    }

    if strings.HasPrefix(alias, "tag:") {
        // check for forced tags
        for _, machine := range machines {
            if contains(machine.ForcedTags, alias) {
                ips = append(ips, machine.IPAddresses.ToStringSlice()...)
            }
        }

        // find tag owners
        owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
        if err != nil {
            if errors.Is(err, errInvalidTag) {
                if len(ips) == 0 {
                    return ips, fmt.Errorf(
                        "%w. %v isn't owned by a TagOwner and no forced tags are defined",
                        errInvalidTag,
                        alias,
                    )
                }

                return ips, nil
            } else {
                return ips, err
            }
        }

        // filter out machines per tag owner
        for _, namespace := range owners {
            machines := filterMachinesByNamespace(machines, namespace)
            for _, machine := range machines {
                hi := machine.GetHostInfo()
                if contains(hi.RequestTags, alias) {
                    ips = append(ips, machine.IPAddresses.ToStringSlice()...)
                }
            }
        }

        return ips, nil
    }

    // if alias is a namespace
    nodes := filterMachinesByNamespace(machines, alias)
    nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)

    for _, n := range nodes {
        ips = append(ips, n.IPAddresses.ToStringSlice()...)
    }
    if len(ips) > 0 {
        return ips, nil
    }

    // if alias is an host
    if h, ok := aclPolicy.Hosts[alias]; ok {
        return []string{h.String()}, nil
    }

    // if alias is an IP
    ip, err := netip.ParseAddr(alias)
    if err == nil {
        return []string{ip.String()}, nil
    }

    // if alias is an CIDR
    cidr, err := netip.ParsePrefix(alias)
    if err == nil {
        return []string{cidr.String()}, nil
    }

    log.Warn().Msgf("No IPs found with the alias %v", alias)

    return ips, nil
}

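A small sketch of the non-machine branches of expandAlias (host alias, raw IP, CIDR, wildcard), using a hypothetical policy; aliases backed by namespaces, groups or tags would additionally need Machine fixtures.

func exampleExpandAliasSketch() {
    // Hypothetical policy: no machines are needed to exercise these branches.
    pol := ACLPolicy{
        Hosts: Hosts{
            "git-server": netip.MustParsePrefix("192.168.1.10/32"),
        },
    }

    hostIPs, _ := expandAlias([]Machine{}, pol, "git-server", false)  // ["192.168.1.10/32"]
    rawIP, _ := expandAlias([]Machine{}, pol, "10.0.0.1", false)      // ["10.0.0.1"]
    cidr, _ := expandAlias([]Machine{}, pol, "192.168.0.0/24", false) // ["192.168.0.0/24"]
    wildcard, _ := expandAlias([]Machine{}, pol, "*", false)          // ["*"]
    _, _, _, _ = hostIPs, rawIP, cidr, wildcard
}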
// excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
// that are correctly tagged since they should not be listed as being in the namespace
// we assume in this function that we only have nodes from 1 namespace.
func excludeCorrectlyTaggedNodes(
    aclPolicy ACLPolicy,
    nodes []Machine,
    namespace string,
    stripEmailDomain bool,
) []Machine {
    out := []Machine{}
    tags := []string{}
    for tag := range aclPolicy.TagOwners {
        owners, _ := expandTagOwners(aclPolicy, namespace, stripEmailDomain)
        ns := append(owners, namespace)
        if contains(ns, namespace) {
            tags = append(tags, tag)
        }
    }
    // for each machine if tag is in tags list, don't append it.
    for _, machine := range nodes {
        hi := machine.GetHostInfo()

        found := false
        for _, t := range hi.RequestTags {
            if contains(tags, t) {
                found = true

                break
            }
        }
        if len(machine.ForcedTags) > 0 {
            found = true
        }
        if !found {
            out = append(out, machine)
        }
    }

    return out
}

func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
    if portsStr == "*" {
        return &[]tailcfg.PortRange{
            {First: portRangeBegin, Last: portRangeEnd},
        }, nil
    }

    if needsWildcard {
        return nil, errWildcardIsNeeded
    }

    ports := []tailcfg.PortRange{}
    for _, portStr := range strings.Split(portsStr, ",") {
        rang := strings.Split(portStr, "-")
        switch len(rang) {
        case 1:
            port, err := strconv.ParseUint(rang[0], Base10, BitSize16)
            if err != nil {
                return nil, err
            }
            ports = append(ports, tailcfg.PortRange{
                First: uint16(port),
                Last:  uint16(port),
            })

        case expectedTokenItems:
            start, err := strconv.ParseUint(rang[0], Base10, BitSize16)
            if err != nil {
                return nil, err
            }
            last, err := strconv.ParseUint(rang[1], Base10, BitSize16)
            if err != nil {
                return nil, err
            }
            ports = append(ports, tailcfg.PortRange{
                First: uint16(start),
                Last:  uint16(last),
            })

        default:
            return nil, errInvalidPortFormat
        }
    }

    return &ports, nil
}

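A test-style sketch of expandPorts, assuming it lives in a _test.go file in the same package; the port strings are illustrative ("80,443" yields two single-port ranges, "5000-5010" one range, "*" the full 0–65535 range).

func TestExpandPortsSketch(t *testing.T) {
    got, err := expandPorts("80,443", false)
    if err != nil || len(*got) != 2 {
        t.Fatalf("unexpected result: %v, %v", got, err)
    }
    if (*got)[0] != (tailcfg.PortRange{First: 80, Last: 80}) {
        t.Fatalf("unexpected first range: %v", (*got)[0])
    }

    // With needsWildcard set (e.g. proto "gre"), anything but "*" is rejected.
    if _, err := expandPorts("80", true); err == nil {
        t.Fatal("expected errWildcardIsNeeded")
    }
}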
func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
    out := []Machine{}
    for _, machine := range machines {
        if machine.Namespace.Name == namespace {
            out = append(out, machine)
        }
    }

    return out
}

// expandTagOwners will return a list of namespace. An owner can be either a namespace or a group
// a group cannot be composed of groups.
func expandTagOwners(
    aclPolicy ACLPolicy,
    tag string,
    stripEmailDomain bool,
) ([]string, error) {
    var owners []string
    ows, ok := aclPolicy.TagOwners[tag]
    if !ok {
        return []string{}, fmt.Errorf(
            "%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
            errInvalidTag,
            tag,
        )
    }
    for _, owner := range ows {
        if strings.HasPrefix(owner, "group:") {
            gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
            if err != nil {
                return []string{}, err
            }
            owners = append(owners, gs...)
        } else {
            owners = append(owners, owner)
        }
    }

    return owners, nil
}

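A sketch of expandTagOwners with a hypothetical tagOwners/groups pair; groups are flattened through expandGroup, while plain namespace owners pass through (roughly) unchanged.

func exampleExpandTagOwnersSketch() {
    // Hypothetical groups and tag owners, mirroring the expansion above.
    pol := ACLPolicy{
        Groups:    Groups{"group:admin": []string{"alice", "bob"}},
        TagOwners: TagOwners{"tag:web": []string{"group:admin", "carol"}},
    }

    owners, _ := expandTagOwners(pol, "tag:web", false)
    // owners is roughly ["alice", "bob", "carol"]: the group is flattened,
    // the plain namespace owner passes through.
    _ = owners
}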
// expandGroup will return the list of namespace inside the group
// after some validation.
func expandGroup(
    aclPolicy ACLPolicy,
    group string,
    stripEmailDomain bool,
) ([]string, error) {
    outGroups := []string{}
    aclGroups, ok := aclPolicy.Groups[group]
    if !ok {
        return []string{}, fmt.Errorf(
            "group %v isn't registered. %w",
            group,
            errInvalidGroup,
        )
    }
    for _, group := range aclGroups {
        if strings.HasPrefix(group, "group:") {
            return []string{}, fmt.Errorf(
                "%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
                errInvalidGroup,
            )
        }
        grp, err := NormalizeToFQDNRules(group, stripEmailDomain)
        if err != nil {
            return []string{}, fmt.Errorf(
                "failed to normalize group %q, err: %w",
                group,
                errInvalidGroup,
            )
        }
        outGroups = append(outGroups, grp)
    }

    return outGroups, nil
}
(File diff suppressed because it is too large.)
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale

 import (
 	"encoding/json"
@@ -17,7 +17,6 @@ type ACLPolicy struct {
 	ACLs          []ACL         `json:"acls" yaml:"acls"`
 	Tests         []ACLTest     `json:"tests" yaml:"tests"`
 	AutoApprovers AutoApprovers `json:"autoApprovers" yaml:"autoApprovers"`
-	SSHs          []SSH         `json:"ssh" yaml:"ssh"`
 }

 // ACL is a basic rule for the ACL Policy.
@@ -34,7 +33,7 @@ type Groups map[string][]string
 // Hosts are alias for IP addresses or subnets.
 type Hosts map[string]netip.Prefix

-// TagOwners specify what users (users?) are allow to use certain tags.
+// TagOwners specify what users (namespaces?) are allow to use certain tags.
 type TagOwners map[string][]string

 // ACLTest is not implemented, but should be use to check if a certain rule is allowed.
@@ -44,22 +43,13 @@ type ACLTest struct {
 	Deny  []string `json:"deny,omitempty" yaml:"deny,omitempty"`
 }

-// AutoApprovers specify which users (users?), groups or tags have their advertised routes
+// AutoApprovers specify which users (namespaces?), groups or tags have their advertised routes
 // or exit node status automatically enabled.
 type AutoApprovers struct {
 	Routes   map[string][]string `json:"routes" yaml:"routes"`
 	ExitNode []string            `json:"exitNode" yaml:"exitNode"`
 }

-// SSH controls who can ssh into which machines.
-type SSH struct {
-	Action       string   `json:"action" yaml:"action"`
-	Sources      []string `json:"src" yaml:"src"`
-	Destinations []string `json:"dst" yaml:"dst"`
-	Users        []string `json:"users" yaml:"users"`
-	CheckPeriod  string   `json:"checkPeriod,omitempty" yaml:"checkPeriod,omitempty"`
-}
-
 // UnmarshalJSON allows to parse the Hosts directly into netip objects.
 func (hosts *Hosts) UnmarshalJSON(data []byte) error {
 	newHosts := Hosts{}
@@ -111,15 +101,15 @@ func (hosts *Hosts) UnmarshalYAML(data []byte) error {
 }

 // IsZero is perhaps a bit naive here.
-func (pol ACLPolicy) IsZero() bool {
-	if len(pol.Groups) == 0 && len(pol.Hosts) == 0 && len(pol.ACLs) == 0 {
+func (policy ACLPolicy) IsZero() bool {
+	if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
 		return true
 	}

 	return false
 }

-// Returns the list of autoApproving users, groups or tags for a given IPPrefix.
+// Returns the list of autoApproving namespaces, groups or tags for a given IPPrefix.
 func (autoApprovers *AutoApprovers) GetRouteApprovers(
 	prefix netip.Prefix,
 ) ([]string, error) {
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale

 import (
 	"bytes"
@@ -78,7 +78,7 @@ var registerWebAPITemplate = template.Must(
 	<p>
 		Run the command below in the headscale server to add this machine to your network:
 	</p>
-	<pre><code>headscale nodes register --user USERNAME --key {{.Key}}</code></pre>
+	<pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
 	</body>
 	</html>
 `))
@@ -1,8 +1,6 @@
-package hscontrol
+package headscale

 import (
-	"time"
-
 	"github.com/rs/zerolog/log"
 	"tailscale.com/tailcfg"
 )
@@ -15,7 +13,7 @@ func (h *Headscale) generateMapResponse(
 		Str("func", "generateMapResponse").
 		Str("machine", mapRequest.Hostinfo.Hostname).
 		Msg("Creating Map response")
-	node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
+	node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig)
 	if err != nil {
 		log.Error().
 			Caller().
@@ -37,9 +35,9 @@ func (h *Headscale) generateMapResponse(
 		return nil, err
 	}

-	profiles := h.getMapResponseUserProfiles(*machine, peers)
+	profiles := getMapResponseUserProfiles(*machine, peers)

-	nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
+	nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig)
 	if err != nil {
 		log.Error().
 			Caller().
@@ -57,46 +55,15 @@ func (h *Headscale) generateMapResponse(
 		peers,
 	)

-	now := time.Now()
-
 	resp := tailcfg.MapResponse{
 		KeepAlive: false,
 		Node:      node,
-		// TODO: Only send if updated
-		DERPMap: h.DERPMap,
-
-		// TODO: Only send if updated
-		Peers: nodePeers,
-
-		// TODO(kradalby): Implement:
-		// https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
-		// PeersChanged
-		// PeersRemoved
-		// PeersChangedPatch
-		// PeerSeenChange
-		// OnlineChange
-
-		// TODO: Only send if updated
-		DNSConfig: dnsConfig,
-
-		// TODO: Only send if updated
-		Domain: h.cfg.BaseDomain,
-
-		// Do not instruct clients to collect services, we do not
-		// support or do anything with them
-		CollectServices: "false",
-
-		// TODO: Only send if updated
+		Peers:     nodePeers,
+		DNSConfig: dnsConfig,
+		Domain:    h.cfg.BaseDomain,
 		PacketFilter: h.aclRules,
+		DERPMap:      h.DERPMap,
 		UserProfiles: profiles,
-
-		// TODO: Only send if updated
-		SSHPolicy: h.sshPolicy,
-
-		ControlTime: &now,
-
 		Debug: &tailcfg.Debug{
 			DisableLogTail:      !h.cfg.LogTail.Enabled,
 			RandomizeClientPort: h.cfg.RandomizeClientPort,
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale

 import (
 	"fmt"
@@ -29,7 +29,7 @@ type APIKey struct {
 	LastSeen *time.Time
 }

-// CreateAPIKey creates a new ApiKey in a user, and returns it.
+// CreateAPIKey creates a new ApiKey in a namespace, and returns it.
 func (h *Headscale) CreateAPIKey(
 	expiration *time.Time,
 ) (string, *APIKey, error) {
@@ -64,7 +64,7 @@ func (h *Headscale) CreateAPIKey(
 	return keyStr, &key, nil
 }

-// ListAPIKeys returns the list of ApiKeys for a user.
+// ListAPIKeys returns the list of ApiKeys for a namespace.
 func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
 	keys := []APIKey{}
 	if err := h.db.Find(&keys).Error; err != nil {
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale

 import (
 	"time"
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale

 import (
 	"context"
@@ -11,7 +11,6 @@ import (
 	"os"
 	"os/signal"
 	"sort"
-	"strconv"
 	"strings"
 	"sync"
 	"syscall"
@@ -21,7 +20,6 @@ import (
 	"github.com/gorilla/mux"
 	grpcMiddleware "github.com/grpc-ecosystem/go-grpc-middleware"
 	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
-	"github.com/juanfont/headscale"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/patrickmn/go-cache"
 	zerolog "github.com/philip-bui/grpc-zerolog"
@@ -82,12 +80,13 @@ type Headscale struct {
 	privateKey      *key.MachinePrivate
 	noisePrivateKey *key.MachinePrivate

+	noiseMux *mux.Router
+
 	DERPMap    *tailcfg.DERPMap
 	DERPServer *DERPServer

 	aclPolicy *ACLPolicy
 	aclRules  []tailcfg.FilterRule
-	sshPolicy *tailcfg.SSHPolicy

 	lastStateChange *xsync.MapOf[string, time.Time]

@@ -102,6 +101,27 @@ type Headscale struct {
 	pollNetMapStreamWG sync.WaitGroup
 }

+// Look up the TLS constant relative to user-supplied TLS client
+// authentication mode. If an unknown mode is supplied, the default
+// value, tls.RequireAnyClientCert, is returned. The returned boolean
+// indicates if the supplied mode was valid.
+func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
+	switch mode {
+	case DisabledClientAuth:
+		// Client cert is _not_ required.
+		return tls.NoClientCert, true
+	case RelaxedClientAuth:
+		// Client cert required, but _not verified_.
+		return tls.RequireAnyClientCert, true
+	case EnforcedClientAuth:
+		// Client cert is _required and verified_.
+		return tls.RequireAndVerifyClientCert, true
+	default:
+		// Return the default when an unknown value is supplied.
+		return tls.RequireAnyClientCert, false
+	}
+}
+
 func NewHeadscale(cfg *Config) (*Headscale, error) {
 	privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
 	if err != nil {
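A hedged sketch of how the LookupTLSClientAuthMode helper added above could be wired into a tls.Config; the cfg.TLSClientAuthMode field name is an assumption for illustration, not a confirmed headscale config field.

// Illustrative only: plug the looked-up mode into a TLS server config.
clientAuth, valid := LookupTLSClientAuthMode(cfg.TLSClientAuthMode) // hypothetical config field
if !valid {
    log.Warn().
        Str("client_auth_mode", cfg.TLSClientAuthMode).
        Msg("Unknown TLS client auth mode, falling back to RequireAnyClientCert")
}

tlsConfig := &tls.Config{
    ClientAuth: clientAuth,
    MinVersion: tls.VersionTLS12,
}
_ = tlsConfig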
@@ -128,12 +148,8 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
|
|||||||
cfg.DBuser,
|
cfg.DBuser,
|
||||||
)
|
)
|
||||||
|
|
||||||
if sslEnabled, err := strconv.ParseBool(cfg.DBssl); err == nil {
|
if !cfg.DBssl {
|
||||||
if !sslEnabled {
|
dbString += " sslmode=disable"
|
||||||
dbString += " sslmode=disable"
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
dbString += fmt.Sprintf(" sslmode=%s", cfg.DBssl)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if cfg.DBport != 0 {
|
if cfg.DBport != 0 {
|
||||||
@@ -163,7 +179,6 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
 		aclRules:           tailcfg.FilterAllowAll, // default allowall
 		registrationCache:  registrationCache,
 		pollNetMapStreamWG: sync.WaitGroup{},
-		lastStateChange:    xsync.NewMapOf[time.Time](),
 	}

 	err = app.initDB()
@@ -219,47 +234,29 @@ func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
 	}
 }

-// expireExpiredMachines expires machines that have an explicit expiry set
-// after that expiry time has passed.
-func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
-	ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
-	for range ticker.C {
-		h.expireExpiredMachinesWorker()
-	}
-}
-
-func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
-	ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
-	for range ticker.C {
-		err := h.handlePrimarySubnetFailover()
-		if err != nil {
-			log.Error().Err(err).Msg("failed to handle primary subnet failover")
-		}
-	}
-}
-
 func (h *Headscale) expireEphemeralNodesWorker() {
-	users, err := h.ListUsers()
+	namespaces, err := h.ListNamespaces()
 	if err != nil {
-		log.Error().Err(err).Msg("Error listing users")
+		log.Error().Err(err).Msg("Error listing namespaces")

 		return
 	}

-	for _, user := range users {
-		machines, err := h.ListMachinesByUser(user.Name)
+	for _, namespace := range namespaces {
+		machines, err := h.ListMachinesInNamespace(namespace.Name)
 		if err != nil {
 			log.Error().
 				Err(err).
-				Str("user", user.Name).
-				Msg("Error listing machines in user")
+				Str("namespace", namespace.Name).
+				Msg("Error listing machines in namespace")

 			return
 		}

 		expiredFound := false
 		for _, machine := range machines {
-			if machine.isEphemeral() && machine.LastSeen != nil &&
+			if machine.AuthKey != nil && machine.LastSeen != nil &&
+				machine.AuthKey.Ephemeral &&
 				time.Now().
 					After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
 				expiredFound = true
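The helpers deleted in the hunk above all follow the same pattern: a `time.Ticker` periodically drives a worker function. Below is a stripped-down sketch of that pattern, with `expireWorker` standing in for `expireExpiredMachinesWorker` or `handlePrimarySubnetFailover`; the names and the `main` harness are illustrative, not headscale code.

```go
package main

import (
	"log"
	"time"
)

// expireWorker is a placeholder for the periodic job; in headscale this
// would scan machines and expire the ones past their deadline.
func expireWorker() {
	log.Println("checking for expired machines")
}

// scheduleExpiry runs the worker every milliSeconds, like the deleted
// expireExpiredMachines/failoverSubnetRoutes loops.
func scheduleExpiry(milliSeconds int64) {
	ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
	for range ticker.C {
		expireWorker()
	}
}

func main() {
	go scheduleExpiry(1000)
	time.Sleep(3500 * time.Millisecond) // let the ticker fire a few times
}
```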
@@ -283,53 +280,6 @@ func (h *Headscale) expireEphemeralNodesWorker() {
 	}
 }

-func (h *Headscale) expireExpiredMachinesWorker() {
-	users, err := h.ListUsers()
-	if err != nil {
-		log.Error().Err(err).Msg("Error listing users")
-
-		return
-	}
-
-	for _, user := range users {
-		machines, err := h.ListMachinesByUser(user.Name)
-		if err != nil {
-			log.Error().
-				Err(err).
-				Str("user", user.Name).
-				Msg("Error listing machines in user")
-
-			return
-		}
-
-		expiredFound := false
-		for index, machine := range machines {
-			if machine.isExpired() &&
-				machine.Expiry.After(h.getLastStateChange(user)) {
-				expiredFound = true
-
-				err := h.ExpireMachine(&machines[index])
-				if err != nil {
-					log.Error().
-						Err(err).
-						Str("machine", machine.Hostname).
-						Str("name", machine.GivenName).
-						Msg("🤮 Cannot expire machine")
-				} else {
-					log.Info().
-						Str("machine", machine.Hostname).
-						Str("name", machine.GivenName).
-						Msg("Machine successfully expired")
-				}
-			}
-		}
-
-		if expiredFound {
-			h.setLastStateChangeToNow()
-		}
-	}
-}
-
 func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
 	req interface{},
 	info *grpc.UnaryServerInfo,
@@ -508,10 +458,8 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
 	router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet)
 	router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig).
 		Methods(http.MethodGet)
-	// TODO(kristoffer): move swagger into a package
-	router.HandleFunc("/swagger", headscale.SwaggerUI).Methods(http.MethodGet)
-	router.HandleFunc("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1).
+	router.HandleFunc("/swagger", SwaggerUI).Methods(http.MethodGet)
+	router.HandleFunc("/swagger/v1/openapiv2.json", SwaggerAPIv1).
 		Methods(http.MethodGet)

 	if h.cfg.DERP.ServerEnabled {
@@ -524,7 +472,17 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
 	apiRouter.Use(h.httpAuthenticationMiddleware)
 	apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)

-	router.PathPrefix("/").HandlerFunc(notFoundHandler)
+	router.PathPrefix("/").HandlerFunc(stdoutHandler)
+
+	return router
+}
+
+func (h *Headscale) createNoiseMux() *mux.Router {
+	router := mux.NewRouter()
+
+	router.HandleFunc("/machine/register", h.NoiseRegistrationHandler).
+		Methods(http.MethodPost)
+	router.HandleFunc("/machine/map", h.NoisePollNetMapHandler)

 	return router
 }
@@ -553,9 +511,6 @@ func (h *Headscale) Serve() error {
 	}

 	go h.expireEphemeralNodes(updateInterval)
-	go h.expireExpiredMachines(updateInterval)
-
-	go h.failoverSubnetRoutes(updateInterval)

 	if zl.GlobalLevel() == zl.TraceLevel {
 		zerolog.RespLog = true
@@ -689,6 +644,12 @@ func (h *Headscale) Serve() error {
 	// over our main Addr. It also serves the legacy Tailcale API
 	router := h.createRouter(grpcGatewayMux)

+	// This router is served only over the Noise connection, and exposes only the new API.
+	//
+	// The HTTP2 server that exposes this router is created for
+	// a single hijacked connection from /ts2021, using netutil.NewOneConnListener
+	h.noiseMux = h.createNoiseMux()
+
 	httpServer := &http.Server{
 		Addr:    h.cfg.Addr,
 		Handler: router,
@@ -761,7 +722,7 @@ func (h *Headscale) Serve() error {

 	if h.cfg.ACL.PolicyPath != "" {
 		aclPath := AbsolutePathFromConfigPath(h.cfg.ACL.PolicyPath)
-		err := h.LoadACLPolicyFromPath(aclPath)
+		err := h.LoadACLPolicy(aclPath)
 		if err != nil {
 			log.Error().Err(err).Msg("Failed to reload ACL policy")
 		}
@@ -821,6 +782,7 @@ func (h *Headscale) Serve() error {

 				// And we're done:
 				cancel()
+				os.Exit(0)
 			}
 		}
 	}
@@ -893,7 +855,12 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
 		log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
 	}

+	log.Info().Msg(fmt.Sprintf(
+		"Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
+		h.cfg.TLS.ClientAuthMode))
+
 	tlsConfig := &tls.Config{
+		ClientAuth:   h.cfg.TLS.ClientAuthMode,
 		NextProtos:   []string{"http/1.1"},
 		Certificates: make([]tls.Certificate, 1),
 		MinVersion:   tls.VersionTLS12,
@@ -910,31 +877,31 @@ func (h *Headscale) setLastStateChangeToNow() {

 	now := time.Now().UTC()

-	users, err := h.ListUsers()
+	namespaces, err := h.ListNamespaces()
 	if err != nil {
 		log.Error().
 			Caller().
 			Err(err).
-			Msg("failed to fetch all users, failing to update last changed state.")
+			Msg("failed to fetch all namespaces, failing to update last changed state.")
 	}

-	for _, user := range users {
-		lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix()))
+	for _, namespace := range namespaces {
+		lastStateUpdate.WithLabelValues(namespace.Name, "headscale").Set(float64(now.Unix()))
 		if h.lastStateChange == nil {
 			h.lastStateChange = xsync.NewMapOf[time.Time]()
 		}
-		h.lastStateChange.Store(user.Name, now)
+		h.lastStateChange.Store(namespace.Name, now)
 	}
 }

-func (h *Headscale) getLastStateChange(users ...User) time.Time {
+func (h *Headscale) getLastStateChange(namespaces ...Namespace) time.Time {
 	times := []time.Time{}

-	// getLastStateChange takes a list of users as a "filter", if no users
-	// are past, then use the entier list of users and look for the last update
-	if len(users) > 0 {
-		for _, user := range users {
-			if lastChange, ok := h.lastStateChange.Load(user.Name); ok {
+	// getLastStateChange takes a list of namespaces as a "filter", if no namespaces
+	// are past, then use the entier list of namespaces and look for the last update
+	if len(namespaces) > 0 {
+		for _, namespace := range namespaces {
+			if lastChange, ok := h.lastStateChange.Load(namespace.Name); ok {
 				times = append(times, lastChange)
 			}
 		}
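Both sides of the hunk above keep the per-user (or per-namespace) "last state change" timestamps in an `xsync.MapOf`, created lazily and read back by name. A minimal, self-contained illustration of that bookkeeping, using only the operations that appear in the diff (`NewMapOf`, `Store`, `Load`); the module path is the one headscale uses for this dependency, but treat the exact version suffix as an assumption.

```go
package main

import (
	"fmt"
	"time"

	"github.com/puzpuzpuz/xsync/v2"
)

func main() {
	// Lazily-created map from name -> time of last state change,
	// mirroring h.lastStateChange in the hunk above.
	lastStateChange := xsync.NewMapOf[time.Time]()

	lastStateChange.Store("default", time.Now().UTC())

	// getLastStateChange-style lookup, filtered by name.
	if when, ok := lastStateChange.Load("default"); ok {
		fmt.Println("last change for default:", when)
	}
}
```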
@@ -959,7 +926,7 @@ func (h *Headscale) getLastStateChange(users ...User) time.Time {
 	}
 }

-func notFoundHandler(
+func stdoutHandler(
 	writer http.ResponseWriter,
 	req *http.Request,
 ) {
@@ -971,7 +938,6 @@ func notFoundHandler(
 		Interface("url", req.URL).
 		Bytes("body", body).
 		Msg("Request did not match")
-	writer.WriteHeader(http.StatusNotFound)
 }

 func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale

 import (
 	"net/netip"
@@ -59,3 +59,20 @@ func (s *Suite) ResetDB(c *check.C) {
 	}
 	app.db = db
 }
+
+// Enusre an error is returned when an invalid auth mode
+// is supplied.
+func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
+	_, isValid := LookupTLSClientAuthMode("invalid")
+	c.Assert(isValid, check.Equals, false)
+}
+
+// Ensure that all client auth modes return a nil error.
+func (s *Suite) TestAuthModes(c *check.C) {
+	modes := []string{"disabled", "relaxed", "enforced"}
+
+	for _, v := range modes {
+		_, isValid := LookupTLSClientAuthMode(v)
+		c.Assert(isValid, check.Equals, true)
+	}
+}
@@ -1,47 +0,0 @@
-package main
-
-import (
-	"log"
-
-	"github.com/juanfont/headscale/integration"
-	"github.com/juanfont/headscale/integration/tsic"
-	"github.com/ory/dockertest/v3"
-)
-
-func main() {
-	log.Printf("creating docker pool")
-	pool, err := dockertest.NewPool("")
-	if err != nil {
-		log.Fatalf("could not connect to docker: %s", err)
-	}
-
-	log.Printf("creating docker network")
-	network, err := pool.CreateNetwork("docker-integration-net")
-	if err != nil {
-		log.Fatalf("failed to create or get network: %s", err)
-	}
-
-	for _, version := range integration.TailscaleVersions {
-		log.Printf("creating container image for Tailscale (%s)", version)
-
-		tsClient, err := tsic.New(
-			pool,
-			version,
-			network,
-		)
-		if err != nil {
-			log.Fatalf("failed to create tailscale node: %s", err)
-		}
-
-		err = tsClient.Shutdown()
-		if err != nil {
-			log.Fatalf("failed to shut down container: %s", err)
-		}
-	}
-
-	network.Close()
-	err = pool.RemoveNetwork(network)
-	if err != nil {
-		log.Fatalf("failed to remove network: %s", err)
-	}
-}
@@ -1,169 +0,0 @@
-package main
-
-//go:generate go run ./main.go
-
-import (
-	"bytes"
-	"fmt"
-	"log"
-	"os"
-	"os/exec"
-	"path"
-	"path/filepath"
-	"strings"
-	"text/template"
-)
-
-var (
-	githubWorkflowPath  = "../../.github/workflows/"
-	jobFileNameTemplate = `test-integration-v2-%s.yaml`
-	jobTemplate         = template.Must(
-		template.New("jobTemplate").
-			Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
-# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
-
-name: Integration Test v2 - {{.Name}}
-
-on: [pull_request]
-
-concurrency:
-  group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
-  cancel-in-progress: true
-
-jobs:
-  test:
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Get changed files
-        id: changed-files
-        uses: tj-actions/changed-files@v34
-        with:
-          files: |
-            *.nix
-            go.*
-            **/*.go
-            integration_test/
-            config-example.yaml
-
-      - uses: cachix/install-nix-action@v18
-        if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
-
-      - name: Run general integration tests
-        if: steps.changed-files.outputs.any_changed == 'true'
-        run: |
-          nix develop --command -- docker run \
-            --tty --rm \
-            --volume ~/.cache/hs-integration-go:/go \
-            --name headscale-test-suite \
-            --volume $PWD:$PWD -w $PWD/integration \
-            --volume /var/run/docker.sock:/var/run/docker.sock \
-            --volume $PWD/control_logs:/tmp/control \
-            golang:1 \
-            go run gotest.tools/gotestsum@latest -- ./... \
-            -tags ts2019 \
-            -failfast \
-            -timeout 120m \
-            -parallel 1 \
-            -run "^{{.Name}}$"
-
-      - uses: actions/upload-artifact@v3
-        if: always() && steps.changed-files.outputs.any_changed == 'true'
-        with:
-          name: logs
-          path: "control_logs/*.log"
-
-      - uses: actions/upload-artifact@v3
-        if: always() && steps.changed-files.outputs.any_changed == 'true'
-        with:
-          name: pprof
-          path: "control_logs/*.pprof.tar"
-`),
-	)
-)
-
-const workflowFilePerm = 0o600
-
-func removeTests() {
-	glob := fmt.Sprintf(jobFileNameTemplate, "*")
-
-	files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
-	if err != nil {
-		log.Fatalf("failed to find test files")
-	}
-
-	for _, file := range files {
-		err := os.Remove(file)
-		if err != nil {
-			log.Printf("failed to remove: %s", err)
-		}
-	}
-}
-
-func findTests() []string {
-	rgBin, err := exec.LookPath("rg")
-	if err != nil {
-		log.Fatalf("failed to find rg (ripgrep) binary")
-	}
-
-	args := []string{
-		"--regexp", "func (Test.+)\\(.*",
-		"../../integration/",
-		"--replace", "$1",
-		"--sort", "path",
-		"--no-line-number",
-		"--no-filename",
-		"--no-heading",
-	}
-
-	log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
-
-	ripgrep := exec.Command(
-		rgBin,
-		args...,
-	)
-
-	result, err := ripgrep.CombinedOutput()
-	if err != nil {
-		log.Printf("out: %s", result)
-		log.Fatalf("failed to run ripgrep: %s", err)
-	}
-
-	tests := strings.Split(string(result), "\n")
-	tests = tests[:len(tests)-1]
-
-	return tests
-}
-
-func main() {
-	type testConfig struct {
-		Name string
-	}
-
-	tests := findTests()
-
-	removeTests()
-
-	for _, test := range tests {
-		log.Printf("generating workflow for %s", test)
-
-		var content bytes.Buffer
-
-		if err := jobTemplate.Execute(&content, testConfig{
-			Name: test,
-		}); err != nil {
-			log.Fatalf("failed to render template: %s", err)
-		}
-
-		testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
-
-		err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
-		if err != nil {
-			log.Fatalf("failed to write github job: %s", err)
-		}
-	}
-}
@@ -5,8 +5,8 @@ import (
 	"strconv"
 	"time"

+	"github.com/juanfont/headscale"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol"
 	"github.com/prometheus/common/model"
 	"github.com/pterm/pterm"
 	"github.com/rs/zerolog/log"
@@ -83,7 +83,7 @@ var listAPIKeys = &cobra.Command{
 		}

 		tableData = append(tableData, []string{
-			strconv.FormatUint(key.GetId(), hscontrol.Base10),
+			strconv.FormatUint(key.GetId(), headscale.Base10),
 			key.GetPrefix(),
 			expiration,
 			key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
@@ -1,22 +0,0 @@
-package cli
-
-import (
-	"github.com/rs/zerolog/log"
-	"github.com/spf13/cobra"
-)
-
-func init() {
-	rootCmd.AddCommand(configTestCmd)
-}
-
-var configTestCmd = &cobra.Command{
-	Use:   "configtest",
-	Short: "Test the configuration.",
-	Long:  "Run a test of the configuration and exit.",
-	Run: func(cmd *cobra.Command, args []string) {
-		_, err := getHeadscaleApp()
-		if err != nil {
-			log.Fatal().Caller().Err(err).Msg("Error initializing")
-		}
-	},
-}
@@ -3,8 +3,8 @@ package cli
 import (
 	"fmt"

+	"github.com/juanfont/headscale"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
 	"google.golang.org/grpc/status"
@@ -27,14 +27,8 @@ func init() {
 	if err != nil {
 		log.Fatal().Err(err).Msg("")
 	}
-	createNodeCmd.Flags().StringP("user", "u", "", "User")
-
-	createNodeCmd.Flags().StringP("namespace", "n", "", "User")
-	createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
-	createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	createNodeNamespaceFlag.Hidden = true
-
-	err = createNodeCmd.MarkFlagRequired("user")
+	createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
+	err = createNodeCmd.MarkFlagRequired("namespace")
 	if err != nil {
 		log.Fatal().Err(err).Msg("")
 	}
@@ -61,9 +55,9 @@ var createNodeCmd = &cobra.Command{
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")

-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)

 			return
 		}
@@ -93,7 +87,7 @@ var createNodeCmd = &cobra.Command{

 			return
 		}
-		if !hscontrol.NodePublicKeyRegex.Match([]byte(machineKey)) {
+		if !headscale.NodePublicKeyRegex.Match([]byte(machineKey)) {
 			err = errPreAuthKeyMalformed
 			ErrorOutput(
 				err,
@@ -116,10 +110,10 @@ var createNodeCmd = &cobra.Command{
 		}

 		request := &v1.DebugCreateMachineRequest{
 			Key:  machineKey,
 			Name: name,
-			User: user,
+			Namespace: namespace,
 			Routes: routes,
 		}

 		response, err := client.DebugCreateMachine(ctx, request)
@@ -16,11 +16,10 @@ const (
 	errMockOidcClientIDNotDefined     = Error("MOCKOIDC_CLIENT_ID not defined")
 	errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
 	errMockOidcPortNotDefined         = Error("MOCKOIDC_PORT not defined")
+	accessTTL                         = 10 * time.Minute
 	refreshTTL                        = 60 * time.Minute
 )

-var accessTTL = 2 * time.Minute
-
 func init() {
 	rootCmd.AddCommand(mockOidcCmd)
 }
@@ -55,16 +54,6 @@ func mockOIDC() error {
 	if portStr == "" {
 		return errMockOidcPortNotDefined
 	}
-	accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
-	if accessTTLOverride != "" {
-		newTTL, err := time.ParseDuration(accessTTLOverride)
-		if err != nil {
-			return err
-		}
-		accessTTL = newTTL
-	}
-
-	log.Info().Msgf("Access token TTL: %s", accessTTL)
-
 	port, err := strconv.Atoi(portStr)
 	if err != nil {
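The second hunk above drops the `MOCKOIDC_ACCESS_TTL` override that the v0.22.3 side supports. The behaviour being removed is small enough to show in isolation; the standalone `main` below is an illustrative sketch of that override, not the headscale CLI itself.

```go
package main

import (
	"log"
	"os"
	"time"
)

func main() {
	// Default access-token TTL, matching the v0.22.3 `var accessTTL = 2 * time.Minute`.
	accessTTL := 2 * time.Minute

	// MOCKOIDC_ACCESS_TTL, when set, replaces the default after being
	// parsed with time.ParseDuration (e.g. "10m", "90s").
	if override := os.Getenv("MOCKOIDC_ACCESS_TTL"); override != "" {
		ttl, err := time.ParseDuration(override)
		if err != nil {
			log.Fatalf("invalid MOCKOIDC_ACCESS_TTL: %s", err)
		}
		accessTTL = ttl
	}

	log.Printf("Access token TTL: %s", accessTTL)
}
```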
@@ -4,8 +4,8 @@ import (
 	"fmt"

 	survey "github.com/AlecAivazis/survey/v2"
+	"github.com/juanfont/headscale"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol"
 	"github.com/pterm/pterm"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
@@ -13,26 +13,26 @@ import (
 )

 func init() {
-	rootCmd.AddCommand(userCmd)
-	userCmd.AddCommand(createUserCmd)
-	userCmd.AddCommand(listUsersCmd)
-	userCmd.AddCommand(destroyUserCmd)
-	userCmd.AddCommand(renameUserCmd)
+	rootCmd.AddCommand(namespaceCmd)
+	namespaceCmd.AddCommand(createNamespaceCmd)
+	namespaceCmd.AddCommand(listNamespacesCmd)
+	namespaceCmd.AddCommand(destroyNamespaceCmd)
+	namespaceCmd.AddCommand(renameNamespaceCmd)
 }

 const (
-	errMissingParameter = hscontrol.Error("missing parameters")
+	errMissingParameter = headscale.Error("missing parameters")
 )

-var userCmd = &cobra.Command{
-	Use:     "users",
-	Short:   "Manage the users of Headscale",
-	Aliases: []string{"user", "namespace", "namespaces", "ns"},
+var namespaceCmd = &cobra.Command{
+	Use:     "namespaces",
+	Short:   "Manage the namespaces of Headscale",
+	Aliases: []string{"namespace", "ns", "user", "users"},
 }

-var createUserCmd = &cobra.Command{
+var createNamespaceCmd = &cobra.Command{
 	Use:     "create NAME",
-	Short:   "Creates a new user",
+	Short:   "Creates a new namespace",
 	Aliases: []string{"c", "new"},
 	Args: func(cmd *cobra.Command, args []string) error {
 		if len(args) < 1 {
@@ -44,7 +44,7 @@ var createUserCmd = &cobra.Command{
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")

-		userName := args[0]
+		namespaceName := args[0]

 		ctx, client, conn, cancel := getHeadscaleCLIClient()
 		defer cancel()
@@ -52,15 +52,15 @@ var createUserCmd = &cobra.Command{

 		log.Trace().Interface("client", client).Msg("Obtained gRPC client")

-		request := &v1.CreateUserRequest{Name: userName}
+		request := &v1.CreateNamespaceRequest{Name: namespaceName}

-		log.Trace().Interface("request", request).Msg("Sending CreateUser request")
-		response, err := client.CreateUser(ctx, request)
+		log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
+		response, err := client.CreateNamespace(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
 				fmt.Sprintf(
-					"Cannot create user: %s",
+					"Cannot create namespace: %s",
 					status.Convert(err).Message(),
 				),
 				output,
@@ -69,13 +69,13 @@ var createUserCmd = &cobra.Command{
 			return
 		}

-		SuccessOutput(response.User, "User created", output)
+		SuccessOutput(response.Namespace, "Namespace created", output)
 	},
 }

-var destroyUserCmd = &cobra.Command{
+var destroyNamespaceCmd = &cobra.Command{
 	Use:     "destroy NAME",
-	Short:   "Destroys a user",
+	Short:   "Destroys a namespace",
 	Aliases: []string{"delete"},
 	Args: func(cmd *cobra.Command, args []string) error {
 		if len(args) < 1 {
@@ -87,17 +87,17 @@ var destroyUserCmd = &cobra.Command{
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")

-		userName := args[0]
+		namespaceName := args[0]

-		request := &v1.GetUserRequest{
-			Name: userName,
+		request := &v1.GetNamespaceRequest{
+			Name: namespaceName,
 		}

 		ctx, client, conn, cancel := getHeadscaleCLIClient()
 		defer cancel()
 		defer conn.Close()

-		_, err := client.GetUser(ctx, request)
+		_, err := client.GetNamespace(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
@@ -113,8 +113,8 @@ var destroyUserCmd = &cobra.Command{
 		if !force {
 			prompt := &survey.Confirm{
 				Message: fmt.Sprintf(
-					"Do you want to remove the user '%s' and any associated preauthkeys?",
-					userName,
+					"Do you want to remove the namespace '%s' and any associated preauthkeys?",
+					namespaceName,
 				),
 			}
 			err := survey.AskOne(prompt, &confirm)
@@ -124,14 +124,14 @@ var destroyUserCmd = &cobra.Command{
 		}

 		if confirm || force {
-			request := &v1.DeleteUserRequest{Name: userName}
+			request := &v1.DeleteNamespaceRequest{Name: namespaceName}

-			response, err := client.DeleteUser(ctx, request)
+			response, err := client.DeleteNamespace(ctx, request)
 			if err != nil {
 				ErrorOutput(
 					err,
 					fmt.Sprintf(
-						"Cannot destroy user: %s",
+						"Cannot destroy namespace: %s",
 						status.Convert(err).Message(),
 					),
 					output,
@@ -139,16 +139,16 @@ var destroyUserCmd = &cobra.Command{

 				return
 			}
-			SuccessOutput(response, "User destroyed", output)
+			SuccessOutput(response, "Namespace destroyed", output)
 		} else {
-			SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
+			SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
 		}
 	},
 }

-var listUsersCmd = &cobra.Command{
+var listNamespacesCmd = &cobra.Command{
 	Use:     "list",
-	Short:   "List all the users",
+	Short:   "List all the namespaces",
 	Aliases: []string{"ls", "show"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
@@ -157,13 +157,13 @@ var listUsersCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()

-		request := &v1.ListUsersRequest{}
+		request := &v1.ListNamespacesRequest{}

-		response, err := client.ListUsers(ctx, request)
+		response, err := client.ListNamespaces(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
+				fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
 				output,
 			)

@@ -171,19 +171,19 @@ var listUsersCmd = &cobra.Command{
 		}

 		if output != "" {
-			SuccessOutput(response.Users, "", output)
+			SuccessOutput(response.Namespaces, "", output)

 			return
 		}

 		tableData := pterm.TableData{{"ID", "Name", "Created"}}
-		for _, user := range response.GetUsers() {
+		for _, namespace := range response.GetNamespaces() {
 			tableData = append(
 				tableData,
 				[]string{
-					user.GetId(),
-					user.GetName(),
-					user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
+					namespace.GetId(),
+					namespace.GetName(),
+					namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
 				},
 			)
 		}
@@ -200,9 +200,9 @@ var listUsersCmd = &cobra.Command{
 	},
 }

-var renameUserCmd = &cobra.Command{
+var renameNamespaceCmd = &cobra.Command{
 	Use:     "rename OLD_NAME NEW_NAME",
-	Short:   "Renames a user",
+	Short:   "Renames a namespace",
 	Aliases: []string{"mv"},
 	Args: func(cmd *cobra.Command, args []string) error {
 		expectedArguments := 2
@@ -219,17 +219,17 @@ var renameUserCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()

-		request := &v1.RenameUserRequest{
+		request := &v1.RenameNamespaceRequest{
 			OldName: args[0],
 			NewName: args[1],
 		}

-		response, err := client.RenameUser(ctx, request)
+		response, err := client.RenameNamespace(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
 				fmt.Sprintf(
-					"Cannot rename user: %s",
+					"Cannot rename namespace: %s",
 					status.Convert(err).Message(),
 				),
 				output,
@@ -238,6 +238,6 @@ var renameUserCmd = &cobra.Command{
 			return
 		}

-		SuccessOutput(response.User, "User renamed", output)
+		SuccessOutput(response.Namespace, "Namespace renamed", output)
 	},
 }
@@ -9,8 +9,8 @@ import (
 	"time"

 	survey "github.com/AlecAivazis/survey/v2"
+	"github.com/juanfont/headscale"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol"
 	"github.com/pterm/pterm"
 	"github.com/spf13/cobra"
 	"google.golang.org/grpc/status"
@@ -19,24 +19,12 @@ import (

 func init() {
 	rootCmd.AddCommand(nodeCmd)
-	listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
+	listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
 	listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
-
-	listNodesCmd.Flags().StringP("namespace", "n", "", "User")
-	listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
-	listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	listNodesNamespaceFlag.Hidden = true
-
 	nodeCmd.AddCommand(listNodesCmd)

-	registerNodeCmd.Flags().StringP("user", "u", "", "User")
-
-	registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
-	registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
-	registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	registerNodeNamespaceFlag.Hidden = true
-
-	err := registerNodeCmd.MarkFlagRequired("user")
+	registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
+	err := registerNodeCmd.MarkFlagRequired("namespace")
 	if err != nil {
 		log.Fatalf(err.Error())
 	}
@@ -75,14 +63,9 @@ func init() {
 		log.Fatalf(err.Error())
 	}

-	moveNodeCmd.Flags().StringP("user", "u", "", "New user")
-
-	moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
-	moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
-	moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	moveNodeNamespaceFlag.Hidden = true
-
-	err = moveNodeCmd.MarkFlagRequired("user")
+	moveNodeCmd.Flags().StringP("namespace", "n", "", "New namespace")
+	err = moveNodeCmd.MarkFlagRequired("namespace")
 	if err != nil {
 		log.Fatalf(err.Error())
 	}
@@ -110,9 +93,9 @@ var registerNodeCmd = &cobra.Command{
 	Short: "Registers a machine to your network",
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)

 			return
 		}
@@ -133,8 +116,8 @@ var registerNodeCmd = &cobra.Command{
 		}

 		request := &v1.RegisterMachineRequest{
 			Key: machineKey,
-			User: user,
+			Namespace: namespace,
 		}

 		response, err := client.RegisterMachine(ctx, request)
@@ -151,9 +134,7 @@ var registerNodeCmd = &cobra.Command{
 			return
 		}

-		SuccessOutput(
-			response.Machine,
-			fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
+		SuccessOutput(response.Machine, fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
 	},
 }

@@ -163,9 +144,9 @@ var listNodesCmd = &cobra.Command{
 	Aliases: []string{"ls", "show"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)

 			return
 		}
@@ -181,7 +162,7 @@ var listNodesCmd = &cobra.Command{
 		defer conn.Close()

 		request := &v1.ListMachinesRequest{
-			User: user,
+			Namespace: namespace,
 		}

 		response, err := client.ListMachines(ctx, request)
@@ -201,7 +182,7 @@ var listNodesCmd = &cobra.Command{
 			return
 		}

-		tableData, err := nodesToPtables(user, showTags, response.Machines)
+		tableData, err := nodesToPtables(namespace, showTags, response.Machines)
 		if err != nil {
 			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)

@@ -405,7 +386,7 @@ var deleteNodeCmd = &cobra.Command{

 var moveNodeCmd = &cobra.Command{
 	Use:     "move",
-	Short:   "Move node to another user",
+	Short:   "Move node to another namespace",
 	Aliases: []string{"mv"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
@@ -421,11 +402,11 @@ var moveNodeCmd = &cobra.Command{
 			return
 		}

-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf("Error getting user: %s", err),
+				fmt.Sprintf("Error getting namespace: %s", err),
 				output,
 			)

@@ -456,7 +437,7 @@ var moveNodeCmd = &cobra.Command{

 		moveRequest := &v1.MoveMachineRequest{
 			MachineId: identifier,
-			User: user,
+			Namespace: namespace,
 		}

 		moveResponse, err := client.MoveMachine(ctx, moveRequest)
@@ -473,12 +454,12 @@ var moveNodeCmd = &cobra.Command{
 			return
 		}

-		SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
+		SuccessOutput(moveResponse.Machine, "Node moved to another namespace", output)
 	},
 }

 func nodesToPtables(
-	currentUser string,
+	currentNamespace string,
 	showTags bool,
 	machines []*v1.Machine,
 ) (pterm.TableData, error) {
@@ -486,13 +467,11 @@ func nodesToPtables(
 		"ID",
 		"Hostname",
 		"Name",
-		"MachineKey",
 		"NodeKey",
-		"User",
+		"Namespace",
 		"IP addresses",
 		"Ephemeral",
 		"Last seen",
-		"Expiration",
 		"Online",
 		"Expired",
 	}
@@ -519,32 +498,22 @@ func nodesToPtables(
 		}

 		var expiry time.Time
-		var expiryTime string
 		if machine.Expiry != nil {
 			expiry = machine.Expiry.AsTime()
-			expiryTime = expiry.Format("2006-01-02 15:04:05")
-		} else {
-			expiryTime = "N/A"
-		}
-
-		var machineKey key.MachinePublic
-		err := machineKey.UnmarshalText(
-			[]byte(hscontrol.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
-		)
-		if err != nil {
-			machineKey = key.MachinePublic{}
 		}

 		var nodeKey key.NodePublic
-		err = nodeKey.UnmarshalText(
-			[]byte(hscontrol.NodePublicKeyEnsurePrefix(machine.NodeKey)),
+		err := nodeKey.UnmarshalText(
+			[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
 		)
 		if err != nil {
 			return nil, err
 		}

 		var online string
-		if machine.Online {
+		if lastSeen.After(
+			time.Now().Add(-5 * time.Minute),
+		) { // TODO: Find a better way to reliably show if online
 			online = pterm.LightGreen("online")
 		} else {
 			online = pterm.LightRed("offline")
@@ -577,12 +546,12 @@ func nodesToPtables(
 		}
 		validTags = strings.TrimLeft(validTags, ",")

-		var user string
-		if currentUser == "" || (currentUser == machine.User.Name) {
-			user = pterm.LightMagenta(machine.User.Name)
+		var namespace string
+		if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
+			namespace = pterm.LightMagenta(machine.Namespace.Name)
 		} else {
-			// Shared into this user
-			user = pterm.LightYellow(machine.User.Name)
+			// Shared into this namespace
+			namespace = pterm.LightYellow(machine.Namespace.Name)
 		}

 		var IPV4Address string
@@ -596,16 +565,14 @@ func nodesToPtables(
 		}

 		nodeData := []string{
-			strconv.FormatUint(machine.Id, hscontrol.Base10),
+			strconv.FormatUint(machine.Id, headscale.Base10),
 			machine.Name,
 			machine.GetGivenName(),
-			machineKey.ShortString(),
 			nodeKey.ShortString(),
-			user,
+			namespace,
 			strings.Join([]string{IPV4Address, IPV6Address}, ", "),
 			strconv.FormatBool(ephemeral),
 			lastSeenTime,
-			expiryTime,
 			online,
 			expired,
 		}
@@ -20,14 +20,8 @@ const (

 func init() {
 	rootCmd.AddCommand(preauthkeysCmd)
-	preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
-
-	preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
-	pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
-	pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	pakNamespaceFlag.Hidden = true
-
-	err := preauthkeysCmd.MarkPersistentFlagRequired("user")
+	preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
+	err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
 	if err != nil {
 		log.Fatal().Err(err).Msg("")
 	}
@@ -52,14 +46,14 @@ var preauthkeysCmd = &cobra.Command{

 var listPreAuthKeys = &cobra.Command{
 	Use:     "list",
-	Short:   "List the preauthkeys for this user",
+	Short:   "List the preauthkeys for this namespace",
 	Aliases: []string{"ls", "show"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")

-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)

 			return
 		}
@@ -69,7 +63,7 @@ var listPreAuthKeys = &cobra.Command{
 		defer conn.Close()

 		request := &v1.ListPreAuthKeysRequest{
-			User: user,
+			Namespace: namespace,
 		}

 		response, err := client.ListPreAuthKeys(ctx, request)
@@ -149,14 +143,14 @@ var listPreAuthKeys = &cobra.Command{

 var createPreAuthKeyCmd = &cobra.Command{
 	Use:     "create",
-	Short:   "Creates a new preauthkey in the specified user",
+	Short:   "Creates a new preauthkey in the specified namespace",
 	Aliases: []string{"c", "new"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")

-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)

 			return
 		}
@@ -168,11 +162,11 @@ var createPreAuthKeyCmd = &cobra.Command{
 		log.Trace().
 			Bool("reusable", reusable).
 			Bool("ephemeral", ephemeral).
-			Str("user", user).
+			Str("namespace", namespace).
 			Msg("Preparing to create preauthkey")

 		request := &v1.CreatePreAuthKeyRequest{
-			User:      user,
+			Namespace: namespace,
 			Reusable:  reusable,
 			Ephemeral: ephemeral,
 			AclTags:   tags,
@@ -231,9 +225,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
 	},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
-		user, err := cmd.Flags().GetString("user")
+		namespace, err := cmd.Flags().GetString("namespace")
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)

 			return
 		}
@@ -243,8 +237,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
 		defer conn.Close()

 		request := &v1.ExpirePreAuthKeyRequest{
-			User: user,
+			Namespace: namespace,
 			Key:  args[0],
 		}

 		response, err := client.ExpirePreAuthKey(ctx, request)
@@ -5,22 +5,17 @@ import (
 	"os"
 	"runtime"

-	"github.com/juanfont/headscale/hscontrol"
+	"github.com/juanfont/headscale"
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
 	"github.com/tcnksm/go-latest"
 )

-const (
-	deprecateNamespaceMessage = "use --user"
-)
-
 var cfgFile string = ""

 func init() {
-	if len(os.Args) > 1 &&
-		(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
+	if len(os.Args) > 1 && (os.Args[1] == "version" || os.Args[1] == "mockoidc") {
 		return
 	}

@@ -38,18 +33,18 @@ func initConfig() {
 		cfgFile = os.Getenv("HEADSCALE_CONFIG")
 	}
 	if cfgFile != "" {
-		err := hscontrol.LoadConfig(cfgFile, true)
+		err := headscale.LoadConfig(cfgFile, true)
 		if err != nil {
 			log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile)
 		}
 	} else {
-		err := hscontrol.LoadConfig("", false)
+		err := headscale.LoadConfig("", false)
 		if err != nil {
 			log.Fatal().Caller().Err(err).Msgf("Error loading config")
 		}
 	}

-	cfg, err := hscontrol.GetHeadscaleConfig()
+	cfg, err := headscale.GetHeadscaleConfig()
 	if err != nil {
 		log.Fatal().Caller().Err(err)
 	}
@@ -64,7 +59,7 @@ func initConfig() {
 		zerolog.SetGlobalLevel(zerolog.Disabled)
 	}

-	if cfg.Log.Format == hscontrol.JSONLogFormat {
+	if cfg.Log.Format == headscale.JSONLogFormat {
 		log.Logger = log.Output(os.Stdout)
 	}

@@ -3,45 +3,37 @@ package cli
 import (
 	"fmt"
 	"log"
-	"net/netip"
 	"strconv"

 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol"
 	"github.com/pterm/pterm"
 	"github.com/spf13/cobra"
 	"google.golang.org/grpc/status"
 )

-const (
-	Base10 = 10
-)
-
 func init() {
 	rootCmd.AddCommand(routesCmd)

 	listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
+	err := listRoutesCmd.MarkFlagRequired("identifier")
+	if err != nil {
+		log.Fatalf(err.Error())
+	}
 	routesCmd.AddCommand(listRoutesCmd)

-	enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
-	err := enableRouteCmd.MarkFlagRequired("route")
+	enableRouteCmd.Flags().
+		StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
+	enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
+	enableRouteCmd.Flags().BoolP("all", "a", false, "All routes from host")
+
+	err = enableRouteCmd.MarkFlagRequired("identifier")
 	if err != nil {
 		log.Fatalf(err.Error())
 	}

 	routesCmd.AddCommand(enableRouteCmd)

-	disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
-	err = disableRouteCmd.MarkFlagRequired("route")
-	if err != nil {
-		log.Fatalf(err.Error())
-	}
-	routesCmd.AddCommand(disableRouteCmd)
-
-	deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
-	err = deleteRouteCmd.MarkFlagRequired("route")
-	if err != nil {
-		log.Fatalf(err.Error())
-	}
-	routesCmd.AddCommand(deleteRouteCmd)
+	nodeCmd.AddCommand(routesCmd)
 }

 var routesCmd = &cobra.Command{
@@ -52,7 +44,7 @@ var routesCmd = &cobra.Command{

 var listRoutesCmd = &cobra.Command{
 	Use:     "list",
-	Short:   "List all routes",
+	Short:   "List routes advertised and enabled by a given node",
 	Aliases: []string{"ls", "show"},
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")
@@ -72,51 +64,28 @@ var listRoutesCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()

-		var routes []*v1.Route
-
-		if machineID == 0 {
-			response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
-					output,
-				)
-
-				return
-			}
-
-			if output != "" {
-				SuccessOutput(response.Routes, "", output)
-
-				return
-			}
-
-			routes = response.Routes
-		} else {
-			response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
-				MachineId: machineID,
-			})
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
-					output,
-				)
-
-				return
-			}
-
-			if output != "" {
-				SuccessOutput(response.Routes, "", output)
-
-				return
-			}
-
-			routes = response.Routes
+		request := &v1.GetMachineRouteRequest{
+			MachineId: machineID,
 		}

-		tableData := routesToPtables(routes)
+		response, err := client.GetMachineRoute(ctx, request)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
+				output,
+			)
+
+			return
+		}
+
+		if output != "" {
+			SuccessOutput(response.Routes, "", output)
+
+			return
+		}
+
+		tableData := routesToPtables(response.Routes)
 		if err != nil {
 			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)

@@ -138,12 +107,16 @@ var listRoutesCmd = &cobra.Command{

 var enableRouteCmd = &cobra.Command{
 	Use:   "enable",
-	Short: "Set a route as enabled",
-	Long:  `This command will make as enabled a given route.`,
+	Short: "Set the enabled routes for a given node",
+	Long: `This command will take a list of routes that will _replace_
+the current set of routes on a given node.
+If you would like to disable a route, simply run the command again, but
+omit the route you do not want to enable.
+`,
 	Run: func(cmd *cobra.Command, args []string) {
 		output, _ := cmd.Flags().GetString("output")

-		routeID, err := cmd.Flags().GetUint64("route")
+		machineID, err := cmd.Flags().GetUint64("identifier")
 		if err != nil {
 			ErrorOutput(
 				err,
@@ -158,13 +131,52 @@ var enableRouteCmd = &cobra.Command{
 		defer cancel()
 		defer conn.Close()

-		response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
-			RouteId: routeID,
-		})
+		var routes []string
+
+		isAll, _ := cmd.Flags().GetBool("all")
+		if isAll {
+			response, err := client.GetMachineRoute(ctx, &v1.GetMachineRouteRequest{
+				MachineId: machineID,
+			})
+			if err != nil {
+				ErrorOutput(
+					err,
+					fmt.Sprintf(
+						"Cannot get machine routes: %s\n",
+						status.Convert(err).Message(),
+					),
+					output,
+				)
+
+				return
+			}
+			routes = response.GetRoutes().GetAdvertisedRoutes()
+		} else {
+			routes, err = cmd.Flags().GetStringSlice("route")
+			if err != nil {
+				ErrorOutput(
+					err,
+					fmt.Sprintf("Error getting routes from flag: %s", err),
+					output,
+				)
+
+				return
+			}
+		}
+
+		request := &v1.EnableMachineRoutesRequest{
+			MachineId: machineID,
+			Routes:    routes,
+		}
+
+		response, err := client.EnableMachineRoutes(ctx, request)
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
+				fmt.Sprintf(
+					"Cannot register machine: %s\n",
+					status.Convert(err).Message(),
+				),
 				output,
 			)

@@ -172,127 +184,50 @@ var enableRouteCmd = &cobra.Command{
 		}

 		if output != "" {
-			SuccessOutput(response, "", output)
+			SuccessOutput(response.Routes, "", output)

 			return
 		}
-	},
-}
-
-var disableRouteCmd = &cobra.Command{
-	Use:   "disable",
-	Short: "Set as disabled a given route",
-	Long:  `This command will make as disabled a given route.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		routeID, err := cmd.Flags().GetUint64("route")
+
+		tableData := routesToPtables(response.Routes)
+		if err != nil {
+			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
+
+			return
+		}
+
+		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
 		if err != nil {
 			ErrorOutput(
 				err,
-				fmt.Sprintf("Error getting machine id from flag: %s", err),
+				fmt.Sprintf("Failed to render pterm table: %s", err),
 				output,
 			)

 			return
 		}
-
-		ctx, client, conn, cancel := getHeadscaleCLIClient()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
-			RouteId: routeID,
-		})
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
-				output,
-			)
-
-			return
-		}
-
-		if output != "" {
-			SuccessOutput(response, "", output)
-
-			return
-		}
-	},
-}
-
-var deleteRouteCmd = &cobra.Command{
-	Use:   "delete",
-	Short: "Delete a given route",
-	Long:  `This command will delete a given route.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		routeID, err := cmd.Flags().GetUint64("route")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine id from flag: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		ctx, client, conn, cancel := getHeadscaleCLIClient()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
-			RouteId: routeID,
-		})
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
-				output,
-			)
-
-			return
-		}
-
-		if output != "" {
-			SuccessOutput(response, "", output)
-
-			return
-		}
 	},
 }

 // routesToPtables converts the list of routes to a nice table.
-func routesToPtables(routes []*v1.Route) pterm.TableData {
-	tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}
+func routesToPtables(routes *v1.Routes) pterm.TableData {
+	tableData := pterm.TableData{{"Route", "Enabled"}}

-	for _, route := range routes {
-		var isPrimaryStr string
-		prefix, err := netip.ParsePrefix(route.Prefix)
-		if err != nil {
-			log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
-
-			continue
-		}
-		if prefix == hscontrol.ExitRouteV4 || prefix == hscontrol.ExitRouteV6 {
-			isPrimaryStr = "-"
-		} else {
-			isPrimaryStr = strconv.FormatBool(route.IsPrimary)
-		}
-
-		tableData = append(tableData,
-			[]string{
-				strconv.FormatUint(route.Id, Base10),
-				route.Machine.GivenName,
-				route.Prefix,
-				strconv.FormatBool(route.Advertised),
-				strconv.FormatBool(route.Enabled),
-				isPrimaryStr,
-			})
+	for _, route := range routes.GetAdvertisedRoutes() {
+		enabled := isStringInSlice(routes.EnabledRoutes, route)
+
+		tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
 	}

 	return tableData
 }
+
+func isStringInSlice(strs []string, s string) bool {
+	for _, s2 := range strs {
+		if s == s2 {
+			return true
+		}
+	}
+
+	return false
+}
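For orientation, the web-auth-f side of the hunks above renders routes per node: one table row per advertised route, plus a flag for whether it is currently enabled. A minimal sketch of how that helper could be exercised in isolation, assuming the generated v1.Routes message carries the AdvertisedRoutes and EnabledRoutes string slices used above (the command wiring and the gRPC call are omitted):

	// Hypothetical input; real data comes from the GetMachineRoute response.
	routes := &v1.Routes{
		AdvertisedRoutes: []string{"10.0.0.0/24", "0.0.0.0/0"},
		EnabledRoutes:    []string{"10.0.0.0/24"},
	}

	// Produces a two-column table: Route | Enabled.
	tableData := routesToPtables(routes)
	_ = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()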
@@ -8,13 +8,13 @@ import (
 	"os"
 	"reflect"

+	"github.com/juanfont/headscale"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol"
 	"github.com/rs/zerolog/log"
 	"google.golang.org/grpc"
 	"google.golang.org/grpc/credentials"
 	"google.golang.org/grpc/credentials/insecure"
-	"gopkg.in/yaml.v3"
+	"gopkg.in/yaml.v2"
 )

 const (
@@ -22,8 +22,8 @@ const (
 	SocketWritePermissions = 0o666
 )

-func getHeadscaleApp() (*hscontrol.Headscale, error) {
-	cfg, err := hscontrol.GetHeadscaleConfig()
+func getHeadscaleApp() (*headscale.Headscale, error) {
+	cfg, err := headscale.GetHeadscaleConfig()
 	if err != nil {
 		return nil, fmt.Errorf(
 			"failed to load configuration while creating headscale instance: %w",
@@ -31,7 +31,7 @@ func getHeadscaleApp() (*hscontrol.Headscale, error) {
 		)
 	}

-	app, err := hscontrol.NewHeadscale(cfg)
+	app, err := headscale.NewHeadscale(cfg)
 	if err != nil {
 		return nil, err
 	}
@@ -39,8 +39,8 @@ func getHeadscaleApp() (*hscontrol.Headscale, error) {
 	// We are doing this here, as in the future could be cool to have it also hot-reload

 	if cfg.ACL.PolicyPath != "" {
-		aclPath := hscontrol.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
-		err = app.LoadACLPolicyFromPath(aclPath)
+		aclPath := headscale.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
+		err = app.LoadACLPolicy(aclPath)
 		if err != nil {
 			log.Fatal().
 				Str("path", aclPath).
@@ -53,7 +53,7 @@ func getHeadscaleApp() (*hscontrol.Headscale, error) {
 }

 func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
-	cfg, err := hscontrol.GetHeadscaleConfig()
+	cfg, err := headscale.GetHeadscaleConfig()
 	if err != nil {
 		log.Fatal().
 			Err(err).
@@ -74,7 +74,7 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.

 	address := cfg.CLI.Address

-	// If the address is not set, we assume that we are on the server hosting hscontrol.
+	// If the address is not set, we assume that we are on the server hosting headscale.
 	if address == "" {
 		log.Debug().
 			Str("socket", cfg.UnixSocket).
@@ -98,7 +98,7 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
 		grpcOptions = append(
 			grpcOptions,
 			grpc.WithTransportCredentials(insecure.NewCredentials()),
-			grpc.WithContextDialer(hscontrol.GrpcSocketDialer),
+			grpc.WithContextDialer(headscale.GrpcSocketDialer),
 		)
 	} else {
 		// If we are not connecting to a local server, require an API key for authentication
@@ -6,25 +6,11 @@ import (

 	"github.com/efekarakus/termcolor"
 	"github.com/juanfont/headscale/cmd/headscale/cli"
-	"github.com/pkg/profile"
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
 )

 func main() {
-	if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
-		if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
-			err := os.MkdirAll(profilePath, os.ModePerm)
-			if err != nil {
-				log.Fatal().Err(err).Msg("failed to create profiling directory")
-			}
-
-			defer profile.Start(profile.ProfilePath(profilePath)).Stop()
-		} else {
-			defer profile.Start().Stop()
-		}
-	}
-
 	var colors bool
 	switch l := termcolor.SupportLevel(os.Stderr); l {
 	case termcolor.Level16M:
@@ -7,7 +7,7 @@ import (
 	"strings"
 	"testing"

-	"github.com/juanfont/headscale/hscontrol"
+	"github.com/juanfont/headscale"
 	"github.com/spf13/viper"
 	"gopkg.in/check.v1"
 )
@@ -50,12 +50,12 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
 	}

 	// Load example config, it should load without validation errors
-	err = hscontrol.LoadConfig(cfgFile, true)
+	err = headscale.LoadConfig(cfgFile, true)
 	c.Assert(err, check.IsNil)

 	// Test that config file was interpreted correctly
 	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
-	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
+	c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
 	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
 	c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
 	c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -64,7 +64,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
 	c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
 	c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
 	c.Assert(
-		hscontrol.GetFileMode("unix_socket_permission"),
+		headscale.GetFileMode("unix_socket_permission"),
 		check.Equals,
 		fs.FileMode(0o770),
 	)
@@ -93,12 +93,12 @@ func (*Suite) TestConfigLoading(c *check.C) {
 	}

 	// Load example config, it should load without validation errors
-	err = hscontrol.LoadConfig(tmpDir, false)
+	err = headscale.LoadConfig(tmpDir, false)
 	c.Assert(err, check.IsNil)

 	// Test that config file was interpreted correctly
 	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
-	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
+	c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
 	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
 	c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
 	c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
@@ -107,7 +107,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
 	c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
 	c.Assert(viper.GetStringSlice("dns_config.nameservers")[0], check.Equals, "1.1.1.1")
 	c.Assert(
-		hscontrol.GetFileMode("unix_socket_permission"),
+		headscale.GetFileMode("unix_socket_permission"),
 		check.Equals,
 		fs.FileMode(0o770),
 	)
@@ -137,10 +137,10 @@ func (*Suite) TestDNSConfigLoading(c *check.C) {
 	}

 	// Load example config, it should load without validation errors
-	err = hscontrol.LoadConfig(tmpDir, false)
+	err = headscale.LoadConfig(tmpDir, false)
 	c.Assert(err, check.IsNil)

-	dnsConfig, baseDomain := hscontrol.GetDNSConfig()
+	dnsConfig, baseDomain := headscale.GetDNSConfig()

 	c.Assert(dnsConfig.Nameservers[0].String(), check.Equals, "1.1.1.1")
 	c.Assert(dnsConfig.Resolvers[0].Addr, check.Equals, "1.1.1.1")
@@ -172,7 +172,7 @@ noise:
 	writeConfig(c, tmpDir, configYaml)

 	// Check configuration validation errors (1)
-	err = hscontrol.LoadConfig(tmpDir, false)
+	err = headscale.LoadConfig(tmpDir, false)
 	c.Assert(err, check.NotNil)
 	// check.Matches can not handle multiline strings
 	tmp := strings.ReplaceAll(err.Error(), "\n", "***")
@@ -201,6 +201,6 @@ tls_letsencrypt_hostname: example.com
 tls_letsencrypt_challenge_type: TLS-ALPN-01
 `)
 	writeConfig(c, tmpDir, configYaml)
-	err = hscontrol.LoadConfig(tmpDir, false)
+	err = headscale.LoadConfig(tmpDir, false)
 	c.Assert(err, check.IsNil)
 }
@@ -14,9 +14,7 @@ server_url: http://127.0.0.1:8080

 # Address to listen to / bind to on the server
 #
-# For production:
-# listen_addr: 0.0.0.0:8080
-listen_addr: 127.0.0.1:8080
+listen_addr: 0.0.0.0:8080

 # Address to listen to /metrics, you may want
 # to keep this endpoint private to your internal
@@ -29,10 +27,7 @@ metrics_listen_addr: 127.0.0.1:9090
 # remotely with the CLI
 # Note: Remote access _only_ works if you have
 # valid certificates.
-#
-# For production:
-# grpc_listen_addr: 0.0.0.0:50443
-grpc_listen_addr: 127.0.0.1:50443
+grpc_listen_addr: 0.0.0.0:50443

 # Allow the gRPC admin interface to run in INSECURE
 # mode. This is not recommended as the traffic will
@@ -43,7 +38,6 @@ grpc_allow_insecure: false
 # Private key used to encrypt the traffic between headscale
 # and Tailscale clients.
 # The private key file will be autogenerated if it's missing.
-#
 private_key_path: /var/lib/headscale/private.key

 # The Noise section includes specific configuration for the
@@ -58,12 +52,6 @@ noise:
 # List of IP prefixes to allocate tailaddresses from.
 # Each prefix consists of either an IPv4 or IPv6 address,
 # and the associated prefix length, delimited by a slash.
-# It must be within IP ranges supported by the Tailscale
-# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
-# See below:
-# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
-# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
-# Any other range is NOT supported, and it will cause unexpected issues.
 ip_prefixes:
   - fd7a:115c:a1e0::/48
   - 100.64.0.0/10
@@ -131,8 +119,6 @@ node_update_check_interval: 10s

 # SQLite config
 db_type: sqlite3
-
-# For production:
 db_path: /var/lib/headscale/db.sqlite

 # # Postgres config
@@ -143,9 +129,6 @@ db_path: /var/lib/headscale/db.sqlite
 # db_name: headscale
 # db_user: foo
 # db_pass: bar
-
-# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
-# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
 # db_ssl: false

 ### TLS configuration
@@ -164,9 +147,15 @@ acme_email: ""
 # Domain name to request a TLS certificate for:
 tls_letsencrypt_hostname: ""

+# Client (Tailscale/Browser) authentication mode (mTLS)
+# Acceptable values:
+# - disabled: client authentication disabled
+# - relaxed: client certificate is required but not verified
+# - enforced: client certificate is required and verified
+tls_client_auth_mode: relaxed
+
 # Path to store certificates and metadata needed by
 # letsencrypt
-# For production:
 tls_letsencrypt_cache_dir: /var/lib/headscale/cache

 # Type of ACME challenge to use, currently supported types:
@@ -209,18 +198,6 @@ dns_config:
   nameservers:
     - 1.1.1.1

-  # NextDNS (see https://tailscale.com/kb/1218/nextdns/).
-  # "abc123" is example NextDNS ID, replace with yours.
-  #
-  # With metadata sharing:
-  # nameservers:
-  #   - https://dns.nextdns.io/abc123
-  #
-  # Without metadata sharing:
-  # nameservers:
-  #   - 2a07:a8c0::ab:c123
-  #   - 2a07:a8c1::ab:c123
-
   # Split DNS (see https://tailscale.com/kb/1054/dns/),
   # list of search domains and the DNS to query for each one.
   #
@@ -234,17 +211,6 @@ dns_config:
   # Search domains to inject.
   domains: []

-  # Extra DNS records
-  # so far only A-records are supported (on the tailscale side)
-  # See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
-  # extra_records:
-  #   - name: "grafana.myvpn.example.com"
-  #     type: "A"
-  #     value: "100.64.0.3"
-  #
-  #   # you can also put it in one line
-  #   - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
-
   # Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
   # Only works if there is at least a nameserver defined.
   magic_dns: true
@@ -252,12 +218,13 @@ dns_config:
   # Defines the base domain to create the hostnames for MagicDNS.
   # `base_domain` must be a FQDNs, without the trailing dot.
   # The FQDN of the hosts will be
-  # `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
+  # `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
   base_domain: example.com

 # Unix socket used for the CLI to connect without authentication
-# Note: for production you will want to set this to something like:
-unix_socket: /var/run/headscale/headscale.sock
+# Note: for local development, you probably want to change this to:
+# unix_socket: ./headscale.sock
+unix_socket: /var/run/headscale.sock
 unix_socket_permission: "0770"
 #
 # headscale supports experimental OpenID connect support,
@@ -269,45 +236,26 @@ unix_socket_permission: "0770"
 # issuer: "https://your-oidc.issuer.com/path"
 # client_id: "your-oidc-client-id"
 # client_secret: "your-oidc-client-secret"
-# # Alternatively, set `client_secret_path` to read the secret from the file.
-# # It resolves environment variables, making integration to systemd's
-# # `LoadCredential` straightforward:
-# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
-# # client_secret and client_secret_path are mutually exclusive.
 #
-# # The amount of time from a node is authenticated with OpenID until it
-# # expires and needs to reauthenticate.
-# # Setting the value to "0" will mean no expiry.
-# expiry: 180d
-#
-# # Use the expiry from the token received from OpenID when the user logged
-# # in, this will typically lead to frequent need to reauthenticate and should
-# # only been enabled if you know what you are doing.
-# # Note: enabling this will cause `oidc.expiry` to be ignored.
-# use_expiry_from_token: false
-#
-# # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
-# # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
+# Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
+# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
 #
 # scope: ["openid", "profile", "email", "custom"]
 # extra_params:
 #   domain_hint: example.com
 #
-# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
-# # authentication request will be rejected.
+# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
+# authentication request will be rejected.
 #
 # allowed_domains:
 #   - example.com
-# # Note: Groups from keycloak have a leading '/'
-# allowed_groups:
-#   - /headscale
 # allowed_users:
 #   - alice@example.com
 #
-# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
-# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
-# # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
-# user: `first-name.last-name.example.com`
+# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
+# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
+# If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
+# namespace: `first-name.last-name.example.com`
 #
 # strip_email_domain: true

@@ -1,22 +1,20 @@
-package hscontrol
+package headscale

 import (
+	"crypto/tls"
 	"errors"
 	"fmt"
 	"io/fs"
 	"net/netip"
 	"net/url"
-	"os"
 	"strings"
 	"time"

 	"github.com/coreos/go-oidc/v3/oidc"
-	"github.com/prometheus/common/model"
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/viper"
 	"go4.org/netipx"
-	"tailscale.com/net/tsaddr"
 	"tailscale.com/tailcfg"
 	"tailscale.com/types/dnstype"
 )
@@ -27,13 +25,6 @@ const (

 	JSONLogFormat = "json"
 	TextLogFormat = "text"
-
-	defaultOIDCExpiryTime = 180 * 24 * time.Hour // 180 Days
-	maxDuration           time.Duration = 1<<63 - 1
-)
-
-var errOidcMutuallyExclusive = errors.New(
-	"oidc_client_secret and oidc_client_secret_path are mutually exclusive",
 )

 // Config contains the initial Headscale configuration.
@@ -61,7 +52,7 @@ type Config struct {
 	DBname string
 	DBuser string
 	DBpass string
-	DBssl  string
+	DBssl  bool

 	TLS TLSConfig

@@ -84,8 +75,9 @@ type Config struct {
 }

 type TLSConfig struct {
 	CertPath string
 	KeyPath  string
+	ClientAuthMode tls.ClientAuthType

 	LetsEncrypt LetsEncryptConfig
 }
@@ -106,10 +98,7 @@ type OIDCConfig struct {
 	ExtraParams      map[string]string
 	AllowedDomains   []string
 	AllowedUsers     []string
-	AllowedGroups    []string
 	StripEmaildomain bool
-	Expiry             time.Duration
-	UseExpiryFromToken bool
 }

 type DERPConfig struct {
@@ -165,6 +154,7 @@ func LoadConfig(path string, isFile bool) error {

 	viper.SetDefault("tls_letsencrypt_cache_dir", "/var/www/.cache")
 	viper.SetDefault("tls_letsencrypt_challenge_type", http01ChallengeType)
+	viper.SetDefault("tls_client_auth_mode", "relaxed")

 	viper.SetDefault("log.level", "info")
 	viper.SetDefault("log.format", TextLogFormat)
@@ -175,7 +165,7 @@ func LoadConfig(path string, isFile bool) error {
 	viper.SetDefault("derp.server.enabled", false)
 	viper.SetDefault("derp.server.stun.enabled", true)

-	viper.SetDefault("unix_socket", "/var/run/headscale/headscale.sock")
+	viper.SetDefault("unix_socket", "/var/run/headscale.sock")
 	viper.SetDefault("unix_socket_permission", "0o770")

 	viper.SetDefault("grpc_listen_addr", ":50443")
@@ -184,13 +174,9 @@ func LoadConfig(path string, isFile bool) error {
 	viper.SetDefault("cli.timeout", "5s")
 	viper.SetDefault("cli.insecure", false)

-	viper.SetDefault("db_ssl", false)
-
 	viper.SetDefault("oidc.scope", []string{oidc.ScopeOpenID, "profile", "email"})
 	viper.SetDefault("oidc.strip_email_domain", true)
 	viper.SetDefault("oidc.only_start_if_oidc_is_available", true)
-	viper.SetDefault("oidc.expiry", "180d")
-	viper.SetDefault("oidc.use_expiry_from_token", false)

 	viper.SetDefault("logtail.enabled", false)
 	viper.SetDefault("randomize_client_port", false)
@@ -199,10 +185,6 @@ func LoadConfig(path string, isFile bool) error {

 	viper.SetDefault("node_update_check_interval", "10s")

-	if IsCLIConfigured() {
-		return nil
-	}
-
 	if err := viper.ReadInConfig(); err != nil {
 		log.Warn().Err(err).Msg("Failed to read configuration from disk")

@@ -238,6 +220,19 @@ func LoadConfig(path string, isFile bool) error {
 		errorText += "Fatal config error: server_url must start with https:// or http://\n"
 	}

+	_, authModeValid := LookupTLSClientAuthMode(
+		viper.GetString("tls_client_auth_mode"),
+	)
+
+	if !authModeValid {
+		errorText += fmt.Sprintf(
+			"Invalid tls_client_auth_mode supplied: %s. Accepted values: %s, %s, %s.",
+			viper.GetString("tls_client_auth_mode"),
+			DisabledClientAuth,
+			RelaxedClientAuth,
+			EnforcedClientAuth)
+	}
+
 	// Minimum inactivity time out is keepalive timeout (60s) plus a few seconds
 	// to avoid races
 	minInactivityTimeout, _ := time.ParseDuration("65s")
@@ -267,6 +262,10 @@ func LoadConfig(path string, isFile bool) error {
 }

 func GetTLSConfig() TLSConfig {
+	tlsClientAuthMode, _ := LookupTLSClientAuthMode(
+		viper.GetString("tls_client_auth_mode"),
+	)
+
 	return TLSConfig{
 		LetsEncrypt: LetsEncryptConfig{
 			Hostname: viper.GetString("tls_letsencrypt_hostname"),
@@ -282,6 +281,7 @@ func GetTLSConfig() TLSConfig {
 		KeyPath: AbsolutePathFromConfigPath(
 			viper.GetString("tls_key_path"),
 		),
+		ClientAuthMode: tlsClientAuthMode,
 	}
 }

@@ -383,21 +383,10 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 	if viper.IsSet("dns_config.nameservers") {
 		nameserversStr := viper.GetStringSlice("dns_config.nameservers")

-		nameservers := []netip.Addr{}
-		resolvers := []*dnstype.Resolver{}
+		nameservers := make([]netip.Addr, len(nameserversStr))
+		resolvers := make([]*dnstype.Resolver, len(nameserversStr))

-		for _, nameserverStr := range nameserversStr {
-			// Search for explicit DNS-over-HTTPS resolvers
-			if strings.HasPrefix(nameserverStr, "https://") {
-				resolvers = append(resolvers, &dnstype.Resolver{
-					Addr: nameserverStr,
-				})
-
-				// This nameserver can not be parsed as an IP address
-				continue
-			}
-
-			// Parse nameserver as a regular IP
+		for index, nameserverStr := range nameserversStr {
 			nameserver, err := netip.ParseAddr(nameserverStr)
 			if err != nil {
 				log.Error().
@@ -406,10 +395,10 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 					Msgf("Could not parse nameserver IP: %s", nameserverStr)
 			}

-			nameservers = append(nameservers, nameserver)
-			resolvers = append(resolvers, &dnstype.Resolver{
+			nameservers[index] = nameserver
+			resolvers[index] = &dnstype.Resolver{
 				Addr: nameserver.String(),
-			})
+			}
 		}

 		dnsConfig.Nameservers = nameservers
@@ -422,37 +411,39 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 	}

 	if viper.IsSet("dns_config.restricted_nameservers") {
-		dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
-		domains := []string{}
-		restrictedDNS := viper.GetStringMapStringSlice(
-			"dns_config.restricted_nameservers",
-		)
-		for domain, restrictedNameservers := range restrictedDNS {
-			restrictedResolvers := make(
-				[]*dnstype.Resolver,
-				len(restrictedNameservers),
-			)
-			for index, nameserverStr := range restrictedNameservers {
-				nameserver, err := netip.ParseAddr(nameserverStr)
-				if err != nil {
-					log.Error().
-						Str("func", "getDNSConfig").
-						Err(err).
-						Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
-				}
-				restrictedResolvers[index] = &dnstype.Resolver{
-					Addr: nameserver.String(),
-				}
-			}
-			dnsConfig.Routes[domain] = restrictedResolvers
-			domains = append(domains, domain)
-		}
-		dnsConfig.Domains = domains
+		if len(dnsConfig.Nameservers) > 0 {
+			dnsConfig.Routes = make(map[string][]*dnstype.Resolver)
+			restrictedDNS := viper.GetStringMapStringSlice(
+				"dns_config.restricted_nameservers",
+			)
+			for domain, restrictedNameservers := range restrictedDNS {
+				restrictedResolvers := make(
+					[]*dnstype.Resolver,
+					len(restrictedNameservers),
+				)
+				for index, nameserverStr := range restrictedNameservers {
+					nameserver, err := netip.ParseAddr(nameserverStr)
+					if err != nil {
+						log.Error().
+							Str("func", "getDNSConfig").
+							Err(err).
+							Msgf("Could not parse restricted nameserver IP: %s", nameserverStr)
+					}
+					restrictedResolvers[index] = &dnstype.Resolver{
+						Addr: nameserver.String(),
+					}
+				}
+				dnsConfig.Routes[domain] = restrictedResolvers
+			}
+		} else {
+			log.Warn().
+				Msg("Warning: dns_config.restricted_nameservers is set, but no nameservers are configured. Ignoring restricted_nameservers.")
+		}
 	}

 	if viper.IsSet("dns_config.domains") {
 		domains := viper.GetStringSlice("dns_config.domains")
-		if len(dnsConfig.Resolvers) > 0 {
+		if len(dnsConfig.Nameservers) > 0 {
 			dnsConfig.Domains = domains
 		} else if domains != nil {
 			log.Warn().
@@ -460,20 +451,6 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 		}
 	}

-	if viper.IsSet("dns_config.extra_records") {
-		var extraRecords []tailcfg.DNSRecord
-
-		err := viper.UnmarshalKey("dns_config.extra_records", &extraRecords)
-		if err != nil {
-			log.Error().
-				Str("func", "getDNSConfig").
-				Err(err).
-				Msgf("Could not parse dns_config.extra_records")
-		}
-
-		dnsConfig.ExtraRecords = extraRecords
-	}
-
 	if viper.IsSet("dns_config.magic_dns") {
 		dnsConfig.Proxied = viper.GetBool("dns_config.magic_dns")
 	}
@@ -492,17 +469,6 @@ func GetDNSConfig() (*tailcfg.DNSConfig, string) {
 }

 func GetHeadscaleConfig() (*Config, error) {
-	if IsCLIConfigured() {
-		return &Config{
-			CLI: CLIConfig{
-				Address:  viper.GetString("cli.address"),
-				APIKey:   viper.GetString("cli.api_key"),
-				Timeout:  viper.GetDuration("cli.timeout"),
-				Insecure: viper.GetBool("cli.insecure"),
-			},
-		}, nil
-	}
-
 	dnsConfig, baseDomain := GetDNSConfig()
 	derpConfig := GetDERPConfig()
 	logConfig := GetLogTailConfig()
@@ -516,29 +482,6 @@ func GetHeadscaleConfig() (*Config, error) {
 		if err != nil {
 			panic(fmt.Errorf("failed to parse ip_prefixes[%d]: %w", i, err))
 		}
-
-		if prefix.Addr().Is4() {
-			builder := netipx.IPSetBuilder{}
-			builder.AddPrefix(tsaddr.CGNATRange())
-			ipSet, _ := builder.IPSet()
-			if !ipSet.ContainsPrefix(prefix) {
-				log.Warn().
-					Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
-						prefixInConfig, tsaddr.CGNATRange())
-			}
-		}
-
-		if prefix.Addr().Is6() {
-			builder := netipx.IPSetBuilder{}
-			builder.AddPrefix(tsaddr.TailscaleULARange())
-			ipSet, _ := builder.IPSet()
-			if !ipSet.ContainsPrefix(prefix) {
-				log.Warn().
-					Msgf("Prefix %s is not in the %s range. This is an unsupported configuration.",
-						prefixInConfig, tsaddr.TailscaleULARange())
-			}
-		}
-
 		parsedPrefixes = append(parsedPrefixes, prefix)
 	}

@@ -563,19 +506,6 @@ func GetHeadscaleConfig() (*Config, error) {
 			Msgf("'ip_prefixes' not configured, falling back to default: %v", prefixes)
 	}

-	oidcClientSecret := viper.GetString("oidc.client_secret")
-	oidcClientSecretPath := viper.GetString("oidc.client_secret_path")
-	if oidcClientSecretPath != "" && oidcClientSecret != "" {
-		return nil, errOidcMutuallyExclusive
-	}
-	if oidcClientSecretPath != "" {
-		secretBytes, err := os.ReadFile(os.ExpandEnv(oidcClientSecretPath))
-		if err != nil {
-			return nil, err
-		}
-		oidcClientSecret = string(secretBytes)
-	}
-
 	return &Config{
 		ServerURL: viper.GetString("server_url"),
 		Addr:      viper.GetString("listen_addr"),
@@ -610,7 +540,7 @@ func GetHeadscaleConfig() (*Config, error) {
 		DBname: viper.GetString("db_name"),
 		DBuser: viper.GetString("db_user"),
 		DBpass: viper.GetString("db_pass"),
-		DBssl:  viper.GetString("db_ssl"),
+		DBssl:  viper.GetBool("db_ssl"),

 		TLS: GetTLSConfig(),

@@ -628,36 +558,17 @@ func GetHeadscaleConfig() (*Config, error) {
 			),
 			Issuer:           viper.GetString("oidc.issuer"),
 			ClientID:         viper.GetString("oidc.client_id"),
-			ClientSecret:     oidcClientSecret,
+			ClientSecret:     viper.GetString("oidc.client_secret"),
 			Scope:            viper.GetStringSlice("oidc.scope"),
 			ExtraParams:      viper.GetStringMapString("oidc.extra_params"),
 			AllowedDomains:   viper.GetStringSlice("oidc.allowed_domains"),
 			AllowedUsers:     viper.GetStringSlice("oidc.allowed_users"),
-			AllowedGroups:    viper.GetStringSlice("oidc.allowed_groups"),
 			StripEmaildomain: viper.GetBool("oidc.strip_email_domain"),
-			Expiry: func() time.Duration {
-				// if set to 0, we assume no expiry
-				if value := viper.GetString("oidc.expiry"); value == "0" {
-					return maxDuration
-				} else {
-					expiry, err := model.ParseDuration(value)
-					if err != nil {
-						log.Warn().Msg("failed to parse oidc.expiry, defaulting back to 180 days")
-
-						return defaultOIDCExpiryTime
-					}
-
-					return time.Duration(expiry)
-				}
-			}(),
-			UseExpiryFromToken: viper.GetBool("oidc.use_expiry_from_token"),
 		},

 		LogTail:             logConfig,
 		RandomizeClientPort: randomizeClientPort,

-		ACL: GetACLConfig(),
-
 		CLI: CLIConfig{
 			Address: viper.GetString("cli.address"),
 			APIKey:  viper.GetString("cli.api_key"),
@@ -665,10 +576,8 @@ func GetHeadscaleConfig() (*Config, error) {
 			Insecure: viper.GetBool("cli.insecure"),
 		},

+		ACL: GetACLConfig(),
+
 		Log: GetLogConfig(),
 	}, nil
 }

-func IsCLIConfigured() bool {
-	return viper.GetString("cli.address") != "" && viper.GetString("cli.api_key") != ""
-}
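The validation added in LoadConfig and the lookup in GetTLSConfig above both call LookupTLSClientAuthMode, which is not part of this diff. A minimal sketch of what such a helper could look like, assuming the DisabledClientAuth, RelaxedClientAuth and EnforcedClientAuth constants referenced above map onto the standard crypto/tls client-auth policies; the actual implementation lives elsewhere in the branch:

	// Hypothetical sketch, not the branch's exact code.
	func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
		switch mode {
		case DisabledClientAuth:
			// Client certificates are not requested at all.
			return tls.NoClientCert, true
		case RelaxedClientAuth:
			// A certificate is requested but not verified against a CA.
			return tls.RequireAnyClientCert, true
		case EnforcedClientAuth:
			// A certificate is required and verified.
			return tls.RequireAndVerifyClientCert, true
		default:
			return tls.NoClientCert, false
		}
	}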
@@ -1,4 +1,4 @@
|
|||||||
package hscontrol
|
package headscale
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
@@ -18,10 +18,8 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
dbVersion = "1"
|
dbVersion = "1"
|
||||||
|
errValueNotFound = Error("not found")
|
||||||
errValueNotFound = Error("not found")
|
|
||||||
ErrCannotParsePrefix = Error("cannot parse prefix")
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// KV is a key-value store in a psql table. For future use...
|
// KV is a key-value store in a psql table. For future use...
|
||||||
@@ -41,16 +39,6 @@ func (h *Headscale) initDB() error {
|
|||||||
db.Exec(`create extension if not exists "uuid-ossp";`)
|
db.Exec(`create extension if not exists "uuid-ossp";`)
|
||||||
}
|
}
|
||||||
|
|
||||||
_ = db.Migrator().RenameTable("namespaces", "users")
|
|
||||||
|
|
||||||
err = db.AutoMigrate(&User{})
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
_ = db.Migrator().RenameColumn(&Machine{}, "namespace_id", "user_id")
|
|
||||||
_ = db.Migrator().RenameColumn(&PreAuthKey{}, "namespace_id", "user_id")
|
|
||||||
|
|
||||||
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
|
_ = db.Migrator().RenameColumn(&Machine{}, "ip_address", "ip_addresses")
|
||||||
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
|
_ = db.Migrator().RenameColumn(&Machine{}, "name", "hostname")
|
||||||
|
|
||||||
@@ -91,70 +79,6 @@ func (h *Headscale) initDB() error {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
err = db.AutoMigrate(&Route{})
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
-    if db.Migrator().HasColumn(&Machine{}, "enabled_routes") {
-        log.Info().Msgf("Database has legacy enabled_routes column in machine, migrating...")
-
-        type MachineAux struct {
-            ID            uint64
-            EnabledRoutes IPPrefixes
-        }
-
-        machinesAux := []MachineAux{}
-        err := db.Table("machines").Select("id, enabled_routes").Scan(&machinesAux).Error
-        if err != nil {
-            log.Fatal().Err(err).Msg("Error accessing db")
-        }
-        for _, machine := range machinesAux {
-            for _, prefix := range machine.EnabledRoutes {
-                if err != nil {
-                    log.Error().
-                        Err(err).
-                        Str("enabled_route", prefix.String()).
-                        Msg("Error parsing enabled_route")
-
-                    continue
-                }
-
-                err = db.Preload("Machine").
-                    Where("machine_id = ? AND prefix = ?", machine.ID, IPPrefix(prefix)).
-                    First(&Route{}).
-                    Error
-                if err == nil {
-                    log.Info().
-                        Str("enabled_route", prefix.String()).
-                        Msg("Route already migrated to new table, skipping")
-
-                    continue
-                }
-
-                route := Route{
-                    MachineID:  machine.ID,
-                    Advertised: true,
-                    Enabled:    true,
-                    Prefix:     IPPrefix(prefix),
-                }
-                if err := h.db.Create(&route).Error; err != nil {
-                    log.Error().Err(err).Msg("Error creating route")
-                } else {
-                    log.Info().
-                        Uint64("machine_id", route.MachineID).
-                        Str("prefix", prefix.String()).
-                        Msg("Route migrated")
-                }
-            }
-        }
-
-        err = db.Migrator().DropColumn(&Machine{}, "enabled_routes")
-        if err != nil {
-            log.Error().Err(err).Msg("Error dropping enabled_routes column")
-        }
-    }
-
 
     err = db.AutoMigrate(&Machine{})
     if err != nil {
         return err
@@ -197,6 +121,11 @@ func (h *Headscale) initDB() error {
         return err
     }
 
+    err = db.AutoMigrate(&Namespace{})
+    if err != nil {
+        return err
+    }
+
     err = db.AutoMigrate(&PreAuthKey{})
     if err != nil {
         return err
@@ -335,30 +264,6 @@ func (hi HostInfo) Value() (driver.Value, error) {
     return string(bytes), err
 }
 
-type IPPrefix netip.Prefix
-
-func (i *IPPrefix) Scan(destination interface{}) error {
-    switch value := destination.(type) {
-    case string:
-        prefix, err := netip.ParsePrefix(value)
-        if err != nil {
-            return err
-        }
-        *i = IPPrefix(prefix)
-
-        return nil
-    default:
-        return fmt.Errorf("%w: unexpected data type %T", ErrCannotParsePrefix, destination)
-    }
-}
-
-// Value return json value, implement driver.Valuer interface.
-func (i IPPrefix) Value() (driver.Value, error) {
-    prefixStr := netip.Prefix(i).String()
-
-    return prefixStr, nil
-}
-
 type IPPrefixes []netip.Prefix
 
 func (i *IPPrefixes) Scan(destination interface{}) error {
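The removed `IPPrefix` type above is what lets the ORM store a `netip.Prefix` in a plain text column: `Value` serialises the prefix when writing, `Scan` parses it back when reading. Below is a minimal, self-contained sketch of that round trip; the lowercase type and error names are illustrative only and are not headscale identifiers.

```go
// Sketch of the sql.Scanner / driver.Valuer pattern used by the IPPrefix type
// in the diff above. Names here (ipPrefix, errCannotParsePrefix) are made up.
package main

import (
	"database/sql/driver"
	"errors"
	"fmt"
	"net/netip"
)

var errCannotParsePrefix = errors.New("cannot parse prefix")

type ipPrefix netip.Prefix

// Scan parses the string read from the database back into a prefix.
func (i *ipPrefix) Scan(src interface{}) error {
	s, ok := src.(string)
	if !ok {
		return fmt.Errorf("%w: unexpected type %T", errCannotParsePrefix, src)
	}
	p, err := netip.ParsePrefix(s)
	if err != nil {
		return err
	}
	*i = ipPrefix(p)
	return nil
}

// Value serialises the prefix as its canonical string form for storage.
func (i ipPrefix) Value() (driver.Value, error) {
	return netip.Prefix(i).String(), nil
}

func main() {
	original := ipPrefix(netip.MustParsePrefix("10.0.0.0/24"))

	stored, _ := original.Value() // what the ORM would write to the column

	var loaded ipPrefix
	if err := loaded.Scan(stored); err != nil { // what the ORM would do when reading
		panic(err)
	}
	fmt.Println(netip.Prefix(loaded)) // 10.0.0.0/24
}
```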
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale
 
 import (
     "context"
@@ -10,7 +10,7 @@ import (
     "time"
 
     "github.com/rs/zerolog/log"
-    "gopkg.in/yaml.v3"
+    "gopkg.in/yaml.v2"
     "tailscale.com/tailcfg"
 )
 
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale
 
 import (
     "context"
@@ -157,14 +157,14 @@ func (h *Headscale) DERPHandler(
 
     if !fastStart {
         pubKey := h.privateKey.Public()
-        pubKeyStr, _ := pubKey.MarshalText() //nolint
+        pubKeyStr := pubKey.UntypedHexString() //nolint
         fmt.Fprintf(conn, "HTTP/1.1 101 Switching Protocols\r\n"+
             "Upgrade: DERP\r\n"+
             "Connection: Upgrade\r\n"+
             "Derp-Version: %v\r\n"+
             "Derp-Public-Key: %s\r\n\r\n",
             derp.ProtocolVersion,
-            string(pubKeyStr))
+            pubKeyStr)
     }
 
     h.DERPServer.tailscaleDERP.Accept(req.Context(), netConn, conn, netConn.RemoteAddr().String())
@@ -1,15 +1,13 @@
-package hscontrol
+package headscale
 
 import (
     "fmt"
     "net/netip"
-    "net/url"
     "strings"
 
     mapset "github.com/deckarep/golang-set/v2"
     "go4.org/netipx"
     "tailscale.com/tailcfg"
-    "tailscale.com/types/dnstype"
     "tailscale.com/util/dnsname"
 )
 
@@ -22,10 +20,6 @@ const (
     ipv6AddressLength = 128
 )
 
-const (
-    nextDNSDoHPrefix = "https://dns.nextdns.io"
-)
-
 // generateMagicDNSRootDomains generates a list of DNS entries to be included in `Routes` in `MapResponse`.
 // This list of reverse DNS entries instructs the OS on what subnets and domains the Tailscale embedded DNS
 // server (listening in 100.100.100.100 udp/53) should be used for.
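For context on the `generateMagicDNSRootDomains` comment above: MagicDNS routing works by handing the OS a set of reverse-DNS root domains (in-addr.arpa / ip6.arpa zones) derived from the tailnet prefix. The snippet below is a small standalone sketch of that idea for an IPv4 prefix on an octet boundary; it is not headscale's implementation, which also handles multiple prefixes, non-octet-aligned masks such as the default /10, and IPv6.

```go
// Minimal sketch: derive the in-addr.arpa root domain for an IPv4 /16 prefix.
// Illustrative only; the real function covers many more cases.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("100.64.0.0/16")
	octets := prefix.Addr().As4()

	// Reverse zones are written least-significant octet first, so a /16
	// covering 100.64.0.0 maps to "64.100.in-addr.arpa."
	fmt.Printf("%d.%d.in-addr.arpa.\n", octets[1], octets[0])
}
```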
@@ -158,62 +152,37 @@ func generateIPv6DNSRootDomain(ipPrefix netip.Prefix) []dnsname.FQDN {
     return fqdns
 }
 
-// If any nextdns DoH resolvers are present in the list of resolvers it will
-// take metadata from the machine metadata and instruct tailscale to add it
-// to the requests. This makes it possible to identify from which device the
-// requests come in the NextDNS dashboard.
-//
-// This will produce a resolver like:
-// `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
-func addNextDNSMetadata(resolvers []*dnstype.Resolver, machine Machine) {
-    for _, resolver := range resolvers {
-        if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
-            attrs := url.Values{
-                "device_name":  []string{machine.Hostname},
-                "device_model": []string{machine.HostInfo.OS},
-            }
-
-            if len(machine.IPAddresses) > 0 {
-                attrs.Add("device_ip", machine.IPAddresses[0].String())
-            }
-
-            resolver.Addr = fmt.Sprintf("%s?%s", resolver.Addr, attrs.Encode())
-        }
-    }
-}
-
 func getMapResponseDNSConfig(
     dnsConfigOrig *tailcfg.DNSConfig,
     baseDomain string,
     machine Machine,
     peers Machines,
 ) *tailcfg.DNSConfig {
-    var dnsConfig *tailcfg.DNSConfig = dnsConfigOrig.Clone()
+    var dnsConfig *tailcfg.DNSConfig
     if dnsConfigOrig != nil && dnsConfigOrig.Proxied { // if MagicDNS is enabled
-        // Only inject the Search Domain of the current user - shared nodes should use their full FQDN
+        // Only inject the Search Domain of the current namespace - shared nodes should use their full FQDN
+        dnsConfig = dnsConfigOrig.Clone()
         dnsConfig.Domains = append(
             dnsConfig.Domains,
             fmt.Sprintf(
                 "%s.%s",
-                machine.User.Name,
+                machine.Namespace.Name,
                 baseDomain,
             ),
         )
 
-        userSet := mapset.NewSet[User]()
-        userSet.Add(machine.User)
+        namespaceSet := mapset.NewSet[Namespace]()
+        namespaceSet.Add(machine.Namespace)
         for _, p := range peers {
-            userSet.Add(p.User)
+            namespaceSet.Add(p.Namespace)
         }
-        for _, user := range userSet.ToSlice() {
-            dnsRoute := fmt.Sprintf("%v.%v", user.Name, baseDomain)
+        for _, namespace := range namespaceSet.ToSlice() {
+            dnsRoute := fmt.Sprintf("%v.%v", namespace.Name, baseDomain)
             dnsConfig.Routes[dnsRoute] = nil
         }
     } else {
         dnsConfig = dnsConfigOrig
     }
 
-    addNextDNSMetadata(dnsConfig.Resolvers, machine)
-
     return dnsConfig
 }
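The removed `addNextDNSMetadata` hunk above works by appending device metadata as query parameters to any NextDNS DoH resolver address, so requests can be attributed per device in the NextDNS dashboard. A stand-alone sketch of that URL rewriting follows; the profile ID and device values are made-up examples, not values from this repository.

```go
// Sketch of the query-parameter rewriting described in the removed
// addNextDNSMetadata comment above. All concrete values are hypothetical.
package main

import (
	"fmt"
	"net/url"
)

func main() {
	resolver := "https://dns.nextdns.io/abc123" // hypothetical NextDNS profile URL

	attrs := url.Values{
		"device_name":  []string{"node-name"},
		"device_model": []string{"linux"},
	}
	attrs.Add("device_ip", "100.64.0.1")

	// url.Values.Encode sorts keys, so the parameter order is deterministic.
	fmt.Printf("%s?%s\n", resolver, attrs.Encode())
	// https://dns.nextdns.io/abc123?device_ip=100.64.0.1&device_model=linux&device_name=node-name
}
```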
@@ -1,4 +1,4 @@
-package hscontrol
+package headscale
 
 import (
     "fmt"
@@ -112,17 +112,17 @@ func (s *Suite) TestMagicDNSRootDomainsIPv6SingleMultiple(c *check.C) {
 }
 
 func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
-    userShared1, err := app.CreateUser("shared1")
+    namespaceShared1, err := app.CreateNamespace("shared1")
     c.Assert(err, check.IsNil)
 
-    userShared2, err := app.CreateUser("shared2")
+    namespaceShared2, err := app.CreateNamespace("shared2")
     c.Assert(err, check.IsNil)
 
-    userShared3, err := app.CreateUser("shared3")
+    namespaceShared3, err := app.CreateNamespace("shared3")
     c.Assert(err, check.IsNil)
 
     preAuthKeyInShared1, err := app.CreatePreAuthKey(
-        userShared1.Name,
+        namespaceShared1.Name,
         false,
         false,
         nil,
@@ -131,7 +131,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
     c.Assert(err, check.IsNil)
 
     preAuthKeyInShared2, err := app.CreatePreAuthKey(
-        userShared2.Name,
+        namespaceShared2.Name,
         false,
         false,
         nil,
@@ -140,7 +140,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
     c.Assert(err, check.IsNil)
 
     preAuthKeyInShared3, err := app.CreatePreAuthKey(
-        userShared3.Name,
+        namespaceShared3.Name,
         false,
         false,
         nil,
@@ -149,7 +149,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
     c.Assert(err, check.IsNil)
 
     PreAuthKey2InShared1, err := app.CreatePreAuthKey(
-        userShared1.Name,
+        namespaceShared1.Name,
         false,
         false,
         nil,
@@ -157,7 +157,7 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
     )
     c.Assert(err, check.IsNil)
 
-    _, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
+    _, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
     c.Assert(err, check.NotNil)
 
     machineInShared1 := &Machine{
@@ -166,15 +166,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
         NodeKey:        "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
         DiscoKey:       "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
         Hostname:       "test_get_shared_nodes_1",
-        UserID:         userShared1.ID,
-        User:           *userShared1,
+        NamespaceID:    namespaceShared1.ID,
+        Namespace:      *namespaceShared1,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.1")},
         AuthKeyID:      uint(preAuthKeyInShared1.ID),
     }
     app.db.Save(machineInShared1)
 
-    _, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
+    _, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
     c.Assert(err, check.IsNil)
 
     machineInShared2 := &Machine{
@@ -183,15 +183,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
         NodeKey:        "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         DiscoKey:       "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         Hostname:       "test_get_shared_nodes_2",
-        UserID:         userShared2.ID,
-        User:           *userShared2,
+        NamespaceID:    namespaceShared2.ID,
+        Namespace:      *namespaceShared2,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.2")},
         AuthKeyID:      uint(preAuthKeyInShared2.ID),
     }
     app.db.Save(machineInShared2)
 
-    _, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
+    _, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
     c.Assert(err, check.IsNil)
 
     machineInShared3 := &Machine{
@@ -200,15 +200,15 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
         NodeKey:        "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         DiscoKey:       "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         Hostname:       "test_get_shared_nodes_3",
-        UserID:         userShared3.ID,
-        User:           *userShared3,
+        NamespaceID:    namespaceShared3.ID,
+        Namespace:      *namespaceShared3,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.3")},
         AuthKeyID:      uint(preAuthKeyInShared3.ID),
     }
     app.db.Save(machineInShared3)
 
-    _, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
+    _, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
     c.Assert(err, check.IsNil)
 
     machine2InShared1 := &Machine{
@@ -217,8 +217,8 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
         NodeKey:        "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         DiscoKey:       "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         Hostname:       "test_get_shared_nodes_4",
-        UserID:         userShared1.ID,
-        User:           *userShared1,
+        NamespaceID:    namespaceShared1.ID,
+        Namespace:      *namespaceShared1,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.4")},
         AuthKeyID:      uint(PreAuthKey2InShared1.ID),
@@ -245,31 +245,31 @@ func (s *Suite) TestDNSConfigMapResponseWithMagicDNS(c *check.C) {
 
     c.Assert(len(dnsConfig.Routes), check.Equals, 3)
 
-    domainRouteShared1 := fmt.Sprintf("%s.%s", userShared1.Name, baseDomain)
+    domainRouteShared1 := fmt.Sprintf("%s.%s", namespaceShared1.Name, baseDomain)
     _, ok := dnsConfig.Routes[domainRouteShared1]
     c.Assert(ok, check.Equals, true)
 
-    domainRouteShared2 := fmt.Sprintf("%s.%s", userShared2.Name, baseDomain)
+    domainRouteShared2 := fmt.Sprintf("%s.%s", namespaceShared2.Name, baseDomain)
     _, ok = dnsConfig.Routes[domainRouteShared2]
     c.Assert(ok, check.Equals, true)
 
-    domainRouteShared3 := fmt.Sprintf("%s.%s", userShared3.Name, baseDomain)
+    domainRouteShared3 := fmt.Sprintf("%s.%s", namespaceShared3.Name, baseDomain)
     _, ok = dnsConfig.Routes[domainRouteShared3]
     c.Assert(ok, check.Equals, true)
 }
 
 func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
-    userShared1, err := app.CreateUser("shared1")
+    namespaceShared1, err := app.CreateNamespace("shared1")
     c.Assert(err, check.IsNil)
 
-    userShared2, err := app.CreateUser("shared2")
+    namespaceShared2, err := app.CreateNamespace("shared2")
     c.Assert(err, check.IsNil)
 
-    userShared3, err := app.CreateUser("shared3")
+    namespaceShared3, err := app.CreateNamespace("shared3")
     c.Assert(err, check.IsNil)
 
     preAuthKeyInShared1, err := app.CreatePreAuthKey(
-        userShared1.Name,
+        namespaceShared1.Name,
         false,
         false,
         nil,
@@ -278,7 +278,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
     c.Assert(err, check.IsNil)
 
     preAuthKeyInShared2, err := app.CreatePreAuthKey(
-        userShared2.Name,
+        namespaceShared2.Name,
        false,
        false,
        nil,
@@ -287,7 +287,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
     c.Assert(err, check.IsNil)
 
     preAuthKeyInShared3, err := app.CreatePreAuthKey(
-        userShared3.Name,
+        namespaceShared3.Name,
        false,
        false,
        nil,
@@ -296,7 +296,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
     c.Assert(err, check.IsNil)
 
     preAuthKey2InShared1, err := app.CreatePreAuthKey(
-        userShared1.Name,
+        namespaceShared1.Name,
        false,
        false,
        nil,
@@ -304,7 +304,7 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
     )
     c.Assert(err, check.IsNil)
 
-    _, err = app.GetMachine(userShared1.Name, "test_get_shared_nodes_1")
+    _, err = app.GetMachine(namespaceShared1.Name, "test_get_shared_nodes_1")
     c.Assert(err, check.NotNil)
 
     machineInShared1 := &Machine{
@@ -313,15 +313,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
         NodeKey:        "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
         DiscoKey:       "686824e749f3b7f2a5927ee6c1e422aee5292592d9179a271ed7b3e659b44a66",
         Hostname:       "test_get_shared_nodes_1",
-        UserID:         userShared1.ID,
-        User:           *userShared1,
+        NamespaceID:    namespaceShared1.ID,
+        Namespace:      *namespaceShared1,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.1")},
         AuthKeyID:      uint(preAuthKeyInShared1.ID),
     }
     app.db.Save(machineInShared1)
 
-    _, err = app.GetMachine(userShared1.Name, machineInShared1.Hostname)
+    _, err = app.GetMachine(namespaceShared1.Name, machineInShared1.Hostname)
     c.Assert(err, check.IsNil)
 
     machineInShared2 := &Machine{
@@ -330,15 +330,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
         NodeKey:        "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         DiscoKey:       "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         Hostname:       "test_get_shared_nodes_2",
-        UserID:         userShared2.ID,
-        User:           *userShared2,
+        NamespaceID:    namespaceShared2.ID,
+        Namespace:      *namespaceShared2,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.2")},
         AuthKeyID:      uint(preAuthKeyInShared2.ID),
     }
     app.db.Save(machineInShared2)
 
-    _, err = app.GetMachine(userShared2.Name, machineInShared2.Hostname)
+    _, err = app.GetMachine(namespaceShared2.Name, machineInShared2.Hostname)
     c.Assert(err, check.IsNil)
 
     machineInShared3 := &Machine{
@@ -347,15 +347,15 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
         NodeKey:        "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         DiscoKey:       "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         Hostname:       "test_get_shared_nodes_3",
-        UserID:         userShared3.ID,
-        User:           *userShared3,
+        NamespaceID:    namespaceShared3.ID,
+        Namespace:      *namespaceShared3,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.3")},
         AuthKeyID:      uint(preAuthKeyInShared3.ID),
     }
     app.db.Save(machineInShared3)
 
-    _, err = app.GetMachine(userShared3.Name, machineInShared3.Hostname)
+    _, err = app.GetMachine(namespaceShared3.Name, machineInShared3.Hostname)
     c.Assert(err, check.IsNil)
 
     machine2InShared1 := &Machine{
@@ -364,8 +364,8 @@ func (s *Suite) TestDNSConfigMapResponseWithoutMagicDNS(c *check.C) {
         NodeKey:        "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         DiscoKey:       "dec46ef9dc45c7d2f03bfcd5a640d9e24e3cc68ce3d9da223867c9bc6d5e9863",
         Hostname:       "test_get_shared_nodes_4",
-        UserID:         userShared1.ID,
-        User:           *userShared1,
+        NamespaceID:    namespaceShared1.ID,
+        Namespace:      *namespaceShared1,
         RegisterMethod: RegisterMethodAuthKey,
         IPAddresses:    []netip.Addr{netip.MustParseAddr("100.64.0.4")},
         AuthKeyID:      uint(preAuthKey2InShared1.ID),
54	docs/README.md	Normal file
@@ -0,0 +1,54 @@
+# headscale documentation
+
+This page contains the official and community contributed documentation for `headscale`.
+
+If you are having trouble with following the documentation or get unexpected results,
+please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.
+
+## Official documentation
+
+### How-to
+
+- [Running headscale on Linux](running-headscale-linux.md)
+- [Control headscale remotely](remote-cli.md)
+- [Using a Windows client with headscale](windows-client.md)
+
+### References
+
+- [Configuration](../config-example.yaml)
+- [Glossary](glossary.md)
+- [TLS](tls.md)
+
+## Community documentation
+
+Community documentation is not actively maintained by the headscale authors and is
+written by community members. It is _not_ verified by `headscale` developers.
+
+**It might be outdated and it might miss necessary steps**.
+
+- [Running headscale in a container](running-headscale-container.md)
+- [Running headscale on OpenBSD](running-headscale-openbsd.md)
+- [Running headscale behind a reverse proxy](reverse-proxy.md)
+
+## Misc
+
+### Policy ACLs
+
+Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
+
+For instance, instead of referring to users when defining groups you must
+use namespaces (which are the equivalent to user/logins in Tailscale.com).
+
+Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
+
+When using ACL's the Namespace borders are no longer applied. All machines
+whichever the Namespace have the ability to communicate with other hosts as
+long as the ACL's permits this exchange.
+
+The [ACLs](acls.md) document should help understand a fictional case of setting
+up ACLs in a small company. All concepts presented in this document could be
+applied outside of business oriented usage.
+
+### Apple devices
+
+An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.
25	docs/acls.md
@@ -1,15 +1,4 @@
-Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
+# ACLs use case example
 
-For instance, instead of referring to users when defining groups you must
-use users (which are the equivalent to user/logins in Tailscale.com).
-
-Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
-
-When using ACL's the User borders are no longer applied. All machines
-whichever the User have the ability to communicate with other hosts as
-long as the ACL's permits this exchange.
-
-## ACLs use case example
-
 Let's build an example use case for a small business (It may be the place where
 ACL's are the most useful).
@@ -40,21 +29,19 @@ servers.
 
 ## ACL setup
 
-Note: Users will be created automatically when users authenticate with the
+Note: Namespaces will be created automatically when users authenticate with the
 Headscale server.
 
 ACLs could be written either on [huJSON](https://github.com/tailscale/hujson)
 or YAML. Check the [test ACLs](../tests/acls) for further information.
 
 When registering the servers we will need to add the flag
-`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
+`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
 registering the server should be allowed to do it. Since anyone can add tags to
 a server they can register, the check of the tags is done on headscale server
-and only valid tags are applied. A tag is valid if the user that is
+and only valid tags are applied. A tag is valid if the namespace that is
 registering it is allowed to do it.
 
-To use ACLs in headscale, you must edit your config.yaml file. In there you will find a `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
-
 Here are the ACL's to implement the same permissions as above:
 
 ```json
@@ -177,8 +164,8 @@ Here are the ACL's to implement the same permissions as above:
         "dst": ["tag:dev-app-servers:80,443"]
       },
 
-      // We still have to allow internal users communications since nothing guarantees that each user have
-      // their own users.
+      // We still have to allow internal namespaces communications since nothing guarantees that each user have
+      // their own namespaces.
       { "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
       { "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
      { "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },
@@ -1,90 +0,0 @@
-# Setting custom DNS records
-
-!!! warning "Community documentation"
-
-    This page is not actively maintained by the headscale authors and is
-    written by community members. It is _not_ verified by `headscale` developers.
-
-    **It might be outdated and it might miss necessary steps**.
-
-## Goal
-
-This documentation has the goal of showing how a user can set custom DNS records with `headscale`s magic dns.
-An example use case is to serve apps on the same host via a reverse proxy like NGINX, in this case a Prometheus monitoring stack. This allows to nicely access the service with "http://grafana.myvpn.example.com" instead of the hostname and portnum combination "http://hostname-in-magic-dns.myvpn.example.com:3000".
-
-## Setup
-
-### 1. Change the configuration
-
-1. Change the `config.yaml` to contain the desired records like so:
-
-   ```yaml
-   dns_config:
-     ...
-     extra_records:
-       - name: "prometheus.myvpn.example.com"
-         type: "A"
-         value: "100.64.0.3"
-
-       - name: "grafana.myvpn.example.com"
-         type: "A"
-         value: "100.64.0.3"
-     ...
-   ```
-
-2. Restart your headscale instance.
-
-Beware of the limitations listed later on!
-
-### 2. Verify that the records are set
-
-You can use a DNS querying tool of your choice on one of your hosts to verify that your newly set records are actually available in MagicDNS, here we used [`dig`](https://man.archlinux.org/man/dig.1.en):
-
-```
-$ dig grafana.myvpn.example.com
-
-; <<>> DiG 9.18.10 <<>> grafana.myvpn.example.com
-;; global options: +cmd
-;; Got answer:
-;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44054
-;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
-
-;; OPT PSEUDOSECTION:
-; EDNS: version: 0, flags:; udp: 65494
-;; QUESTION SECTION:
-;grafana.myvpn.example.com. IN A
-
-;; ANSWER SECTION:
-grafana.myvpn.example.com. 593 IN A 100.64.0.3
-
-;; Query time: 0 msec
-;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
-;; WHEN: Sat Dec 31 11:46:55 CET 2022
-;; MSG SIZE rcvd: 66
-```
-
-### 3. Optional: Setup the reverse proxy
-
-The motivating example here was to be able to access internal monitoring services on the same host without specifying a port:
-
-```
-server {
-    listen 80;
-    listen [::]:80;
-
-    server_name grafana.myvpn.example.com;
-
-    location / {
-        proxy_pass http://localhost:3000;
-        proxy_set_header Host $http_host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Proto $scheme;
-    }
-
-}
-```
-
-## Limitations
-
-[Not all types of records are supported](https://github.com/tailscale/tailscale/blob/6edf357b96b28ee1be659a70232c0135b2ffedfd/ipn/ipnlocal/local.go#L2989-L3007), especially no CNAME records.
Some files were not shown because too many files have changed in this diff.