Mirror of https://github.com/juanfont/headscale.git (synced 2025-08-17 18:07:43 +00:00)

Compare commits: revert-141...fix-proto-

1 commit: 41515532b6

.github/FUNDING.yml (vendored): 5 changes
@@ -1,3 +1,2 @@
-# These are supported funding model platforms
-
-ko_fi: headscale
+ko_fi: kradalby
+github: [kradalby]

.github/ISSUE_TEMPLATE/bug_report.md (vendored): 36 changes
@@ -6,24 +6,19 @@ labels: ["bug"]
 assignees: ""
 ---
 
-<!--
-Before posting a bug report, discuss the behaviour you are expecting with the Discord community
-to make sure that it is truly a bug.
-The issue tracker is not the place to ask for support or how to set up Headscale.
-
-Bug reports without the sufficient information will be closed.
-
-Headscale is a multinational community across the globe. Our language is English.
-All bug reports needs to be in English.
--->
-
-## Bug description
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the bug report in this language. -->
+
+**Bug description**
 
 <!-- A clear and concise description of what the bug is. Describe the expected bahavior
 and how it is currently different. If you are unsure if it is a bug, consider discussing
 it on our Discord server first. -->
 
-## Environment
+**To Reproduce**
 
+<!-- Steps to reproduce the behavior. -->
+
+**Context info**
+
 <!-- Please add relevant information about your system. For example:
 - Version of headscale used
@@ -33,20 +28,3 @@ All bug reports needs to be in English.
 - The relevant config parameters you used
 - Log output
 -->
-
-- OS:
-- Headscale version:
-- Tailscale version:
-
-<!--
-We do not support running Headscale in a container nor behind a (reverse) proxy.
-If either of these are true for your environment, ask the community in Discord
-instead of filing a bug report.
--->
-
-- [ ] Headscale is behind a (reverse) proxy
-- [ ] Headscale runs in a container
-
-## To Reproduce
-
-<!-- Steps to reproduce the behavior. -->

.github/ISSUE_TEMPLATE/feature_request.md (vendored): 21 changes
@@ -6,21 +6,12 @@ labels: ["enhancement"]
 assignees: ""
 ---
 
-<!--
-We typically have a clear roadmap for what we want to improve and reserve the right
-to close feature requests that does not fit in the roadmap, or fit with the scope
-of the project, or we actually want to implement ourselves.
-
-Headscale is a multinational community across the globe. Our language is English.
-All bug reports needs to be in English.
--->
-
-## Why
-
-<!-- Include the reason, why you would need the feature. E.g. what problem
-does it solve? Or which workflow is currently frustrating and will be improved by
-this? -->
-
-## Description
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the feature request in this language. -->
+
+**Feature request**
 
 <!-- A clear and precise description of what new or changed feature you want. -->
+
+<!-- Please include the reason, why you would need the feature. E.g. what problem
+does it solve? Or which workflow is currently frustrating and will be improved by
+this? -->

.github/ISSUE_TEMPLATE/other_issue.md (vendored, new file): 30 changes
@@ -0,0 +1,30 @@
+---
+name: "Other issue"
+about: "Report a different issue"
+title: ""
+labels: ["bug"]
+assignees: ""
+---
+
+<!-- Headscale is a multinational community across the globe. Our common language is English. Please consider raising the issue in this language. -->
+
+<!-- If you have a question, please consider using our Discord for asking questions -->
+
+**Issue description**
+
+<!-- Please add your issue description. -->
+
+**To Reproduce**
+
+<!-- Steps to reproduce the behavior. -->
+
+**Context info**
+
+<!-- Please add relevant information about your system. For example:
+- Version of headscale used
+- Version of tailscale client
+- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
+- Kernel version
+- The relevant config parameters you used
+- Log output
+-->

.github/pull_request_template.md (vendored): 12 changes
@@ -1,15 +1,3 @@
-<!--
-Headscale is "Open Source, acknowledged contribution", this means that any
-contribution will have to be discussed with the Maintainers before being submitted.
-
-This model has been chosen to reduce the risk of burnout by limiting the
-maintenance overhead of reviewing and validating third-party code.
-
-Headscale is open to code contributions for bug fixes without discussion.
-
-If you find mistakes in the documentation, please submit a fix to the documentation.
--->
-
 <!-- Please tick if the following things apply. You… -->
 
 - [ ] read the [CONTRIBUTING guidelines](README.md#contributing)

.github/renovate.json (vendored): 24 changes
@@ -6,27 +6,31 @@
   "onboarding": false,
   "extends": ["config:base", ":rebaseStalePrs"],
   "ignorePresets": [":prHourlyLimit2"],
-  "enabledManagers": ["dockerfile", "gomod", "github-actions", "regex"],
+  "enabledManagers": ["dockerfile", "gomod", "github-actions","regex" ],
   "includeForks": true,
   "repositories": ["juanfont/headscale"],
   "platform": "github",
   "packageRules": [
     {
       "matchDatasources": ["go"],
       "groupName": "Go modules",
       "groupSlug": "gomod",
       "separateMajorMinor": false
     },
     {
       "matchDatasources": ["docker"],
       "groupName": "Dockerfiles",
       "groupSlug": "dockerfiles"
     }
   ],
   "regexManagers": [
     {
-      "fileMatch": [".github/workflows/.*.yml$"],
-      "matchStrings": ["\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"],
+      "fileMatch": [
+        ".github/workflows/.*.yml$"
+      ],
+      "matchStrings": [
+        "\\s*go-version:\\s*\"?(?<currentValue>.*?)\"?\\n"
+      ],
       "datasourceTemplate": "golang-version",
       "depNameTemplate": "actions/go-version"
     }
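
The regexManagers entry above makes Renovate treat `go-version:` lines in the workflow files as a dependency on `actions/go-version`. A hedged Go sketch of what that matchString captures; note that Go's regexp package wants `(?P<name>...)` where the Renovate (JavaScript) config uses `(?<name>...)`, and the sample workflow fragment below is only an illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the renovate.json matchString, with the named group
	// rewritten from (?<currentValue>...) to Go's (?P<currentValue>...).
	re := regexp.MustCompile(`\s*go-version:\s*"?(?P<currentValue>.*?)"?\n`)

	// Illustrative workflow fragment.
	workflow := "      - uses: actions/setup-go@v2\n        with:\n          go-version: 1.19.0\n"

	m := re.FindStringSubmatch(workflow)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("currentValue:", m[re.SubexpIndex("currentValue")]) // currentValue: 1.19.0
}
```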

.github/workflows/build.yml (vendored): 37 changes
@@ -8,23 +8,18 @@ on:
     branches:
       - main
 
-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
 jobs:
   build:
     runs-on: ubuntu-latest
-    permissions: write-all
 
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
         with:
           fetch-depth: 2
 
       - name: Get changed files
         id: changed-files
-        uses: tj-actions/changed-files@v34
+        uses: tj-actions/changed-files@v14.1
         with:
           files: |
             *.nix
@@ -37,34 +32,10 @@ jobs:
         if: steps.changed-files.outputs.any_changed == 'true'
 
       - name: Run build
-        id: build
         if: steps.changed-files.outputs.any_changed == 'true'
-        run: |
-          nix build |& tee build-result
-          BUILD_STATUS="${PIPESTATUS[0]}"
-
-          OLD_HASH=$(cat build-result | grep specified: | awk -F ':' '{print $2}' | sed 's/ //g')
-          NEW_HASH=$(cat build-result | grep got: | awk -F ':' '{print $2}' | sed 's/ //g')
-
-          echo "OLD_HASH=$OLD_HASH" >> $GITHUB_OUTPUT
-          echo "NEW_HASH=$NEW_HASH" >> $GITHUB_OUTPUT
-
-          exit $BUILD_STATUS
-
-      - name: Nix gosum diverging
-        uses: actions/github-script@v6
-        if: failure() && steps.build.outcome == 'failure'
-        with:
-          github-token: ${{secrets.GITHUB_TOKEN}}
-          script: |
-            github.rest.pulls.createReviewComment({
-              pull_number: context.issue.number,
-              owner: context.repo.owner,
-              repo: context.repo.repo,
-              body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
-            })
-
-      - uses: actions/upload-artifact@v3
+        run: nix build
+
+      - uses: actions/upload-artifact@v2
         if: steps.changed-files.outputs.any_changed == 'true'
         with:
           name: headscale-linux
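
The removed Run build step pipes `nix build` output through grep/awk/sed to recover the `specified:` and `got:` hashes that Nix prints when `vendorSha256` is stale, and exports them as step outputs for the follow-up review comment. A rough Go sketch of the same extraction, reading the build log from stdin; a hedged illustration, not code from the repository:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Reads a `nix build` log on stdin and prints the hashes the workflow step
// above extracts with grep/awk/sed ("specified:" = old hash, "got:" = new).
func main() {
	var oldHash, newHash string
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		if _, rest, ok := strings.Cut(line, "specified:"); ok {
			oldHash = strings.TrimSpace(rest)
		} else if _, rest, ok := strings.Cut(line, "got:"); ok {
			newHash = strings.TrimSpace(rest)
		}
	}
	// The workflow appends these to $GITHUB_OUTPUT as OLD_HASH / NEW_HASH.
	fmt.Printf("OLD_HASH=%s\nNEW_HASH=%s\n", oldHash, newHash)
}
```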

.github/workflows/contributors.yml (vendored): 2 changes
@@ -9,7 +9,7 @@ jobs:
   add-contributors:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
       - name: Delete upstream contributor branch
         # Allow continue on failure to account for when the
         # upstream branch is deleted or does not exist.

.github/workflows/docs.yml (vendored): 45 changes
@@ -1,45 +0,0 @@
-name: Build documentation
-
-on:
-  push:
-    branches:
-      - main
-  workflow_dispatch:
-
-permissions:
-  contents: read
-  pages: write
-  id-token: write
-
-jobs:
-  build:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@v3
-      - name: Install python
-        uses: actions/setup-python@v4
-        with:
-          python-version: 3.x
-      - name: Setup cache
-        uses: actions/cache@v2
-        with:
-          key: ${{ github.ref }}
-          path: .cache
-      - name: Setup dependencies
-        run: pip install mkdocs-material pillow cairosvg mkdocs-minify-plugin
-      - name: Build docs
-        run: mkdocs build --strict
-      - name: Upload artifact
-        uses: actions/upload-pages-artifact@v1
-        with:
-          path: ./site
-  deploy:
-    environment:
-      name: github-pages
-      url: ${{ steps.deployment.outputs.page_url }}
-    runs-on: ubuntu-latest
-    needs: build
-    steps:
-      - name: Deploy to GitHub Pages
-        id: deployment
-        uses: actions/deploy-pages@v1

.github/workflows/gh-actions-updater.yaml (vendored): 23 changes
@@ -1,23 +0,0 @@
-name: GitHub Actions Version Updater
-
-# Controls when the action will run.
-on:
-  schedule:
-    # Automatically run on every Sunday
-    - cron: "0 0 * * 0"
-
-jobs:
-  build:
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v2
-        with:
-          # [Required] Access token with `workflow` scope.
-          token: ${{ secrets.WORKFLOW_SECRET }}
-
-      - name: Run GitHub Actions Version Updater
-        uses: saadmk11/github-actions-version-updater@v0.7.1
-        with:
-          # [Required] Access token with `workflow` scope.
-          token: ${{ secrets.WORKFLOW_SECRET }}

.github/workflows/lint.yml (vendored): 16 changes
@@ -1,23 +1,19 @@
 ---
-name: Lint
+name: CI
 
 on: [push, pull_request]
 
-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
 jobs:
   golangci-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
        with:
          fetch-depth: 2
 
       - name: Get changed files
         id: changed-files
-        uses: tj-actions/changed-files@v34
+        uses: tj-actions/changed-files@v14.1
         with:
           files: |
             *.nix
@@ -30,7 +26,7 @@ jobs:
         if: steps.changed-files.outputs.any_changed == 'true'
         uses: golangci/golangci-lint-action@v2
         with:
-          version: v1.51.2
+          version: v1.49.0
 
           # Only block PRs on new problems.
           # If this is not enabled, we will end up having PRs
@@ -63,7 +59,7 @@ jobs:
 
       - name: Prettify code
         if: steps.changed-files.outputs.any_changed == 'true'
-        uses: creyD/prettier_action@v4.3
+        uses: creyD/prettier_action@v4.0
         with:
           prettier_options: >-
             --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
@@ -74,7 +70,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v2
-      - uses: bufbuild/buf-setup-action@v1.7.0
+      - uses: bufbuild/buf-setup-action@v1
       - uses: bufbuild/buf-lint-action@v1
         with:
           input: "proto"

.github/workflows/release-docker.yml (vendored): 138 changes
@@ -1,138 +0,0 @@
----
-name: Release Docker
-
-on:
-  push:
-    tags:
-      - "*" # triggers only if push new tag version
-  workflow_dispatch:
-
-jobs:
-  docker-release:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 0
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1
-      - name: Set up QEMU for multiple platforms
-        uses: docker/setup-qemu-action@master
-        with:
-          platforms: arm64,amd64
-      - name: Cache Docker layers
-        uses: actions/cache@v2
-        with:
-          path: /tmp/.buildx-cache
-          key: ${{ runner.os }}-buildx-${{ github.sha }}
-          restore-keys: |
-            ${{ runner.os }}-buildx-
-      - name: Docker meta
-        id: meta
-        uses: docker/metadata-action@v3
-        with:
-          # list of Docker images to use as base name for tags
-          images: |
-            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
-            ghcr.io/${{ github.repository_owner }}/headscale
-          tags: |
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            type=semver,pattern={{major}}
-            type=sha
-            type=raw,value=develop
-      - name: Login to DockerHub
-        uses: docker/login-action@v1
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Login to GHCR
-        uses: docker/login-action@v1
-        with:
-          registry: ghcr.io
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build and push
-        id: docker_build
-        uses: docker/build-push-action@v2
-        with:
-          push: true
-          context: .
-          tags: ${{ steps.meta.outputs.tags }}
-          labels: ${{ steps.meta.outputs.labels }}
-          platforms: linux/amd64,linux/arm64
-          cache-from: type=local,src=/tmp/.buildx-cache
-          cache-to: type=local,dest=/tmp/.buildx-cache-new
-          build-args: |
-            VERSION=${{ steps.meta.outputs.version }}
-      - name: Prepare cache for next build
-        run: |
-          rm -rf /tmp/.buildx-cache
-          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
-
-  docker-debug-release:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v3
-        with:
-          fetch-depth: 0
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v1
-      - name: Set up QEMU for multiple platforms
-        uses: docker/setup-qemu-action@master
-        with:
-          platforms: arm64,amd64
-      - name: Cache Docker layers
-        uses: actions/cache@v2
-        with:
-          path: /tmp/.buildx-cache-debug
-          key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
-          restore-keys: |
-            ${{ runner.os }}-buildx-debug-
-      - name: Docker meta
-        id: meta-debug
-        uses: docker/metadata-action@v3
-        with:
-          # list of Docker images to use as base name for tags
-          images: |
-            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
-            ghcr.io/${{ github.repository_owner }}/headscale
-          flavor: |
-            suffix=-debug,onlatest=true
-          tags: |
-            type=semver,pattern={{version}}
-            type=semver,pattern={{major}}.{{minor}}
-            type=semver,pattern={{major}}
-            type=sha
-            type=raw,value=develop
-      - name: Login to DockerHub
-        uses: docker/login-action@v1
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-      - name: Login to GHCR
-        uses: docker/login-action@v1
-        with:
-          registry: ghcr.io
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build and push
-        id: docker_build
-        uses: docker/build-push-action@v2
-        with:
-          push: true
-          context: .
-          file: Dockerfile.debug
-          tags: ${{ steps.meta-debug.outputs.tags }}
-          labels: ${{ steps.meta-debug.outputs.labels }}
-          platforms: linux/amd64,linux/arm64
-          cache-from: type=local,src=/tmp/.buildx-cache-debug
-          cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
-          build-args: |
-            VERSION=${{ steps.meta-debug.outputs.version }}
-      - name: Prepare cache for next build
-        run: |
-          rm -rf /tmp/.buildx-cache-debug
-          mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug

.github/workflows/release.yml (vendored): 219 changes
@@ -1,5 +1,5 @@
 ---
-name: Release
+name: release
 
 on:
   push:
@@ -9,16 +9,221 @@ on:
 
 jobs:
   goreleaser:
+    runs-on: ubuntu-18.04 # due to CGO we need to user an older version
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - name: Set up Go
+        uses: actions/setup-go@v2
+        with:
+          go-version: 1.19.0
+
+      - name: Install dependencies
+        run: |
+          sudo apt update
+          sudo apt install -y gcc-aarch64-linux-gnu
+      - name: Run GoReleaser
+        uses: goreleaser/goreleaser-action@v2
+        with:
+          distribution: goreleaser
+          version: latest
+          args: release --rm-dist
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+
+  docker-release:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@v2
         with:
           fetch-depth: 0
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set up QEMU for multiple platforms
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: arm64,amd64
+      - name: Cache Docker layers
+        uses: actions/cache@v2
+        with:
+          path: /tmp/.buildx-cache
+          key: ${{ runner.os }}-buildx-${{ github.sha }}
+          restore-keys: |
+            ${{ runner.os }}-buildx-
+      - name: Docker meta
+        id: meta
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
+            ghcr.io/${{ github.repository_owner }}/headscale
+          tags: |
+            type=semver,pattern={{version}}
+            type=semver,pattern={{major}}.{{minor}}
+            type=semver,pattern={{major}}
+            type=raw,value=latest
+            type=sha
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Login to GHCR
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push
+        id: docker_build
+        uses: docker/build-push-action@v2
+        with:
+          push: true
+          context: .
+          tags: ${{ steps.meta.outputs.tags }}
+          labels: ${{ steps.meta.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=local,src=/tmp/.buildx-cache
+          cache-to: type=local,dest=/tmp/.buildx-cache-new
+          build-args: |
+            VERSION=${{ steps.meta.outputs.version }}
+      - name: Prepare cache for next build
+        run: |
+          rm -rf /tmp/.buildx-cache
+          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
 
-      - uses: cachix/install-nix-action@v16
+  docker-debug-release:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set up QEMU for multiple platforms
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: arm64,amd64
+      - name: Cache Docker layers
+        uses: actions/cache@v2
+        with:
+          path: /tmp/.buildx-cache-debug
+          key: ${{ runner.os }}-buildx-debug-${{ github.sha }}
+          restore-keys: |
+            ${{ runner.os }}-buildx-debug-
+      - name: Docker meta
+        id: meta-debug
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
+            ghcr.io/${{ github.repository_owner }}/headscale
+          flavor: |
+            latest=false
+          tags: |
+            type=semver,pattern={{version}}-debug
+            type=semver,pattern={{major}}.{{minor}}-debug
+            type=semver,pattern={{major}}-debug
+            type=raw,value=latest-debug
+            type=sha,suffix=-debug
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Login to GHCR
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push
+        id: docker_build
+        uses: docker/build-push-action@v2
+        with:
+          push: true
+          context: .
+          file: Dockerfile.debug
+          tags: ${{ steps.meta-debug.outputs.tags }}
+          labels: ${{ steps.meta-debug.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=local,src=/tmp/.buildx-cache-debug
+          cache-to: type=local,dest=/tmp/.buildx-cache-debug-new
+          build-args: |
+            VERSION=${{ steps.meta-debug.outputs.version }}
+      - name: Prepare cache for next build
+        run: |
+          rm -rf /tmp/.buildx-cache-debug
+          mv /tmp/.buildx-cache-debug-new /tmp/.buildx-cache-debug
 
-      - name: Run goreleaser
-        run: nix develop --command -- goreleaser release --clean
-        env:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+  docker-alpine-release:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v2
+        with:
+          fetch-depth: 0
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Set up QEMU for multiple platforms
+        uses: docker/setup-qemu-action@master
+        with:
+          platforms: arm64,amd64
+      - name: Cache Docker layers
+        uses: actions/cache@v2
+        with:
+          path: /tmp/.buildx-cache-alpine
+          key: ${{ runner.os }}-buildx-alpine-${{ github.sha }}
+          restore-keys: |
+            ${{ runner.os }}-buildx-alpine-
+      - name: Docker meta
+        id: meta-alpine
+        uses: docker/metadata-action@v3
+        with:
+          # list of Docker images to use as base name for tags
+          images: |
+            ${{ secrets.DOCKERHUB_USERNAME }}/headscale
+            ghcr.io/${{ github.repository_owner }}/headscale
+          flavor: |
+            latest=false
+          tags: |
+            type=semver,pattern={{version}}-alpine
+            type=semver,pattern={{major}}.{{minor}}-alpine
+            type=semver,pattern={{major}}-alpine
+            type=raw,value=latest-alpine
+            type=sha,suffix=-alpine
+      - name: Login to DockerHub
+        uses: docker/login-action@v1
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      - name: Login to GHCR
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Build and push
+        id: docker_build
+        uses: docker/build-push-action@v2
+        with:
+          push: true
+          context: .
+          file: Dockerfile.alpine
+          tags: ${{ steps.meta-alpine.outputs.tags }}
+          labels: ${{ steps.meta-alpine.outputs.labels }}
+          platforms: linux/amd64,linux/arm64
+          cache-from: type=local,src=/tmp/.buildx-cache-alpine
+          cache-to: type=local,dest=/tmp/.buildx-cache-alpine-new
+          build-args: |
+            VERSION=${{ steps.meta-alpine.outputs.version }}
+      - name: Prepare cache for next build
+        run: |
+          rm -rf /tmp/.buildx-cache-alpine
+          mv /tmp/.buildx-cache-alpine-new /tmp/.buildx-cache-alpine
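
The `type=semver` patterns in the metadata-action configuration above are expanded from the pushed git tag into several image tags. A hedged Go sketch of that expansion, assuming an example tag `v0.16.4` and a placeholder image name (neither taken from the workflow):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	gitTag := "v0.16.4"                        // assumed example tag
	version := strings.TrimPrefix(gitTag, "v") // {{version}} drops the leading v
	parts := strings.SplitN(version, ".", 3)   // [major, minor, patch]

	tags := []string{
		version,                   // type=semver,pattern={{version}}         -> 0.16.4
		parts[0] + "." + parts[1], // type=semver,pattern={{major}}.{{minor}} -> 0.16
		parts[0],                  // type=semver,pattern={{major}}           -> 0
		"latest",                  // type=raw,value=latest
	}
	for _, t := range tags {
		fmt.Println("example.com/placeholder/headscale:" + t) // placeholder image name
	}
}
```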

.github/workflows/renovatebot.yml (vendored, new file): 27 changes
@@ -0,0 +1,27 @@
+---
+name: Renovate
+on:
+  schedule:
+    - cron: "* * 5,20 * *" # Every 5th and 20th of the month
+  workflow_dispatch:
+jobs:
+  renovate:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Get token
+        id: get_token
+        uses: machine-learning-apps/actions-app-token@master
+        with:
+          APP_PEM: ${{ secrets.RENOVATEBOT_SECRET }}
+          APP_ID: ${{ secrets.RENOVATEBOT_APP_ID }}
+
+      - name: Checkout
+        uses: actions/checkout@v2.0.0
+
+      - name: Self-hosted Renovate
+        uses: renovatebot/github-action@v31.81.3
+        with:
+          configurationFile: .github/renovate.json
+          token: "x-access-token:${{ steps.get_token.outputs.app_token }}"
+          # env:
+          # LOG_LEVEL: "debug"

.github/workflows/test-integration-cli.yml (vendored): 35 changes
@@ -1,35 +0,0 @@
-name: Integration Test CLI
-
-on: [pull_request]
-
-jobs:
-  integration-test-cli:
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Set Swap Space
-        uses: pierotofy/set-swap-space@master
-        with:
-          swap-size-gb: 10
-
-      - name: Get changed files
-        id: changed-files
-        uses: tj-actions/changed-files@v34
-        with:
-          files: |
-            *.nix
-            go.*
-            **/*.go
-            integration_test/
-            config-example.yaml
-
-      - uses: cachix/install-nix-action@v16
-        if: steps.changed-files.outputs.any_changed == 'true'
-
-      - name: Run CLI integration tests
-        if: steps.changed-files.outputs.any_changed == 'true'
-        run: nix develop --command -- make test_integration_cli

Deleted file (name not captured in this extract): a generated integration-test workflow.
@@ -1,63 +0,0 @@
-# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
-# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
-
-name: Integration Test v2 - TestACLAllowStarDst
-
-on: [pull_request]
-
-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
-jobs:
-  test:
-    runs-on: ubuntu-latest
-
-    steps:
-      - uses: actions/checkout@v3
-        with:
-          fetch-depth: 2
-
-      - name: Get changed files
-        id: changed-files
-        uses: tj-actions/changed-files@v34
-        with:
-          files: |
-            *.nix
-            go.*
-            **/*.go
-            integration_test/
-            config-example.yaml
-
-      - uses: cachix/install-nix-action@v18
-        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'
-
-      - name: Run general integration tests
-        if: steps.changed-files.outputs.any_changed == 'true'
-        run: |
-          nix develop --command -- docker run \
-            --tty --rm \
-            --volume ~/.cache/hs-integration-go:/go \
-            --name headscale-test-suite \
-            --volume $PWD:$PWD -w $PWD/integration \
-            --volume /var/run/docker.sock:/var/run/docker.sock \
-            --volume $PWD/control_logs:/tmp/control \
-            golang:1 \
-            go test ./... \
-              -tags ts2019 \
-              -failfast \
-              -timeout 120m \
-              -parallel 1 \
-              -run "^TestACLAllowStarDst$"
-
-      - uses: actions/upload-artifact@v3
-        if: always() && steps.changed-files.outputs.any_changed == 'true'
-        with:
-          name: logs
-          path: "control_logs/*.log"
-
-      - uses: actions/upload-artifact@v3
-        if: always() && steps.changed-files.outputs.any_changed == 'true'
-        with:
-          name: pprof
-          path: "control_logs/*.pprof.tar"
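
The deleted workflow above, and the near-identical ones that follow, carry a DO NOT EDIT header because they are produced by cmd/gh-action-integration-generator. As a hedged sketch of how such a generator could stamp out one workflow per test name with text/template; the template body and output path here are illustrative assumptions, not the actual generator:

```go
package main

import (
	"os"
	"text/template"
)

// Trimmed, illustrative workflow template; the real generated files contain
// the full job shown in the diff above.
var workflowTmpl = template.Must(template.New("workflow").Parse(`name: Integration Test v2 - {{.Name}}

on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run general integration tests
        run: go test ./... -failfast -timeout 120m -parallel 1 -run "^{{.Name}}$"
`))

func main() {
	tests := []string{"TestACLAllowStarDst", "TestACLAllowUser80Dst"}
	for _, name := range tests {
		f, err := os.Create("generated-" + name + ".yaml") // illustrative output path
		if err != nil {
			panic(err)
		}
		if err := workflowTmpl.Execute(f, struct{ Name string }{name}); err != nil {
			panic(err)
		}
		f.Close()
	}
}
```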

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLAllowUser80Dst
-              -run "^TestACLAllowUser80Dst$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLAllowUserDst
-              -run "^TestACLAllowUserDst$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLDenyAllPort80
-              -run "^TestACLDenyAllPort80$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLDevice1CanAccessDevice2
-              -run "^TestACLDevice1CanAccessDevice2$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLHostsInNetMapTable
-              -run "^TestACLHostsInNetMapTable$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLNamedHostsCanReach
-              -run "^TestACLNamedHostsCanReach$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet
-              -run "^TestACLNamedHostsCanReachBySubnet$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
-              -run "^TestAuthKeyLogoutAndRelogin$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
-              -run "^TestAuthWebFlowAuthenticationPingAll$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
-              -run "^TestAuthWebFlowLogoutAndRelogin$"

@@ -1,63 +0,0 @@
Deleted file (name not captured in this extract): another generated integration-test workflow; its 63 removed lines match the TestACLAllowStarDst workflow above except for these two:
-name: Integration Test v2 - TestCreateTailscale
-              -run "^TestCreateTailscale$"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestDERPServerScenario

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestDERPServerScenario$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestEnablingRoutes

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestEnablingRoutes$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestEphemeral

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestEphemeral$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestExpireNode

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestExpireNode$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestHeadscale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestHeadscale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestOIDCAuthenticationPingAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestOIDCAuthenticationPingAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestOIDCExpireNodesBasedOnTokenExpiry$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPingAllByHostname

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestPingAllByHostname$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPingAllByIP

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestPingAllByIP$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommand

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestPreAuthKeyCommand$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestPreAuthKeyCommandReusableEphemeral$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestPreAuthKeyCommandWithoutExpiry$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestResolveMagicDNS

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestResolveMagicDNS$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHIsBlockedInACL

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestSSHIsBlockedInACL$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHMultipleUsersAllToAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestSSHMultipleUsersAllToAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHNoSSHConfigured

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestSSHNoSSHConfigured$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSHOneUserAllToAll

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestSSHOneUserAllToAll$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestSSUserOnlyIsolation

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestSSUserOnlyIsolation$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestTaildrop

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestTaildrop$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestTailscaleNodesJoiningHeadcale$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
@@ -1,63 +0,0 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/

name: Integration Test v2 - TestUserCommand

on: [pull_request]

concurrency:
  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v34
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v18
        if: ${{ env.ACT }} || steps.changed-files.outputs.any_changed == 'true'

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        run: |
          nix develop --command -- docker run \
            --tty --rm \
            --volume ~/.cache/hs-integration-go:/go \
            --name headscale-test-suite \
            --volume $PWD:$PWD -w $PWD/integration \
            --volume /var/run/docker.sock:/var/run/docker.sock \
            --volume $PWD/control_logs:/tmp/control \
            golang:1 \
            go test ./... \
              -tags ts2019 \
              -failfast \
              -timeout 120m \
              -parallel 1 \
              -run "^TestUserCommand$"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: logs
          path: "control_logs/*.log"

      - uses: actions/upload-artifact@v3
        if: always() && steps.changed-files.outputs.any_changed == 'true'
        with:
          name: pprof
          path: "control_logs/*.pprof.tar"
58 .github/workflows/test-integration.yml vendored Normal file
@@ -0,0 +1,58 @@
name: CI

on: [pull_request]

jobs:
  integration-test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 2

      - name: Set Swap Space
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10

      - name: Get changed files
        id: changed-files
        uses: tj-actions/changed-files@v14.1
        with:
          files: |
            *.nix
            go.*
            **/*.go
            integration_test/
            config-example.yaml

      - uses: cachix/install-nix-action@v16
        if: steps.changed-files.outputs.any_changed == 'true'

      - name: Run CLI integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: nick-fields/retry@v2
        with:
          timeout_minutes: 240
          max_attempts: 5
          retry_on: error
          command: nix develop --command -- make test_integration_cli

      - name: Run Embedded DERP server integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: nick-fields/retry@v2
        with:
          timeout_minutes: 240
          max_attempts: 5
          retry_on: error
          command: nix develop --command -- make test_integration_derp

      - name: Run general integration tests
        if: steps.changed-files.outputs.any_changed == 'true'
        uses: nick-fields/retry@v2
        with:
          timeout_minutes: 240
          max_attempts: 5
          retry_on: error
          command: nix develop --command -- make test_integration_general
10 .github/workflows/test.yml vendored
@@ -1,23 +1,19 @@
-name: Tests
+name: CI

 on: [push, pull_request]

-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
 jobs:
   test:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v2
         with:
           fetch-depth: 2

       - name: Get changed files
         id: changed-files
-        uses: tj-actions/changed-files@v34
+        uses: tj-actions/changed-files@v14.1
         with:
           files: |
             *.nix
10 .gitignore vendored
@@ -1,5 +1,3 @@
-ignored/
-
 # Binaries for programs and plugins
 *.exe
 *.exe~
@@ -14,9 +12,8 @@ ignored/
 *.out

 # Dependency directories (remove the comment below to include it)
-vendor/
+# vendor/

-dist/
 /headscale
 config.json
 config.yaml
@@ -30,14 +27,9 @@ derp.yaml
 .idea

 test_output/
-control_logs/

 # Nix build output
 result
 .direnv/

 integration_test/etc/config.dump.yaml
-
-# MkDocs
-.cache
-/site
@@ -1,8 +1,6 @@
 ---
 run:
   timeout: 10m
-  build-tags:
-    - ts2019

 issues:
   skip-dirs:
@@ -28,15 +26,6 @@ linters:
     - ireturn
     - execinquery
     - exhaustruct
-    - nolintlint
-    - musttag # causes issues with imported libs
-
-    # deprecated
-    - structcheck # replaced by unused
-    - ifshort # deprecated by the owner
-    - varcheck # replaced by unused
-    - nosnakecase # replaced by revive
-    - deadcode # replaced by unused

     # We should strive to enable these:
     - wrapcheck
@@ -44,9 +33,6 @@ linters:
     - makezero
     - maintidx

-    # Limits the methods of an interface to 10. We have more in integration tests
-    - interfacebloat
-
     # We might want to enable this, but it might be a lot of work
     - cyclop
     - nestif
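The hunk above drops the `build-tags: - ts2019` entry that mirrored the `-tags ts2019` flag used by the deleted integration workflows. As a rough, assumed illustration only (the package and constant below are invented for the example and do not exist in headscale), a file guarded by that build tag looks like this:

```go
//go:build ts2019

// Package tsexample is a hypothetical package used only to illustrate the
// ts2019 build tag; it is not part of headscale.
package tsexample

// This file only compiles when `-tags ts2019` is passed to go build/test
// (as the workflows above do), or when a linter is configured with the
// matching build-tags entry that this diff removes from .golangci.yml.
const TS2019Enabled = true
```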
111 .goreleaser.yml
@@ -1,85 +1,74 @@
 ---
 before:
   hooks:
-    - go mod tidy -compat=1.20
-    - go mod vendor
+    - go mod tidy -compat=1.19

 release:
   prerelease: auto

 builds:
-  - id: headscale
+  - id: darwin-amd64
     main: ./cmd/headscale/headscale.go
     mod_timestamp: "{{ .CommitTimestamp }}"
     env:
       - CGO_ENABLED=0
-    targets:
-      - darwin_amd64
-      - darwin_arm64
-      - freebsd_amd64
-      - linux_386
-      - linux_amd64
-      - linux_arm64
-      - linux_arm_5
-      - linux_arm_6
-      - linux_arm_7
+    goos:
+      - darwin
+    goarch:
+      - amd64
     flags:
       - -mod=readonly
     ldflags:
       - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
-    tags:
-      - ts2019
+  - id: darwin-arm64
+    main: ./cmd/headscale/headscale.go
+    mod_timestamp: "{{ .CommitTimestamp }}"
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - darwin
+    goarch:
+      - arm64
+    flags:
+      - -mod=readonly
+    ldflags:
+      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+
+  - id: linux-amd64
+    mod_timestamp: "{{ .CommitTimestamp }}"
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - amd64
+    main: ./cmd/headscale/headscale.go
+    ldflags:
+      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
+
+  - id: linux-arm64
+    mod_timestamp: "{{ .CommitTimestamp }}"
+    env:
+      - CGO_ENABLED=0
+    goos:
+      - linux
+    goarch:
+      - arm64
+    main: ./cmd/headscale/headscale.go
+    ldflags:
+      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}

 archives:
   - id: golang-cross
-    name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
+    builds:
+      - darwin-amd64
+      - darwin-arm64
+      - linux-amd64
+      - linux-arm64
+    name_template: "{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}"
     format: binary

-source:
-  enabled: true
-  name_template: "{{ .ProjectName }}_{{ .Version }}"
-  format: tar.gz
-  files:
-    - "vendor/"
-
-nfpms:
-  # Configure nFPM for .deb and .rpm releases
-  #
-  # See https://nfpm.goreleaser.com/configuration/
-  # and https://goreleaser.com/customization/nfpm/
-  #
-  # Useful tools for debugging .debs:
-  # List file contents: dpkg -c dist/headscale...deb
-  # Package metadata: dpkg --info dist/headscale....deb
-  #
-  - builds:
-      - headscale
-    package_name: headscale
-    priority: optional
-    vendor: headscale
-    maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
-    homepage: https://github.com/juanfont/headscale
-    license: BSD
-    bindir: /usr/bin
-    formats:
-      - deb
-    # - rpm
-    contents:
-      - src: ./config-example.yaml
-        dst: /etc/headscale/config.yaml
-        type: config|noreplace
-        file_info:
-          mode: 0644
-      - src: ./docs/packaging/headscale.systemd.service
-        dst: /usr/lib/systemd/system/headscale.service
-      - dst: /var/lib/headscale
-        type: dir
-      - dst: /var/run/headscale
-        type: dir
-    scripts:
-      postinstall: ./docs/packaging/postinstall.sh
-      postremove: ./docs/packaging/postremove.sh
-
 checksum:
   name_template: "checksums.txt"
 snapshot:
@@ -1 +0,0 @@
.github/workflows/test-integration-v2*
115 CHANGELOG.md
@@ -1,125 +1,12 @@
 # CHANGELOG

-## 0.23.0 (2023-XX-XX)
+## 0.17.0 (2022-XX-XX)

-### Changes
-
-- Add environment flags to enable pprof (profiling) [#1382](https://github.com/juanfont/headscale/pull/1382)
-  - Profiles are continously generated in our integration tests.
-- Fix systemd service file location in `.deb` packages [#1391](https://github.com/juanfont/headscale/pull/1391)
-- Improvements on Noise implementation [#1379](https://github.com/juanfont/headscale/pull/1379)
-- Replace node filter logic, ensuring nodes with access can see eachother [#1381](https://github.com/juanfont/headscale/pull/1381)
-
-## 0.22.1 (2023-04-20)
-
-### Changes
-
-- Fix issue where systemd could not bind to port 80 [#1365](https://github.com/juanfont/headscale/pull/1365)
-- Disable (or delete) both exit routes at the same time [#1428](https://github.com/juanfont/headscale/pull/1428)
-
-## 0.22.0 (2023-04-20)
-
-### Changes
-
-- Add `.deb` packages to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
-- Update and simplify the documentation to use new `.deb` packages [#1349](https://github.com/juanfont/headscale/pull/1349)
-- Add 32-bit Arm platforms to release process [#1297](https://github.com/juanfont/headscale/pull/1297)
-- Fix longstanding bug that would prevent "\*" from working properly in ACLs (issue [#699](https://github.com/juanfont/headscale/issues/699)) [#1279](https://github.com/juanfont/headscale/pull/1279)
-- Fix issue where IPv6 could not be used in, or while using ACLs (part of [#809](https://github.com/juanfont/headscale/issues/809)) [#1339](https://github.com/juanfont/headscale/pull/1339)
-- Target Go 1.20 and Tailscale 1.38 for Headscale [#1323](https://github.com/juanfont/headscale/pull/1323)
-
-## 0.21.0 (2023-03-20)
-
-### Changes
-
-- Adding "configtest" CLI command. [#1230](https://github.com/juanfont/headscale/pull/1230)
-- Add documentation on connecting with iOS to `/apple` [#1261](https://github.com/juanfont/headscale/pull/1261)
-- Update iOS compatibility and added documentation for iOS [#1264](https://github.com/juanfont/headscale/pull/1264)
-- Allow to delete routes [#1244](https://github.com/juanfont/headscale/pull/1244)
-
-## 0.20.0 (2023-02-03)
-
-### Changes
-
-- Fix wrong behaviour in exit nodes [#1159](https://github.com/juanfont/headscale/pull/1159)
-- Align behaviour of `dns_config.restricted_nameservers` to tailscale [#1162](https://github.com/juanfont/headscale/pull/1162)
-- Make OpenID Connect authenticated client expiry time configurable [#1191](https://github.com/juanfont/headscale/pull/1191)
-  - defaults to 180 days like Tailscale SaaS
-  - adds option to use the expiry time from the OpenID token for the node (see config-example.yaml)
-- Set ControlTime in Map info sent to nodes [#1195](https://github.com/juanfont/headscale/pull/1195)
-- Populate Tags field on Node updates sent [#1195](https://github.com/juanfont/headscale/pull/1195)
-
-## 0.19.0 (2023-01-29)
-
-### BREAKING
-
-- Rename Namespace to User [#1144](https://github.com/juanfont/headscale/pull/1144)
|
|
||||||
- **BACKUP your database before upgrading**
|
|
||||||
- Command line flags previously taking `--namespace` or `-n` will now require `--user` or `-u`
|
|
||||||
|
|
||||||
## 0.18.0 (2023-01-14)
|
|
||||||
|
|
||||||
### Changes
|
|
||||||
|
|
||||||
- Reworked routing and added support for subnet router failover [#1024](https://github.com/juanfont/headscale/pull/1024)
|
|
||||||
- Added an OIDC AllowGroups Configuration options and authorization check [#1041](https://github.com/juanfont/headscale/pull/1041)
|
|
||||||
- Set `db_ssl` to false by default [#1052](https://github.com/juanfont/headscale/pull/1052)
|
|
||||||
- Fix duplicate nodes due to incorrect implementation of the protocol [#1058](https://github.com/juanfont/headscale/pull/1058)
|
|
||||||
- Report if a machine is online in CLI more accurately [#1062](https://github.com/juanfont/headscale/pull/1062)
|
|
||||||
- Added config option for custom DNS records [#1035](https://github.com/juanfont/headscale/pull/1035)
|
|
||||||
- Expire nodes based on OIDC token expiry [#1067](https://github.com/juanfont/headscale/pull/1067)
|
|
||||||
- Remove ephemeral nodes on logout [#1098](https://github.com/juanfont/headscale/pull/1098)
|
|
||||||
- Performance improvements in ACLs [#1129](https://github.com/juanfont/headscale/pull/1129)
|
|
||||||
- OIDC client secret can be passed via a file [#1127](https://github.com/juanfont/headscale/pull/1127)
|
|
||||||
|
|
||||||
## 0.17.1 (2022-12-05)
|
|
||||||
|
|
||||||
### Changes
|
|
||||||
|
|
||||||
- Correct typo on macOS standalone profile link [#1028](https://github.com/juanfont/headscale/pull/1028)
|
|
||||||
- Update platform docs with Fast User Switching [#1016](https://github.com/juanfont/headscale/pull/1016)
|
|
||||||
|
|
||||||
## 0.17.0 (2022-11-26)
|
|
||||||
|
|
||||||
### BREAKING
|
|
||||||
|
|
||||||
- `noise.private_key_path` has been added and is required for the new noise protocol.
|
|
||||||
- Log level option `log_level` was moved to a distinct `log` config section and renamed to `level` [#768](https://github.com/juanfont/headscale/pull/768)
|
|
||||||
- Removed Alpine Linux container image [#962](https://github.com/juanfont/headscale/pull/962)
|
|
||||||
|
|
||||||
### Important Changes
|
|
||||||
|
|
||||||
- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
|
- Added support for Tailscale TS2021 protocol [#738](https://github.com/juanfont/headscale/pull/738)
|
||||||
- Add experimental support for [SSH ACL](https://tailscale.com/kb/1018/acls/#tailscale-ssh) (see docs for limitations) [#847](https://github.com/juanfont/headscale/pull/847)
|
|
||||||
- Please note that this support should be considered _partially_ implemented
|
|
||||||
- SSH ACLs status:
|
|
||||||
- Support `accept` and `check` (SSH can be enabled and used for connecting and authentication)
|
|
||||||
- Rejecting connections **are not supported**, meaning that if you enable SSH, then assume that _all_ `ssh` connections **will be allowed**.
|
|
||||||
- If you decied to try this feature, please carefully managed permissions by blocking port `22` with regular ACLs or do _not_ set `--ssh` on your clients.
|
|
||||||
- We are currently improving our testing of the SSH ACLs, help us get an overview by testing and giving feedback.
|
|
||||||
- This feature should be considered dangerous and it is disabled by default. Enable by setting `HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1`.
|
|
||||||
|
|
||||||
### Changes
|
|
||||||
|
|
||||||
- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
|
- Add ability to specify config location via env var `HEADSCALE_CONFIG` [#674](https://github.com/juanfont/headscale/issues/674)
|
||||||
- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
|
- Target Go 1.19 for Headscale [#778](https://github.com/juanfont/headscale/pull/778)
|
||||||
- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)
|
- Target Tailscale v1.30.0 to build Headscale [#780](https://github.com/juanfont/headscale/pull/780)
|
||||||
- Give a warning when running Headscale with reverse proxy improperly configured for WebSockets [#788](https://github.com/juanfont/headscale/pull/788)
|
- Give a warning when running Headscale with reverse proxy improperly configured for WebSockets [#788](https://github.com/juanfont/headscale/pull/788)
|
||||||
- Fix subnet routers with Primary Routes [#811](https://github.com/juanfont/headscale/pull/811)
|
|
||||||
- Added support for JSON logs [#653](https://github.com/juanfont/headscale/issues/653)
|
|
||||||
- Sanitise the node key passed to registration url [#823](https://github.com/juanfont/headscale/pull/823)
|
|
||||||
- Add support for generating pre-auth keys with tags [#767](https://github.com/juanfont/headscale/pull/767)
|
|
||||||
- Add support for evaluating `autoApprovers` ACL entries when a machine is registered [#763](https://github.com/juanfont/headscale/pull/763)
|
|
||||||
- Add config flag to allow Headscale to start if OIDC provider is down [#829](https://github.com/juanfont/headscale/pull/829)
|
|
||||||
- Fix prefix length comparison bug in AutoApprovers route evaluation [#862](https://github.com/juanfont/headscale/pull/862)
|
|
||||||
- Random node DNS suffix only applied if names collide in namespace. [#766](https://github.com/juanfont/headscale/issues/766)
|
|
||||||
- Remove `ip_prefix` configuration option and warning [#899](https://github.com/juanfont/headscale/pull/899)
|
|
||||||
- Add `dns_config.override_local_dns` option [#905](https://github.com/juanfont/headscale/pull/905)
|
|
||||||
- Fix some DNS config issues [#660](https://github.com/juanfont/headscale/issues/660)
|
|
||||||
- Make it possible to disable TS2019 with build flag [#928](https://github.com/juanfont/headscale/pull/928)
|
|
||||||
- Fix OIDC registration issues [#960](https://github.com/juanfont/headscale/pull/960) and [#971](https://github.com/juanfont/headscale/pull/971)
|
|
||||||
- Add support for specifying NextDNS DNS-over-HTTPS resolver [#940](https://github.com/juanfont/headscale/pull/940)
|
|
||||||
- Make more sslmode available for postgresql connection [#927](https://github.com/juanfont/headscale/pull/927)
|
|
||||||
|
|
||||||
## 0.16.4 (2022-08-21)
|
## 0.16.4 (2022-08-21)
|
||||||
|
|
||||||
|
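One of the 0.17.0 entries above introduces the `HEADSCALE_CONFIG` environment variable for pointing Headscale at a config file. A minimal usage sketch, assuming the standard `serve` subcommand and an illustrative path:

```shell
# Run headscale with a config file outside its default search locations.
HEADSCALE_CONFIG=/etc/headscale/config.yaml headscale serve
```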
@@ -1,5 +1,5 @@
 # Builder image
-FROM docker.io/golang:1.20-bullseye AS build
+FROM docker.io/golang:1.19.0-bullseye AS build
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale

@@ -9,7 +9,7 @@ RUN go mod download

 COPY . .

-RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
+RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
 RUN strip /go/bin/headscale
 RUN test -e /go/bin/headscale
24  Dockerfile.alpine

@@ -0,0 +1,24 @@
+# Builder image
+FROM docker.io/golang:1.19.0-alpine AS build
+ARG VERSION=dev
+ENV GOPATH /go
+WORKDIR /go/src/headscale
+
+COPY go.mod go.sum /go/src/headscale/
+RUN apk add gcc musl-dev
+RUN go mod download
+
+COPY . .
+
+RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
+RUN strip /go/bin/headscale
+RUN test -e /go/bin/headscale
+
+# Production image
+FROM docker.io/alpine:latest
+
+COPY --from=build /go/bin/headscale /bin/headscale
+ENV TZ UTC
+
+EXPOSE 8080/tcp
+CMD ["headscale"]
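A quick way to exercise the Alpine Dockerfile above; the image tag is illustrative, and `VERSION` feeds the `-X ...cli.Version` ldflag in the builder stage:

```shell
# Build the Alpine-based image from the repository root ...
docker build -f Dockerfile.alpine --build-arg VERSION=dev -t headscale:alpine-dev .

# ... and check that the embedded version string made it into the binary.
docker run --rm headscale:alpine-dev headscale version
```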
@@ -1,5 +1,5 @@
 # Builder image
-FROM docker.io/golang:1.20-bullseye AS build
+FROM docker.io/golang:1.19.0-bullseye AS build
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale

@@ -9,11 +9,11 @@ RUN go mod download

 COPY . .

-RUN CGO_ENABLED=0 GOOS=linux go install -tags ts2019 -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
+RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
 RUN test -e /go/bin/headscale

 # Debug image
-FROM docker.io/golang:1.20.0-bullseye
+FROM gcr.io/distroless/base-debian11:debug

 COPY --from=build /go/bin/headscale /bin/headscale
 ENV TZ UTC
@@ -1,16 +1,17 @@
-FROM ubuntu:22.04
+FROM ubuntu:latest

 ARG TAILSCALE_VERSION=*
 ARG TAILSCALE_CHANNEL=stable

 RUN apt-get update \
-    && apt-get install -y gnupg curl ssh dnsutils ca-certificates \
-    && adduser --shell=/bin/bash ssh-it-user
-
-# Tailscale is deliberately split into a second stage so we can cash utils as a seperate layer.
-RUN curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
+    && apt-get install -y gnupg curl \
+    && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.gpg | apt-key add - \
     && curl -fsSL https://pkgs.tailscale.com/${TAILSCALE_CHANNEL}/ubuntu/focal.list | tee /etc/apt/sources.list.d/tailscale.list \
     && apt-get update \
-    && apt-get install -y tailscale=${TAILSCALE_VERSION} \
-    && apt-get clean \
+    && apt-get install -y ca-certificates tailscale=${TAILSCALE_VERSION} dnsutils \
     && rm -rf /var/lib/apt/lists/*

+ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
+RUN chmod 644 /usr/local/share/ca-certificates/server.crt
+
+RUN update-ca-certificates
@@ -1,17 +1,23 @@
 FROM golang:latest

 RUN apt-get update \
-    && apt-get install -y dnsutils git iptables ssh ca-certificates \
+    && apt-get install -y ca-certificates dnsutils git iptables \
     && rm -rf /var/lib/apt/lists/*

-RUN useradd --shell=/bin/bash --create-home ssh-it-user
-
 RUN git clone https://github.com/tailscale/tailscale.git

 WORKDIR /go/tailscale

-RUN git checkout main \
-    && sh build_dist.sh tailscale.com/cmd/tailscale \
-    && sh build_dist.sh tailscale.com/cmd/tailscaled \
-    && cp tailscale /usr/local/bin/ \
-    && cp tailscaled /usr/local/bin/
+RUN git checkout main
+
+RUN sh build_dist.sh tailscale.com/cmd/tailscale
+RUN sh build_dist.sh tailscale.com/cmd/tailscaled
+
+RUN cp tailscale /usr/local/bin/
+RUN cp tailscaled /usr/local/bin/
+
+ADD integration_test/etc_embedded_derp/tls/server.crt /usr/local/share/ca-certificates/
+RUN chmod 644 /usr/local/share/ca-certificates/server.crt
+
+RUN update-ca-certificates
46  Makefile

@@ -10,8 +10,6 @@ ifeq ($(filter $(GOOS), openbsd netbsd soloaris plan9), )
 else
 endif

-TAGS = -tags ts2019
-
 # GO_SOURCES = $(wildcard *.go)
 # PROTO_SOURCES = $(wildcard **/*.proto)
 GO_SOURCES = $(call rwildcard,,*.go)

@@ -19,34 +17,29 @@ PROTO_SOURCES = $(call rwildcard,,*.proto)

 build:
-    nix build
+    GOOS=$(GOOS) CGO_ENABLED=0 go build -trimpath $(pieflags) -mod=readonly -ldflags "-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$(version)" cmd/headscale/headscale.go

 dev: lint test build

 test:
-    @go test $(TAGS) -short -coverprofile=coverage.out ./...
+    @go test -coverprofile=coverage.out ./...

-test_integration: test_integration_cli test_integration_derp test_integration_v2_general
+test_integration: test_integration_cli test_integration_derp test_integration_general

 test_integration_cli:
-    docker network rm $$(docker network ls --filter name=headscale --quiet) || true
-    docker network create headscale-test || true
-    docker run -t --rm \
-        --network headscale-test \
-        -v ~/.cache/hs-integration-go:/go \
-        -v $$PWD:$$PWD -w $$PWD \
-        -v /var/run/docker.sock:/var/run/docker.sock golang:1 \
-        go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast -timeout 30m -count=1 -run IntegrationCLI ./...
+    go test -failfast -tags integration_cli,integration -timeout 30m -count=1 ./...

-test_integration_v2_general:
-    docker run \
-        -t --rm \
-        -v ~/.cache/hs-integration-go:/go \
-        --name headscale-test-suite \
-        -v $$PWD:$$PWD -w $$PWD/integration \
-        -v /var/run/docker.sock:/var/run/docker.sock \
-        golang:1 \
-        go run gotest.tools/gotestsum@latest -- $(TAGS) -failfast ./... -timeout 120m -parallel 8
+test_integration_derp:
+    go test -failfast -tags integration_derp,integration -timeout 30m -count=1 ./...
+
+test_integration_general:
+    go test -failfast -tags integration_general,integration -timeout 30m -count=1 ./...
+
+coverprofile_func:
+    go tool cover -func=coverage.out
+
+coverprofile_html:
+    go tool cover -html=coverage.out

 lint:
     golangci-lint run --fix --timeout 10m

@@ -64,4 +57,11 @@ compress: build

 generate:
     rm -rf gen
-    buf generate proto
+    go run github.com/bufbuild/buf/cmd/buf generate proto
+
+install-protobuf-plugins:
+    go install \
+        github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway \
+        github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2 \
+        google.golang.org/protobuf/cmd/protoc-gen-go \
+        google.golang.org/grpc/cmd/protoc-gen-go-grpc
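For reference, once `$(TAGS)` is substituted the `test` target on the two sides expands to the following plain `go test` invocations (run from the repository root):

```shell
# revert-141 side: unit tests built with the ts2019 tag
go test -tags ts2019 -short -coverprofile=coverage.out ./...

# fix-proto- side: the build tag (and -short) are dropped
go test -coverprofile=coverage.out ./...
```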
510  README.md

@@ -32,18 +32,22 @@ organisation.

 ## Design goal

-Headscale aims to implement a self-hosted, open source alternative to the Tailscale
-control server.
-Headscale's goal is to provide self-hosters and hobbyists with an open-source
-server they can use for their projects and labs.
-It implements a narrow scope, a single Tailnet, suitable for a personal use, or a small
-open-source organisation.
+`headscale` aims to implement a self-hosted, open source alternative to the Tailscale
+control server. `headscale` has a narrower scope and an instance of `headscale`
+implements a _single_ Tailnet, which is typically what a single organisation, or
+home/personal setup would use.
+
+`headscale` uses terms that maps to Tailscale's control server, consult the
+[glossary](./docs/glossary.md) for explainations.

-## Supporting Headscale
+## Support

 If you like `headscale` and find it useful, there is a sponsorship and donation
 buttons available in the repo.

+If you would like to sponsor features, bugs or prioritisation, reach out to
+one of the maintainers.
+
 ## Features

 - Full "base" support of Tailscale's features

@@ -71,48 +75,19 @@ buttons available in the repo.
 | macOS   | Yes (see `/apple` on your headscale for more information) |
 | Windows | Yes [docs](./docs/windows-client.md) |
 | Android | Yes [docs](./docs/android-client.md) |
-| iOS     | Yes [docs](./docs/iOS-client.md) |
+| iOS     | Not yet |

 ## Running headscale

-**Please note that we do not support nor encourage the use of reverse proxies
-and container to run Headscale.**
-
-Please have a look at the [`documentation`](https://headscale.net/).
-
-## Graphical Control Panels
-
-Headscale provides an API for complete management of your Tailnet.
-These are community projects not directly affiliated with the Headscale project.
-
-| Name | Repository Link | Description | Status |
-| --------------- | ---------------------------------------------------- | ------------------------------------------------------ | ------ |
-| headscale-webui | [Github](https://github.com/ifargle/headscale-webui) | A simple Headscale web UI for small-scale deployments. | Alpha |
-
-## Talks
-
-- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
-  - presented by Juan Font Alonso and Kristoffer Dalby
+Please have a look at the documentation under [`docs/`](docs/).

 ## Disclaimer

-1. This project is not associated with Tailscale Inc.
+1. We have nothing to do with Tailscale, or Tailscale Inc.
 2. The purpose of Headscale is maintaining a working, self-hosted Tailscale control panel.

 ## Contributing

-Headscale is "Open Source, acknowledged contribution", this means that any
-contribution will have to be discussed with the Maintainers before being submitted.
-
-This model has been chosen to reduce the risk of burnout by limiting the
-maintenance overhead of reviewing and validating third-party code.
-
-Headscale is open to code contributions for bug fixes without discussion.
-
-If you find mistakes in the documentation, please submit a fix to the documentation.
-
-### Requirements
-
 To contribute to headscale you would need the lastest version of [Go](https://golang.org)
 and [Buf](https://buf.build)(Protobuf generator).

@@ -120,6 +95,8 @@ We recommend using [Nix](https://nixos.org/) to setup a development environment.
 be done with `nix develop`, which will install the tools and give you a shell.
 This guarantees that you will have the same dev env as `headscale` maintainers.

+PRs and suggestions are welcome.
+
 ### Code style

 To ensure we have some consistency with a growing number of contributions,
@@ -197,6 +174,13 @@ make build … @@ -644,13 +475,6 @@ make build

The remaining hunks touch only the auto-generated contributors grid in README.md, an HTML table with one `<td>` cell per contributor (avatar `<img>`, GitHub link and name). The two sides list largely the same people in a different order: the cells for Adrien Raffin-Caboisse, ohdearaugustin, Anton Schubert and Stefan Majer move to different positions. Cells that appear only on the revert-141 side: Benjamin Roberts, Even Holthe, Christian Heusel, Mike Lloyd, Igor Perepilitsyn, Orville Q. Song, kevinlin, Snack, dbevacqua, Josh Taylor, LiuHanCheng, Motiejus Jakštys, Steven Honson, Sean Reifschneider, Albert Copeland, Andrei Pechkurov, Anoop Sundaresh, Antonio Fernandez, Arnar, Avirut Mehta, fatih-acar, Jamie Greeff, plus a duplicated bravechamp entry. Cells that appear only on the fix-proto- side: Arthur Woimbée and lachy2849. The cell shown as hdhoang on the revert-141 side appears as Hoàng Đức Hiếu on the fix-proto- side, and Victor Freire's cell links to github.com/ratsclub on the revert-141 side and github.com/vtrf on the fix-proto- side. The section ends mid-table at the cell for Felix Kronlage-Dammers.
|
||||||
@@ -665,15 +489,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Felix Yan</b></sub>
|
<sub style="font-size:14px"><b>Felix Yan</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/gabe565>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/7717888?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Gabe Cook/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Gabe Cook</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/JJGadgets>
|
<a href=https://github.com/JJGadgets>
|
||||||
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
|
<img src=https://avatars.githubusercontent.com/u/5709019?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=JJGadgets/>
|
||||||
@@ -682,10 +497,10 @@ make build
|
|||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/hrtkpf>
|
<a href=https://github.com/madjam002>
|
||||||
<img src=https://avatars.githubusercontent.com/u/42646788?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=hrtkpf/>
|
<img src=https://avatars.githubusercontent.com/u/679137?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jamie Greeff/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>hrtkpf</b></sub>
|
<sub style="font-size:14px"><b>Jamie Greeff</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -695,64 +510,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
|
<sub style="font-size:14px"><b>Jim Tittsler</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/jsiebens>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/499769?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Johan Siebens/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Johan Siebens</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/johnae>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/28332?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=John Axel Eriksson/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>John Axel Eriksson</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/ShadowJonathan>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/22740616?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Jonathan de Jong/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Jonathan de Jong</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/JulienFloris>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/20380255?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Julien Zweverink/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Julien Zweverink</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/win-t>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/1589120?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Kurnia D Win/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Kurnia D Win</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/foxtrot>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/4153572?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Marc/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Marc</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/magf>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/11992737?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Maxim Gajdaj/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Maxim Gajdaj</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/mikejsavage>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/579299?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Michael Savage/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Michael Savage</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
</tr>
|
||||||
<tr>
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -762,13 +519,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
|
<sub style="font-size:14px"><b>Pierre Carru</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/Donran>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/4838348?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Pontus N/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Pontus N</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/nnsee>
|
<a href=https://github.com/nnsee>
|
||||||
<img src=https://avatars.githubusercontent.com/u/36747857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Rasmus Moorats/>
|
<img src=https://avatars.githubusercontent.com/u/36747857?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Rasmus Moorats/>
|
||||||
@@ -785,9 +535,9 @@ make build
|
|||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/renovate-bot>
|
<a href=https://github.com/renovate-bot>
|
||||||
<img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Mend Renovate/>
|
<img src=https://avatars.githubusercontent.com/u/25180681?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=WhiteSource Renovate/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>Mend Renovate</b></sub>
|
<sub style="font-size:14px"><b>WhiteSource Renovate</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -797,8 +547,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
|
<sub style="font-size:14px"><b>Ryan Fowler</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/shaananc>
|
<a href=https://github.com/shaananc>
|
||||||
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
|
<img src=https://avatars.githubusercontent.com/u/2287839?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Shaanan Cohney/>
|
||||||
@@ -806,13 +554,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Shaanan Cohney</b></sub>
|
<sub style="font-size:14px"><b>Shaanan Cohney</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
</tr>
|
||||||
<a href=https://github.com/stefanvanburen>
|
<tr>
|
||||||
<img src=https://avatars.githubusercontent.com/u/622527?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Stefan VanBuren/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Stefan VanBuren</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/sophware>
|
<a href=https://github.com/sophware>
|
||||||
<img src=https://avatars.githubusercontent.com/u/41669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sophware/>
|
<img src=https://avatars.githubusercontent.com/u/41669?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sophware/>
|
||||||
@@ -834,15 +577,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Teteros</b></sub>
|
<sub style="font-size:14px"><b>Teteros</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/Teteros>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/5067989?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Teteros/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Teteros</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/gitter-badger>
|
<a href=https://github.com/gitter-badger>
|
||||||
<img src=https://avatars.githubusercontent.com/u/8518239?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=The Gitter Badger/>
|
<img src=https://avatars.githubusercontent.com/u/8518239?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=The Gitter Badger/>
|
||||||
@@ -857,13 +591,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Tianon Gravi</b></sub>
|
<sub style="font-size:14px"><b>Tianon Gravi</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/thetillhoff>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/25052289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Till Hoffmann/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Till Hoffmann</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/woudsma>
|
<a href=https://github.com/woudsma>
|
||||||
<img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/>
|
<img src=https://avatars.githubusercontent.com/u/6162978?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tjerk Woudsma/>
|
||||||
@@ -871,6 +598,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Tjerk Woudsma</b></sub>
|
<sub style="font-size:14px"><b>Tjerk Woudsma</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/y0ngb1n>
|
<a href=https://github.com/y0ngb1n>
|
||||||
<img src=https://avatars.githubusercontent.com/u/25719408?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yang Bin/>
|
<img src=https://avatars.githubusercontent.com/u/25719408?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Yang Bin/>
|
||||||
@@ -885,15 +614,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Yujie Xia</b></sub>
|
<sub style="font-size:14px"><b>Yujie Xia</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/newellz2>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/52436542?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zachary Newell/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Zachary Newell</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/zekker6>
|
<a href=https://github.com/zekker6>
|
||||||
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
|
<img src=https://avatars.githubusercontent.com/u/1367798?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zakhar Bessarab/>
|
||||||
@@ -901,13 +621,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Zakhar Bessarab</b></sub>
|
<sub style="font-size:14px"><b>Zakhar Bessarab</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/zhzy0077>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/8717471?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Zhiyuan Zheng/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Zhiyuan Zheng</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/Bpazy>
|
<a href=https://github.com/Bpazy>
|
||||||
<img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ziyuan Han/>
|
<img src=https://avatars.githubusercontent.com/u/9838749?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Ziyuan Han/>
|
||||||
@@ -915,13 +628,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
|
<sub style="font-size:14px"><b>Ziyuan Han</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/caelansar>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/31852257?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=caelansar/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>caelansar</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/derelm>
|
<a href=https://github.com/derelm>
|
||||||
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
|
<img src=https://avatars.githubusercontent.com/u/465155?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=derelm/>
|
||||||
@@ -929,15 +635,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>derelm</b></sub>
|
<sub style="font-size:14px"><b>derelm</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/dnaq>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/1299717?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=dnaq/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>dnaq</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/nning>
|
<a href=https://github.com/nning>
|
||||||
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
|
<img src=https://avatars.githubusercontent.com/u/557430?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=henning mueller/>
|
||||||
@@ -945,6 +642,8 @@ make build
|
|||||||
<sub style="font-size:14px"><b>henning mueller</b></sub>
|
<sub style="font-size:14px"><b>henning mueller</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/ignoramous>
|
<a href=https://github.com/ignoramous>
|
||||||
<img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/>
|
<img src=https://avatars.githubusercontent.com/u/852289?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ignoramous/>
|
||||||
@@ -952,62 +651,18 @@ make build
|
|||||||
<sub style="font-size:14px"><b>ignoramous</b></sub>
|
<sub style="font-size:14px"><b>ignoramous</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/jimyag>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/69233189?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=jimyag/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>jimyag</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/magichuihui>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/10866198?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=suhelen/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>suhelen</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/lion24>
|
<a href=https://github.com/lion24>
|
||||||
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=sharkonet/>
|
<img src=https://avatars.githubusercontent.com/u/1382102?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=lion24/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>sharkonet</b></sub>
|
<sub style="font-size:14px"><b>lion24</b></sub>
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/ma6174>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/1449133?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=ma6174/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>ma6174</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/manju-rn>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/26291847?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=manju-rn/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>manju-rn</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/nicholas-yap>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/38109533?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=nicholas-yap/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>nicholas-yap</b></sub>
|
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/pernila>
|
<a href=https://github.com/pernila>
|
||||||
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Tommi Pernila/>
|
<img src=https://avatars.githubusercontent.com/u/12460060?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=pernila/>
|
||||||
<br />
|
<br />
|
||||||
<sub style="font-size:14px"><b>Tommi Pernila</b></sub>
|
<sub style="font-size:14px"><b>pernila</b></sub>
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/phpmalik>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/26834645?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=phpmalik/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>phpmalik</b></sub>
|
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
@@ -1017,8 +672,6 @@ make build
|
|||||||
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
|
<sub style="font-size:14px"><b>Wakeful-Cloud</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
</tr>
|
|
||||||
<tr>
|
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
||||||
<a href=https://github.com/xpzouying>
|
<a href=https://github.com/xpzouying>
|
||||||
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
|
<img src=https://avatars.githubusercontent.com/u/3946563?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=zy/>
|
||||||
@@ -1026,12 +679,5 @@ make build
|
|||||||
<sub style="font-size:14px"><b>zy</b></sub>
|
<sub style="font-size:14px"><b>zy</b></sub>
|
||||||
</a>
|
</a>
|
||||||
</td>
|
</td>
|
||||||
<td align="center" style="word-wrap: break-word; width: 150.0; height: 150.0">
|
|
||||||
<a href=https://github.com/atorregrosa-smd>
|
|
||||||
<img src=https://avatars.githubusercontent.com/u/78434679?v=4 width="100;" style="border-radius:50%;align-items:center;justify-content:center;overflow:hidden;padding-top:10px" alt=Àlex Torregrosa/>
|
|
||||||
<br />
|
|
||||||
<sub style="font-size:14px"><b>Àlex Torregrosa</b></sub>
|
|
||||||
</a>
|
|
||||||
</td>
|
|
||||||
</tr>
|
</tr>
|
||||||
</table>
|
</table>
|
||||||
573 acls.go
@@ -10,13 +10,10 @@ import (
 	"path/filepath"
 	"strconv"
 	"strings"
-	"time"
 
 	"github.com/rs/zerolog/log"
 	"github.com/tailscale/hujson"
-	"go4.org/netipx"
 	"gopkg.in/yaml.v3"
-	"tailscale.com/envknob"
 	"tailscale.com/tailcfg"
 )
 
@@ -57,8 +54,6 @@ const (
 	ProtocolFC = 133 // Fibre Channel
 )
 
-var featureEnableSSH = envknob.RegisterBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")
-
 // LoadACLPolicy loads the ACL policy from the specify path, and generates the ACL rules.
 func (h *Headscale) LoadACLPolicy(path string) error {
 	log.Debug().
@@ -118,62 +113,39 @@ func (h *Headscale) LoadACLPolicy(path string) error {
 }
 
 func (h *Headscale) UpdateACLRules() error {
-	machines, err := h.ListMachines()
+	rules, err := h.generateACLRules()
 	if err != nil {
 		return err
 	}
 
-	if h.aclPolicy == nil {
-		return errEmptyPolicy
-	}
-
-	rules, err := h.aclPolicy.generateFilterRules(machines, h.cfg.OIDC.StripEmaildomain)
-	if err != nil {
-		return err
-	}
-
 	log.Trace().Interface("ACL", rules).Msg("ACL rules generated")
 	h.aclRules = rules
 
-	if featureEnableSSH() {
-		sshRules, err := h.generateSSHRules()
-		if err != nil {
-			return err
-		}
-		log.Trace().Interface("SSH", sshRules).Msg("SSH rules generated")
-		if h.sshPolicy == nil {
-			h.sshPolicy = &tailcfg.SSHPolicy{}
-		}
-		h.sshPolicy.Rules = sshRules
-	} else if h.aclPolicy != nil && len(h.aclPolicy.SSHs) > 0 {
-		log.Info().Msg("SSH ACLs has been defined, but HEADSCALE_EXPERIMENTAL_FEATURE_SSH is not enabled, this is a unstable feature, check docs before activating")
-	}
-
 	return nil
 }
 
-// generateFilterRules takes a set of machines and an ACLPolicy and generates a
-// set of Tailscale compatible FilterRules used to allow traffic on clients.
-func (pol *ACLPolicy) generateFilterRules(
-	machines []Machine,
-	stripEmailDomain bool,
-) ([]tailcfg.FilterRule, error) {
+func (h *Headscale) generateACLRules() ([]tailcfg.FilterRule, error) {
 	rules := []tailcfg.FilterRule{}
 
-	for index, acl := range pol.ACLs {
+	if h.aclPolicy == nil {
+		return nil, errEmptyPolicy
+	}
+
+	machines, err := h.ListMachines()
+	if err != nil {
+		return nil, err
+	}
+
+	for index, acl := range h.aclPolicy.ACLs {
 		if acl.Action != "accept" {
 			return nil, errInvalidAction
 		}
 
 		srcIPs := []string{}
-		for srcIndex, src := range acl.Sources {
-			srcs, err := pol.getIPsFromSource(src, machines, stripEmailDomain)
+		for innerIndex, src := range acl.Sources {
+			srcs, err := h.generateACLPolicySrcIP(machines, *h.aclPolicy, src)
 			if err != nil {
 				log.Error().
-					Interface("src", src).
-					Int("ACL index", index).
-					Int("Src index", srcIndex).
-					Msgf("Error parsing ACL")
-
+					Msgf("Error parsing ACL %d, Source %d", index, innerIndex)
 				return nil, err
 			}
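Aside: one side of the hunk above only generates SSH rules when an experimental environment knob, registered via tailscale.com/envknob as HEADSCALE_EXPERIMENTAL_FEATURE_SSH, reports true. As a minimal, self-contained sketch of how such a knob behaves on its own (illustrative only, not headscale's actual wiring; it assumes the tailscale.com module is available):

```go
package main

import (
	"fmt"

	"tailscale.com/envknob"
)

// featureEnableSSH mirrors the knob registered in the diff above: the
// returned closure reports whether HEADSCALE_EXPERIMENTAL_FEATURE_SSH is set
// to a true value in the process environment.
var featureEnableSSH = envknob.RegisterBool("HEADSCALE_EXPERIMENTAL_FEATURE_SSH")

func main() {
	if featureEnableSSH() {
		fmt.Println("SSH policy generation enabled")
	} else {
		fmt.Println("SSH policy generation skipped (experimental flag off)")
	}
}
```

Running with HEADSCALE_EXPERIMENTAL_FEATURE_SSH=1 would take the first branch; unset or false takes the second.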
@@ -189,19 +161,16 @@ func (pol *ACLPolicy) generateFilterRules(
 		}
 
 		destPorts := []tailcfg.NetPortRange{}
-		for destIndex, dest := range acl.Destinations {
-			dests, err := pol.getNetPortRangeFromDestination(
-				dest,
+		for innerIndex, dest := range acl.Destinations {
+			dests, err := h.generateACLPolicyDest(
 				machines,
+				*h.aclPolicy,
+				dest,
 				needsWildcard,
-				stripEmailDomain,
 			)
 			if err != nil {
 				log.Error().
-					Interface("dest", dest).
-					Int("ACL index", index).
-					Int("dest index", destIndex).
-					Msgf("Error parsing ACL")
-
+					Msgf("Error parsing ACL %d, Destination %d", index, innerIndex)
 				return nil, err
 			}
@@ -218,192 +187,29 @@ func (pol *ACLPolicy) generateFilterRules(
 	return rules, nil
 }
 
-func (h *Headscale) generateSSHRules() ([]*tailcfg.SSHRule, error) {
-	rules := []*tailcfg.SSHRule{}
-
-	if h.aclPolicy == nil {
-		return nil, errEmptyPolicy
-	}
-
-	machines, err := h.ListMachines()
-	if err != nil {
-		return nil, err
-	}
-
-	acceptAction := tailcfg.SSHAction{
-		Message:                  "",
-		Reject:                   false,
-		Accept:                   true,
-		SessionDuration:          0,
-		AllowAgentForwarding:     false,
-		HoldAndDelegate:          "",
-		AllowLocalPortForwarding: true,
-	}
-
-	rejectAction := tailcfg.SSHAction{
-		Message:                  "",
-		Reject:                   true,
-		Accept:                   false,
-		SessionDuration:          0,
-		AllowAgentForwarding:     false,
-		HoldAndDelegate:          "",
-		AllowLocalPortForwarding: false,
-	}
-
-	for index, sshACL := range h.aclPolicy.SSHs {
-		action := rejectAction
-		switch sshACL.Action {
-		case "accept":
-			action = acceptAction
-		case "check":
-			checkAction, err := sshCheckAction(sshACL.CheckPeriod)
-			if err != nil {
-				log.Error().
-					Msgf("Error parsing SSH %d, check action with unparsable duration '%s'", index, sshACL.CheckPeriod)
-			} else {
-				action = *checkAction
-			}
-		default:
-			log.Error().
-				Msgf("Error parsing SSH %d, unknown action '%s'", index, sshACL.Action)
-
-			return nil, err
-		}
-
-		principals := make([]*tailcfg.SSHPrincipal, 0, len(sshACL.Sources))
-		for innerIndex, rawSrc := range sshACL.Sources {
-			if isWildcard(rawSrc) {
-				principals = append(principals, &tailcfg.SSHPrincipal{
-					Any: true,
-				})
-			} else if isGroup(rawSrc) {
-				users, err := h.aclPolicy.getUsersInGroup(rawSrc, h.cfg.OIDC.StripEmaildomain)
-				if err != nil {
-					log.Error().
-						Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
-
-					return nil, err
-				}
-
-				for _, user := range users {
-					principals = append(principals, &tailcfg.SSHPrincipal{
-						UserLogin: user,
-					})
-				}
-			} else {
-				expandedSrcs, err := h.aclPolicy.expandAlias(
-					machines,
-					rawSrc,
-					h.cfg.OIDC.StripEmaildomain,
-				)
-				if err != nil {
-					log.Error().
-						Msgf("Error parsing SSH %d, Source %d", index, innerIndex)
-
-					return nil, err
-				}
-				for _, expandedSrc := range expandedSrcs.Prefixes() {
-					principals = append(principals, &tailcfg.SSHPrincipal{
-						NodeIP: expandedSrc.Addr().String(),
-					})
-				}
-			}
-		}
-
-		userMap := make(map[string]string, len(sshACL.Users))
-		for _, user := range sshACL.Users {
-			userMap[user] = "="
-		}
-		rules = append(rules, &tailcfg.SSHRule{
-			Principals: principals,
-			SSHUsers:   userMap,
-			Action:     &action,
-		})
-	}
-
-	return rules, nil
-}
-
-func sshCheckAction(duration string) (*tailcfg.SSHAction, error) {
-	sessionLength, err := time.ParseDuration(duration)
-	if err != nil {
-		return nil, err
-	}
-
-	return &tailcfg.SSHAction{
-		Message:                  "",
-		Reject:                   false,
-		Accept:                   true,
-		SessionDuration:          sessionLength,
-		AllowAgentForwarding:     false,
-		HoldAndDelegate:          "",
-		AllowLocalPortForwarding: true,
-	}, nil
-}
-
-// getIPsFromSource returns a set of Source IPs that would be associated
-// with the given src alias.
-func (pol *ACLPolicy) getIPsFromSource(
+func (h *Headscale) generateACLPolicySrcIP(
+	machines []Machine,
+	aclPolicy ACLPolicy,
 	src string,
-	machines []Machine,
-	stripEmaildomain bool,
 ) ([]string, error) {
-	ipSet, err := pol.expandAlias(machines, src, stripEmaildomain)
-	if err != nil {
-		return []string{}, err
-	}
-
-	prefixes := []string{}
-
-	for _, prefix := range ipSet.Prefixes() {
-		prefixes = append(prefixes, prefix.String())
-	}
-
-	return prefixes, nil
+	return expandAlias(machines, aclPolicy, src, h.cfg.OIDC.StripEmaildomain)
 }
 
-// getNetPortRangeFromDestination returns a set of tailcfg.NetPortRange
-// which are associated with the dest alias.
-func (pol *ACLPolicy) getNetPortRangeFromDestination(
-	dest string,
+func (h *Headscale) generateACLPolicyDest(
 	machines []Machine,
+	aclPolicy ACLPolicy,
+	dest string,
 	needsWildcard bool,
-	stripEmaildomain bool,
 ) ([]tailcfg.NetPortRange, error) {
-	var tokens []string
-
-	log.Trace().Str("destination", dest).Msg("generating policy destination")
-
-	// Check if there is a IPv4/6:Port combination, IPv6 has more than
-	// three ":".
-	tokens = strings.Split(dest, ":")
+	tokens := strings.Split(dest, ":")
 	if len(tokens) < expectedTokenItems || len(tokens) > 3 {
-		port := tokens[len(tokens)-1]
-
-		maybeIPv6Str := strings.TrimSuffix(dest, ":"+port)
-		log.Trace().Str("maybeIPv6Str", maybeIPv6Str).Msg("")
-
-		if maybeIPv6, err := netip.ParseAddr(maybeIPv6Str); err != nil && !maybeIPv6.Is6() {
-			log.Trace().Err(err).Msg("trying to parse as IPv6")
-
-			return nil, fmt.Errorf(
-				"failed to parse destination, tokens %v: %w",
-				tokens,
-				errInvalidPortFormat,
-			)
-		} else {
-			tokens = []string{maybeIPv6Str, port}
-		}
+		return nil, errInvalidPortFormat
 	}
-
-	log.Trace().Strs("tokens", tokens).Msg("generating policy destination")
 
 	var alias string
 	// We can have here stuff like:
 	// git-server:*
 	// 192.168.1.0/24:22
-	// fd7a:115c:a1e0::2:22
-	// fd7a:115c:a1e0::2/128:22
 	// tag:montreal-webserver:80,443
 	// tag:api-server:443
 	// example-host-1:*
@@ -413,10 +219,11 @@ func (pol *ACLPolicy) getNetPortRangeFromDestination(
 		alias = fmt.Sprintf("%s:%s", tokens[0], tokens[1])
 	}
 
-	expanded, err := pol.expandAlias(
+	expanded, err := expandAlias(
 		machines,
+		aclPolicy,
 		alias,
-		stripEmaildomain,
+		h.cfg.OIDC.StripEmaildomain,
 	)
 	if err != nil {
 		return nil, err
@@ -427,11 +234,11 @@ func (pol *ACLPolicy) getNetPortRangeFromDestination(
 	}
 
 	dests := []tailcfg.NetPortRange{}
-	for _, dest := range expanded.Prefixes() {
-		for _, port := range *ports {
+	for _, d := range expanded {
+		for _, p := range *ports {
 			pr := tailcfg.NetPortRange{
-				IP:    dest.String(),
-				Ports: port,
+				IP:    d,
+				Ports: p,
 			}
 			dests = append(dests, pr)
 		}
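The two hunks above differ in how a destination such as "git-server:*", "192.168.1.0/24:22" or "fd7a:115c:a1e0::2:22" is split into an alias and a port part: the removed side treats anything with more than three ":" separators as an error only after first trying the last segment as the port and re-parsing the remainder as an IPv6 address, while the added side rejects it outright. A hedged, standalone sketch of that splitting idea (splitDestination is an illustrative name, not a function from the codebase):

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// splitDestination splits a destination like "git-server:*",
// "192.168.1.0/24:22" or "fd7a:115c:a1e0::2:22" into an alias and a port.
// When more than one ":" is present, it assumes the final segment is the
// port and checks that the remainder parses as an IPv6 address.
func splitDestination(dest string) (alias, port string, err error) {
	tokens := strings.Split(dest, ":")
	switch {
	case len(tokens) == 2:
		return tokens[0], tokens[1], nil
	case len(tokens) > 2:
		port = tokens[len(tokens)-1]
		maybeIPv6 := strings.TrimSuffix(dest, ":"+port)
		if addr, perr := netip.ParseAddr(maybeIPv6); perr == nil && addr.Is6() {
			return maybeIPv6, port, nil
		}
		return "", "", fmt.Errorf("cannot split destination %q", dest)
	default:
		return "", "", fmt.Errorf("cannot split destination %q", dest)
	}
}

func main() {
	for _, dest := range []string{"git-server:*", "192.168.1.0/24:22", "fd7a:115c:a1e0::2:22"} {
		alias, port, err := splitDestination(dest)
		fmt.Println(alias, port, err)
	}
}
```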
@@ -453,7 +260,12 @@ func (pol *ACLPolicy) getNetPortRangeFromDestination(
 func parseProtocol(protocol string) ([]int, bool, error) {
 	switch protocol {
 	case "":
-		return nil, false, nil
+		return []int{
+			protocolICMP,
+			protocolIPv6ICMP,
+			protocolTCP,
+			protocolUDP,
+		}, false, nil
 	case "igmp":
 		return []int{protocolIGMP}, true, nil
 	case "ipv4", "ip-in-ip":
@@ -491,81 +303,128 @@ func parseProtocol(protocol string) ([]int, bool, error) {
 }
 
 // expandalias has an input of either
-// - a user
+// - a namespace
 // - a group
 // - a tag
-// - a host
-// - an ip
-// - a cidr
 // and transform these in IPAddresses.
-func (pol *ACLPolicy) expandAlias(
-	machines Machines,
+func expandAlias(
+	machines []Machine,
+	aclPolicy ACLPolicy,
 	alias string,
 	stripEmailDomain bool,
-) (*netipx.IPSet, error) {
-	if isWildcard(alias) {
-		return parseIPSet("*", nil)
+) ([]string, error) {
+	ips := []string{}
+	if alias == "*" {
+		return []string{"*"}, nil
 	}
 
-	build := netipx.IPSetBuilder{}
-
 	log.Debug().
 		Str("alias", alias).
 		Msg("Expanding")
 
-	// if alias is a group
-	if isGroup(alias) {
-		return pol.getIPsFromGroup(alias, machines, stripEmailDomain)
+	if strings.HasPrefix(alias, "group:") {
+		namespaces, err := expandGroup(aclPolicy, alias, stripEmailDomain)
+		if err != nil {
+			return ips, err
+		}
+		for _, n := range namespaces {
+			nodes := filterMachinesByNamespace(machines, n)
+			for _, node := range nodes {
+				ips = append(ips, node.IPAddresses.ToStringSlice()...)
+			}
+		}
+
+		return ips, nil
 	}
 
-	// if alias is a tag
-	if isTag(alias) {
-		return pol.getIPsFromTag(alias, machines, stripEmailDomain)
+	if strings.HasPrefix(alias, "tag:") {
+		// check for forced tags
+		for _, machine := range machines {
+			if contains(machine.ForcedTags, alias) {
+				ips = append(ips, machine.IPAddresses.ToStringSlice()...)
+			}
+		}
+
+		// find tag owners
+		owners, err := expandTagOwners(aclPolicy, alias, stripEmailDomain)
+		if err != nil {
+			if errors.Is(err, errInvalidTag) {
+				if len(ips) == 0 {
+					return ips, fmt.Errorf(
+						"%w. %v isn't owned by a TagOwner and no forced tags are defined",
+						errInvalidTag,
+						alias,
+					)
+				}
+
+				return ips, nil
+			} else {
+				return ips, err
+			}
+		}
+
+		// filter out machines per tag owner
+		for _, namespace := range owners {
+			machines := filterMachinesByNamespace(machines, namespace)
+			for _, machine := range machines {
+				hi := machine.GetHostInfo()
+				if contains(hi.RequestTags, alias) {
+					ips = append(ips, machine.IPAddresses.ToStringSlice()...)
+				}
+			}
+		}
+
+		return ips, nil
 	}
 
-	// if alias is a user
-	if ips, err := pol.getIPsForUser(alias, machines, stripEmailDomain); ips != nil {
-		return ips, err
+	// if alias is a namespace
+	nodes := filterMachinesByNamespace(machines, alias)
+	nodes = excludeCorrectlyTaggedNodes(aclPolicy, nodes, alias, stripEmailDomain)
+
+	for _, n := range nodes {
+		ips = append(ips, n.IPAddresses.ToStringSlice()...)
+	}
+	if len(ips) > 0 {
+		return ips, nil
 	}
 
 	// if alias is an host
-	// Note, this is recursive.
-	if h, ok := pol.Hosts[alias]; ok {
-		log.Trace().Str("host", h.String()).Msg("expandAlias got hosts entry")
-
-		return pol.expandAlias(machines, h.String(), stripEmailDomain)
+	if h, ok := aclPolicy.Hosts[alias]; ok {
+		return []string{h.String()}, nil
 	}
 
 	// if alias is an IP
-	if ip, err := netip.ParseAddr(alias); err == nil {
-		return pol.getIPsFromSingleIP(ip, machines)
+	ip, err := netip.ParseAddr(alias)
+	if err == nil {
+		return []string{ip.String()}, nil
 	}
 
-	// if alias is an IP Prefix (CIDR)
-	if prefix, err := netip.ParsePrefix(alias); err == nil {
-		return pol.getIPsFromIPPrefix(prefix, machines)
+	// if alias is an CIDR
+	cidr, err := netip.ParsePrefix(alias)
+	if err == nil {
+		return []string{cidr.String()}, nil
 	}
 
 	log.Warn().Msgf("No IPs found with the alias %v", alias)
 
-	return build.IPSet()
+	return ips, nil
 }
 
 // excludeCorrectlyTaggedNodes will remove from the list of input nodes the ones
-// that are correctly tagged since they should not be listed as being in the user
-// we assume in this function that we only have nodes from 1 user.
+// that are correctly tagged since they should not be listed as being in the namespace
+// we assume in this function that we only have nodes from 1 namespace.
 func excludeCorrectlyTaggedNodes(
-	aclPolicy *ACLPolicy,
+	aclPolicy ACLPolicy,
 	nodes []Machine,
-	user string,
+	namespace string,
 	stripEmailDomain bool,
 ) []Machine {
 	out := []Machine{}
 	tags := []string{}
 	for tag := range aclPolicy.TagOwners {
-		owners, _ := getTagOwners(aclPolicy, user, stripEmailDomain)
-		ns := append(owners, user)
-		if contains(ns, user) {
+		owners, _ := expandTagOwners(aclPolicy, namespace, stripEmailDomain)
+		ns := append(owners, namespace)
+		if contains(ns, namespace) {
 			tags = append(tags, tag)
 		}
 	}
@@ -593,7 +452,7 @@ func excludeCorrectlyTaggedNodes(
 }
 
 func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
-	if isWildcard(portsStr) {
+	if portsStr == "*" {
 		return &[]tailcfg.PortRange{
 			{First: portRangeBegin, Last: portRangeEnd},
 		}, nil
@@ -605,7 +464,6 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
 
 	ports := []tailcfg.PortRange{}
 	for _, portStr := range strings.Split(portsStr, ",") {
-		log.Trace().Msgf("parsing portstring: %s", portStr)
 		rang := strings.Split(portStr, "-")
 		switch len(rang) {
 		case 1:
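expandPorts, shown across the two hunks above, turns a port specification such as "22,80,8000-8080" (or "*") into PortRange values. A rough, self-contained illustration of the same parsing idea, using plain uint16 pairs instead of tailcfg.PortRange (parsePorts is an invented name for this sketch):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePorts converts "22,80,8000-8080" into [first,last] pairs; "*" means
// the full port range. Each comma-separated item is either a single port or
// a "first-last" range.
func parsePorts(portsStr string) ([][2]uint16, error) {
	if portsStr == "*" {
		return [][2]uint16{{0, 65535}}, nil
	}
	var out [][2]uint16
	for _, part := range strings.Split(portsStr, ",") {
		bounds := strings.Split(part, "-")
		switch len(bounds) {
		case 1:
			p, err := strconv.ParseUint(bounds[0], 10, 16)
			if err != nil {
				return nil, err
			}
			out = append(out, [2]uint16{uint16(p), uint16(p)})
		case 2:
			first, err := strconv.ParseUint(bounds[0], 10, 16)
			if err != nil {
				return nil, err
			}
			last, err := strconv.ParseUint(bounds[1], 10, 16)
			if err != nil {
				return nil, err
			}
			out = append(out, [2]uint16{uint16(first), uint16(last)})
		default:
			return nil, fmt.Errorf("invalid port range %q", part)
		}
	}
	return out, nil
}

func main() {
	ranges, err := parsePorts("22,80,8000-8080")
	fmt.Println(ranges, err)
}
```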
@@ -640,10 +498,10 @@ func expandPorts(portsStr string, needsWildcard bool) (*[]tailcfg.PortRange, error) {
 	return &ports, nil
 }
 
-func filterMachinesByUser(machines []Machine, user string) []Machine {
+func filterMachinesByNamespace(machines []Machine, namespace string) []Machine {
 	out := []Machine{}
 	for _, machine := range machines {
-		if machine.User.Name == user {
+		if machine.Namespace.Name == namespace {
 			out = append(out, machine)
 		}
 	}
@@ -651,15 +509,15 @@ func filterMachinesByUser(machines []Machine, user string) []Machine {
 	return out
 }
 
-// getTagOwners will return a list of user. An owner can be either a user or a group
+// expandTagOwners will return a list of namespace. An owner can be either a namespace or a group
 // a group cannot be composed of groups.
-func getTagOwners(
-	pol *ACLPolicy,
+func expandTagOwners(
+	aclPolicy ACLPolicy,
 	tag string,
 	stripEmailDomain bool,
 ) ([]string, error) {
 	var owners []string
-	ows, ok := pol.TagOwners[tag]
+	ows, ok := aclPolicy.TagOwners[tag]
 	if !ok {
 		return []string{}, fmt.Errorf(
 			"%w. %v isn't owned by a TagOwner. Please add one first. https://tailscale.com/kb/1018/acls/#tag-owners",
@@ -668,8 +526,8 @@ func getTagOwners(
 		)
 	}
 	for _, owner := range ows {
-		if isGroup(owner) {
-			gs, err := pol.getUsersInGroup(owner, stripEmailDomain)
+		if strings.HasPrefix(owner, "group:") {
+			gs, err := expandGroup(aclPolicy, owner, stripEmailDomain)
 			if err != nil {
 				return []string{}, err
 			}
@@ -682,15 +540,15 @@ func getTagOwners(
 	return owners, nil
 }
 
-// getUsersInGroup will return the list of user inside the group
+// expandGroup will return the list of namespace inside the group
 // after some validation.
-func (pol *ACLPolicy) getUsersInGroup(
+func expandGroup(
+	aclPolicy ACLPolicy,
 	group string,
 	stripEmailDomain bool,
 ) ([]string, error) {
-	users := []string{}
-	log.Trace().Caller().Interface("pol", pol).Msg("test")
-	aclGroups, ok := pol.Groups[group]
+	outGroups := []string{}
+	aclGroups, ok := aclPolicy.Groups[group]
 	if !ok {
 		return []string{}, fmt.Errorf(
 			"group %v isn't registered. %w",
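Both versions of the group-expansion helper above and below enforce the same rule: a group may list plain user/namespace names but never another group. A minimal standalone sketch of that validation, with a map standing in for the parsed ACL policy (expandGroupSketch is illustrative only, not the project's API):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

var errInvalidGroup = errors.New("invalid group")

// expandGroupSketch returns the members of a group, rejecting unknown groups
// and any member that is itself a "group:" reference (no nested groups).
func expandGroupSketch(groups map[string][]string, group string) ([]string, error) {
	members, ok := groups[group]
	if !ok {
		return nil, fmt.Errorf("group %v isn't registered: %w", group, errInvalidGroup)
	}
	for _, member := range members {
		if strings.HasPrefix(member, "group:") {
			return nil, fmt.Errorf("a group cannot be composed of groups: %w", errInvalidGroup)
		}
	}
	return members, nil
}

func main() {
	groups := map[string][]string{
		"group:admins": {"alice", "bob"},
		"group:nested": {"group:admins"},
	}
	fmt.Println(expandGroupSketch(groups, "group:admins"))
	fmt.Println(expandGroupSketch(groups, "group:nested"))
}
```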
@@ -699,7 +557,7 @@ func (pol *ACLPolicy) getUsersInGroup(
 		)
 	}
 	for _, group := range aclGroups {
-		if isGroup(group) {
+		if strings.HasPrefix(group, "group:") {
 			return []string{}, fmt.Errorf(
 				"%w. A group cannot be composed of groups. https://tailscale.com/kb/1018/acls/#groups",
 				errInvalidGroup,
@@ -713,151 +571,8 @@ func (pol *ACLPolicy) getUsersInGroup(
 				errInvalidGroup,
 			)
 		}
-		users = append(users, grp)
+		outGroups = append(outGroups, grp)
 	}
 
-	return users, nil
+	return outGroups, nil
-}
-
-func (pol *ACLPolicy) getIPsFromGroup(
-	group string,
-	machines Machines,
-	stripEmailDomain bool,
-) (*netipx.IPSet, error) {
-	build := netipx.IPSetBuilder{}
-
-	users, err := pol.getUsersInGroup(group, stripEmailDomain)
-	if err != nil {
-		return &netipx.IPSet{}, err
-	}
-	for _, user := range users {
-		filteredMachines := filterMachinesByUser(machines, user)
-		for _, machine := range filteredMachines {
-			machine.IPAddresses.AppendToIPSet(&build)
-		}
-	}
-
-	return build.IPSet()
-}
-
-func (pol *ACLPolicy) getIPsFromTag(
-	alias string,
-	machines Machines,
-	stripEmailDomain bool,
-) (*netipx.IPSet, error) {
-	build := netipx.IPSetBuilder{}
-
-	// check for forced tags
-	for _, machine := range machines {
-		if contains(machine.ForcedTags, alias) {
-			machine.IPAddresses.AppendToIPSet(&build)
-		}
-	}
-
-	// find tag owners
-	owners, err := getTagOwners(pol, alias, stripEmailDomain)
-	if err != nil {
-		if errors.Is(err, errInvalidTag) {
-			ipSet, _ := build.IPSet()
-			if len(ipSet.Prefixes()) == 0 {
-				return ipSet, fmt.Errorf(
-					"%w. %v isn't owned by a TagOwner and no forced tags are defined",
-					errInvalidTag,
-					alias,
-				)
-			}
-
-			return build.IPSet()
-		} else {
-			return nil, err
-		}
-	}
-
-	// filter out machines per tag owner
-	for _, user := range owners {
-		machines := filterMachinesByUser(machines, user)
-		for _, machine := range machines {
-			hi := machine.GetHostInfo()
-			if contains(hi.RequestTags, alias) {
-				machine.IPAddresses.AppendToIPSet(&build)
-			}
-		}
-	}
-
-	return build.IPSet()
-}
-
-func (pol *ACLPolicy) getIPsForUser(
-	user string,
-	machines Machines,
-	stripEmailDomain bool,
-) (*netipx.IPSet, error) {
-	build := netipx.IPSetBuilder{}
-
-	filteredMachines := filterMachinesByUser(machines, user)
-	filteredMachines = excludeCorrectlyTaggedNodes(pol, filteredMachines, user, stripEmailDomain)
-
-	// shortcurcuit if we have no machines to get ips from.
-	if len(filteredMachines) == 0 {
-		return nil, nil //nolint
-	}
-
-	for _, machine := range filteredMachines {
-		machine.IPAddresses.AppendToIPSet(&build)
-	}
-
-	return build.IPSet()
-}
-
-func (pol *ACLPolicy) getIPsFromSingleIP(
-	ip netip.Addr,
-	machines Machines,
-) (*netipx.IPSet, error) {
-	log.Trace().Str("ip", ip.String()).Msg("expandAlias got ip")
-
-	matches := machines.FilterByIP(ip)
-
-	build := netipx.IPSetBuilder{}
-	build.Add(ip)
-
-	for _, machine := range matches {
-		machine.IPAddresses.AppendToIPSet(&build)
-	}
-
-	return build.IPSet()
-}
-
-func (pol *ACLPolicy) getIPsFromIPPrefix(
-	prefix netip.Prefix,
-	machines Machines,
-) (*netipx.IPSet, error) {
-	log.Trace().Str("prefix", prefix.String()).Msg("expandAlias got prefix")
-	build := netipx.IPSetBuilder{}
-	build.AddPrefix(prefix)
-
-	// This is suboptimal and quite expensive, but if we only add the prefix, we will miss all the relevant IPv6
-	// addresses for the hosts that belong to tailscale. This doesnt really affect stuff like subnet routers.
|
|
||||||
for _, machine := range machines {
|
|
||||||
for _, ip := range machine.IPAddresses {
|
|
||||||
// log.Trace().
|
|
||||||
// Msgf("checking if machine ip (%s) is part of prefix (%s): %v, is single ip prefix (%v), addr: %s", ip.String(), prefix.String(), prefix.Contains(ip), prefix.IsSingleIP(), prefix.Addr().String())
|
|
||||||
if prefix.Contains(ip) {
|
|
||||||
machine.IPAddresses.AppendToIPSet(&build)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return build.IPSet()
|
|
||||||
}
|
|
||||||
|
|
||||||
func isWildcard(str string) bool {
|
|
||||||
return str == "*"
|
|
||||||
}
|
|
||||||
|
|
||||||
func isGroup(str string) bool {
|
|
||||||
return strings.HasPrefix(str, "group:")
|
|
||||||
}
|
|
||||||
|
|
||||||
func isTag(str string) bool {
|
|
||||||
return strings.HasPrefix(str, "tag:")
|
|
||||||
}
|
}
|
||||||
|
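The getIPsFromIPPrefix helper removed above states its own trade-off: adding only the prefix to the IP set would drop the matching hosts' other addresses (typically their IPv6 counterparts), so it walks every machine and, when one of its addresses falls inside the prefix, adds that machine's addresses too. A minimal, self-contained sketch of the same idea follows; it uses go4.org/netipx like the code above, but the machineAddrs slice-of-slices stands in for headscale's Machines type and is purely illustrative.

package main

import (
	"fmt"
	"net/netip"

	"go4.org/netipx"
)

// expandPrefix seeds the set with the prefix itself and then pulls in every
// address of any machine that has at least one address inside the prefix, so
// a match on a node's IPv4 address also brings along its IPv6 address.
func expandPrefix(prefix netip.Prefix, machineAddrs [][]netip.Addr) (*netipx.IPSet, error) {
	var build netipx.IPSetBuilder
	build.AddPrefix(prefix)

	for _, addrs := range machineAddrs {
		for _, ip := range addrs {
			if prefix.Contains(ip) {
				// One hit is enough to include the whole machine.
				for _, a := range addrs {
					build.Add(a)
				}
				break
			}
		}
	}

	return build.IPSet()
}

func main() {
	prefix := netip.MustParsePrefix("100.64.0.0/24")
	machines := [][]netip.Addr{
		{netip.MustParseAddr("100.64.0.1"), netip.MustParseAddr("fd7a:115c:a1e0::1")},
		{netip.MustParseAddr("100.65.0.1"), netip.MustParseAddr("fd7a:115c:a1e0::2")},
	}

	set, err := expandPrefix(prefix, machines)
	if err != nil {
		panic(err)
	}
	fmt.Println(set.Prefixes()) // the /24 plus the first machine's IPv6 address
}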
acls_test.go: 815 changed lines (diff suppressed because it is too large)
@@ -11,13 +11,11 @@ import (

 // ACLPolicy represents a Tailscale ACL Policy.
 type ACLPolicy struct {
     Groups    Groups    `json:"groups"    yaml:"groups"`
     Hosts     Hosts     `json:"hosts"     yaml:"hosts"`
     TagOwners TagOwners `json:"tagOwners" yaml:"tagOwners"`
     ACLs      []ACL     `json:"acls"      yaml:"acls"`
     Tests     []ACLTest `json:"tests"     yaml:"tests"`
-    AutoApprovers AutoApprovers `json:"autoApprovers" yaml:"autoApprovers"`
-    SSHs          []SSH         `json:"ssh"           yaml:"ssh"`
 }

 // ACL is a basic rule for the ACL Policy.
@@ -34,7 +32,7 @@ type Groups map[string][]string
 // Hosts are alias for IP addresses or subnets.
 type Hosts map[string]netip.Prefix

-// TagOwners specify what users (users?) are allow to use certain tags.
+// TagOwners specify what users (namespaces?) are allow to use certain tags.
 type TagOwners map[string][]string

 // ACLTest is not implemented, but should be use to check if a certain rule is allowed.
@@ -44,22 +42,6 @@ type ACLTest struct {
     Deny []string `json:"deny,omitempty" yaml:"deny,omitempty"`
 }

-// AutoApprovers specify which users (users?), groups or tags have their advertised routes
-// or exit node status automatically enabled.
-type AutoApprovers struct {
-    Routes   map[string][]string `json:"routes"   yaml:"routes"`
-    ExitNode []string            `json:"exitNode" yaml:"exitNode"`
-}
-
-// SSH controls who can ssh into which machines.
-type SSH struct {
-    Action       string   `json:"action"      yaml:"action"`
-    Sources      []string `json:"src"         yaml:"src"`
-    Destinations []string `json:"dst"         yaml:"dst"`
-    Users        []string `json:"users"       yaml:"users"`
-    CheckPeriod  string   `json:"checkPeriod,omitempty" yaml:"checkPeriod,omitempty"`
-}
-
 // UnmarshalJSON allows to parse the Hosts directly into netip objects.
 func (hosts *Hosts) UnmarshalJSON(data []byte) error {
     newHosts := Hosts{}
@@ -111,35 +93,10 @@ func (hosts *Hosts) UnmarshalYAML(data []byte) error {
 }

 // IsZero is perhaps a bit naive here.
-func (pol ACLPolicy) IsZero() bool {
-    if len(pol.Groups) == 0 && len(pol.Hosts) == 0 && len(pol.ACLs) == 0 {
+func (policy ACLPolicy) IsZero() bool {
+    if len(policy.Groups) == 0 && len(policy.Hosts) == 0 && len(policy.ACLs) == 0 {
         return true
     }

     return false
 }
-
-// Returns the list of autoApproving users, groups or tags for a given IPPrefix.
-func (autoApprovers *AutoApprovers) GetRouteApprovers(
-    prefix netip.Prefix,
-) ([]string, error) {
-    if prefix.Bits() == 0 {
-        return autoApprovers.ExitNode, nil // 0.0.0.0/0, ::/0 or equivalent
-    }
-
-    approverAliases := []string{}
-
-    for autoApprovedPrefix, autoApproverAliases := range autoApprovers.Routes {
-        autoApprovedPrefix, err := netip.ParsePrefix(autoApprovedPrefix)
-        if err != nil {
-            return nil, err
-        }
-
-        if prefix.Bits() >= autoApprovedPrefix.Bits() &&
-            autoApprovedPrefix.Contains(prefix.Masked().Addr()) {
-            approverAliases = append(approverAliases, autoApproverAliases...)
-        }
-    }
-
-    return approverAliases, nil
-}
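The removed GetRouteApprovers carries the whole auto-approval rule: a zero-bit prefix (0.0.0.0/0 or ::/0) is answered from the exitNode list, and any other advertised route collects the aliases of every configured route whose prefix is at least as broad and contains it. The short sketch below replays that containment test on a made-up routes map; the policy content is an assumption for illustration only.

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Hypothetical autoApprovers.routes section from an ACL policy.
	routes := map[string][]string{
		"10.0.0.0/8":     {"group:infra"},
		"10.1.0.0/16":    {"tag:router"},
		"192.168.0.0/24": {"alice"},
	}

	// An advertised subnet route we want approvers for.
	advertised := netip.MustParsePrefix("10.1.2.0/24")

	approvers := []string{}
	for cidr, aliases := range routes {
		approved := netip.MustParsePrefix(cidr)
		// Same test as GetRouteApprovers: the advertised route must be at least
		// as specific as the configured one, and fall inside it.
		if advertised.Bits() >= approved.Bits() && approved.Contains(advertised.Masked().Addr()) {
			approvers = append(approvers, aliases...)
		}
	}

	fmt.Println(approvers) // [group:infra tag:router] (map order may vary)
}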
api.go: 34 changed lines
@@ -9,7 +9,6 @@ import (

     "github.com/gorilla/mux"
     "github.com/rs/zerolog/log"
-    "tailscale.com/types/key"
 )

 const (
@@ -78,7 +77,7 @@ var registerWebAPITemplate = template.Must(
     <p>
         Run the command below in the headscale server to add this machine to your network:
     </p>
-    <pre><code>headscale nodes register --user USERNAME --key {{.Key}}</code></pre>
+    <pre><code>headscale -n NAMESPACE nodes register --key {{.Key}}</code></pre>
 </body>
 </html>
 `))
@@ -94,34 +93,7 @@ func (h *Headscale) RegisterWebAPI(
 ) {
     vars := mux.Vars(req)
     nodeKeyStr, ok := vars["nkey"]
+    if !ok || nodeKeyStr == "" {
-    if !NodePublicKeyRegex.Match([]byte(nodeKeyStr)) {
-        log.Warn().Str("node_key", nodeKeyStr).Msg("Invalid node key passed to registration url")
-
-        writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
-        writer.WriteHeader(http.StatusUnauthorized)
-        _, err := writer.Write([]byte("Unauthorized"))
-        if err != nil {
-            log.Error().
-                Caller().
-                Err(err).
-                Msg("Failed to write response")
-        }
-
-        return
-    }
-
-    // We need to make sure we dont open for XSS style injections, if the parameter that
-    // is passed as a key is not parsable/validated as a NodePublic key, then fail to render
-    // the template and log an error.
-    var nodeKey key.NodePublic
-    err := nodeKey.UnmarshalText(
-        []byte(NodePublicKeyEnsurePrefix(nodeKeyStr)),
-    )
-
-    if !ok || nodeKeyStr == "" || err != nil {
-        log.Warn().Err(err).Msg("Failed to parse incoming nodekey")
-
         writer.Header().Set("Content-Type", "text/plain; charset=utf-8")
         writer.WriteHeader(http.StatusBadRequest)
         _, err := writer.Write([]byte("Wrong params"))
@@ -158,7 +130,7 @@ func (h *Headscale) RegisterWebAPI(

     writer.Header().Set("Content-Type", "text/html; charset=utf-8")
     writer.WriteHeader(http.StatusOK)
-    _, err = writer.Write(content.Bytes())
+    _, err := writer.Write(content.Bytes())
     if err != nil {
         log.Error().
             Caller().
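The block removed from RegisterWebAPI above validates the {nkey} path parameter twice before it can reach the HTML template: first against a node-key regular expression, then by parsing it into a key.NodePublic, so a malformed value is rejected instead of being reflected into the page. A rough, self-contained sketch of that validate-then-parse step follows; the exact pattern and the "nodekey:" prefix handling are assumptions modelled on the code above, not headscale's actual NodePublicKeyRegex and NodePublicKeyEnsurePrefix helpers.

package main

import (
	"fmt"
	"regexp"
	"strings"

	"tailscale.com/types/key"
)

// nodeKeyPattern mirrors the idea of NodePublicKeyRegex: an optional
// "nodekey:" prefix followed by 64 hex characters. Treat this as an
// assumption, not the real headscale regexp.
var nodeKeyPattern = regexp.MustCompile(`^(nodekey:)?[a-fA-F0-9]{64}$`)

// parseNodeKey validates the raw URL parameter and only then unmarshals it,
// so an unparsable value never reaches template rendering.
func parseNodeKey(raw string) (key.NodePublic, error) {
	var nodeKey key.NodePublic
	if !nodeKeyPattern.MatchString(raw) {
		return nodeKey, fmt.Errorf("invalid node key %q", raw)
	}
	if !strings.HasPrefix(raw, "nodekey:") {
		raw = "nodekey:" + raw // what NodePublicKeyEnsurePrefix appears to do
	}
	if err := nodeKey.UnmarshalText([]byte(raw)); err != nil {
		return nodeKey, err
	}

	return nodeKey, nil
}

func main() {
	nk, err := parseNodeKey(strings.Repeat("ab", 32))
	fmt.Println(nk, err)
}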
@@ -1,8 +1,6 @@
 package headscale

 import (
-    "time"
-
     "github.com/rs/zerolog/log"
     "tailscale.com/tailcfg"
 )
@@ -15,7 +13,7 @@ func (h *Headscale) generateMapResponse(
         Str("func", "generateMapResponse").
         Str("machine", mapRequest.Hostinfo.Hostname).
         Msg("Creating Map response")
-    node, err := h.toNode(*machine, h.cfg.BaseDomain, h.cfg.DNSConfig)
+    node, err := machine.toNode(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
     if err != nil {
         log.Error().
             Caller().
@@ -37,9 +35,9 @@ func (h *Headscale) generateMapResponse(
         return nil, err
     }

-    profiles := h.getMapResponseUserProfiles(*machine, peers)
+    profiles := getMapResponseUserProfiles(*machine, peers)

-    nodePeers, err := h.toNodes(peers, h.cfg.BaseDomain, h.cfg.DNSConfig)
+    nodePeers, err := peers.toNodes(h.cfg.BaseDomain, h.cfg.DNSConfig, true)
     if err != nil {
         log.Error().
             Caller().
@@ -57,46 +55,15 @@ func (h *Headscale) generateMapResponse(
         peers,
     )

-    now := time.Now()
-
     resp := tailcfg.MapResponse{
         KeepAlive: false,
         Node:      node,
+        Peers:     nodePeers,
+        DNSConfig: dnsConfig,
+        Domain:    h.cfg.BaseDomain,
-        // TODO: Only send if updated
-        DERPMap: h.DERPMap,
-
-        // TODO: Only send if updated
-        Peers: nodePeers,
-
-        // TODO(kradalby): Implement:
-        // https://github.com/tailscale/tailscale/blob/main/tailcfg/tailcfg.go#L1351-L1374
-        // PeersChanged
-        // PeersRemoved
-        // PeersChangedPatch
-        // PeerSeenChange
-        // OnlineChange
-
-        // TODO: Only send if updated
-        DNSConfig: dnsConfig,
-
-        // TODO: Only send if updated
-        Domain: h.cfg.BaseDomain,
-
-        // Do not instruct clients to collect services, we do not
-        // support or do anything with them
-        CollectServices: "false",
-
-        // TODO: Only send if updated
         PacketFilter: h.aclRules,
+        DERPMap:      h.DERPMap,
         UserProfiles: profiles,
-
-        // TODO: Only send if updated
-        SSHPolicy: h.sshPolicy,
-
-        ControlTime: &now,
-
         Debug: &tailcfg.Debug{
             DisableLogTail:      !h.cfg.LogTail.Enabled,
             RandomizeClientPort: h.cfg.RandomizeClientPort,
@@ -29,7 +29,7 @@ type APIKey struct {
     LastSeen *time.Time
 }

-// CreateAPIKey creates a new ApiKey in a user, and returns it.
+// CreateAPIKey creates a new ApiKey in a namespace, and returns it.
 func (h *Headscale) CreateAPIKey(
     expiration *time.Time,
 ) (string, *APIKey, error) {
@@ -64,7 +64,7 @@ func (h *Headscale) CreateAPIKey(
     return keyStr, &key, nil
 }

-// ListAPIKeys returns the list of ApiKeys for a user.
+// ListAPIKeys returns the list of ApiKeys for a namespace.
 func (h *Headscale) ListAPIKeys() ([]APIKey, error) {
     keys := []APIKey{}
     if err := h.db.Find(&keys).Error; err != nil {
app.go: 202 changed lines
@@ -11,7 +11,6 @@ import (
|
|||||||
"os"
|
"os"
|
||||||
"os/signal"
|
"os/signal"
|
||||||
"sort"
|
"sort"
|
||||||
"strconv"
|
|
||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
"syscall"
|
"syscall"
|
||||||
@@ -25,7 +24,7 @@ import (
|
|||||||
"github.com/patrickmn/go-cache"
|
"github.com/patrickmn/go-cache"
|
||||||
zerolog "github.com/philip-bui/grpc-zerolog"
|
zerolog "github.com/philip-bui/grpc-zerolog"
|
||||||
"github.com/prometheus/client_golang/prometheus/promhttp"
|
"github.com/prometheus/client_golang/prometheus/promhttp"
|
||||||
"github.com/puzpuzpuz/xsync/v2"
|
"github.com/puzpuzpuz/xsync"
|
||||||
zl "github.com/rs/zerolog"
|
zl "github.com/rs/zerolog"
|
||||||
"github.com/rs/zerolog/log"
|
"github.com/rs/zerolog/log"
|
||||||
"golang.org/x/crypto/acme"
|
"golang.org/x/crypto/acme"
|
||||||
@@ -52,6 +51,10 @@ const (
|
|||||||
errUnsupportedLetsEncryptChallengeType = Error(
|
errUnsupportedLetsEncryptChallengeType = Error(
|
||||||
"unknown value for Lets Encrypt challenge type",
|
"unknown value for Lets Encrypt challenge type",
|
||||||
)
|
)
|
||||||
|
|
||||||
|
ErrFailedPrivateKey = Error("failed to read or create private key")
|
||||||
|
ErrFailedNoisePrivateKey = Error("failed to read or create Noise protocol private key")
|
||||||
|
ErrSamePrivateKeys = Error("private key and noise private key are the same")
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
@@ -81,14 +84,15 @@ type Headscale struct {
|
|||||||
privateKey *key.MachinePrivate
|
privateKey *key.MachinePrivate
|
||||||
noisePrivateKey *key.MachinePrivate
|
noisePrivateKey *key.MachinePrivate
|
||||||
|
|
||||||
|
noiseMux *mux.Router
|
||||||
|
|
||||||
DERPMap *tailcfg.DERPMap
|
DERPMap *tailcfg.DERPMap
|
||||||
DERPServer *DERPServer
|
DERPServer *DERPServer
|
||||||
|
|
||||||
aclPolicy *ACLPolicy
|
aclPolicy *ACLPolicy
|
||||||
aclRules []tailcfg.FilterRule
|
aclRules []tailcfg.FilterRule
|
||||||
sshPolicy *tailcfg.SSHPolicy
|
|
||||||
|
|
||||||
lastStateChange *xsync.MapOf[string, time.Time]
|
lastStateChange *xsync.MapOf[time.Time]
|
||||||
|
|
||||||
oidcProvider *oidc.Provider
|
oidcProvider *oidc.Provider
|
||||||
oauth2Config *oauth2.Config
|
oauth2Config *oauth2.Config
|
||||||
@@ -101,20 +105,41 @@ type Headscale struct {
|
|||||||
pollNetMapStreamWG sync.WaitGroup
|
pollNetMapStreamWG sync.WaitGroup
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Look up the TLS constant relative to user-supplied TLS client
|
||||||
|
// authentication mode. If an unknown mode is supplied, the default
|
||||||
|
// value, tls.RequireAnyClientCert, is returned. The returned boolean
|
||||||
|
// indicates if the supplied mode was valid.
|
||||||
|
func LookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
|
||||||
|
switch mode {
|
||||||
|
case DisabledClientAuth:
|
||||||
|
// Client cert is _not_ required.
|
||||||
|
return tls.NoClientCert, true
|
||||||
|
case RelaxedClientAuth:
|
||||||
|
// Client cert required, but _not verified_.
|
||||||
|
return tls.RequireAnyClientCert, true
|
||||||
|
case EnforcedClientAuth:
|
||||||
|
// Client cert is _required and verified_.
|
||||||
|
return tls.RequireAndVerifyClientCert, true
|
||||||
|
default:
|
||||||
|
// Return the default when an unknown value is supplied.
|
||||||
|
return tls.RequireAnyClientCert, false
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func NewHeadscale(cfg *Config) (*Headscale, error) {
|
func NewHeadscale(cfg *Config) (*Headscale, error) {
|
||||||
privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
|
privateKey, err := readOrCreatePrivateKey(cfg.PrivateKeyPath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("failed to read or create private key: %w", err)
|
return nil, ErrFailedPrivateKey
|
||||||
}
|
}
|
||||||
|
|
||||||
// TS2021 requires to have a different key from the legacy protocol.
|
// TS2021 requires to have a different key from the legacy protocol.
|
||||||
noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
|
noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("failed to read or create Noise protocol private key: %w", err)
|
return nil, ErrFailedNoisePrivateKey
|
||||||
}
|
}
|
||||||
|
|
||||||
if privateKey.Equal(*noisePrivateKey) {
|
if privateKey.Equal(*noisePrivateKey) {
|
||||||
return nil, fmt.Errorf("private key and noise private key are the same: %w", err)
|
return nil, ErrSamePrivateKeys
|
||||||
}
|
}
|
||||||
|
|
||||||
var dbString string
|
var dbString string
|
||||||
@@ -127,12 +152,8 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
|
|||||||
cfg.DBuser,
|
cfg.DBuser,
|
||||||
)
|
)
|
||||||
|
|
||||||
if sslEnabled, err := strconv.ParseBool(cfg.DBssl); err == nil {
|
if !cfg.DBssl {
|
||||||
if !sslEnabled {
|
dbString += " sslmode=disable"
|
||||||
dbString += " sslmode=disable"
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
dbString += fmt.Sprintf(" sslmode=%s", cfg.DBssl)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if cfg.DBport != 0 {
|
if cfg.DBport != 0 {
|
||||||
@@ -162,7 +183,6 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
|
|||||||
aclRules: tailcfg.FilterAllowAll, // default allowall
|
aclRules: tailcfg.FilterAllowAll, // default allowall
|
||||||
registrationCache: registrationCache,
|
registrationCache: registrationCache,
|
||||||
pollNetMapStreamWG: sync.WaitGroup{},
|
pollNetMapStreamWG: sync.WaitGroup{},
|
||||||
lastStateChange: xsync.NewMapOf[time.Time](),
|
|
||||||
}
|
}
|
||||||
|
|
||||||
err = app.initDB()
|
err = app.initDB()
|
||||||
@@ -173,11 +193,7 @@ func NewHeadscale(cfg *Config) (*Headscale, error) {
|
|||||||
if cfg.OIDC.Issuer != "" {
|
if cfg.OIDC.Issuer != "" {
|
||||||
err = app.initOIDC()
|
err = app.initOIDC()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
if cfg.OIDC.OnlyStartIfOIDCIsAvailable {
|
return nil, err
|
||||||
return nil, err
|
|
||||||
} else {
|
|
||||||
log.Warn().Err(err).Msg("failed to set up OIDC provider, falling back to CLI based authentication")
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -218,47 +234,29 @@ func (h *Headscale) expireEphemeralNodes(milliSeconds int64) {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// expireExpiredMachines expires machines that have an explicit expiry set
|
|
||||||
// after that expiry time has passed.
|
|
||||||
func (h *Headscale) expireExpiredMachines(milliSeconds int64) {
|
|
||||||
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
|
|
||||||
for range ticker.C {
|
|
||||||
h.expireExpiredMachinesWorker()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *Headscale) failoverSubnetRoutes(milliSeconds int64) {
|
|
||||||
ticker := time.NewTicker(time.Duration(milliSeconds) * time.Millisecond)
|
|
||||||
for range ticker.C {
|
|
||||||
err := h.handlePrimarySubnetFailover()
|
|
||||||
if err != nil {
|
|
||||||
log.Error().Err(err).Msg("failed to handle primary subnet failover")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *Headscale) expireEphemeralNodesWorker() {
|
func (h *Headscale) expireEphemeralNodesWorker() {
|
||||||
users, err := h.ListUsers()
|
namespaces, err := h.ListNamespaces()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Error().Err(err).Msg("Error listing users")
|
log.Error().Err(err).Msg("Error listing namespaces")
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, user := range users {
|
for _, namespace := range namespaces {
|
||||||
machines, err := h.ListMachinesByUser(user.Name)
|
machines, err := h.ListMachinesInNamespace(namespace.Name)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Error().
|
log.Error().
|
||||||
Err(err).
|
Err(err).
|
||||||
Str("user", user.Name).
|
Str("namespace", namespace.Name).
|
||||||
Msg("Error listing machines in user")
|
Msg("Error listing machines in namespace")
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
expiredFound := false
|
expiredFound := false
|
||||||
for _, machine := range machines {
|
for _, machine := range machines {
|
||||||
if machine.isEphemeral() && machine.LastSeen != nil &&
|
if machine.AuthKey != nil && machine.LastSeen != nil &&
|
||||||
|
machine.AuthKey.Ephemeral &&
|
||||||
time.Now().
|
time.Now().
|
||||||
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
|
After(machine.LastSeen.Add(h.cfg.EphemeralNodeInactivityTimeout)) {
|
||||||
expiredFound = true
|
expiredFound = true
|
||||||
@@ -282,53 +280,6 @@ func (h *Headscale) expireEphemeralNodesWorker() {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (h *Headscale) expireExpiredMachinesWorker() {
|
|
||||||
users, err := h.ListUsers()
|
|
||||||
if err != nil {
|
|
||||||
log.Error().Err(err).Msg("Error listing users")
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, user := range users {
|
|
||||||
machines, err := h.ListMachinesByUser(user.Name)
|
|
||||||
if err != nil {
|
|
||||||
log.Error().
|
|
||||||
Err(err).
|
|
||||||
Str("user", user.Name).
|
|
||||||
Msg("Error listing machines in user")
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
expiredFound := false
|
|
||||||
for index, machine := range machines {
|
|
||||||
if machine.isExpired() &&
|
|
||||||
machine.Expiry.After(h.getLastStateChange(user)) {
|
|
||||||
expiredFound = true
|
|
||||||
|
|
||||||
err := h.ExpireMachine(&machines[index])
|
|
||||||
if err != nil {
|
|
||||||
log.Error().
|
|
||||||
Err(err).
|
|
||||||
Str("machine", machine.Hostname).
|
|
||||||
Str("name", machine.GivenName).
|
|
||||||
Msg("🤮 Cannot expire machine")
|
|
||||||
} else {
|
|
||||||
log.Info().
|
|
||||||
Str("machine", machine.Hostname).
|
|
||||||
Str("name", machine.GivenName).
|
|
||||||
Msg("Machine successfully expired")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if expiredFound {
|
|
||||||
h.setLastStateChangeToNow()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
|
func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
|
||||||
req interface{},
|
req interface{},
|
||||||
info *grpc.UnaryServerInfo,
|
info *grpc.UnaryServerInfo,
|
||||||
@@ -497,19 +448,16 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
|
|||||||
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet)
|
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet)
|
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/register/{nkey}", h.RegisterWebAPI).Methods(http.MethodGet)
|
router.HandleFunc("/register/{nkey}", h.RegisterWebAPI).Methods(http.MethodGet)
|
||||||
h.addLegacyHandlers(router)
|
router.HandleFunc("/machine/{mkey}/map", h.PollNetMapHandler).Methods(http.MethodPost)
|
||||||
|
router.HandleFunc("/machine/{mkey}", h.RegistrationHandler).Methods(http.MethodPost)
|
||||||
router.HandleFunc("/oidc/register/{nkey}", h.RegisterOIDC).Methods(http.MethodGet)
|
router.HandleFunc("/oidc/register/{nkey}", h.RegisterOIDC).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet)
|
router.HandleFunc("/oidc/callback", h.OIDCCallback).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet)
|
router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig).
|
router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig).Methods(http.MethodGet)
|
||||||
Methods(http.MethodGet)
|
|
||||||
router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet)
|
router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig).
|
router.HandleFunc("/windows/tailscale.reg", h.WindowsRegConfig).Methods(http.MethodGet)
|
||||||
Methods(http.MethodGet)
|
|
||||||
router.HandleFunc("/swagger", SwaggerUI).Methods(http.MethodGet)
|
router.HandleFunc("/swagger", SwaggerUI).Methods(http.MethodGet)
|
||||||
router.HandleFunc("/swagger/v1/openapiv2.json", SwaggerAPIv1).
|
router.HandleFunc("/swagger/v1/openapiv2.json", SwaggerAPIv1).Methods(http.MethodGet)
|
||||||
Methods(http.MethodGet)
|
|
||||||
|
|
||||||
if h.cfg.DERP.ServerEnabled {
|
if h.cfg.DERP.ServerEnabled {
|
||||||
router.HandleFunc("/derp", h.DERPHandler)
|
router.HandleFunc("/derp", h.DERPHandler)
|
||||||
@@ -521,7 +469,16 @@ func (h *Headscale) createRouter(grpcMux *runtime.ServeMux) *mux.Router {
|
|||||||
apiRouter.Use(h.httpAuthenticationMiddleware)
|
apiRouter.Use(h.httpAuthenticationMiddleware)
|
||||||
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
|
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP)
|
||||||
|
|
||||||
router.PathPrefix("/").HandlerFunc(notFoundHandler)
|
router.PathPrefix("/").HandlerFunc(stdoutHandler)
|
||||||
|
|
||||||
|
return router
|
||||||
|
}
|
||||||
|
|
||||||
|
func (h *Headscale) createNoiseMux() *mux.Router {
|
||||||
|
router := mux.NewRouter()
|
||||||
|
|
||||||
|
router.HandleFunc("/machine/register", h.NoiseRegistrationHandler).Methods(http.MethodPost)
|
||||||
|
router.HandleFunc("/machine/map", h.NoisePollNetMapHandler)
|
||||||
|
|
||||||
return router
|
return router
|
||||||
}
|
}
|
||||||
@@ -550,9 +507,6 @@ func (h *Headscale) Serve() error {
|
|||||||
}
|
}
|
||||||
|
|
||||||
go h.expireEphemeralNodes(updateInterval)
|
go h.expireEphemeralNodes(updateInterval)
|
||||||
go h.expireExpiredMachines(updateInterval)
|
|
||||||
|
|
||||||
go h.failoverSubnetRoutes(updateInterval)
|
|
||||||
|
|
||||||
if zl.GlobalLevel() == zl.TraceLevel {
|
if zl.GlobalLevel() == zl.TraceLevel {
|
||||||
zerolog.RespLog = true
|
zerolog.RespLog = true
|
||||||
@@ -686,6 +640,12 @@ func (h *Headscale) Serve() error {
|
|||||||
// over our main Addr. It also serves the legacy Tailcale API
|
// over our main Addr. It also serves the legacy Tailcale API
|
||||||
router := h.createRouter(grpcGatewayMux)
|
router := h.createRouter(grpcGatewayMux)
|
||||||
|
|
||||||
|
// This router is served only over the Noise connection, and exposes only the new API.
|
||||||
|
//
|
||||||
|
// The HTTP2 server that exposes this router is created for
|
||||||
|
// a single hijacked connection from /ts2021, using netutil.NewOneConnListener
|
||||||
|
h.noiseMux = h.createNoiseMux()
|
||||||
|
|
||||||
httpServer := &http.Server{
|
httpServer := &http.Server{
|
||||||
Addr: h.cfg.Addr,
|
Addr: h.cfg.Addr,
|
||||||
Handler: router,
|
Handler: router,
|
||||||
@@ -818,6 +778,7 @@ func (h *Headscale) Serve() error {
|
|||||||
|
|
||||||
// And we're done:
|
// And we're done:
|
||||||
cancel()
|
cancel()
|
||||||
|
os.Exit(0)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -866,8 +827,9 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
|
|||||||
ReadTimeout: HTTPReadTimeout,
|
ReadTimeout: HTTPReadTimeout,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
err := server.ListenAndServe()
|
||||||
|
|
||||||
go func() {
|
go func() {
|
||||||
err := server.ListenAndServe()
|
|
||||||
log.Fatal().
|
log.Fatal().
|
||||||
Caller().
|
Caller().
|
||||||
Err(err).
|
Err(err).
|
||||||
@@ -890,7 +852,12 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
|
|||||||
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
|
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://")
|
||||||
}
|
}
|
||||||
|
|
||||||
|
log.Info().Msg(fmt.Sprintf(
|
||||||
|
"Client authentication (mTLS) is \"%s\". See the docs to learn about configuring this setting.",
|
||||||
|
h.cfg.TLS.ClientAuthMode))
|
||||||
|
|
||||||
tlsConfig := &tls.Config{
|
tlsConfig := &tls.Config{
|
||||||
|
ClientAuth: h.cfg.TLS.ClientAuthMode,
|
||||||
NextProtos: []string{"http/1.1"},
|
NextProtos: []string{"http/1.1"},
|
||||||
Certificates: make([]tls.Certificate, 1),
|
Certificates: make([]tls.Certificate, 1),
|
||||||
MinVersion: tls.VersionTLS12,
|
MinVersion: tls.VersionTLS12,
|
||||||
@@ -907,31 +874,31 @@ func (h *Headscale) setLastStateChangeToNow() {
|
|||||||
|
|
||||||
now := time.Now().UTC()
|
now := time.Now().UTC()
|
||||||
|
|
||||||
users, err := h.ListUsers()
|
namespaces, err := h.ListNamespacesStr()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Error().
|
log.Error().
|
||||||
Caller().
|
Caller().
|
||||||
Err(err).
|
Err(err).
|
||||||
Msg("failed to fetch all users, failing to update last changed state.")
|
Msg("failed to fetch all namespaces, failing to update last changed state.")
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, user := range users {
|
for _, namespace := range namespaces {
|
||||||
lastStateUpdate.WithLabelValues(user.Name, "headscale").Set(float64(now.Unix()))
|
lastStateUpdate.WithLabelValues(namespace, "headscale").Set(float64(now.Unix()))
|
||||||
if h.lastStateChange == nil {
|
if h.lastStateChange == nil {
|
||||||
h.lastStateChange = xsync.NewMapOf[time.Time]()
|
h.lastStateChange = xsync.NewMapOf[time.Time]()
|
||||||
}
|
}
|
||||||
h.lastStateChange.Store(user.Name, now)
|
h.lastStateChange.Store(namespace, now)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func (h *Headscale) getLastStateChange(users ...User) time.Time {
|
func (h *Headscale) getLastStateChange(namespaces ...string) time.Time {
|
||||||
times := []time.Time{}
|
times := []time.Time{}
|
||||||
|
|
||||||
// getLastStateChange takes a list of users as a "filter", if no users
|
// getLastStateChange takes a list of namespaces as a "filter", if no namespaces
|
||||||
// are past, then use the entier list of users and look for the last update
|
// are past, then use the entier list of namespaces and look for the last update
|
||||||
if len(users) > 0 {
|
if len(namespaces) > 0 {
|
||||||
for _, user := range users {
|
for _, namespace := range namespaces {
|
||||||
if lastChange, ok := h.lastStateChange.Load(user.Name); ok {
|
if lastChange, ok := h.lastStateChange.Load(namespace); ok {
|
||||||
times = append(times, lastChange)
|
times = append(times, lastChange)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -956,7 +923,7 @@ func (h *Headscale) getLastStateChange(users ...User) time.Time {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func notFoundHandler(
|
func stdoutHandler(
|
||||||
writer http.ResponseWriter,
|
writer http.ResponseWriter,
|
||||||
req *http.Request,
|
req *http.Request,
|
||||||
) {
|
) {
|
||||||
@@ -968,7 +935,6 @@ func notFoundHandler(
|
|||||||
Interface("url", req.URL).
|
Interface("url", req.URL).
|
||||||
Bytes("body", body).
|
Bytes("body", body).
|
||||||
Msg("Request did not match")
|
Msg("Request did not match")
|
||||||
writer.WriteHeader(http.StatusNotFound)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
|
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
|
||||||
app_test.go: 17 changed lines
@@ -59,3 +59,20 @@ func (s *Suite) ResetDB(c *check.C) {
     }
     app.db = db
 }
+
+// Enusre an error is returned when an invalid auth mode
+// is supplied.
+func (s *Suite) TestInvalidClientAuthMode(c *check.C) {
+    _, isValid := LookupTLSClientAuthMode("invalid")
+    c.Assert(isValid, check.Equals, false)
+}
+
+// Ensure that all client auth modes return a nil error.
+func (s *Suite) TestAuthModes(c *check.C) {
+    modes := []string{"disabled", "relaxed", "enforced"}
+
+    for _, v := range modes {
+        _, isValid := LookupTLSClientAuthMode(v)
+        c.Assert(isValid, check.Equals, true)
+    }
+}
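Both tests added above drive LookupTLSClientAuthMode, which maps the configured strings "disabled", "relaxed" and "enforced" to crypto/tls client-auth constants and reports whether the mode was recognised. The standalone sketch below mirrors that mapping so it can run outside the headscale package; the string values are taken from the test above, and the local function name is hypothetical.

package main

import (
	"crypto/tls"
	"fmt"
)

// lookupTLSClientAuthMode mirrors the LookupTLSClientAuthMode shown in this
// comparison: it maps a configured string to a tls.ClientAuthType and reports
// whether the supplied mode was valid, falling back to RequireAnyClientCert.
func lookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
	switch mode {
	case "disabled":
		// Client cert is not required.
		return tls.NoClientCert, true
	case "relaxed":
		// Client cert required, but not verified.
		return tls.RequireAnyClientCert, true
	case "enforced":
		// Client cert is required and verified.
		return tls.RequireAndVerifyClientCert, true
	default:
		// Unknown value: return the default and flag it as invalid.
		return tls.RequireAnyClientCert, false
	}
}

func main() {
	for _, mode := range []string{"disabled", "relaxed", "enforced", "bogus"} {
		auth, ok := lookupTLSClientAuthMode(mode)
		fmt.Printf("%-9s -> %v (valid=%v)\n", mode, auth, ok)
	}
}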
|
@@ -1,47 +0,0 @@
|
|||||||
package main
|
|
||||||
|
|
||||||
import (
|
|
||||||
"log"
|
|
||||||
|
|
||||||
"github.com/juanfont/headscale/integration"
|
|
||||||
"github.com/juanfont/headscale/integration/tsic"
|
|
||||||
"github.com/ory/dockertest/v3"
|
|
||||||
)
|
|
||||||
|
|
||||||
func main() {
|
|
||||||
log.Printf("creating docker pool")
|
|
||||||
pool, err := dockertest.NewPool("")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("could not connect to docker: %s", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
log.Printf("creating docker network")
|
|
||||||
network, err := pool.CreateNetwork("docker-integration-net")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to create or get network: %s", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, version := range integration.TailscaleVersions {
|
|
||||||
log.Printf("creating container image for Tailscale (%s)", version)
|
|
||||||
|
|
||||||
tsClient, err := tsic.New(
|
|
||||||
pool,
|
|
||||||
version,
|
|
||||||
network,
|
|
||||||
)
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to create tailscale node: %s", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
err = tsClient.Shutdown()
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to shut down container: %s", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
network.Close()
|
|
||||||
err = pool.RemoveNetwork(network)
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to remove network: %s", err)
|
|
||||||
}
|
|
||||||
}
|
|
@@ -1,169 +0,0 @@
|
|||||||
package main
|
|
||||||
|
|
||||||
//go:generate go run ./main.go
|
|
||||||
|
|
||||||
import (
|
|
||||||
"bytes"
|
|
||||||
"fmt"
|
|
||||||
"log"
|
|
||||||
"os"
|
|
||||||
"os/exec"
|
|
||||||
"path"
|
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
|
||||||
"text/template"
|
|
||||||
)
|
|
||||||
|
|
||||||
var (
|
|
||||||
githubWorkflowPath = "../../.github/workflows/"
|
|
||||||
jobFileNameTemplate = `test-integration-v2-%s.yaml`
|
|
||||||
jobTemplate = template.Must(
|
|
||||||
template.New("jobTemplate").
|
|
||||||
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
|
|
||||||
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
|
|
||||||
|
|
||||||
name: Integration Test v2 - {{.Name}}
|
|
||||||
|
|
||||||
on: [pull_request]
|
|
||||||
|
|
||||||
concurrency:
|
|
||||||
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
|
|
||||||
cancel-in-progress: true
|
|
||||||
|
|
||||||
jobs:
|
|
||||||
test:
|
|
||||||
runs-on: ubuntu-latest
|
|
||||||
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@v3
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: tj-actions/changed-files@v34
|
|
||||||
with:
|
|
||||||
files: |
|
|
||||||
*.nix
|
|
||||||
go.*
|
|
||||||
**/*.go
|
|
||||||
integration_test/
|
|
||||||
config-example.yaml
|
|
||||||
|
|
||||||
- uses: cachix/install-nix-action@v18
|
|
||||||
if: {{ "${{ env.ACT }}" }} || steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
|
|
||||||
- name: Run general integration tests
|
|
||||||
if: steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
run: |
|
|
||||||
nix develop --command -- docker run \
|
|
||||||
--tty --rm \
|
|
||||||
--volume ~/.cache/hs-integration-go:/go \
|
|
||||||
--name headscale-test-suite \
|
|
||||||
--volume $PWD:$PWD -w $PWD/integration \
|
|
||||||
--volume /var/run/docker.sock:/var/run/docker.sock \
|
|
||||||
--volume $PWD/control_logs:/tmp/control \
|
|
||||||
golang:1 \
|
|
||||||
go test ./... \
|
|
||||||
-tags ts2019 \
|
|
||||||
-failfast \
|
|
||||||
-timeout 120m \
|
|
||||||
-parallel 1 \
|
|
||||||
-run "^{{.Name}}$"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: logs
|
|
||||||
path: "control_logs/*.log"
|
|
||||||
|
|
||||||
- uses: actions/upload-artifact@v3
|
|
||||||
if: always() && steps.changed-files.outputs.any_changed == 'true'
|
|
||||||
with:
|
|
||||||
name: pprof
|
|
||||||
path: "control_logs/*.pprof.tar"
|
|
||||||
`),
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
const workflowFilePerm = 0o600
|
|
||||||
|
|
||||||
func removeTests() {
|
|
||||||
glob := fmt.Sprintf(jobFileNameTemplate, "*")
|
|
||||||
|
|
||||||
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to find test files")
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, file := range files {
|
|
||||||
err := os.Remove(file)
|
|
||||||
if err != nil {
|
|
||||||
log.Printf("failed to remove: %s", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func findTests() []string {
|
|
||||||
rgBin, err := exec.LookPath("rg")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to find rg (ripgrep) binary")
|
|
||||||
}
|
|
||||||
|
|
||||||
args := []string{
|
|
||||||
"--regexp", "func (Test.+)\\(.*",
|
|
||||||
"../../integration/",
|
|
||||||
"--replace", "$1",
|
|
||||||
"--sort", "path",
|
|
||||||
"--no-line-number",
|
|
||||||
"--no-filename",
|
|
||||||
"--no-heading",
|
|
||||||
}
|
|
||||||
|
|
||||||
log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
|
|
||||||
|
|
||||||
ripgrep := exec.Command(
|
|
||||||
rgBin,
|
|
||||||
args...,
|
|
||||||
)
|
|
||||||
|
|
||||||
result, err := ripgrep.CombinedOutput()
|
|
||||||
if err != nil {
|
|
||||||
log.Printf("out: %s", result)
|
|
||||||
log.Fatalf("failed to run ripgrep: %s", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
tests := strings.Split(string(result), "\n")
|
|
||||||
tests = tests[:len(tests)-1]
|
|
||||||
|
|
||||||
return tests
|
|
||||||
}
|
|
||||||
|
|
||||||
func main() {
|
|
||||||
type testConfig struct {
|
|
||||||
Name string
|
|
||||||
}
|
|
||||||
|
|
||||||
tests := findTests()
|
|
||||||
|
|
||||||
removeTests()
|
|
||||||
|
|
||||||
for _, test := range tests {
|
|
||||||
log.Printf("generating workflow for %s", test)
|
|
||||||
|
|
||||||
var content bytes.Buffer
|
|
||||||
|
|
||||||
if err := jobTemplate.Execute(&content, testConfig{
|
|
||||||
Name: test,
|
|
||||||
}); err != nil {
|
|
||||||
log.Fatalf("failed to render template: %s", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
|
|
||||||
|
|
||||||
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf("failed to write github job: %s", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
@@ -1,22 +0,0 @@
|
|||||||
package cli
|
|
||||||
|
|
||||||
import (
|
|
||||||
"github.com/rs/zerolog/log"
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
)
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
rootCmd.AddCommand(configTestCmd)
|
|
||||||
}
|
|
||||||
|
|
||||||
var configTestCmd = &cobra.Command{
|
|
||||||
Use: "configtest",
|
|
||||||
Short: "Test the configuration.",
|
|
||||||
Long: "Run a test of the configuration and exit.",
|
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
|
||||||
_, err := getHeadscaleApp()
|
|
||||||
if err != nil {
|
|
||||||
log.Fatal().Caller().Err(err).Msg("Error initializing")
|
|
||||||
}
|
|
||||||
},
|
|
||||||
}
|
|
@@ -3,7 +3,6 @@ package cli
|
|||||||
import (
|
import (
|
||||||
"fmt"
|
"fmt"
|
||||||
|
|
||||||
"github.com/juanfont/headscale"
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/rs/zerolog/log"
|
"github.com/rs/zerolog/log"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
@@ -11,7 +10,8 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix")
|
keyLength = 64
|
||||||
|
errPreAuthKeyTooShort = Error("key too short, must be 64 hexadecimal characters")
|
||||||
)
|
)
|
||||||
|
|
||||||
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
|
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
|
||||||
@@ -27,14 +27,8 @@ func init() {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatal().Err(err).Msg("")
|
log.Fatal().Err(err).Msg("")
|
||||||
}
|
}
|
||||||
createNodeCmd.Flags().StringP("user", "u", "", "User")
|
createNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
|
||||||
|
err = createNodeCmd.MarkFlagRequired("namespace")
|
||||||
createNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
|
||||||
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
|
|
||||||
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
createNodeNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
err = createNodeCmd.MarkFlagRequired("user")
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatal().Err(err).Msg("")
|
log.Fatal().Err(err).Msg("")
|
||||||
}
|
}
|
||||||
@@ -61,9 +55,9 @@ var createNodeCmd = &cobra.Command{
|
|||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -93,8 +87,8 @@ var createNodeCmd = &cobra.Command{
|
|||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
if !headscale.NodePublicKeyRegex.Match([]byte(machineKey)) {
|
if len(machineKey) != keyLength {
|
||||||
err = errPreAuthKeyMalformed
|
err = errPreAuthKeyTooShort
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf("Error: %s", err),
|
fmt.Sprintf("Error: %s", err),
|
||||||
@@ -116,10 +110,10 @@ var createNodeCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
|
|
||||||
request := &v1.DebugCreateMachineRequest{
|
request := &v1.DebugCreateMachineRequest{
|
||||||
Key: machineKey,
|
Key: machineKey,
|
||||||
Name: name,
|
Name: name,
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
Routes: routes,
|
Routes: routes,
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.DebugCreateMachine(ctx, request)
|
response, err := client.DebugCreateMachine(ctx, request)
|
||||||
|
@@ -1,115 +0,0 @@
|
|||||||
package cli
|
|
||||||
|
|
||||||
import (
|
|
||||||
"fmt"
|
|
||||||
"net"
|
|
||||||
"os"
|
|
||||||
"strconv"
|
|
||||||
"time"
|
|
||||||
|
|
||||||
"github.com/oauth2-proxy/mockoidc"
|
|
||||||
"github.com/rs/zerolog/log"
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
)
|
|
||||||
|
|
||||||
const (
|
|
||||||
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
|
|
||||||
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
|
|
||||||
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
|
|
||||||
refreshTTL = 60 * time.Minute
|
|
||||||
)
|
|
||||||
|
|
||||||
var accessTTL = 2 * time.Minute
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
rootCmd.AddCommand(mockOidcCmd)
|
|
||||||
}
|
|
||||||
|
|
||||||
var mockOidcCmd = &cobra.Command{
|
|
||||||
Use: "mockoidc",
|
|
||||||
Short: "Runs a mock OIDC server for testing",
|
|
||||||
Long: "This internal command runs a OpenID Connect for testing purposes",
|
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
|
||||||
err := mockOIDC()
|
|
||||||
if err != nil {
|
|
||||||
log.Error().Err(err).Msgf("Error running mock OIDC server")
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
func mockOIDC() error {
|
|
||||||
clientID := os.Getenv("MOCKOIDC_CLIENT_ID")
|
|
||||||
if clientID == "" {
|
|
||||||
return errMockOidcClientIDNotDefined
|
|
||||||
}
|
|
||||||
clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
|
|
||||||
if clientSecret == "" {
|
|
||||||
return errMockOidcClientSecretNotDefined
|
|
||||||
}
|
|
||||||
addrStr := os.Getenv("MOCKOIDC_ADDR")
|
|
||||||
if addrStr == "" {
|
|
||||||
return errMockOidcPortNotDefined
|
|
||||||
}
|
|
||||||
portStr := os.Getenv("MOCKOIDC_PORT")
|
|
||||||
if portStr == "" {
|
|
||||||
return errMockOidcPortNotDefined
|
|
||||||
}
|
|
||||||
accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
|
|
||||||
if accessTTLOverride != "" {
|
|
||||||
newTTL, err := time.ParseDuration(accessTTLOverride)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
accessTTL = newTTL
|
|
||||||
}
|
|
||||||
|
|
||||||
log.Info().Msgf("Access token TTL: %s", accessTTL)
|
|
||||||
|
|
||||||
port, err := strconv.Atoi(portStr)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
mock, err := getMockOIDC(clientID, clientSecret)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", addrStr, port))
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
err = mock.Start(listener, nil)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
log.Info().Msgf("Mock OIDC server listening on %s", listener.Addr().String())
|
|
||||||
log.Info().Msgf("Issuer: %s", mock.Issuer())
|
|
||||||
c := make(chan struct{})
|
|
||||||
<-c
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func getMockOIDC(clientID string, clientSecret string) (*mockoidc.MockOIDC, error) {
|
|
||||||
keypair, err := mockoidc.NewKeypair(nil)
|
|
||||||
if err != nil {
|
|
||||||
return nil, err
|
|
||||||
}
|
|
||||||
|
|
||||||
mock := mockoidc.MockOIDC{
|
|
||||||
ClientID: clientID,
|
|
||||||
ClientSecret: clientSecret,
|
|
||||||
AccessTTL: accessTTL,
|
|
||||||
RefreshTTL: refreshTTL,
|
|
||||||
CodeChallengeMethodsSupported: []string{"plain", "S256"},
|
|
||||||
Keypair: keypair,
|
|
||||||
SessionStore: mockoidc.NewSessionStore(),
|
|
||||||
UserQueue: &mockoidc.UserQueue{},
|
|
||||||
ErrorQueue: &mockoidc.ErrorQueue{},
|
|
||||||
}
|
|
||||||
|
|
||||||
return &mock, nil
|
|
||||||
}
|
|
@@ -13,26 +13,26 @@ import (
|
|||||||
)
|
)
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(userCmd)
|
rootCmd.AddCommand(namespaceCmd)
|
||||||
userCmd.AddCommand(createUserCmd)
|
namespaceCmd.AddCommand(createNamespaceCmd)
|
||||||
userCmd.AddCommand(listUsersCmd)
|
namespaceCmd.AddCommand(listNamespacesCmd)
|
||||||
userCmd.AddCommand(destroyUserCmd)
|
namespaceCmd.AddCommand(destroyNamespaceCmd)
|
||||||
userCmd.AddCommand(renameUserCmd)
|
namespaceCmd.AddCommand(renameNamespaceCmd)
|
||||||
}
|
}
|
||||||
|
|
||||||
const (
|
const (
|
||||||
errMissingParameter = headscale.Error("missing parameters")
|
errMissingParameter = headscale.Error("missing parameters")
|
||||||
)
|
)
|
||||||
|
|
||||||
var userCmd = &cobra.Command{
|
var namespaceCmd = &cobra.Command{
|
||||||
Use: "users",
|
Use: "namespaces",
|
||||||
Short: "Manage the users of Headscale",
|
Short: "Manage the namespaces of Headscale",
|
||||||
Aliases: []string{"user", "namespace", "namespaces", "ns"},
|
Aliases: []string{"namespace", "ns", "user", "users"},
|
||||||
}
|
}
|
||||||
|
|
||||||
var createUserCmd = &cobra.Command{
|
var createNamespaceCmd = &cobra.Command{
|
||||||
Use: "create NAME",
|
Use: "create NAME",
|
||||||
Short: "Creates a new user",
|
Short: "Creates a new namespace",
|
||||||
Aliases: []string{"c", "new"},
|
Aliases: []string{"c", "new"},
|
||||||
Args: func(cmd *cobra.Command, args []string) error {
|
Args: func(cmd *cobra.Command, args []string) error {
|
||||||
if len(args) < 1 {
|
if len(args) < 1 {
|
||||||
@@ -44,7 +44,7 @@ var createUserCmd = &cobra.Command{
|
|||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
|
||||||
userName := args[0]
|
namespaceName := args[0]
|
||||||
|
|
||||||
ctx, client, conn, cancel := getHeadscaleCLIClient()
|
ctx, client, conn, cancel := getHeadscaleCLIClient()
|
||||||
defer cancel()
|
defer cancel()
|
||||||
@@ -52,15 +52,15 @@ var createUserCmd = &cobra.Command{
|
|||||||
|
|
||||||
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
|
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
|
||||||
|
|
||||||
request := &v1.CreateUserRequest{Name: userName}
|
request := &v1.CreateNamespaceRequest{Name: namespaceName}
|
||||||
|
|
||||||
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
|
log.Trace().Interface("request", request).Msg("Sending CreateNamespace request")
|
||||||
response, err := client.CreateUser(ctx, request)
|
response, err := client.CreateNamespace(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf(
|
fmt.Sprintf(
|
||||||
"Cannot create user: %s",
|
"Cannot create namespace: %s",
|
||||||
status.Convert(err).Message(),
|
status.Convert(err).Message(),
|
||||||
),
|
),
|
||||||
output,
|
output,
|
||||||
@@ -69,13 +69,13 @@ var createUserCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
SuccessOutput(response.User, "User created", output)
|
SuccessOutput(response.Namespace, "Namespace created", output)
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
var destroyUserCmd = &cobra.Command{
|
var destroyNamespaceCmd = &cobra.Command{
|
||||||
Use: "destroy NAME",
|
Use: "destroy NAME",
|
||||||
Short: "Destroys a user",
|
Short: "Destroys a namespace",
|
||||||
Aliases: []string{"delete"},
|
Aliases: []string{"delete"},
|
||||||
Args: func(cmd *cobra.Command, args []string) error {
|
Args: func(cmd *cobra.Command, args []string) error {
|
||||||
if len(args) < 1 {
|
if len(args) < 1 {
|
||||||
@@ -87,17 +87,17 @@ var destroyUserCmd = &cobra.Command{
|
|||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
|
||||||
userName := args[0]
|
namespaceName := args[0]
|
||||||
|
|
||||||
request := &v1.GetUserRequest{
|
request := &v1.GetNamespaceRequest{
|
||||||
Name: userName,
|
Name: namespaceName,
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx, client, conn, cancel := getHeadscaleCLIClient()
|
ctx, client, conn, cancel := getHeadscaleCLIClient()
|
||||||
defer cancel()
|
defer cancel()
|
||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
_, err := client.GetUser(ctx, request)
|
_, err := client.GetNamespace(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
@@ -113,8 +113,8 @@ var destroyUserCmd = &cobra.Command{
|
|||||||
if !force {
|
if !force {
|
||||||
prompt := &survey.Confirm{
|
prompt := &survey.Confirm{
|
||||||
Message: fmt.Sprintf(
|
Message: fmt.Sprintf(
|
||||||
"Do you want to remove the user '%s' and any associated preauthkeys?",
|
"Do you want to remove the namespace '%s' and any associated preauthkeys?",
|
||||||
userName,
|
namespaceName,
|
||||||
),
|
),
|
||||||
}
|
}
|
||||||
err := survey.AskOne(prompt, &confirm)
|
err := survey.AskOne(prompt, &confirm)
|
||||||
@@ -124,14 +124,14 @@ var destroyUserCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
|
|
||||||
if confirm || force {
|
if confirm || force {
|
||||||
request := &v1.DeleteUserRequest{Name: userName}
|
request := &v1.DeleteNamespaceRequest{Name: namespaceName}
|
||||||
|
|
||||||
response, err := client.DeleteUser(ctx, request)
|
response, err := client.DeleteNamespace(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf(
|
fmt.Sprintf(
|
||||||
"Cannot destroy user: %s",
|
"Cannot destroy namespace: %s",
|
||||||
status.Convert(err).Message(),
|
status.Convert(err).Message(),
|
||||||
),
|
),
|
||||||
output,
|
output,
|
||||||
@@ -139,16 +139,16 @@ var destroyUserCmd = &cobra.Command{
|
|||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
SuccessOutput(response, "User destroyed", output)
|
SuccessOutput(response, "Namespace destroyed", output)
|
||||||
} else {
|
} else {
|
||||||
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
|
SuccessOutput(map[string]string{"Result": "Namespace not destroyed"}, "Namespace not destroyed", output)
|
||||||
}
|
}
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
var listUsersCmd = &cobra.Command{
|
var listNamespacesCmd = &cobra.Command{
|
||||||
Use: "list",
|
Use: "list",
|
||||||
Short: "List all the users",
|
Short: "List all the namespaces",
|
||||||
Aliases: []string{"ls", "show"},
|
Aliases: []string{"ls", "show"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
@@ -157,13 +157,13 @@ var listUsersCmd = &cobra.Command{
|
|||||||
defer cancel()
|
defer cancel()
|
||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
request := &v1.ListUsersRequest{}
|
request := &v1.ListNamespacesRequest{}
|
||||||
|
|
||||||
response, err := client.ListUsers(ctx, request)
|
response, err := client.ListNamespaces(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
|
fmt.Sprintf("Cannot get namespaces: %s", status.Convert(err).Message()),
|
||||||
output,
|
output,
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -171,19 +171,19 @@ var listUsersCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
|
|
||||||
if output != "" {
|
if output != "" {
|
||||||
SuccessOutput(response.Users, "", output)
|
SuccessOutput(response.Namespaces, "", output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
tableData := pterm.TableData{{"ID", "Name", "Created"}}
|
tableData := pterm.TableData{{"ID", "Name", "Created"}}
|
||||||
for _, user := range response.GetUsers() {
|
for _, namespace := range response.GetNamespaces() {
|
||||||
tableData = append(
|
tableData = append(
|
||||||
tableData,
|
tableData,
|
||||||
[]string{
|
[]string{
|
||||||
user.GetId(),
|
namespace.GetId(),
|
||||||
user.GetName(),
|
namespace.GetName(),
|
||||||
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
|
namespace.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
|
||||||
},
|
},
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
@@ -200,9 +200,9 @@ var listUsersCmd = &cobra.Command{
|
|||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
var renameUserCmd = &cobra.Command{
|
var renameNamespaceCmd = &cobra.Command{
|
||||||
Use: "rename OLD_NAME NEW_NAME",
|
Use: "rename OLD_NAME NEW_NAME",
|
||||||
Short: "Renames a user",
|
Short: "Renames a namespace",
|
||||||
Aliases: []string{"mv"},
|
Aliases: []string{"mv"},
|
||||||
Args: func(cmd *cobra.Command, args []string) error {
|
Args: func(cmd *cobra.Command, args []string) error {
|
||||||
expectedArguments := 2
|
expectedArguments := 2
|
||||||
@@ -219,17 +219,17 @@ var renameUserCmd = &cobra.Command{
|
|||||||
defer cancel()
|
defer cancel()
|
||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
request := &v1.RenameUserRequest{
|
request := &v1.RenameNamespaceRequest{
|
||||||
OldName: args[0],
|
OldName: args[0],
|
||||||
NewName: args[1],
|
NewName: args[1],
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.RenameUser(ctx, request)
|
response, err := client.RenameNamespace(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf(
|
fmt.Sprintf(
|
||||||
"Cannot rename user: %s",
|
"Cannot rename namespace: %s",
|
||||||
status.Convert(err).Message(),
|
status.Convert(err).Message(),
|
||||||
),
|
),
|
||||||
output,
|
output,
|
||||||
@@ -238,6 +238,6 @@ var renameUserCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
SuccessOutput(response.User, "User renamed", output)
|
SuccessOutput(response.Namespace, "Namespace renamed", output)
|
||||||
},
|
},
|
||||||
}
|
}
|
@@ -19,24 +19,12 @@ import (
|
|||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(nodeCmd)
|
rootCmd.AddCommand(nodeCmd)
|
||||||
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
|
listNodesCmd.Flags().StringP("namespace", "n", "", "Filter by namespace")
|
||||||
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
|
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
|
||||||
|
|
||||||
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
|
|
||||||
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
|
|
||||||
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
listNodesNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
nodeCmd.AddCommand(listNodesCmd)
|
nodeCmd.AddCommand(listNodesCmd)
|
||||||
|
|
||||||
registerNodeCmd.Flags().StringP("user", "u", "", "User")
|
registerNodeCmd.Flags().StringP("namespace", "n", "", "Namespace")
|
||||||
|
err := registerNodeCmd.MarkFlagRequired("namespace")
|
||||||
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
|
||||||
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
|
|
||||||
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
registerNodeNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
err := registerNodeCmd.MarkFlagRequired("user")
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf(err.Error())
|
log.Fatalf(err.Error())
|
||||||
}
|
}
|
||||||
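The deprecation pattern visible on one side of this hunk — keeping a hidden `--namespace` alias that warns in favour of `--user` — is plain spf13/pflag functionality on a Cobra flag set. A self-contained sketch, using a hypothetical `exampleCmd`:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	exampleCmd := &cobra.Command{
		Use: "example",
		Run: func(cmd *cobra.Command, args []string) {
			user, _ := cmd.Flags().GetString("user")
			fmt.Println("user:", user)
		},
	}

	exampleCmd.Flags().StringP("user", "u", "", "User")

	// Keep the old flag working, but hide it from help output and warn on use.
	exampleCmd.Flags().StringP("namespace", "n", "", "User")
	namespaceFlag := exampleCmd.Flags().Lookup("namespace")
	namespaceFlag.Deprecated = "use --user"
	namespaceFlag.Hidden = true

	_ = exampleCmd.Execute()
}
```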
@@ -75,14 +63,9 @@ func init() {
|
|||||||
log.Fatalf(err.Error())
|
log.Fatalf(err.Error())
|
||||||
}
|
}
|
||||||
|
|
||||||
moveNodeCmd.Flags().StringP("user", "u", "", "New user")
|
moveNodeCmd.Flags().StringP("namespace", "n", "", "New namespace")
|
||||||
|
|
||||||
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
err = moveNodeCmd.MarkFlagRequired("namespace")
|
||||||
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
|
|
||||||
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
moveNodeNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
err = moveNodeCmd.MarkFlagRequired("user")
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf(err.Error())
|
log.Fatalf(err.Error())
|
||||||
}
|
}
|
||||||
@@ -110,9 +93,9 @@ var registerNodeCmd = &cobra.Command{
|
|||||||
Short: "Registers a machine to your network",
|
Short: "Registers a machine to your network",
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -133,8 +116,8 @@ var registerNodeCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
|
|
||||||
request := &v1.RegisterMachineRequest{
|
request := &v1.RegisterMachineRequest{
|
||||||
Key: machineKey,
|
Key: machineKey,
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.RegisterMachine(ctx, request)
|
response, err := client.RegisterMachine(ctx, request)
|
||||||
@@ -151,9 +134,7 @@ var registerNodeCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
SuccessOutput(
|
SuccessOutput(response.Machine, "Machine register", output)
|
||||||
response.Machine,
|
|
||||||
fmt.Sprintf("Machine %s registered", response.Machine.GivenName), output)
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -163,9 +144,9 @@ var listNodesCmd = &cobra.Command{
|
|||||||
Aliases: []string{"ls", "show"},
|
Aliases: []string{"ls", "show"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -181,7 +162,7 @@ var listNodesCmd = &cobra.Command{
|
|||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
request := &v1.ListMachinesRequest{
|
request := &v1.ListMachinesRequest{
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.ListMachines(ctx, request)
|
response, err := client.ListMachines(ctx, request)
|
||||||
@@ -201,7 +182,7 @@ var listNodesCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
tableData, err := nodesToPtables(user, showTags, response.Machines)
|
tableData, err := nodesToPtables(namespace, showTags, response.Machines)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
||||||
|
|
||||||
@@ -405,7 +386,7 @@ var deleteNodeCmd = &cobra.Command{
|
|||||||
|
|
||||||
var moveNodeCmd = &cobra.Command{
|
var moveNodeCmd = &cobra.Command{
|
||||||
Use: "move",
|
Use: "move",
|
||||||
Short: "Move node to another user",
|
Short: "Move node to another namespace",
|
||||||
Aliases: []string{"mv"},
|
Aliases: []string{"mv"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
@@ -421,11 +402,11 @@ var moveNodeCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf("Error getting user: %s", err),
|
fmt.Sprintf("Error getting namespace: %s", err),
|
||||||
output,
|
output,
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -456,7 +437,7 @@ var moveNodeCmd = &cobra.Command{
|
|||||||
|
|
||||||
moveRequest := &v1.MoveMachineRequest{
|
moveRequest := &v1.MoveMachineRequest{
|
||||||
MachineId: identifier,
|
MachineId: identifier,
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
}
|
}
|
||||||
|
|
||||||
moveResponse, err := client.MoveMachine(ctx, moveRequest)
|
moveResponse, err := client.MoveMachine(ctx, moveRequest)
|
||||||
@@ -473,12 +454,12 @@ var moveNodeCmd = &cobra.Command{
|
|||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
SuccessOutput(moveResponse.Machine, "Node moved to another user", output)
|
SuccessOutput(moveResponse.Machine, "Node moved to another namespace", output)
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
func nodesToPtables(
|
func nodesToPtables(
|
||||||
currentUser string,
|
currentNamespace string,
|
||||||
showTags bool,
|
showTags bool,
|
||||||
machines []*v1.Machine,
|
machines []*v1.Machine,
|
||||||
) (pterm.TableData, error) {
|
) (pterm.TableData, error) {
|
||||||
@@ -486,13 +467,11 @@ func nodesToPtables(
|
|||||||
"ID",
|
"ID",
|
||||||
"Hostname",
|
"Hostname",
|
||||||
"Name",
|
"Name",
|
||||||
"MachineKey",
|
|
||||||
"NodeKey",
|
"NodeKey",
|
||||||
"User",
|
"Namespace",
|
||||||
"IP addresses",
|
"IP addresses",
|
||||||
"Ephemeral",
|
"Ephemeral",
|
||||||
"Last seen",
|
"Last seen",
|
||||||
"Expiration",
|
|
||||||
"Online",
|
"Online",
|
||||||
"Expired",
|
"Expired",
|
||||||
}
|
}
|
||||||
@@ -519,24 +498,12 @@ func nodesToPtables(
|
|||||||
}
|
}
|
||||||
|
|
||||||
var expiry time.Time
|
var expiry time.Time
|
||||||
var expiryTime string
|
|
||||||
if machine.Expiry != nil {
|
if machine.Expiry != nil {
|
||||||
expiry = machine.Expiry.AsTime()
|
expiry = machine.Expiry.AsTime()
|
||||||
expiryTime = expiry.Format("2006-01-02 15:04:05")
|
|
||||||
} else {
|
|
||||||
expiryTime = "N/A"
|
|
||||||
}
|
|
||||||
|
|
||||||
var machineKey key.MachinePublic
|
|
||||||
err := machineKey.UnmarshalText(
|
|
||||||
[]byte(headscale.MachinePublicKeyEnsurePrefix(machine.MachineKey)),
|
|
||||||
)
|
|
||||||
if err != nil {
|
|
||||||
machineKey = key.MachinePublic{}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
var nodeKey key.NodePublic
|
var nodeKey key.NodePublic
|
||||||
err = nodeKey.UnmarshalText(
|
err := nodeKey.UnmarshalText(
|
||||||
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
|
[]byte(headscale.NodePublicKeyEnsurePrefix(machine.NodeKey)),
|
||||||
)
|
)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -544,7 +511,9 @@ func nodesToPtables(
|
|||||||
}
|
}
|
||||||
|
|
||||||
var online string
|
var online string
|
||||||
if machine.Online {
|
if lastSeen.After(
|
||||||
|
time.Now().Add(-5 * time.Minute),
|
||||||
|
) { // TODO: Find a better way to reliably show if online
|
||||||
online = pterm.LightGreen("online")
|
online = pterm.LightGreen("online")
|
||||||
} else {
|
} else {
|
||||||
online = pterm.LightRed("offline")
|
online = pterm.LightRed("offline")
|
||||||
@@ -577,12 +546,12 @@ func nodesToPtables(
|
|||||||
}
|
}
|
||||||
validTags = strings.TrimLeft(validTags, ",")
|
validTags = strings.TrimLeft(validTags, ",")
|
||||||
|
|
||||||
var user string
|
var namespace string
|
||||||
if currentUser == "" || (currentUser == machine.User.Name) {
|
if currentNamespace == "" || (currentNamespace == machine.Namespace.Name) {
|
||||||
user = pterm.LightMagenta(machine.User.Name)
|
namespace = pterm.LightMagenta(machine.Namespace.Name)
|
||||||
} else {
|
} else {
|
||||||
// Shared into this user
|
// Shared into this namespace
|
||||||
user = pterm.LightYellow(machine.User.Name)
|
namespace = pterm.LightYellow(machine.Namespace.Name)
|
||||||
}
|
}
|
||||||
|
|
||||||
var IPV4Address string
|
var IPV4Address string
|
||||||
@@ -599,13 +568,11 @@ func nodesToPtables(
|
|||||||
strconv.FormatUint(machine.Id, headscale.Base10),
|
strconv.FormatUint(machine.Id, headscale.Base10),
|
||||||
machine.Name,
|
machine.Name,
|
||||||
machine.GetGivenName(),
|
machine.GetGivenName(),
|
||||||
machineKey.ShortString(),
|
|
||||||
nodeKey.ShortString(),
|
nodeKey.ShortString(),
|
||||||
user,
|
namespace,
|
||||||
strings.Join([]string{IPV4Address, IPV6Address}, ", "),
|
strings.Join([]string{IPV4Address, IPV6Address}, ", "),
|
||||||
strconv.FormatBool(ephemeral),
|
strconv.FormatBool(ephemeral),
|
||||||
lastSeenTime,
|
lastSeenTime,
|
||||||
expiryTime,
|
|
||||||
online,
|
online,
|
||||||
expired,
|
expired,
|
||||||
}
|
}
|
||||||
|
@@ -3,7 +3,6 @@ package cli
|
|||||||
import (
|
import (
|
||||||
"fmt"
|
"fmt"
|
||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
@@ -20,14 +19,8 @@ const (
|
|||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(preauthkeysCmd)
|
rootCmd.AddCommand(preauthkeysCmd)
|
||||||
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
|
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "Namespace")
|
||||||
|
err := preauthkeysCmd.MarkPersistentFlagRequired("namespace")
|
||||||
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
|
|
||||||
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
|
|
||||||
pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
pakNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
err := preauthkeysCmd.MarkPersistentFlagRequired("user")
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatal().Err(err).Msg("")
|
log.Fatal().Err(err).Msg("")
|
||||||
}
|
}
|
||||||
@@ -40,8 +33,6 @@ func init() {
|
|||||||
Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
|
Bool("ephemeral", false, "Preauthkey for ephemeral nodes")
|
||||||
createPreAuthKeyCmd.Flags().
|
createPreAuthKeyCmd.Flags().
|
||||||
StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
|
StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
|
||||||
createPreAuthKeyCmd.Flags().
|
|
||||||
StringSlice("tags", []string{}, "Tags to automatically assign to node")
|
|
||||||
}
|
}
|
||||||
|
|
||||||
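The `--expiration` flag registered in this hunk takes human-readable durations such as `30m` or `24h`; how that string becomes an absolute expiry is not part of this diff. A minimal sketch using the standard library's `time.ParseDuration` (an assumed mechanism, not the repository's code):

```go
package main

import (
	"fmt"
	"time"
)

// expiryFromFlag converts a human-readable duration such as "30m" or "24h"
// into an absolute timestamp relative to now.
func expiryFromFlag(durationStr string) (time.Time, error) {
	duration, err := time.ParseDuration(durationStr)
	if err != nil {
		return time.Time{}, fmt.Errorf("parsing expiration %q: %w", durationStr, err)
	}

	return time.Now().Add(duration), nil
}

func main() {
	expiry, err := expiryFromFlag("24h")
	if err != nil {
		panic(err)
	}
	fmt.Println("key expires at:", expiry.Format("2006-01-02 15:04:05"))
}
```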
var preauthkeysCmd = &cobra.Command{
|
var preauthkeysCmd = &cobra.Command{
|
||||||
@@ -52,14 +43,14 @@ var preauthkeysCmd = &cobra.Command{
|
|||||||
|
|
||||||
var listPreAuthKeys = &cobra.Command{
|
var listPreAuthKeys = &cobra.Command{
|
||||||
Use: "list",
|
Use: "list",
|
||||||
Short: "List the preauthkeys for this user",
|
Short: "List the preauthkeys for this namespace",
|
||||||
Aliases: []string{"ls", "show"},
|
Aliases: []string{"ls", "show"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -69,7 +60,7 @@ var listPreAuthKeys = &cobra.Command{
|
|||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
request := &v1.ListPreAuthKeysRequest{
|
request := &v1.ListPreAuthKeysRequest{
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.ListPreAuthKeys(ctx, request)
|
response, err := client.ListPreAuthKeys(ctx, request)
|
||||||
@@ -90,16 +81,7 @@ var listPreAuthKeys = &cobra.Command{
|
|||||||
}
|
}
|
||||||
|
|
||||||
tableData := pterm.TableData{
|
tableData := pterm.TableData{
|
||||||
{
|
{"ID", "Key", "Reusable", "Ephemeral", "Used", "Expiration", "Created"},
|
||||||
"ID",
|
|
||||||
"Key",
|
|
||||||
"Reusable",
|
|
||||||
"Ephemeral",
|
|
||||||
"Used",
|
|
||||||
"Expiration",
|
|
||||||
"Created",
|
|
||||||
"Tags",
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
for _, key := range response.PreAuthKeys {
|
for _, key := range response.PreAuthKeys {
|
||||||
expiration := "-"
|
expiration := "-"
|
||||||
@@ -114,14 +96,6 @@ var listPreAuthKeys = &cobra.Command{
|
|||||||
reusable = fmt.Sprintf("%v", key.GetReusable())
|
reusable = fmt.Sprintf("%v", key.GetReusable())
|
||||||
}
|
}
|
||||||
|
|
||||||
aclTags := ""
|
|
||||||
|
|
||||||
for _, tag := range key.AclTags {
|
|
||||||
aclTags += "," + tag
|
|
||||||
}
|
|
||||||
|
|
||||||
aclTags = strings.TrimLeft(aclTags, ",")
|
|
||||||
|
|
||||||
tableData = append(tableData, []string{
|
tableData = append(tableData, []string{
|
||||||
key.GetId(),
|
key.GetId(),
|
||||||
key.GetKey(),
|
key.GetKey(),
|
||||||
@@ -130,7 +104,6 @@ var listPreAuthKeys = &cobra.Command{
|
|||||||
strconv.FormatBool(key.GetUsed()),
|
strconv.FormatBool(key.GetUsed()),
|
||||||
expiration,
|
expiration,
|
||||||
key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
|
key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
|
||||||
aclTags,
|
|
||||||
})
|
})
|
||||||
|
|
||||||
}
|
}
|
||||||
@@ -149,33 +122,31 @@ var listPreAuthKeys = &cobra.Command{
|
|||||||
|
|
||||||
var createPreAuthKeyCmd = &cobra.Command{
|
var createPreAuthKeyCmd = &cobra.Command{
|
||||||
Use: "create",
|
Use: "create",
|
||||||
Short: "Creates a new preauthkey in the specified user",
|
Short: "Creates a new preauthkey in the specified namespace",
|
||||||
Aliases: []string{"c", "new"},
|
Aliases: []string{"c", "new"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
reusable, _ := cmd.Flags().GetBool("reusable")
|
reusable, _ := cmd.Flags().GetBool("reusable")
|
||||||
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
|
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
|
||||||
tags, _ := cmd.Flags().GetStringSlice("tags")
|
|
||||||
|
|
||||||
log.Trace().
|
log.Trace().
|
||||||
Bool("reusable", reusable).
|
Bool("reusable", reusable).
|
||||||
Bool("ephemeral", ephemeral).
|
Bool("ephemeral", ephemeral).
|
||||||
Str("user", user).
|
Str("namespace", namespace).
|
||||||
Msg("Preparing to create preauthkey")
|
Msg("Preparing to create preauthkey")
|
||||||
|
|
||||||
request := &v1.CreatePreAuthKeyRequest{
|
request := &v1.CreatePreAuthKeyRequest{
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
Reusable: reusable,
|
Reusable: reusable,
|
||||||
Ephemeral: ephemeral,
|
Ephemeral: ephemeral,
|
||||||
AclTags: tags,
|
|
||||||
}
|
}
|
||||||
|
|
||||||
durationStr, _ := cmd.Flags().GetString("expiration")
|
durationStr, _ := cmd.Flags().GetString("expiration")
|
||||||
@@ -231,9 +202,9 @@ var expirePreAuthKeyCmd = &cobra.Command{
|
|||||||
},
|
},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
user, err := cmd.Flags().GetString("user")
|
namespace, err := cmd.Flags().GetString("namespace")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error getting namespace: %s", err), output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
@@ -243,8 +214,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
|
|||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
request := &v1.ExpirePreAuthKeyRequest{
|
request := &v1.ExpirePreAuthKeyRequest{
|
||||||
User: user,
|
Namespace: namespace,
|
||||||
Key: args[0],
|
Key: args[0],
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.ExpirePreAuthKey(ctx, request)
|
response, err := client.ExpirePreAuthKey(ctx, request)
|
||||||
|
@@ -12,18 +12,9 @@ import (
|
|||||||
"github.com/tcnksm/go-latest"
|
"github.com/tcnksm/go-latest"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
|
||||||
deprecateNamespaceMessage = "use --user"
|
|
||||||
)
|
|
||||||
|
|
||||||
var cfgFile string = ""
|
var cfgFile string = ""
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
if len(os.Args) > 1 &&
|
|
||||||
(os.Args[1] == "version" || os.Args[1] == "mockoidc" || os.Args[1] == "completion") {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
cobra.OnInitialize(initConfig)
|
cobra.OnInitialize(initConfig)
|
||||||
rootCmd.PersistentFlags().
|
rootCmd.PersistentFlags().
|
||||||
StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
|
StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
|
||||||
@@ -56,7 +47,7 @@ func initConfig() {
|
|||||||
|
|
||||||
machineOutput := HasMachineOutputFlag()
|
machineOutput := HasMachineOutputFlag()
|
||||||
|
|
||||||
zerolog.SetGlobalLevel(cfg.Log.Level)
|
zerolog.SetGlobalLevel(cfg.LogLevel)
|
||||||
|
|
||||||
// If the user has requested a "machine" readable format,
|
// If the user has requested a "machine" readable format,
|
||||||
// then disable login so the output remains valid.
|
// then disable login so the output remains valid.
|
||||||
@@ -64,10 +55,6 @@ func initConfig() {
|
|||||||
zerolog.SetGlobalLevel(zerolog.Disabled)
|
zerolog.SetGlobalLevel(zerolog.Disabled)
|
||||||
}
|
}
|
||||||
|
|
||||||
if cfg.Log.Format == headscale.JSONLogFormat {
|
|
||||||
log.Logger = log.Output(os.Stdout)
|
|
||||||
}
|
|
||||||
|
|
||||||
if !cfg.DisableUpdateCheck && !machineOutput {
|
if !cfg.DisableUpdateCheck && !machineOutput {
|
||||||
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
|
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
|
||||||
Version != "dev" {
|
Version != "dev" {
|
||||||
|
@@ -3,45 +3,37 @@ package cli
|
|||||||
import (
|
import (
|
||||||
"fmt"
|
"fmt"
|
||||||
"log"
|
"log"
|
||||||
"net/netip"
|
|
||||||
"strconv"
|
"strconv"
|
||||||
|
|
||||||
"github.com/juanfont/headscale"
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/pterm/pterm"
|
"github.com/pterm/pterm"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"google.golang.org/grpc/status"
|
"google.golang.org/grpc/status"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
|
||||||
Base10 = 10
|
|
||||||
)
|
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(routesCmd)
|
rootCmd.AddCommand(routesCmd)
|
||||||
|
|
||||||
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
|
err := listRoutesCmd.MarkFlagRequired("identifier")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf(err.Error())
|
||||||
|
}
|
||||||
routesCmd.AddCommand(listRoutesCmd)
|
routesCmd.AddCommand(listRoutesCmd)
|
||||||
|
|
||||||
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
|
enableRouteCmd.Flags().
|
||||||
err := enableRouteCmd.MarkFlagRequired("route")
|
StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to enable")
|
||||||
|
enableRouteCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
|
enableRouteCmd.Flags().BoolP("all", "a", false, "All routes from host")
|
||||||
|
|
||||||
|
err = enableRouteCmd.MarkFlagRequired("identifier")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatalf(err.Error())
|
log.Fatalf(err.Error())
|
||||||
}
|
}
|
||||||
|
|
||||||
routesCmd.AddCommand(enableRouteCmd)
|
routesCmd.AddCommand(enableRouteCmd)
|
||||||
|
|
||||||
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
|
nodeCmd.AddCommand(routesCmd)
|
||||||
err = disableRouteCmd.MarkFlagRequired("route")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf(err.Error())
|
|
||||||
}
|
|
||||||
routesCmd.AddCommand(disableRouteCmd)
|
|
||||||
|
|
||||||
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
|
|
||||||
err = deleteRouteCmd.MarkFlagRequired("route")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatalf(err.Error())
|
|
||||||
}
|
|
||||||
routesCmd.AddCommand(deleteRouteCmd)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
var routesCmd = &cobra.Command{
|
var routesCmd = &cobra.Command{
|
||||||
@@ -52,7 +44,7 @@ var routesCmd = &cobra.Command{
|
|||||||
|
|
||||||
var listRoutesCmd = &cobra.Command{
|
var listRoutesCmd = &cobra.Command{
|
||||||
Use: "list",
|
Use: "list",
|
||||||
Short: "List all routes",
|
Short: "List routes advertised and enabled by a given node",
|
||||||
Aliases: []string{"ls", "show"},
|
Aliases: []string{"ls", "show"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
@@ -72,51 +64,28 @@ var listRoutesCmd = &cobra.Command{
|
|||||||
defer cancel()
|
defer cancel()
|
||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
var routes []*v1.Route
|
request := &v1.GetMachineRouteRequest{
|
||||||
|
MachineId: machineID,
|
||||||
if machineID == 0 {
|
|
||||||
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if output != "" {
|
|
||||||
SuccessOutput(response.Routes, "", output)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
routes = response.Routes
|
|
||||||
} else {
|
|
||||||
response, err := client.GetMachineRoutes(ctx, &v1.GetMachineRoutesRequest{
|
|
||||||
MachineId: machineID,
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Cannot get routes for machine %d: %s", machineID, status.Convert(err).Message()),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if output != "" {
|
|
||||||
SuccessOutput(response.Routes, "", output)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
routes = response.Routes
|
|
||||||
}
|
}
|
||||||
|
|
||||||
tableData := routesToPtables(routes)
|
response, err := client.GetMachineRoute(ctx, request)
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
if output != "" {
|
||||||
|
SuccessOutput(response.Routes, "", output)
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
tableData := routesToPtables(response.Routes)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
||||||
|
|
||||||
@@ -138,12 +107,16 @@ var listRoutesCmd = &cobra.Command{
|
|||||||
|
|
||||||
var enableRouteCmd = &cobra.Command{
|
var enableRouteCmd = &cobra.Command{
|
||||||
Use: "enable",
|
Use: "enable",
|
||||||
Short: "Set a route as enabled",
|
Short: "Set the enabled routes for a given node",
|
||||||
Long: `This command will make as enabled a given route.`,
|
Long: `This command will take a list of routes that will _replace_
|
||||||
|
the current set of routes on a given node.
|
||||||
|
If you would like to disable a route, simply run the command again, but
|
||||||
|
omit the route you do not want to enable.
|
||||||
|
`,
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
|
||||||
routeID, err := cmd.Flags().GetUint64("route")
|
machineID, err := cmd.Flags().GetUint64("identifier")
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
@@ -158,13 +131,52 @@ var enableRouteCmd = &cobra.Command{
|
|||||||
defer cancel()
|
defer cancel()
|
||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
|
var routes []string
|
||||||
RouteId: routeID,
|
|
||||||
})
|
isAll, _ := cmd.Flags().GetBool("all")
|
||||||
|
if isAll {
|
||||||
|
response, err := client.GetMachineRoute(ctx, &v1.GetMachineRouteRequest{
|
||||||
|
MachineId: machineID,
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf(
|
||||||
|
"Cannot get machine routes: %s\n",
|
||||||
|
status.Convert(err).Message(),
|
||||||
|
),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
routes = response.GetRoutes().GetAdvertisedRoutes()
|
||||||
|
} else {
|
||||||
|
routes, err = cmd.Flags().GetStringSlice("route")
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf("Error getting routes from flag: %s", err),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
request := &v1.EnableMachineRoutesRequest{
|
||||||
|
MachineId: machineID,
|
||||||
|
Routes: routes,
|
||||||
|
}
|
||||||
|
|
||||||
|
response, err := client.EnableMachineRoutes(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
|
fmt.Sprintf(
|
||||||
|
"Cannot register machine: %s\n",
|
||||||
|
status.Convert(err).Message(),
|
||||||
|
),
|
||||||
output,
|
output,
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -172,127 +184,50 @@ var enableRouteCmd = &cobra.Command{
|
|||||||
}
|
}
|
||||||
|
|
||||||
if output != "" {
|
if output != "" {
|
||||||
SuccessOutput(response, "", output)
|
SuccessOutput(response.Routes, "", output)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
var disableRouteCmd = &cobra.Command{
|
tableData := routesToPtables(response.Routes)
|
||||||
Use: "disable",
|
if err != nil {
|
||||||
Short: "Set as disabled a given route",
|
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
||||||
Long: `This command will make as disabled a given route.`,
|
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
|
||||||
output, _ := cmd.Flags().GetString("output")
|
|
||||||
|
|
||||||
routeID, err := cmd.Flags().GetUint64("route")
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
ErrorOutput(
|
||||||
err,
|
err,
|
||||||
fmt.Sprintf("Error getting machine id from flag: %s", err),
|
fmt.Sprintf("Failed to render pterm table: %s", err),
|
||||||
output,
|
output,
|
||||||
)
|
)
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx, client, conn, cancel := getHeadscaleCLIClient()
|
|
||||||
defer cancel()
|
|
||||||
defer conn.Close()
|
|
||||||
|
|
||||||
response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
|
|
||||||
RouteId: routeID,
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if output != "" {
|
|
||||||
SuccessOutput(response, "", output)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
var deleteRouteCmd = &cobra.Command{
|
|
||||||
Use: "delete",
|
|
||||||
Short: "Delete a given route",
|
|
||||||
Long: `This command will delete a given route.`,
|
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
|
||||||
output, _ := cmd.Flags().GetString("output")
|
|
||||||
|
|
||||||
routeID, err := cmd.Flags().GetUint64("route")
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Error getting machine id from flag: %s", err),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
ctx, client, conn, cancel := getHeadscaleCLIClient()
|
|
||||||
defer cancel()
|
|
||||||
defer conn.Close()
|
|
||||||
|
|
||||||
response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
|
|
||||||
RouteId: routeID,
|
|
||||||
})
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
if output != "" {
|
|
||||||
SuccessOutput(response, "", output)
|
|
||||||
|
|
||||||
return
|
|
||||||
}
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
// routesToPtables converts the list of routes to a nice table.
|
// routesToPtables converts the list of routes to a nice table.
|
||||||
func routesToPtables(routes []*v1.Route) pterm.TableData {
|
func routesToPtables(routes *v1.Routes) pterm.TableData {
|
||||||
tableData := pterm.TableData{{"ID", "Machine", "Prefix", "Advertised", "Enabled", "Primary"}}
|
tableData := pterm.TableData{{"Route", "Enabled"}}
|
||||||
|
|
||||||
for _, route := range routes {
|
for _, route := range routes.GetAdvertisedRoutes() {
|
||||||
var isPrimaryStr string
|
enabled := isStringInSlice(routes.EnabledRoutes, route)
|
||||||
prefix, err := netip.ParsePrefix(route.Prefix)
|
|
||||||
if err != nil {
|
|
||||||
log.Printf("Error parsing prefix %s: %s", route.Prefix, err)
|
|
||||||
|
|
||||||
continue
|
tableData = append(tableData, []string{route, strconv.FormatBool(enabled)})
|
||||||
}
|
|
||||||
if prefix == headscale.ExitRouteV4 || prefix == headscale.ExitRouteV6 {
|
|
||||||
isPrimaryStr = "-"
|
|
||||||
} else {
|
|
||||||
isPrimaryStr = strconv.FormatBool(route.IsPrimary)
|
|
||||||
}
|
|
||||||
|
|
||||||
tableData = append(tableData,
|
|
||||||
[]string{
|
|
||||||
strconv.FormatUint(route.Id, Base10),
|
|
||||||
route.Machine.GivenName,
|
|
||||||
route.Prefix,
|
|
||||||
strconv.FormatBool(route.Advertised),
|
|
||||||
strconv.FormatBool(route.Enabled),
|
|
||||||
isPrimaryStr,
|
|
||||||
})
|
|
||||||
}
|
}
|
||||||
|
|
||||||
return tableData
|
return tableData
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func isStringInSlice(strs []string, s string) bool {
|
||||||
|
for _, s2 := range strs {
|
||||||
|
if s == s2 {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return false
|
||||||
|
}
|
||||||
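The `isStringInSlice` helper introduced on one side of this hunk is a straightforward linear search. On Go 1.21 or newer, the standard library's `slices.Contains` covers the same need; a short equivalence sketch:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	enabledRoutes := []string{"10.0.0.0/8", "0.0.0.0/0"}

	// Equivalent to isStringInSlice(enabledRoutes, "10.0.0.0/8").
	fmt.Println(slices.Contains(enabledRoutes, "10.0.0.0/8"))      // true
	fmt.Println(slices.Contains(enabledRoutes, "192.168.0.0/24"))  // false
}
```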
|
@@ -14,12 +14,11 @@ import (
|
|||||||
"google.golang.org/grpc"
|
"google.golang.org/grpc"
|
||||||
"google.golang.org/grpc/credentials"
|
"google.golang.org/grpc/credentials"
|
||||||
"google.golang.org/grpc/credentials/insecure"
|
"google.golang.org/grpc/credentials/insecure"
|
||||||
"gopkg.in/yaml.v3"
|
"gopkg.in/yaml.v2"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
|
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
|
||||||
SocketWritePermissions = 0o666
|
|
||||||
)
|
)
|
||||||
|
|
||||||
func getHeadscaleApp() (*headscale.Headscale, error) {
|
func getHeadscaleApp() (*headscale.Headscale, error) {
|
||||||
@@ -82,19 +81,6 @@ func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.
|
|||||||
|
|
||||||
address = cfg.UnixSocket
|
address = cfg.UnixSocket
|
||||||
|
|
||||||
// Try to give the user better feedback if we cannot write to the headscale
|
|
||||||
// socket.
|
|
||||||
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
|
|
||||||
if err != nil {
|
|
||||||
if os.IsPermission(err) {
|
|
||||||
log.Fatal().
|
|
||||||
Err(err).
|
|
||||||
Str("socket", cfg.UnixSocket).
|
|
||||||
Msgf("Unable to read/write to headscale socket, do you have the correct permissions?")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
socket.Close()
|
|
||||||
|
|
||||||
grpcOptions = append(
|
grpcOptions = append(
|
||||||
grpcOptions,
|
grpcOptions,
|
||||||
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
grpc.WithTransportCredentials(insecure.NewCredentials()),
|
||||||
|
@@ -6,25 +6,11 @@ import (
|
|||||||
|
|
||||||
"github.com/efekarakus/termcolor"
|
"github.com/efekarakus/termcolor"
|
||||||
"github.com/juanfont/headscale/cmd/headscale/cli"
|
"github.com/juanfont/headscale/cmd/headscale/cli"
|
||||||
"github.com/pkg/profile"
|
|
||||||
"github.com/rs/zerolog"
|
"github.com/rs/zerolog"
|
||||||
"github.com/rs/zerolog/log"
|
"github.com/rs/zerolog/log"
|
||||||
)
|
)
|
||||||
|
|
||||||
func main() {
|
func main() {
|
||||||
if _, enableProfile := os.LookupEnv("HEADSCALE_PROFILING_ENABLED"); enableProfile {
|
|
||||||
if profilePath, ok := os.LookupEnv("HEADSCALE_PROFILING_PATH"); ok {
|
|
||||||
err := os.MkdirAll(profilePath, os.ModePerm)
|
|
||||||
if err != nil {
|
|
||||||
log.Fatal().Err(err).Msg("failed to create profiling directory")
|
|
||||||
}
|
|
||||||
|
|
||||||
defer profile.Start(profile.ProfilePath(profilePath)).Stop()
|
|
||||||
} else {
|
|
||||||
defer profile.Start().Stop()
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
var colors bool
|
var colors bool
|
||||||
switch l := termcolor.SupportLevel(os.Stderr); l {
|
switch l := termcolor.SupportLevel(os.Stderr); l {
|
||||||
case termcolor.Level16M:
|
case termcolor.Level16M:
|
||||||
|
@@ -55,7 +55,7 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
|
|||||||
|
|
||||||
// Test that config file was interpreted correctly
|
// Test that config file was interpreted correctly
|
||||||
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
|
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
|
||||||
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
|
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
|
||||||
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
|
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
|
||||||
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
|
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
|
||||||
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
|
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
|
||||||
@@ -98,7 +98,7 @@ func (*Suite) TestConfigLoading(c *check.C) {
|
|||||||
|
|
||||||
// Test that config file was interpreted correctly
|
// Test that config file was interpreted correctly
|
||||||
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
|
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
|
||||||
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
|
c.Assert(viper.GetString("listen_addr"), check.Equals, "0.0.0.0:8080")
|
||||||
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
|
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
|
||||||
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
|
c.Assert(viper.GetString("db_type"), check.Equals, "sqlite3")
|
||||||
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
|
c.Assert(viper.GetString("db_path"), check.Equals, "/var/lib/headscale/db.sqlite")
|
||||||
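These assertions exercise viper's view of the example configuration. A minimal, self-contained sketch of the same lookup against an inline document (the keys mirror the test, the inline YAML itself is illustrative):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()
	v.SetConfigType("yaml")

	// Inline stand-in for config-example.yaml.
	config := `
server_url: http://127.0.0.1:8080
listen_addr: 0.0.0.0:8080
`
	if err := v.ReadConfig(strings.NewReader(config)); err != nil {
		panic(err)
	}

	fmt.Println(v.GetString("server_url"))  // http://127.0.0.1:8080
	fmt.Println(v.GetString("listen_addr")) // 0.0.0.0:8080
}
```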
|
@@ -14,9 +14,7 @@ server_url: http://127.0.0.1:8080
|
|||||||
|
|
||||||
# Address to listen to / bind to on the server
|
# Address to listen to / bind to on the server
|
||||||
#
|
#
|
||||||
# For production:
|
listen_addr: 0.0.0.0:8080
|
||||||
# listen_addr: 0.0.0.0:8080
|
|
||||||
listen_addr: 127.0.0.1:8080
|
|
||||||
|
|
||||||
# Address to listen to /metrics, you may want
|
# Address to listen to /metrics, you may want
|
||||||
# to keep this endpoint private to your internal
|
# to keep this endpoint private to your internal
|
||||||
@@ -29,10 +27,7 @@ metrics_listen_addr: 127.0.0.1:9090
|
|||||||
# remotely with the CLI
|
# remotely with the CLI
|
||||||
# Note: Remote access _only_ works if you have
|
# Note: Remote access _only_ works if you have
|
||||||
# valid certificates.
|
# valid certificates.
|
||||||
#
|
grpc_listen_addr: 0.0.0.0:50443
|
||||||
# For production:
|
|
||||||
# grpc_listen_addr: 0.0.0.0:50443
|
|
||||||
grpc_listen_addr: 127.0.0.1:50443
|
|
||||||
|
|
||||||
# Allow the gRPC admin interface to run in INSECURE
|
# Allow the gRPC admin interface to run in INSECURE
|
||||||
# mode. This is not recommended as the traffic will
|
# mode. This is not recommended as the traffic will
|
||||||
@@ -40,14 +35,14 @@ grpc_listen_addr: 127.0.0.1:50443
|
|||||||
# are doing.
|
# are doing.
|
||||||
grpc_allow_insecure: false
|
grpc_allow_insecure: false
|
||||||
|
|
||||||
# Private key used to encrypt the traffic between headscale
|
# Private key used encrypt the traffic between headscale
|
||||||
# and Tailscale clients.
|
# and Tailscale clients.
|
||||||
# The private key file will be autogenerated if it's missing.
|
# The private key file which will be
|
||||||
#
|
# autogenerated if it's missing
|
||||||
private_key_path: /var/lib/headscale/private.key
|
private_key_path: /var/lib/headscale/private.key
|
||||||
|
|
||||||
# The Noise section includes specific configuration for the
|
# The Noise section includes specific configuration for the
|
||||||
# TS2021 Noise protocol
|
# TS2021 Noise procotol
|
||||||
noise:
|
noise:
|
||||||
# The Noise private key is used to encrypt the
|
# The Noise private key is used to encrypt the
|
||||||
# traffic between headscale and Tailscale clients when
|
# traffic between headscale and Tailscale clients when
|
||||||
@@ -58,12 +53,6 @@ noise:
|
|||||||
# List of IP prefixes to allocate tailaddresses from.
|
# List of IP prefixes to allocate tailaddresses from.
|
||||||
# Each prefix consists of either an IPv4 or IPv6 address,
|
# Each prefix consists of either an IPv4 or IPv6 address,
|
||||||
# and the associated prefix length, delimited by a slash.
|
# and the associated prefix length, delimited by a slash.
|
||||||
# It must be within IP ranges supported by the Tailscale
|
|
||||||
# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
|
|
||||||
# See below:
|
|
||||||
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
|
|
||||||
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
|
|
||||||
# Any other range is NOT supported, and it will cause unexpected issues.
|
|
||||||
ip_prefixes:
|
ip_prefixes:
|
||||||
- fd7a:115c:a1e0::/48
|
- fd7a:115c:a1e0::/48
|
||||||
- 100.64.0.0/10
|
- 100.64.0.0/10
|
||||||
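The comment lines present on only one side of this hunk spell out that `ip_prefixes` must stay inside the ranges the Tailscale client supports (subnets of `100.64.0.0/10` and `fd7a:115c:a1e0::/48`). A small sketch of how such a constraint can be checked with the standard library's `net/netip`; the hardcoded ranges mirror that comment and are not taken from headscale's own validation code:

```go
package main

import (
	"fmt"
	"net/netip"
)

// withinSupportedRange reports whether a configured prefix falls inside the
// IPv4 CGNAT range or the Tailscale ULA range named in the comment above.
func withinSupportedRange(prefix netip.Prefix) bool {
	supported := []netip.Prefix{
		netip.MustParsePrefix("100.64.0.0/10"),
		netip.MustParsePrefix("fd7a:115c:a1e0::/48"),
	}

	for _, parent := range supported {
		if parent.Contains(prefix.Addr()) && prefix.Bits() >= parent.Bits() {
			return true
		}
	}

	return false
}

func main() {
	fmt.Println(withinSupportedRange(netip.MustParsePrefix("100.64.0.0/10")))  // true
	fmt.Println(withinSupportedRange(netip.MustParsePrefix("192.168.0.0/24"))) // false
}
```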
@@ -89,7 +78,7 @@ derp:
|
|||||||
region_code: "headscale"
|
region_code: "headscale"
|
||||||
region_name: "Headscale Embedded DERP"
|
region_name: "Headscale Embedded DERP"
|
||||||
|
|
||||||
# Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
|
# Listens in UDP at the configured address for STUN connections to help on NAT traversal.
|
||||||
# When the embedded DERP server is enabled stun_listen_addr MUST be defined.
|
# When the embedded DERP server is enabled stun_listen_addr MUST be defined.
|
||||||
#
|
#
|
||||||
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
|
# For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
|
||||||
@@ -123,16 +112,14 @@ disable_check_updates: false
|
|||||||
# Time before an inactive ephemeral node is deleted?
|
# Time before an inactive ephemeral node is deleted?
|
||||||
ephemeral_node_inactivity_timeout: 30m
|
ephemeral_node_inactivity_timeout: 30m
|
||||||
|
|
||||||
# Period to check for node updates within the tailnet. A value too low will severely affect
|
# Period to check for node updates in the tailnet. A value too low will severily affect
|
||||||
# CPU consumption of Headscale. A value too high (over 60s) will cause problems
|
# CPU consumption of Headscale. A value too high (over 60s) will cause problems
|
||||||
# for the nodes, as they won't get updates or keep alive messages frequently enough.
|
# to the nodes, as they won't get updates or keep alive messages in time.
|
||||||
# In case of doubts, do not touch the default 10s.
|
# In case of doubts, do not touch the default 10s.
|
||||||
node_update_check_interval: 10s
|
node_update_check_interval: 10s
|
||||||
|
|
||||||
# SQLite config
|
# SQLite config
|
||||||
db_type: sqlite3
|
db_type: sqlite3
|
||||||
|
|
||||||
# For production:
|
|
||||||
db_path: /var/lib/headscale/db.sqlite
|
db_path: /var/lib/headscale/db.sqlite
|
||||||
|
|
||||||
# # Postgres config
|
# # Postgres config
|
||||||
@@ -143,9 +130,6 @@ db_path: /var/lib/headscale/db.sqlite
|
|||||||
# db_name: headscale
|
# db_name: headscale
|
||||||
# db_user: foo
|
# db_user: foo
|
||||||
# db_pass: bar
|
# db_pass: bar
|
||||||
|
|
||||||
# If other 'sslmode' is required instead of 'require(true)' and 'disabled(false)', set the 'sslmode' you need
|
|
||||||
# in the 'db_ssl' field. Refers to https://www.postgresql.org/docs/current/libpq-ssl.html Table 34.1.
|
|
||||||
# db_ssl: false
|
# db_ssl: false
|
||||||
|
|
||||||
### TLS configuration
|
### TLS configuration
|
||||||
@@ -164,9 +148,15 @@ acme_email: ""
|
|||||||
# Domain name to request a TLS certificate for:
|
# Domain name to request a TLS certificate for:
|
||||||
tls_letsencrypt_hostname: ""
|
tls_letsencrypt_hostname: ""
|
||||||
|
|
||||||
|
# Client (Tailscale/Browser) authentication mode (mTLS)
|
||||||
|
# Acceptable values:
|
||||||
|
# - disabled: client authentication disabled
|
||||||
|
# - relaxed: client certificate is required but not verified
|
||||||
|
# - enforced: client certificate is required and verified
|
||||||
|
tls_client_auth_mode: relaxed
|
||||||
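The `tls_client_auth_mode` option shown in this hunk accepts three values. A plausible mapping onto `crypto/tls` client-auth policies is sketched below; only the three strings come from the config comment, the specific constants chosen for each mode are an assumption:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// clientAuthMode maps the config strings onto crypto/tls policies.
// The chosen constants are illustrative assumptions.
func clientAuthMode(mode string) (tls.ClientAuthType, error) {
	switch mode {
	case "disabled":
		return tls.NoClientCert, nil
	case "relaxed":
		return tls.RequireAnyClientCert, nil
	case "enforced":
		return tls.RequireAndVerifyClientCert, nil
	default:
		return tls.NoClientCert, fmt.Errorf("invalid tls_client_auth_mode: %q", mode)
	}
}

func main() {
	mode, err := clientAuthMode("relaxed")
	if err != nil {
		panic(err)
	}
	fmt.Println(mode == tls.RequireAnyClientCert) // true
}
```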
|
|
||||||
# Path to store certificates and metadata needed by
|
# Path to store certificates and metadata needed by
|
||||||
# letsencrypt
|
# letsencrypt
|
||||||
# For production:
|
|
||||||
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
|
tls_letsencrypt_cache_dir: /var/lib/headscale/cache
|
||||||
|
|
||||||
# Type of ACME challenge to use, currently supported types:
|
# Type of ACME challenge to use, currently supported types:
|
||||||
@@ -174,7 +164,7 @@ tls_letsencrypt_cache_dir: /var/lib/headscale/cache
|
|||||||
# See [docs/tls.md](docs/tls.md) for more information
|
# See [docs/tls.md](docs/tls.md) for more information
|
||||||
tls_letsencrypt_challenge_type: HTTP-01
|
tls_letsencrypt_challenge_type: HTTP-01
|
||||||
# When HTTP-01 challenge is chosen, letsencrypt must set up a
|
# When HTTP-01 challenge is chosen, letsencrypt must set up a
|
||||||
# verification endpoint, and it will be listening on:
|
# verification endpoint, and it will be listning on:
|
||||||
# :http = port 80
|
# :http = port 80
|
||||||
tls_letsencrypt_listen: ":http"
|
tls_letsencrypt_listen: ":http"
|
||||||
|
|
||||||
@@ -182,10 +172,7 @@ tls_letsencrypt_listen: ":http"
|
|||||||
tls_cert_path: ""
|
tls_cert_path: ""
|
||||||
tls_key_path: ""
|
tls_key_path: ""
|
||||||
|
|
||||||
log:
|
log_level: info
|
||||||
# Output formatting for logs: text or json
|
|
||||||
format: text
|
|
||||||
level: info
|
|
||||||
|
|
||||||
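One side of this diff carries a structured `log:` block with a `format` key (text or json), the other the flat `log_level` key. With zerolog, the two formats correspond to the console writer versus raw JSON output; a minimal sketch assuming those same values:

```go
package main

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

// configureLogging applies a level plus either human-readable or JSON output,
// mirroring the text/json choice offered by a log format setting.
func configureLogging(level zerolog.Level, format string) {
	zerolog.SetGlobalLevel(level)

	if format == "json" {
		log.Logger = log.Output(os.Stdout) // zerolog emits JSON by default

		return
	}

	log.Logger = log.Output(zerolog.ConsoleWriter{Out: os.Stdout})
}

func main() {
	configureLogging(zerolog.InfoLevel, "text")
	log.Info().Str("component", "example").Msg("logging configured")
}
```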
# Path to a file containing ACL policies.
|
# Path to a file containing ACL policies.
|
||||||
# ACLs can be defined as YAML or HUJSON.
|
# ACLs can be defined as YAML or HUJSON.
|
||||||
@@ -202,25 +189,10 @@ acl_policy_path: ""
|
|||||||
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
|
# - https://tailscale.com/blog/2021-09-private-dns-with-magicdns/
|
||||||
#
|
#
|
||||||
dns_config:
|
dns_config:
|
||||||
# Whether to prefer using Headscale provided DNS or use local.
|
|
||||||
override_local_dns: true
|
|
||||||
|
|
||||||
# List of DNS servers to expose to clients.
|
# List of DNS servers to expose to clients.
|
||||||
nameservers:
|
nameservers:
|
||||||
- 1.1.1.1
|
- 1.1.1.1
|
||||||
|
|
||||||
# NextDNS (see https://tailscale.com/kb/1218/nextdns/).
|
|
||||||
# "abc123" is example NextDNS ID, replace with yours.
|
|
||||||
#
|
|
||||||
# With metadata sharing:
|
|
||||||
# nameservers:
|
|
||||||
# - https://dns.nextdns.io/abc123
|
|
||||||
#
|
|
||||||
# Without metadata sharing:
|
|
||||||
# nameservers:
|
|
||||||
# - 2a07:a8c0::ab:c123
|
|
||||||
# - 2a07:a8c1::ab:c123
|
|
||||||
|
|
||||||
# Split DNS (see https://tailscale.com/kb/1054/dns/),
|
# Split DNS (see https://tailscale.com/kb/1054/dns/),
|
||||||
# list of search domains and the DNS to query for each one.
|
# list of search domains and the DNS to query for each one.
|
||||||
#
|
#
|
||||||
@@ -234,17 +206,6 @@ dns_config:
|
|||||||
# Search domains to inject.
|
# Search domains to inject.
|
||||||
domains: []
|
domains: []
|
||||||
|
|
||||||
# Extra DNS records
|
|
||||||
# so far only A-records are supported (on the tailscale side)
|
|
||||||
# See https://github.com/juanfont/headscale/blob/main/docs/dns-records.md#Limitations
|
|
||||||
# extra_records:
|
|
||||||
# - name: "grafana.myvpn.example.com"
|
|
||||||
# type: "A"
|
|
||||||
# value: "100.64.0.3"
|
|
||||||
#
|
|
||||||
# # you can also put it in one line
|
|
||||||
# - { name: "prometheus.myvpn.example.com", type: "A", value: "100.64.0.3" }
|
|
||||||
|
|
||||||
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
|
# Whether to use [MagicDNS](https://tailscale.com/kb/1081/magicdns/).
|
||||||
# Only works if there is at least one nameserver defined.
|
# Only works if there is at least one nameserver defined.
|
||||||
magic_dns: true
|
magic_dns: true
|
||||||
@@ -252,12 +213,13 @@ dns_config:
|
|||||||
# Defines the base domain to create the hostnames for MagicDNS.
|
# Defines the base domain to create the hostnames for MagicDNS.
|
||||||
# `base_domain` must be an FQDN, without the trailing dot.
|
# `base_domain` must be an FQDN, without the trailing dot.
|
||||||
# The FQDN of the hosts will be
|
# The FQDN of the hosts will be
|
||||||
# `hostname.user.base_domain` (e.g., _myhost.myuser.example.com_).
|
# `hostname.namespace.base_domain` (e.g., _myhost.mynamespace.example.com_).
|
||||||
base_domain: example.com
|
base_domain: example.com
|
||||||
|
|
||||||
# Unix socket used for the CLI to connect without authentication
|
# Unix socket used for the CLI to connect without authentication
|
||||||
# Note: for production you will want to set this to something like:
|
# Note: for local development, you probably want to change this to:
|
||||||
unix_socket: /var/run/headscale/headscale.sock
|
# unix_socket: ./headscale.sock
|
||||||
|
unix_socket: /var/run/headscale.sock
|
||||||
unix_socket_permission: "0770"
|
unix_socket_permission: "0770"
|
||||||
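The `unix_socket` setting in this hunk is what the CLI's `getHeadscaleCLIClient` helper (diffed further up) dials when no remote address is configured. A minimal sketch of a gRPC dial over a Unix socket with insecure transport credentials, using one of the socket paths from the hunk above:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The unix:// scheme makes grpc-go connect to a local socket instead of TCP.
	conn, err := grpc.DialContext(
		ctx,
		"unix:///var/run/headscale/headscale.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock(),
	)
	if err != nil {
		log.Fatalf("dialing headscale socket: %v", err)
	}
	defer conn.Close()

	log.Println("connected:", conn.Target())
}
```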
#
|
#
|
||||||
# headscale supports experimental OpenID connect support,
|
# headscale supports experimental OpenID connect support,
|
||||||
@@ -265,49 +227,29 @@ unix_socket_permission: "0770"
|
|||||||
# help us test it.
|
# help us test it.
|
||||||
# OpenID Connect
|
# OpenID Connect
|
||||||
# oidc:
|
# oidc:
|
||||||
# only_start_if_oidc_is_available: true
|
|
||||||
# issuer: "https://your-oidc.issuer.com/path"
|
# issuer: "https://your-oidc.issuer.com/path"
|
||||||
# client_id: "your-oidc-client-id"
|
# client_id: "your-oidc-client-id"
|
||||||
# client_secret: "your-oidc-client-secret"
|
# client_secret: "your-oidc-client-secret"
|
||||||
# # Alternatively, set `client_secret_path` to read the secret from the file.
|
|
||||||
# # It resolves environment variables, making integration to systemd's
|
|
||||||
# # `LoadCredential` straightforward:
|
|
||||||
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
|
|
||||||
# # client_secret and client_secret_path are mutually exclusive.
|
|
||||||
#
|
#
|
||||||
# # The amount of time from a node is authenticated with OpenID until it
|
# Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
|
||||||
# # expires and needs to reauthenticate.
|
# parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
|
||||||
# # Setting the value to "0" will mean no expiry.
|
|
||||||
# expiry: 180d
|
|
||||||
#
|
|
||||||
# # Use the expiry from the token received from OpenID when the user logged
|
|
||||||
# # in, this will typically lead to frequent need to reauthenticate and should
|
|
||||||
# # only been enabled if you know what you are doing.
|
|
||||||
# # Note: enabling this will cause `oidc.expiry` to be ignored.
|
|
||||||
# use_expiry_from_token: false
|
|
||||||
#
|
|
||||||
# # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
|
|
||||||
# # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
|
|
||||||
#
|
#
|
||||||
# scope: ["openid", "profile", "email", "custom"]
|
# scope: ["openid", "profile", "email", "custom"]
|
||||||
# extra_params:
|
# extra_params:
|
||||||
# domain_hint: example.com
|
# domain_hint: example.com
|
||||||
#
|
#
|
||||||
# # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
|
# List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
|
||||||
# # authentication request will be rejected.
|
# authentication request will be rejected.
|
||||||
#
|
#
|
||||||
# allowed_domains:
|
# allowed_domains:
|
||||||
# - example.com
|
# - example.com
|
||||||
# # Note: Groups from keycloak have a leading '/'
|
|
||||||
# allowed_groups:
|
|
||||||
# - /headscale
|
|
||||||
# allowed_users:
|
# allowed_users:
|
||||||
# - alice@example.com
|
# - alice@example.com
|
||||||
#
|
#
|
||||||
# # If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
|
# If `strip_email_domain` is set to `true`, the domain part of the username email address will be removed.
|
||||||
# # This will transform `first-name.last-name@example.com` to the user `first-name.last-name`
|
# This will transform `first-name.last-name@example.com` to the namespace `first-name.last-name`
|
||||||
# # If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
|
# If `strip_email_domain` is set to `false` the domain part will NOT be removed resulting to the following
|
||||||
# user: `first-name.last-name.example.com`
|
# namespace: `first-name.last-name.example.com`
|
||||||
#
|
#
|
||||||
# strip_email_domain: true
|
# strip_email_domain: true
|
||||||
|
|
||||||
|
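The removed `client_secret_path` comment is easier to follow with a concrete sketch of the lookup it describes: the configured path is environment-expanded before the file is read, which is what makes systemd's `LoadCredential` (exposed to the service as `$CREDENTIALS_DIRECTORY`) convenient. The snippet below is an illustrative, stand-alone sketch rather than the headscale implementation; the credential file name and the whitespace trimming are assumptions.

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// readOIDCClientSecret mimics the documented behaviour of client_secret_path:
// expand environment variables in the configured path, then read the secret
// from the resulting file. With systemd's LoadCredential, CREDENTIALS_DIRECTORY
// points at the directory holding the credential files.
func readOIDCClientSecret(path string) (string, error) {
	expanded := os.ExpandEnv(path) // "${CREDENTIALS_DIRECTORY}/oidc_client_secret" -> concrete path
	raw, err := os.ReadFile(expanded)
	if err != nil {
		return "", err
	}
	// Trimming trailing newlines is a choice made here for the sketch.
	return strings.TrimSpace(string(raw)), nil
}

func main() {
	secret, err := readOIDCClientSecret("${CREDENTIALS_DIRECTORY}/oidc_client_secret")
	if err != nil {
		log.Fatalf("reading oidc client secret: %v", err)
	}
	fmt.Printf("loaded a %d byte client secret\n", len(secret))
}
```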
config.go (318 changes)

- @@ -1,22 +1,20 @@ imports: drop "os", "github.com/prometheus/common/model" and "tailscale.com/net/tsaddr"; add "crypto/tls".
- @@ -24,16 +22,6 @@ constants: drop JSONLogFormat/TextLogFormat, defaultOIDCExpiryTime (180 days), maxDuration and the errOidcMutuallyExclusive error.
- @@ -49,7 +37,7 @@ type Config: replace the `Log LogConfig` field with `LogLevel zerolog.Level`.
- @@ -61,7 +49,7 @@ type Config: `DBssl string` becomes `DBssl bool`.
- @@ -84,8 +72,9 @@ type TLSConfig: add `ClientAuthMode tls.ClientAuthType`.
- @@ -98,18 +87,14 @@ type OIDCConfig: drop the OnlyStartIfOIDCIsAvailable, AllowedGroups, Expiry and UseExpiryFromToken fields.
- @@ -139,11 +124,6 @@ drop the LogConfig struct (Format string, Level zerolog.Level).
- @@ -165,12 +145,11 @@ LoadConfig defaults: add tls_client_auth_mode ("relaxed"), rename the log.level key to log_level, and drop the log.format and dns_config.override_local_dns defaults.
- @@ -184,13 +163,8 @@ LoadConfig defaults: drop db_ssl, oidc.only_start_if_oidc_is_available, oidc.expiry ("180d") and oidc.use_expiry_from_token.
- @@ -199,10 +173,6 @@ drop the early `if IsCLIConfigured() { return nil }` shortcut in LoadConfig.
- @@ -238,6 +208,19 @@ add validation of tls_client_auth_mode via LookupTLSClientAuthMode; invalid values produce a fatal config error listing DisabledClientAuth, RelaxedClientAuth and EnforcedClientAuth as the accepted modes.
- @@ -267,6 +250,10 @@ and @@ -282,6 +269,7 @@ GetTLSConfig: look up the configured client auth mode and set `ClientAuthMode` on the returned TLSConfig.
- @@ -346,58 +334,17 @@ drop GetLogConfig() (log level and json/text format parsing). In GetDNSConfig, nameservers and resolvers are pre-allocated with make() and assigned by index instead of appended, the DNS-over-HTTPS ("https://...") resolver handling is removed, and resolvers are always assigned to dnsConfig.Resolvers rather than switching between Resolvers and FallbackResolvers based on override_local_dns.
- @@ -406,76 +353,59 @@ GetDNSConfig: restricted_nameservers are only turned into routes when global nameservers are configured (a warning is logged otherwise) and are no longer appended to the search domains; dns_config.domains is assigned directly without the resolver check and warning; extra_records parsing is removed; magic_dns is only enabled when nameservers are configured, with a warning otherwise.
- @@ -492,17 +422,6 @@ GetHeadscaleConfig: drop the IsCLIConfigured() shortcut that returned a CLI-only Config.
- @@ -511,34 +430,33 @@ GetHeadscaleConfig: parse log_level (falling back to debug on error), add handling for the deprecated ip_prefix key with a warning pointing at ip_prefixes, and drop the warnings for IPv4 prefixes outside the CGNAT range and IPv6 prefixes outside the Tailscale ULA range.
- @@ -563,19 +481,6 @@ drop the oidc.client_secret_path handling (mutual exclusion with oidc.client_secret and reading the secret from the env-expanded path).
- @@ -583,6 +488,7 @@ Config literal: add `LogLevel: logLevel`.
- @@ -610,7 +516,7 @@ Config literal: DBssl is read with viper.GetBool instead of viper.GetString.
- @@ -623,41 +529,19 @@ OIDC literal: read ClientSecret straight from viper and drop OnlyStartIfOIDCIsAvailable, AllowedGroups, the Expiry closure (model.ParseDuration with a 180-day fallback and "0" meaning no expiry) and UseExpiryFromToken.
- @@ -665,10 +549,6 @@ drop `Log: GetLogConfig()`, move `ACL: GetACLConfig()` after the CLI block, and delete the IsCLIConfigured() helper.
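The tls_client_auth_mode validation above relies on a LookupTLSClientAuthMode helper that is not shown in these hunks. The following is only a rough, self-contained sketch of what such a helper looks like; the string constants other than "relaxed" (the default set in LoadConfig) and the exact tls.ClientAuthType mapping are assumptions, not taken from this diff.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// Accepted values for tls_client_auth_mode, as listed in the config validation above.
// The concrete strings for disabled/enforced are assumed here.
const (
	DisabledClientAuth = "disabled"
	RelaxedClientAuth  = "relaxed"
	EnforcedClientAuth = "enforced"
)

// lookupTLSClientAuthMode maps the configured string onto a crypto/tls policy.
// The boolean reports whether the input was a recognised mode.
func lookupTLSClientAuthMode(mode string) (tls.ClientAuthType, bool) {
	switch mode {
	case DisabledClientAuth:
		// No client certificate is requested.
		return tls.NoClientCert, true
	case RelaxedClientAuth:
		// A client certificate is requested but not verified against a CA.
		return tls.RequireAnyClientCert, true
	case EnforcedClientAuth:
		// A client certificate is required and must verify.
		return tls.RequireAndVerifyClientCert, true
	default:
		return tls.NoClientCert, false
	}
}

func main() {
	mode, ok := lookupTLSClientAuthMode("relaxed")
	fmt.Println(mode, ok)
}
```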
db.go (108 changes)

- @@ -18,10 +18,8 @@ const block: drop the ErrCannotParsePrefix error (dbVersion and errValueNotFound stay).
- @@ -41,16 +39,6 @@ initDB: drop the namespaces-to-users table rename, the AutoMigrate(&User{}) call, and the namespace_id-to-user_id column renames on Machine and PreAuthKey.
- @@ -91,70 +79,6 @@ initDB: drop the legacy enabled_routes migration (AutoMigrate(&Route{}), scanning machines into a MachineAux helper, creating a Route row per enabled prefix, and dropping the enabled_routes column afterwards).
- @@ -197,12 +121,12 @@ initDB: migrate Namespace and PreAuthKey instead of PreAuthKey and PreAuthKeyACLTag.
- @@ -335,30 +259,6 @@ drop the IPPrefix type together with its Scan/Value (sql.Scanner and driver.Valuer) implementation; IPPrefixes stays.
derp.go (2 changes)

- @@ -10,7 +10,7 @@ switch the YAML import from gopkg.in/yaml.v3 to gopkg.in/yaml.v2.
- @@ -157,14 +157,14 @@ DERPHandler: build the Derp-Public-Key response header from pubKey.UntypedHexString() instead of pubKey.MarshalText(), dropping the string() conversion in the Fprintf call.
dns.go (49 changes)

- @@ -3,13 +3,11 @@ imports: drop "net/url" and "tailscale.com/types/dnstype".
- @@ -22,10 +20,6 @@ drop the nextDNSDoHPrefix constant ("https://dns.nextdns.io").
- @@ -158,62 +152,37 @@ drop addNextDNSMetadata(), which appended device_name, device_model and device_ip query parameters to NextDNS DoH resolver addresses so that requests can be attributed to a device in the NextDNS dashboard. In getMapResponseDNSConfig, the original DNSConfig is now cloned only inside the MagicDNS branch, the injected search domains and routes are built from machine.Namespace and a namespaceSet instead of machine.User and a userSet, and the addNextDNSMetadata call on the resolvers is removed.
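For context on the removed addNextDNSMetadata helper: its doc comment states that it produces resolver addresses such as `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`. The stand-alone sketch below reproduces that URL rewriting for illustration only; the device values are made up, and the real helper operates on dnstype.Resolver entries rather than plain strings.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

const nextDNSDoHPrefix = "https://dns.nextdns.io"

// attachDeviceMetadata appends NextDNS device attributes to a DoH resolver URL,
// mirroring what the removed addNextDNSMetadata helper did per resolver.
func attachDeviceMetadata(resolverAddr, hostname, osName, ip string) string {
	if !strings.HasPrefix(resolverAddr, nextDNSDoHPrefix) {
		return resolverAddr
	}
	attrs := url.Values{
		"device_name":  []string{hostname},
		"device_model": []string{osName},
		"device_ip":    []string{ip},
	}
	// url.Values.Encode sorts keys, so the query is emitted as
	// device_ip=...&device_model=...&device_name=...
	return fmt.Sprintf("%s?%s", resolverAddr, attrs.Encode())
}

func main() {
	// Hypothetical values; a real machine would supply its own hostname, OS and Tailscale IP.
	fmt.Println(attachDeviceMetadata("https://dns.nextdns.io/abc123", "node-name", "linux", "100.64.0.1"))
}
```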
dns_test.go (90 changes)

Both TestDNSConfigMapResponseWithMagicDNS and TestDNSConfigMapResponseWithoutMagicDNS are updated mechanically for the namespace terminology: app.CreateUser becomes app.CreateNamespace, the userSharedN variables become namespaceSharedN, the Machine fields UserID/User become NamespaceID/Namespace, the GetMachine lookups and the expected MagicDNS domain routes are built from the namespace names, and each CreatePreAuthKey call is made with four arguments instead of five (the trailing nil is dropped).
docs/README.md (new file, 53 lines)

# headscale documentation

This page contains the official and community contributed documentation for `headscale`.

If you are having trouble following the documentation or get unexpected results,
please ask on [Discord](https://discord.gg/c84AZQhmpx) instead of opening an Issue.

## Official documentation

### How-to

- [Running headscale on Linux](running-headscale-linux.md)
- [Control headscale remotely](remote-cli.md)
- [Using a Windows client with headscale](windows-client.md)

### References

- [Configuration](../config-example.yaml)
- [Glossary](glossary.md)
- [TLS](tls.md)

## Community documentation

Community documentation is not actively maintained by the headscale authors and is
written by community members. It is _not_ verified by `headscale` developers.

**It might be outdated and it might miss necessary steps**.

- [Running headscale in a container](running-headscale-container.md)
- [Running headscale on OpenBSD](running-headscale-openbsd.md)

## Misc

### Policy ACLs

Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.

For instance, instead of referring to users when defining groups you must
use namespaces (which are the equivalent to user/logins in Tailscale.com).

Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.

When ACLs are used, namespace borders no longer apply: machines in any
namespace can communicate with other hosts as long as the ACLs permit the
exchange.

The [ACLs](acls.md) document should help understand a fictional case of setting
up ACLs in a small company. All concepts presented in this document could be
applied outside of business oriented usage.

### Apple devices

An endpoint with information on how to connect your Apple devices (currently macOS only) is available at `/apple` on your running instance.
25
docs/acls.md
25
docs/acls.md
@@ -1,15 +1,4 @@
|
|||||||
Headscale implements the same policy ACLs as Tailscale.com, adapted to the self-hosted environment.
|
# ACLs use case example
|
||||||
|
|
||||||
For instance, instead of referring to users when defining groups you must
|
|
||||||
use users (which are the equivalent to user/logins in Tailscale.com).
|
|
||||||
|
|
||||||
Please check https://tailscale.com/kb/1018/acls/, and `./tests/acls/` in this repo for working examples.
|
|
||||||
|
|
||||||
When using ACL's the User borders are no longer applied. All machines
|
|
||||||
whichever the User have the ability to communicate with other hosts as
|
|
||||||
long as the ACL's permits this exchange.
|
|
||||||
|
|
||||||
## ACLs use case example
|
|
||||||
|
|
||||||
Let's build an example use case for a small business (It may be the place where
|
Let's build an example use case for a small business (It may be the place where
|
||||||
ACL's are the most useful).
|
ACL's are the most useful).
|
||||||
@@ -40,21 +29,19 @@ servers.
 
 ## ACL setup
 
-Note: Users will be created automatically when users authenticate with the
+Note: Namespaces will be created automatically when users authenticate with the
 Headscale server.
 
 ACLs could be written either on [huJSON](https://github.com/tailscale/hujson)
 or YAML. Check the [test ACLs](../tests/acls) for further information.
 
 When registering the servers we will need to add the flag
-`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user that is
+`--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user (namespace) that is
 registering the server should be allowed to do it. Since anyone can add tags to
 a server they can register, the check of the tags is done on headscale server
-and only valid tags are applied. A tag is valid if the user that is
+and only valid tags are applied. A tag is valid if the namespace that is
 registering it is allowed to do it.
 
-To use ACLs in headscale, you must edit your config.yaml file. In there you will find a `acl_policy_path: ""` parameter. This will need to point to your ACL file. More info on how these policies are written can be found [here](https://tailscale.com/kb/1018/acls/).
-
 Here are the ACL's to implement the same permissions as above:
 
 ```json
@@ -177,8 +164,8 @@ Here are the ACL's to implement the same permissions as above:
       "dst": ["tag:dev-app-servers:80,443"]
     },
 
-    // We still have to allow internal users communications since nothing guarantees that each user have
-    // their own users.
+    // We still have to allow internal namespaces communications since nothing guarantees that each user have
+    // their own namespaces.
     { "action": "accept", "src": ["boss"], "dst": ["boss:*"] },
     { "action": "accept", "src": ["dev1"], "dst": ["dev1:*"] },
     { "action": "accept", "src": ["dev2"], "dst": ["dev2:*"] },
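
The hunks above mention the `acl_policy_path` parameter in `config.yaml` and the rule that a tag is only applied when the namespace registering the machine is allowed to own it. The sketch below illustrates both points; it combines two separate file excerpts in one block, and the file path, tag name and owner are hypothetical placeholders rather than values taken from this diff.

```yaml
# config.yaml (excerpt): point headscale at the ACL policy file.
# The path below is a made-up example.
acl_policy_path: "/etc/headscale/acl_policy.yaml"

# acl_policy.yaml (excerpt): only machines registered by a namespace in
# group:admin may carry tag:prod-app-servers, so a server brought up with
# `--advertise-tags=tag:prod-app-servers` only keeps the tag when such a
# namespace registers it.
tagOwners:
  "tag:prod-app-servers":
    - "group:admin"
```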
@@ -1,90 +0,0 @@
# Setting custom DNS records

!!! warning "Community documentation"

    This page is not actively maintained by the headscale authors and is
    written by community members. It is _not_ verified by `headscale` developers.

    **It might be outdated and it might miss necessary steps**.

## Goal

This documentation has the goal of showing how a user can set custom DNS records with `headscale`s magic dns.
An example use case is to serve apps on the same host via a reverse proxy like NGINX, in this case a Prometheus monitoring stack. This allows to nicely access the service with "http://grafana.myvpn.example.com" instead of the hostname and portnum combination "http://hostname-in-magic-dns.myvpn.example.com:3000".

## Setup

### 1. Change the configuration

1. Change the `config.yaml` to contain the desired records like so:

   ```yaml
   dns_config:
     ...
     extra_records:
       - name: "prometheus.myvpn.example.com"
         type: "A"
         value: "100.64.0.3"

       - name: "grafana.myvpn.example.com"
         type: "A"
         value: "100.64.0.3"
     ...
   ```

2. Restart your headscale instance.

Beware of the limitations listed later on!

### 2. Verify that the records are set

You can use a DNS querying tool of your choice on one of your hosts to verify that your newly set records are actually available in MagicDNS, here we used [`dig`](https://man.archlinux.org/man/dig.1.en):

```
$ dig grafana.myvpn.example.com

; <<>> DiG 9.18.10 <<>> grafana.myvpn.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44054
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;grafana.myvpn.example.com. IN A

;; ANSWER SECTION:
grafana.myvpn.example.com. 593 IN A 100.64.0.3

;; Query time: 0 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sat Dec 31 11:46:55 CET 2022
;; MSG SIZE rcvd: 66
```

### 3. Optional: Setup the reverse proxy

The motivating example here was to be able to access internal monitoring services on the same host without specifying a port:

```
server {
    listen 80;
    listen [::]:80;

    server_name grafana.myvpn.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

}
```

## Limitations

[Not all types of records are supported](https://github.com/tailscale/tailscale/blob/6edf357b96b28ee1be659a70232c0135b2ffedfd/ipn/ipnlocal/local.go#L2989-L3007), especially no CNAME records.
Some files were not shown because too many files have changed in this diff.