Compare commits

..

2 Commits

Author | SHA1 | Message | Date
Kristoffer Dalby | f3fca3f417 | dockername only (Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>) | 2024-02-15 09:30:47 +01:00
Kristoffer Dalby | a75f47f8cf | dockerhub test (Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>) | 2024-02-15 08:56:52 +01:00
614 changed files with 32043 additions and 76017 deletions


@@ -1,763 +0,0 @@
---
name: headscale-integration-tester
description: Use this agent when you need to execute, analyze, or troubleshoot Headscale integration tests. This includes running specific test scenarios, investigating test failures, interpreting test artifacts, validating end-to-end functionality, or ensuring integration test quality before releases. Examples: <example>Context: User has made changes to the route management code and wants to validate the changes work correctly. user: 'I've updated the route advertisement logic in poll.go. Can you run the relevant integration tests to make sure everything still works?' assistant: 'I'll use the headscale-integration-tester agent to run the subnet routing integration tests and analyze the results.' <commentary>Since the user wants to validate route-related changes with integration tests, use the headscale-integration-tester agent to execute the appropriate tests and analyze results.</commentary></example> <example>Context: A CI pipeline integration test is failing and the user needs help understanding why. user: 'The TestSubnetRouterMultiNetwork test is failing in CI. The logs show some timing issues but I can't figure out what's wrong.' assistant: 'Let me use the headscale-integration-tester agent to analyze the test failure and examine the artifacts.' <commentary>Since this involves analyzing integration test failures and interpreting test artifacts, use the headscale-integration-tester agent to investigate the issue.</commentary></example>
color: green
---
You are a specialist Quality Assurance Engineer with deep expertise in Headscale's integration testing system. You understand the Docker-based test infrastructure, real Tailscale client interactions, and the complex timing considerations involved in end-to-end network testing.
## Integration Test System Overview
The Headscale integration test system uses Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination. The system is built around the `hi` (Headscale Integration) test runner in `cmd/hi/`.
## Critical Test Execution Knowledge
### System Requirements and Setup
```bash
# ALWAYS run this first to verify system readiness
go run ./cmd/hi doctor
```
This command verifies:
- Docker installation and daemon status
- Go environment setup
- Required container images availability
- Sufficient disk space (critical - tests generate ~100MB logs per run)
- Network configuration
### Test Execution Patterns
**CRITICAL TIMEOUT REQUIREMENTS**:
- **NEVER use the bash `timeout` command** - this can cause test failures and incomplete cleanup
- **ALWAYS use the built-in `--timeout` flag** with generous timeouts (minimum 15 minutes)
- **Increase timeout if tests ever time out** - infrastructure issues require longer timeouts
```bash
# Single test execution (recommended for development)
# ALWAYS use --timeout flag with minimum 15 minutes (900s)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork" --timeout=900s
# Database-heavy tests require PostgreSQL backend and longer timeouts
go run ./cmd/hi run "TestExpireNode" --postgres --timeout=1800s
# Pattern matching for related tests - use longer timeout for multiple tests
go run ./cmd/hi run "TestSubnet*" --timeout=1800s
# Long-running individual tests need extended timeouts
go run ./cmd/hi run "TestNodeOnlineStatus" --timeout=2100s # Runs for 12+ minutes
# Full test suite (CI/validation only) - very long timeout required
go test ./integration -timeout 45m
```
**Timeout Guidelines by Test Type**:
- **Basic functionality tests**: `--timeout=900s` (15 minutes minimum)
- **Route/ACL tests**: `--timeout=1200s` (20 minutes)
- **HA/failover tests**: `--timeout=1800s` (30 minutes)
- **Long-running tests**: `--timeout=2100s` (35 minutes)
- **Full test suite**: `-timeout 45m` (45 minutes)
**NEVER do this**:
```bash
# ❌ FORBIDDEN: Never use bash timeout command
timeout 300 go run ./cmd/hi run "TestName"
# ❌ FORBIDDEN: Too short timeout will cause failures
go run ./cmd/hi run "TestName" --timeout=60s
```
### Test Categories and Timing Expectations
- **Fast tests** (<2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` runs for 12 minutes
**CRITICAL**: Only ONE test can run at a time due to Docker port conflicts and resource constraints.
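Before starting a run, confirm that no containers from a previous test are still up. A minimal sketch, assuming leftover test containers carry the same `hs-`/`ts-` name prefixes as the artifact files described below:
```bash
# Check for leftover integration test containers before starting a new run
docker ps --format '{{.Names}}\t{{.Status}}' | grep -E '^(hs-|ts-)' \
  && echo "WARNING: test containers still running - clean them up before starting" \
  || echo "no test containers running"
```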
## Test Artifacts and Log Analysis
### Artifact Structure
All test runs save comprehensive artifacts to `control_logs/TIMESTAMP-ID/`:
```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log # Headscale server error logs
├── hs-testname-abc123.stdout.log # Headscale server output logs
├── hs-testname-abc123.db # Database snapshot for post-mortem
├── hs-testname-abc123_metrics.txt # Prometheus metrics dump
├── hs-testname-abc123-mapresponses/ # Protocol-level debug data
├── ts-client-xyz789.stderr.log # Tailscale client error logs
├── ts-client-xyz789.stdout.log # Tailscale client output logs
└── ts-client-xyz789_status.json # Client network status dump
```
### Log Analysis Priority Order
When tests fail, examine artifacts in this specific order:
1. **Headscale server stderr logs** (`hs-*.stderr.log`): Look for errors, panics, database issues, policy evaluation failures
2. **Tailscale client stderr logs** (`ts-*.stderr.log`): Check for authentication failures, network connectivity issues
3. **MapResponse JSON files**: Protocol-level debugging for network map generation issues
4. **Client status dumps** (`*_status.json`): Network state and peer connectivity information
5. **Database snapshots** (`.db` files): For data consistency and state persistence issues
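A minimal triage sketch that walks the artifacts in this order for the most recent run (the grep patterns are generic starting points, not exhaustive):
```bash
# Pick the most recent test run
RUN_DIR=$(ls -td control_logs/*/ | head -1)

# 1. Headscale server errors first
grep -nEi '(error|panic|fatal)' "$RUN_DIR"hs-*.stderr.log | head -50

# 2. Then Tailscale client errors
grep -nEi '(error|failed|denied)' "$RUN_DIR"ts-*.stderr.log | head -50

# 3. Then inspect client status dumps and MapResponses as needed
ls "$RUN_DIR"ts-*_status.json "$RUN_DIR"hs-*-mapresponses/
```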
## Common Failure Patterns and Root Cause Analysis
### CRITICAL MINDSET: Code Issues vs Infrastructure Issues
**IMPORTANT**: When tests fail, it is ALMOST ALWAYS a code issue in Headscale, NOT an infrastructure problem. Do not immediately blame disk space, Docker issues, or timing until you have thoroughly investigated the actual error logs.
### Systematic Debugging Process
1. **Read the actual error message**: Don't assume - read the stderr logs completely
2. **Check Headscale server logs first**: Most issues originate from server-side logic
3. **Verify client connectivity**: Only after ruling out server issues
4. **Check timing patterns**: Use proper `EventuallyWithT` patterns
5. **Infrastructure as last resort**: Only blame infrastructure after code analysis
### Real Failure Patterns
#### 1. Timing Issues (Common but fixable)
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // WILL FAIL

// ✅ Correct: Wait for async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
**Timeout Guidelines**:
- Route operations: 3-5 seconds
- Node state changes: 5-10 seconds
- Complex scenarios: 10-15 seconds
- Policy recalculation: 5-10 seconds
#### 2. NodeStore Synchronization Issues
Route advertisements must propagate through poll requests (`poll.go:420`). NodeStore updates happen at specific synchronization points after Hostinfo changes.
#### 3. Test Data Management Issues
```go
// ❌ Wrong: Assuming array ordering
require.Len(t, nodes[0].GetAvailableRoutes(), 1)

// ✅ Correct: Identify nodes by properties
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
    nodeIDStr := fmt.Sprintf("%d", node.GetId())
    if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
        // Test the specific node that should have the route
        require.Contains(t, node.GetAvailableRoutes(), route)
    }
}
```
#### 4. Database Backend Differences
SQLite and PostgreSQL have different timing characteristics:
- Use `--postgres` flag for database-intensive tests
- PostgreSQL generally has more consistent timing
- Some race conditions only appear with specific backends
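To isolate a backend-specific race, a simple approach is to run the same test against both backends and compare the artifacts from the two runs:
```bash
# Same test, both backends - compare timing and failures between the runs
go run ./cmd/hi run "TestExpireNode" --timeout=1800s            # SQLite (default)
go run ./cmd/hi run "TestExpireNode" --postgres --timeout=1800s # PostgreSQL
```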
## Resource Management and Cleanup
### Disk Space Management
Tests consume significant disk space (~100MB per run):
```bash
# Check available space before running tests
df -h
# Clean up test artifacts periodically
rm -rf control_logs/older-timestamp-dirs/
# Clean Docker resources
docker system prune -f
docker volume prune -f
```
### Container Cleanup
- Successful tests clean up automatically
- Failed tests may leave containers running
- Manually clean if needed: `docker ps -a` and `docker rm -f <containers>`
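A minimal cleanup sketch, assuming leftover containers use the same `hs-`/`ts-` name prefixes as the artifact files:
```bash
# Remove leftover test containers after a failed run
docker ps -a --format '{{.Names}}' | grep -E '^(hs-|ts-)' | xargs -r docker rm -f
```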
## Advanced Debugging Techniques
### Protocol-Level Debugging
MapResponse JSON files in `control_logs/*/hs-*-mapresponses/` contain:
- Network topology as sent to clients
- Peer relationships and visibility
- Route distribution and primary route selection
- Policy evaluation results
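A quick way to skim these dumps is to list their top-level fields and search for a specific prefix; this is a sketch, and the exact JSON field names depend on the Headscale and Tailscale versions in the run:
```bash
# Which MapResponse captures mention the route under investigation?
grep -rl "10.33.0.0/16" control_logs/*/hs-*-mapresponses/

# List the top-level fields present in the captured MapResponses
jq -r 'keys[]' control_logs/*/hs-*-mapresponses/*.json | sort -u
```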
### Database State Analysis
Use the database snapshots for post-mortem analysis:
```bash
# SQLite examination
sqlite3 control_logs/TIMESTAMP/hs-*.db
.tables
.schema nodes
SELECT * FROM nodes WHERE name LIKE '%problematic%';
```
### Performance Analysis
Prometheus metrics dumps show:
- Request latencies and error rates
- NodeStore operation timing
- Database query performance
- Memory usage patterns
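A sketch for skimming a dump; exact metric names vary between Headscale versions, so the pattern is intentionally broad:
```bash
# Pull latency- and error-related series out of the metrics dump (skip comment lines)
grep -hEi '(error|latency|duration|seconds)' control_logs/*/hs-*_metrics.txt | grep -v '^#' | sort | head -50
```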
## Test Development and Quality Guidelines
### Proper Test Patterns
```go
// Always use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
    // Test condition that may take time to become true
}, timeout, interval, "descriptive failure message")

// Handle node identification correctly
var targetNode *v1.Node
for _, node := range nodes {
    if node.GetName() == expectedNodeName {
        targetNode = node
        break
    }
}
require.NotNil(t, targetNode, "should find expected node")
```
### Quality Validation Checklist
- Tests use `EventuallyWithT` for asynchronous operations
- Tests don't rely on array ordering for node identification
- Proper cleanup and resource management
- Tests handle both success and failure scenarios
- Timing assumptions are realistic for operations being tested
- Error messages are descriptive and actionable
## Real-World Test Failure Patterns from HA Debugging
### Infrastructure vs Code Issues - Detailed Examples
**INFRASTRUCTURE FAILURES (Rare but Real)**:
1. **DNS Resolution in Auth Tests**: `failed to resolve "hs-pingallbyip-jax97k": no DNS fallback candidates remain`
- **Pattern**: Client containers can't resolve headscale server hostname during logout
- **Detection**: Error messages specifically mention DNS/hostname resolution
- **Solution**: Docker networking reset, not code changes
2. **Container Creation Timeouts**: Test gets stuck during client container setup
- **Pattern**: Tests hang indefinitely at container startup phase
- **Detection**: No progress in logs for >2 minutes during initialization
- **Solution**: `docker system prune -f` and retry
3. **Docker Port Conflicts**: Multiple tests trying to use same ports
- **Pattern**: "bind: address already in use" errors
- **Detection**: Port binding failures in Docker logs
- **Solution**: Only run ONE test at a time
**CODE ISSUES (99% of failures)**:
1. **Route Approval Process Failures**: Routes not getting approved when they should be
- **Pattern**: Tests expecting approved routes but finding none
- **Detection**: `SubnetRoutes()` returns empty when `AnnouncedRoutes()` shows routes
- **Root Cause**: Auto-approval logic bugs, policy evaluation issues
2. **NodeStore Synchronization Issues**: State updates not propagating correctly
- **Pattern**: Route changes not reflected in NodeStore or Primary Routes
- **Detection**: Logs show route announcements but no tracking updates
- **Root Cause**: Missing synchronization points in `poll.go:420` area
3. **HA Failover Architecture Issues**: Routes removed when nodes go offline
- **Pattern**: `TestHASubnetRouterFailover` fails because approved routes disappear
- **Detection**: Routes available on online nodes but lost when nodes disconnect
- **Root Cause**: Conflating route approval with node connectivity
### Critical Test Environment Setup
**Pre-Test Cleanup (MANDATORY)**:
```bash
# ALWAYS run this before each test
rm -rf control_logs/202507*
docker system prune -f
df -h # Verify sufficient disk space
```
**Environment Verification**:
```bash
# Verify system readiness
go run ./cmd/hi doctor
# Check for running containers that might conflict
docker ps
```
### Specific Test Categories and Known Issues
#### Route-Related Tests (Primary Focus)
```bash
# Core route functionality - these should work first
# Note: Generous timeouts are required for reliable execution
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s
go run ./cmd/hi run "TestAutoApproveMultiNetwork" --timeout=1800s
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
```
**Common Route Test Patterns**:
- Tests validate route announcement, approval, and distribution workflows
- Route state changes are asynchronous - may need `EventuallyWithT` wrappers
- Route approval must respect ACL policies - test expectations encode security requirements
- HA tests verify route persistence during node connectivity changes
#### Authentication Tests (Infrastructure-Prone)
```bash
# These tests are more prone to infrastructure issues
# Require longer timeouts due to auth flow complexity
go run ./cmd/hi run "TestAuthKeyLogoutAndReloginSameUser" --timeout=1200s
go run ./cmd/hi run "TestAuthWebFlowLogoutAndRelogin" --timeout=1200s
go run ./cmd/hi run "TestOIDCExpireNodesBasedOnTokenExpiry" --timeout=1800s
```
**Common Auth Test Infrastructure Failures**:
- DNS resolution during logout operations
- Container creation timeouts
- HTTP/2 stream errors (often symptoms, not root cause)
### Security-Critical Debugging Rules
**❌ FORBIDDEN CHANGES (Security & Test Integrity)**:
1. **Never change expected test outputs** - Tests define correct behavior contracts
- Changing `require.Len(t, routes, 3)` to `require.Len(t, routes, 2)` because a test fails
- Modifying expected status codes, node counts, or route counts
- Removing assertions that are "inconvenient"
- **Why forbidden**: Test expectations encode business requirements and security policies
2. **Never bypass security mechanisms** - Security must never be compromised for convenience
- Using `AnnouncedRoutes()` instead of `SubnetRoutes()` in production code
- Skipping authentication or authorization checks
- **Why forbidden**: Security bypasses create vulnerabilities in production
3. **Never reduce test coverage** - Tests prevent regressions
- Removing test cases or assertions
- Commenting out "problematic" test sections
- **Why forbidden**: Reduced coverage allows bugs to slip through
**✅ ALLOWED CHANGES (Timing & Observability)**:
1. **Fix timing issues with proper async patterns**
```go
// ✅ GOOD: Add EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes, expectedCount) // Keep original expectation
}, 10*time.Second, 100*time.Millisecond, "nodes should reach expected count")
```
- **Why allowed**: Fixes race conditions without changing business logic
2. **Add MORE observability and debugging**
- Additional logging statements
- More detailed error messages
- Extra assertions that verify intermediate states
- **Why allowed**: Better observability helps debug without changing behavior
3. **Improve test documentation**
- Add godoc comments explaining test purpose and business logic
- Document timing requirements and async behavior
- **Why encouraged**: Helps future maintainers understand intent
### Advanced Debugging Workflows
#### Route Tracking Debug Flow
```bash
# Run test with detailed logging and proper timeout
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s > test_output.log 2>&1
# Check route approval process
grep -E "(auto-approval|ApproveRoutesWithPolicy|PolicyManager)" test_output.log
# Check route tracking
tail -50 control_logs/*/hs-*.stderr.log | grep -E "(announced|tracking|SetNodeRoutes)"
# Check for security violations
grep -E "(AnnouncedRoutes.*SetNodeRoutes|bypass.*approval)" test_output.log
```
#### HA Failover Debug Flow
```bash
# Test HA failover specifically with adequate timeout
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
# Check route persistence during disconnect
grep -E "(Disconnect|NodeWentOffline|PrimaryRoutes)" control_logs/*/hs-*.stderr.log
# Verify routes don't disappear inappropriately
grep -E "(removing.*routes|SetNodeRoutes.*empty)" control_logs/*/hs-*.stderr.log
```
### Test Result Interpretation Guidelines
#### Success Patterns to Look For
- `"updating node routes for tracking"` in logs
- Routes appearing in `announcedRoutes` logs
- Proper `ApproveRoutesWithPolicy` calls for auto-approval
- Routes persisting through node connectivity changes (HA tests)
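A quick check for these success patterns is to grep the server logs for the exact phrases listed above (a sketch; point it at the run being inspected):
```bash
grep -E 'updating node routes for tracking|announcedRoutes|ApproveRoutesWithPolicy' \
  control_logs/*/hs-*.stderr.log
```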
#### Failure Patterns to Investigate
- `SubnetRoutes()` returning empty when `AnnouncedRoutes()` has routes
- Routes disappearing when nodes go offline (HA architectural issue)
- Missing `EventuallyWithT` causing timing race conditions
- Security bypass attempts using wrong route methods
### Critical Testing Methodology
**Phase-Based Testing Approach**:
1. **Phase 1**: Core route tests (ACL, auto-approval, basic functionality)
2. **Phase 2**: HA and complex route scenarios
3. **Phase 3**: Auth tests (infrastructure-sensitive, test last)
**Per-Test Process**:
1. Clean environment before each test
2. Monitor logs for route tracking and approval messages
3. Check artifacts in `control_logs/` if test fails
4. Focus on actual error messages, not assumptions
5. Document results and patterns discovered
## Test Documentation and Code Quality Standards
### Adding Missing Test Documentation
When you understand a test's purpose through debugging, always add comprehensive godoc:
```go
// TestSubnetRoutes validates the complete subnet route lifecycle including
// advertisement from clients, policy-based approval, and distribution to peers.
// This test ensures that route security policies are properly enforced and that
// only approved routes are distributed to the network.
//
// The test verifies:
// - Route announcements are received and tracked
// - ACL policies control route approval correctly
// - Only approved routes appear in peer network maps
// - Route state persists correctly in the database
func TestSubnetRoutes(t *testing.T) {
// Test implementation...
}
```
**Why add documentation**: Future maintainers need to understand business logic and security requirements encoded in tests.
### Comment Guidelines - Focus on WHY, Not WHAT
```go
// ✅ GOOD: Explains reasoning and business logic
// Wait for route propagation because NodeStore updates are asynchronous
// and happen after poll requests complete processing
require.EventuallyWithT(t, func(c *assert.CollectT) {
    // Check that security policies are enforced...
}, timeout, interval, "route approval must respect ACL policies")

// ❌ BAD: Just describes what the code does
// Wait for routes
require.EventuallyWithT(t, func(c *assert.CollectT) {
    // Get routes and check length
}, timeout, interval, "checking routes")
```
**Why focus on WHY**: Helps maintainers understand architectural decisions and security requirements.
## EventuallyWithT Pattern for External Calls
### Overview
EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
var err error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err = headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes, 2)
    // These assertions are all based on the ListNodes() call
    requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
    requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 1)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")

// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    status, err := client.Status()
    assert.NoError(c, err)
    // All these assertions are based on the single Status() call
    for _, peerKey := range status.Peers() {
        peerStatus := status.Peer[peerKey]
        requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
    }
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")

// CORRECT: Variable scoping for sharing between blocks
var routeNode *v1.Node
var nodeKey key.NodePublic

// First EventuallyWithT to get the node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    for _, node := range nodes {
        if node.GetName() == "router" {
            routeNode = node
            nodeKey, _ = key.ParseNodePublicUntyped(mem.S(node.GetNodeKey()))
            break
        }
    }
    assert.NotNil(c, routeNode, "should find router node")
}, 10*time.Second, 100*time.Millisecond, "router node should exist")

// Second EventuallyWithT using the nodeKey from first block
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    status, err := client.Status()
    assert.NoError(c, err)
    peerStatus, ok := status.Peer[nodeKey]
    assert.True(c, ok, "peer should exist in status")
    requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}, 10*time.Second, 100*time.Millisecond, "routes should be visible to client")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    // First external call
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes, 2)
    // Second unrelated external call - WRONG!
    status, err := client.Status()
    assert.NoError(c, err)
    assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")

// INCORRECT: Nested EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    // NEVER do this!
    assert.EventuallyWithT(t, func(c2 *assert.CollectT) {
        status, _ := client.Status()
        assert.NotNil(c2, status)
    }, 5*time.Second, 100*time.Millisecond, "nested")
}, 10*time.Second, 500*time.Millisecond, "outer")

// INCORRECT: Variable scoping error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes() // This shadows outer 'nodes' variable
    assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "get nodes")
// This will fail - nodes is nil because := created a new variable inside the block
require.Len(t, nodes, 2) // COMPILATION ERROR or nil pointer

// INCORRECT: Not wrapping external calls
nodes, err := headscale.ListNodes() // External call not wrapped!
require.NoError(t, err)
```
### Helper Functions for EventuallyWithT
When creating helper functions for use within EventuallyWithT:
```go
// Helper function that accepts CollectT
func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, available, approved, primary int) {
    assert.Len(c, node.GetAvailableRoutes(), available, "available routes for node %s", node.GetName())
    assert.Len(c, node.GetApprovedRoutes(), approved, "approved routes for node %s", node.GetName())
    assert.Len(c, node.GetPrimaryRoutes(), primary, "primary routes for node %s", node.GetName())
}

// Usage within EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route counts should match expected")
```
### Operations That Must NOT Be Wrapped
**CRITICAL**: The following operations are **blocking/mutating operations** that change state and MUST NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--accept-routes`)
- `headscale.ApproveRoute()` - Approves routes on server
- `headscale.CreateUser()` - Creates users
- `headscale.CreatePreAuthKey()` - Creates authentication keys
- `headscale.RegisterNode()` - Registers new nodes
- Any `client.Execute()` that modifies configuration
- Any operation that creates, updates, or deletes resources
These operations:
1. Complete synchronously or fail immediately
2. Should not be retried automatically
3. Need explicit error handling with `require.NoError()`
### Correct Pattern for Blocking Operations
```go
// CORRECT: Blocking operation NOT wrapped
status := client.MustStatus()
command := []string{"tailscale", "set", "--advertise-routes=" + expectedRoutes[string(status.Self.ID)]}
_, _, err = client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)

// Then wait for the result with EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Contains(c, nodes[0].GetAvailableRoutes(), expectedRoutes[string(status.Self.ID)])
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")

// INCORRECT: Blocking operation wrapped (DON'T DO THIS)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    _, _, err = client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
    assert.NoError(c, err) // This might retry the command multiple times!
}, 10*time.Second, 100*time.Millisecond, "advertise routes")
```
### Assert vs Require Pattern
When working within EventuallyWithT blocks where you need to prevent panics:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)

    // For array bounds - use require with t to prevent panic
    assert.Len(c, nodes, 6) // Test expectation
    require.GreaterOrEqual(t, len(nodes), 3, "need at least 3 nodes to avoid panic")

    // For nil pointer access - use require with t before dereferencing
    assert.NotNil(c, srs1PeerStatus.PrimaryRoutes) // Test expectation
    require.NotNil(t, srs1PeerStatus.PrimaryRoutes, "primary routes must be set to avoid panic")
    assert.Contains(c,
        srs1PeerStatus.PrimaryRoutes.AsSlice(),
        pref,
    )
}, 5*time.Second, 200*time.Millisecond, "checking route state")
```
**Key Principle**:
- Use `assert` with `c` (*assert.CollectT) for test expectations that can be retried
- Use `require` with `t` (*testing.T) for MUST conditions that prevent panics
- Within EventuallyWithT, both are available - choose based on whether failure would cause a panic
### Common Scenarios
1. **Waiting for route advertisement**:
```go
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Contains(c, nodes[0].GetAvailableRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
2. **Checking client sees routes**:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    status, err := client.Status()
    assert.NoError(c, err)
    // Check all peers have expected routes
    for _, peerKey := range status.Peers() {
        peerStatus := status.Peer[peerKey]
        assert.Contains(c, peerStatus.AllowedIPs, expectedPrefix)
    }
}, 10*time.Second, 100*time.Millisecond, "all peers should see route")
```
3. **Sequential operations**:
```go
// First wait for node to appear
var nodeID uint64
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes, 1)
    nodeID = nodes[0].GetId()
}, 10*time.Second, 100*time.Millisecond, "node should register")

// Then perform operation
_, err := headscale.ApproveRoute(nodeID, "10.0.0.0/24")
require.NoError(t, err)

// Then wait for result
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Contains(c, nodes[0].GetApprovedRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be approved")
```
## Your Core Responsibilities
1. **Test Execution Strategy**: Execute integration tests with appropriate configurations, understanding when to use `--postgres` and timing requirements for different test categories. Follow phase-based testing approach prioritizing route tests.
- **Why this priority**: Route tests are less infrastructure-sensitive and validate core security logic
2. **Systematic Test Analysis**: When tests fail, systematically examine artifacts starting with Headscale server logs, then client logs, then protocol data. Focus on CODE ISSUES first (99% of cases), not infrastructure. Use real-world failure patterns to guide investigation.
- **Why this approach**: Most failures are logic bugs, not environment issues - efficient debugging saves time
3. **Timing & Synchronization Expertise**: Understand asynchronous Headscale operations, particularly route advertisements, NodeStore synchronization at `poll.go:420`, and policy propagation. Fix timing with `EventuallyWithT` while preserving original test expectations.
- **Why preserve expectations**: Test assertions encode business requirements and security policies
- **Key Pattern**: Apply the EventuallyWithT pattern correctly for all external calls as documented above
4. **Root Cause Analysis**: Distinguish between actual code regressions (route approval logic, HA failover architecture), timing issues requiring `EventuallyWithT` patterns, and genuine infrastructure problems (DNS, Docker, container issues).
- **Why this distinction matters**: Different problem types require completely different solution approaches
- **EventuallyWithT Issues**: Often manifest as flaky tests or immediate assertion failures after async operations
5. **Security-Aware Quality Validation**: Ensure tests properly validate end-to-end functionality with realistic timing expectations and proper error handling. Never suggest security bypasses or test expectation changes. Add comprehensive godoc when you understand test business logic.
- **Why security focus**: Integration tests are the last line of defense against security regressions
- **EventuallyWithT Usage**: Proper use prevents race conditions without weakening security assertions
**CRITICAL PRINCIPLE**: Test expectations are sacred contracts that define correct system behavior. When tests fail, fix the code to match the test, never change the test to match broken code. Only timing and observability improvements are allowed - business logic expectations are immutable.
**EventuallyWithT PRINCIPLE**: Every external call to headscale server or tailscale client must be wrapped in EventuallyWithT. Follow the five key rules strictly: one external call per block, proper variable scoping, no nesting, use CollectT for assertions, and provide descriptive messages.
**Remember**: Test failures are usually code issues in Headscale that need to be fixed, not infrastructure problems to be ignored. Use the specific debugging workflows and failure patterns documented above to efficiently identify root causes. Infrastructure issues have very specific signatures - everything else is code-related.


@@ -17,7 +17,3 @@ LICENSE
.vscode
*.sock
node_modules/
package-lock.json
package.json

.github/CODEOWNERS vendored

@@ -1,10 +1,10 @@
* @juanfont @kradalby
*.md @ohdearaugustin @nblock
*.yml @ohdearaugustin @nblock
*.yaml @ohdearaugustin @nblock
Dockerfile* @ohdearaugustin @nblock
.goreleaser.yaml @ohdearaugustin @nblock
/docs/ @ohdearaugustin @nblock
/.github/workflows/ @ohdearaugustin @nblock
/.github/renovate.json @ohdearaugustin @nblock
*.md @ohdearaugustin
*.yml @ohdearaugustin
*.yaml @ohdearaugustin
Dockerfile* @ohdearaugustin
.goreleaser.yaml @ohdearaugustin
/docs/ @ohdearaugustin
/.github/workflows/ @ohdearaugustin
/.github/renovate.json @ohdearaugustin

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,65 @@
---
name: "Bug report"
about: "Create a bug report to help us improve"
title: ""
labels: ["bug"]
assignees: ""
---
<!--
Before posting a bug report, discuss the behaviour you are expecting with the Discord community
to make sure that it is truly a bug.
The issue tracker is not the place to ask for support or how to set up Headscale.
Bug reports without sufficient information will be closed.
Headscale is a multinational community across the globe. Our language is English.
All bug reports need to be in English.
-->
## Bug description
<!-- A clear and concise description of what the bug is. Describe the expected behavior
and how it is currently different. If you are unsure if it is a bug, consider discussing
it on our Discord server first. -->
## Environment
<!-- Please add relevant information about your system. For example:
- Version of headscale used
- Version of tailscale client
- OS (e.g. Linux, Mac, Cygwin, WSL, etc.) and version
- Kernel version
- The relevant config parameters you used
- Log output
-->
- OS:
- Headscale version:
- Tailscale version:
<!--
We do not support running Headscale in a container nor behind a (reverse) proxy.
If either of these are true for your environment, ask the community in Discord
instead of filing a bug report.
-->
- [ ] Headscale is behind a (reverse) proxy
- [ ] Headscale runs in a container
## To Reproduce
<!-- Steps to reproduce the behavior. -->
## Logs and attachments
<!-- Please attach files with:
- Client netmap dump (see below)
- ACL configuration
- Headscale configuration
Dump the netmap of tailscale clients:
`tailscale debug netmap > DESCRIPTIVE_NAME.json`
Please provide information describing the netmap, which client, which headscale version etc.
-->


@@ -1,108 +0,0 @@
name: 🐞 Bug
description: File a bug/issue
title: "[Bug] <title>"
labels: ["bug", "needs triage"]
body:
- type: checkboxes
attributes:
label: Is this a support request?
description:
This issue tracker is for bugs and feature requests only. If you need
help, please ask in our Discord community
options:
- label: This is not a support request
required: true
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description:
Please search to see if an issue already exists for the bug you
encountered.
options:
- label: I have searched the existing issues
required: true
- type: textarea
attributes:
label: Current Behavior
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: Expected Behavior
description: A concise description of what you expected to happen.
validations:
required: true
- type: textarea
attributes:
label: Steps To Reproduce
description: Steps to reproduce the behavior.
placeholder: |
1. In this environment...
1. With this config...
1. Run '...'
1. See error...
validations:
required: true
- type: textarea
attributes:
label: Environment
description: |
Please provide information about your environment.
If you are using a container, always provide the headscale version and not only the Docker image version.
Please do not put "latest".
Describe your "headscale network". Is there a lot of nodes, are the nodes all interconnected, are some subnet routers?
If you are experiencing a problem during an upgrade, please provide the versions of the old and new versions of Headscale and Tailscale.
examples:
- **OS**: Ubuntu 24.04
- **Headscale version**: 0.24.3
- **Tailscale version**: 1.80.0
- **Number of nodes**: 20
value: |
- OS:
- Headscale version:
- Tailscale version:
render: markdown
validations:
required: true
- type: checkboxes
attributes:
label: Runtime environment
options:
- label: Headscale is behind a (reverse) proxy
required: false
- label: Headscale runs in a container
required: false
- type: textarea
attributes:
label: Debug information
description: |
Please have a look at our [Debugging and troubleshooting
guide](https://headscale.net/development/ref/debug/) to learn about
common debugging techniques.
Links? References? Anything that will give us more context about the issue you are encountering.
If **any** of these are omitted we will likely close your issue, do **not** ignore them.
- Client netmap dump (see below)
- Policy configuration
- Headscale configuration
- Headscale log (with `trace` enabled)
Dump the netmap of tailscale clients:
`tailscale debug netmap > DESCRIPTIVE_NAME.json`
Dump the status of tailscale clients:
`tailscale status --json > DESCRIPTIVE_NAME.json`
Get the logs of a Tailscale client that is not working as expected.
`tailscale debug daemon-logs`
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
**Ensure** you use formatting for files you attach.
Do **not** paste in long files.
validations:
required: true


@@ -0,0 +1,26 @@
---
name: "Feature request"
about: "Suggest an idea for headscale"
title: ""
labels: ["enhancement"]
assignees: ""
---
<!--
We typically have a clear roadmap for what we want to improve and reserve the right
to close feature requests that do not fit the roadmap or the scope of the project,
or that we actually want to implement ourselves.
Headscale is a multinational community across the globe. Our language is English.
All bug reports need to be in English.
-->
## Why
<!-- Include the reason, why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->
## Description
<!-- A clear and precise description of what new or changed feature you want. -->


@@ -1,36 +0,0 @@
name: 🚀 Feature Request
description: Suggest an idea for Headscale
title: "[Feature] <title>"
labels: [enhancement]
body:
- type: textarea
attributes:
label: Use case
description: Please describe the use case for this feature.
placeholder: |
<!-- Include the reason, why you would need the feature. E.g. what problem
does it solve? Or which workflow is currently frustrating and will be improved by
this? -->
validations:
required: true
- type: textarea
attributes:
label: Description
description: A clear and precise description of what new or changed feature you want.
validations:
required: true
- type: checkboxes
attributes:
label: Contribution
description: Are you willing to contribute to the implementation of this feature?
options:
- label: I can write the design doc for this feature
required: false
- label: I can contribute this feature
required: false
- type: textarea
attributes:
label: How can it be implemented?
description: Free text for your ideas on how this feature could be implemented.
validations:
required: false


@@ -12,7 +12,7 @@ If you find mistakes in the documentation, please submit a fix to the documentat
<!-- Please tick if the following things apply. You… -->
- [ ] have read the [CONTRIBUTING.md](./CONTRIBUTING.md) file
- [ ] read the [CONTRIBUTING guidelines](README.md#contributing)
- [ ] raised a GitHub issue or discussed it on the project's chat beforehand
- [ ] added unit tests
- [ ] added integration tests


@@ -13,37 +13,34 @@ concurrency:
cancel-in-progress: true
jobs:
build-nix:
build:
runs-on: ubuntu-latest
permissions: write-all
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
uses: tj-actions/changed-files@v34
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run nix build
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.any_changed == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run build
id: build
if: steps.changed-files.outputs.files == 'true'
if: steps.changed-files.outputs.any_changed == 'true'
run: |
nix build |& tee build-result
BUILD_STATUS="${PIPESTATUS[0]}"
@@ -57,7 +54,7 @@ jobs:
exit $BUILD_STATUS
- name: Nix gosum diverging
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
uses: actions/github-script@v6
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
@@ -69,37 +66,8 @@ jobs:
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: steps.changed-files.outputs.files == 'true'
- uses: actions/upload-artifact@v3
if: steps.changed-files.outputs.any_changed == 'true'
with:
name: headscale-linux
path: result/bin/headscale
build-cross:
runs-on: ubuntu-latest
strategy:
matrix:
env:
- "GOARCH=arm64 GOOS=linux"
- "GOARCH=amd64 GOOS=linux"
- "GOARCH=arm64 GOOS=darwin"
- "GOARCH=amd64 GOOS=darwin"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run go cross compile
env:
CGO_ENABLED: 0
run:
env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
./cmd/headscale
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: "headscale-${{ matrix.env }}"
path: "headscale"


@@ -1,55 +0,0 @@
name: Check Generated Files
on:
push:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-generated:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- '**/*.proto'
- 'buf.gen.yaml'
- 'tools/**'
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run make generate
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- make generate
- name: Check for uncommitted changes
if: steps.changed-files.outputs.files == 'true'
run: |
if ! git diff --exit-code; then
echo "❌ Generated files are not up to date!"
echo "Please run 'make generate' and commit the changes."
exit 1
else
echo "✅ All generated files are up to date."
fi


@@ -1,46 +0,0 @@
name: Check integration tests workflow
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Generate and check integration tests
if: steps.changed-files.outputs.files == 'true'
run: |
nix develop --command bash -c "cd .github/workflows && go generate"
git diff --exit-code .github/workflows/test-integration.yaml
- name: Show missing tests
if: failure()
run: |
git diff .github/workflows/test-integration.yaml

.github/workflows/contributors.yml vendored Normal file

@@ -0,0 +1,35 @@
name: Contributors
on:
push:
branches:
- main
workflow_dispatch:
jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Delete upstream contributor branch
# Allow continue on failure to account for when the
# upstream branch is deleted or does not exist.
continue-on-error: true
run: git push origin --delete update-contributors
- name: Create up-to-date contributors branch
run: git checkout -B update-contributors
- name: Push empty contributors branch
run: git push origin update-contributors
- name: Switch back to main
run: git checkout main
- uses: BobAnkh/add-contributors@v0.2.2
with:
CONTRIBUTOR: "## Contributors"
COLUMN_PER_ROW: "6"
ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
IMG_WIDTH: "100"
FONT_SIZE: "14"
PATH: "/README.md"
COMMIT_MESSAGE: "docs(README): update contributors"
AVATAR_SHAPE: "round"
BRANCH: "update-contributors"
PULL_REQUEST: "main"


@@ -1,51 +0,0 @@
name: Deploy docs
on:
push:
branches:
# Main branch for development docs
- main
# Doc maintenance branches
- doc/[0-9]+.[0-9]+.[0-9]+
tags:
# Stable release tags
- v[0-9]+.[0-9]+.[0-9]+
paths:
- "docs/**"
- "mkdocs.yml"
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Install python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Configure git
run: |
git config user.name github-actions
git config user.email github-actions@github.com
- name: Deploy development docs
if: github.ref == 'refs/heads/main'
run: mike deploy --push development unstable
- name: Deploy stable docs from doc branches
if: startsWith(github.ref, 'refs/heads/doc/')
run: mike deploy --push ${GITHUB_REF_NAME##*/}
- name: Deploy stable docs from tag
if: startsWith(github.ref, 'refs/tags/v')
# This assumes that only newer tags are pushed
run: mike deploy --push --update-aliases ${GITHUB_REF_NAME#v} stable latest


@@ -1,27 +0,0 @@
name: Test documentation build
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
test:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict

.github/workflows/docs.yml vendored Normal file

@@ -0,0 +1,45 @@
name: Build documentation
on:
push:
branches:
- main
workflow_dispatch:
permissions:
contents: read
pages: write
id-token: write
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install python
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Setup dependencies
run: pip install -r docs/requirements.txt
- name: Build docs
run: mkdocs build --strict
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: ./site
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v1


@@ -1,91 +0,0 @@
package main
//go:generate go run ./gh-action-integration-generator.go
import (
"bytes"
"fmt"
"log"
"os/exec"
"strings"
)
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
cmd := exec.Command(rgBin, args...)
var out bytes.Buffer
cmd.Stdout = &out
err = cmd.Run()
if err != nil {
log.Fatalf("failed to run command: %s", err)
}
tests := strings.Split(strings.TrimSpace(out.String()), "\n")
return tests
}
func updateYAML(tests []string, jobName string, testPath string) {
testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", "))
yqCommand := fmt.Sprintf(
"yq eval '.jobs.%s.strategy.matrix.test = %s' %s -i",
jobName,
testsForYq,
testPath,
)
cmd := exec.Command("bash", "-c", yqCommand)
var stdout bytes.Buffer
var stderr bytes.Buffer
cmd.Stdout = &stdout
cmd.Stderr = &stderr
err := cmd.Run()
if err != nil {
log.Printf("stdout: %s", stdout.String())
log.Printf("stderr: %s", stderr.String())
log.Fatalf("failed to run yq command: %s", err)
}
fmt.Printf("YAML file (%s) job %s updated successfully\n", testPath, jobName)
}
func main() {
tests := findTests()
quotedTests := make([]string, len(tests))
for i, test := range tests {
quotedTests[i] = fmt.Sprintf("\"%s\"", test)
}
// Define selected tests for PostgreSQL
postgresTestNames := []string{
"TestACLAllowUserDst",
"TestPingAllByIP",
"TestEphemeral2006DeletedTooQuickly",
"TestPingAllByIPManyUpDown",
"TestSubnetRouterMultiNetwork",
}
quotedPostgresTests := make([]string, len(postgresTestNames))
for i, test := range postgresTestNames {
quotedPostgresTests[i] = fmt.Sprintf("\"%s\"", test)
}
// Update both SQLite and PostgreSQL job matrices
updateYAML(quotedTests, "sqlite", "./test-integration.yaml")
updateYAML(quotedPostgresTests, "postgres", "./test-integration.yaml")
}


@@ -1,5 +1,6 @@
name: GitHub Actions Version Updater
# Controls when the action will run.
on:
schedule:
# Automatically run on every Sunday
@@ -7,17 +8,16 @@ on:
jobs:
build:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v2
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
- name: Run GitHub Actions Version Updater
uses: saadmk11/github-actions-version-updater@64be81ba69383f81f2be476703ea6570c4c8686e # v0.8.1
uses: saadmk11/github-actions-version-updater@v0.7.1
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}


@@ -1,82 +0,0 @@
name: Integration Test Template
on:
workflow_call:
inputs:
test:
required: true
type: string
postgres_flag:
required: false
type: string
default: ""
database_name:
required: true
type: string
jobs:
test:
runs-on: ubuntu-latest
env:
# Github does not allow us to access secrets in pull requests,
# so this env var is used to check if we have the secret or not.
# If we have the secrets, meaning we are running on push in a fork,
# there might be secrets available for more debugging.
# If TS_OAUTH_CLIENT_ID and TS_OAUTH_SECRET is set, then the job
# will join a debug tailscale network, set up SSH and a tmux session.
# The SSH will be configured to use the SSH key of the Github user
# that triggered the build.
HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- name: Tailscale
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: tailscale/github-action@6986d2c82a91fbac2949fe01f5bab95cf21b5102 # v3.2.2
with:
oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
tags: tag:gh
- name: Setup SSH server for Actor
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: alexellis/setup-sshd-actor@master
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run Integration Test
if: always() && steps.changed-files.outputs.files == 'true'
run:
nix develop --command -- hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \
--timeout=120m \
${{ inputs.postgres_flag }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ inputs.database_name }}-${{ inputs.test }}-logs
path: "control_logs/*/*.log"
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ inputs.database_name }}-${{ inputs.test }}-archives
path: "control_logs/*/*.tar"
- name: Setup a blocking tmux session
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: alexellis/block-with-tmux-action@master


@@ -1,6 +1,7 @@
---
name: Lint
on: [pull_request]
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -10,87 +11,70 @@ jobs:
golangci-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
uses: tj-actions/changed-files@v34
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: golangci-lint
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- golangci-lint run
--new-from-rev=${{github.event.pull_request.base.sha}}
--output.text.path=stdout
--output.text.print-linter-name
--output.text.print-issued-lines
--output.text.colors
if: steps.changed-files.outputs.any_changed == 'true'
uses: golangci/golangci-lint-action@v2
with:
version: v1.51.2
# Only block PRs on new problems.
# If this is not enabled, we will end up having PRs
          # blocked because new linters have appeared and other
          # parts of the code are affected.
only-new-issues: true
prettier-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
uses: tj-actions/changed-files@v14.1
with:
filters: |
files:
- '*.nix'
- '**/*.md'
- '**/*.yml'
- '**/*.yaml'
- '**/*.ts'
- '**/*.js'
- '**/*.sass'
- '**/*.css'
- '**/*.scss'
- '**/*.html'
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
files: |
*.nix
**/*.md
**/*.yml
**/*.yaml
**/*.ts
**/*.js
**/*.sass
**/*.css
**/*.scss
**/*.html
- name: Prettify code
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- prettier --no-error-on-unmatched-pattern
--ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
if: steps.changed-files.outputs.any_changed == 'true'
uses: creyD/prettier_action@v4.3
with:
prettier_options: >-
--check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
only_changed: false
dry: true
proto-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
- uses: actions/checkout@v2
- uses: bufbuild/buf-setup-action@v1.7.0
- uses: bufbuild/buf-lint-action@v1
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Buf lint
run: nix develop --command -- buf lint proto
input: "proto"

View File

@@ -9,36 +9,30 @@ on:
jobs:
goreleaser:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Login to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- name: Run goreleaser
run: nix develop --command -- goreleaser release --clean
run: nix develop --command -- goreleaser release --clean --debug
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -1,29 +1,22 @@
name: Close inactive issues
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
- uses: actions/stale@v5
with:
days-before-issue-stale: 90
days-before-issue-close: 7
stale-issue-label: "stale"
stale-issue-message:
"This issue is stale because it has been open for 90 days with no
activity."
close-issue-message:
"This issue was closed because it has been inactive for 14 days
since being marked as stale."
stale-issue-message: "This issue is stale because it has been open for 90 days with no activity."
close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
exempt-issue-labels: "no-stale-bot"
repo-token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowStarDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLAllowStarDst:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLAllowStarDst
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowStarDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUser80Dst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLAllowUser80Dst:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLAllowUser80Dst
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUser80Dst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLAllowUserDst
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLAllowUserDst:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLAllowUserDst
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLAllowUserDst$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDenyAllPort80
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLDenyAllPort80:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLDenyAllPort80
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDenyAllPort80$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLDevice1CanAccessDevice2
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLDevice1CanAccessDevice2:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLDevice1CanAccessDevice2
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLDevice1CanAccessDevice2$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLHostsInNetMapTable
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLHostsInNetMapTable:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLHostsInNetMapTable
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLHostsInNetMapTable$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReach
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLNamedHostsCanReach:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLNamedHostsCanReach
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReach$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestACLNamedHostsCanReachBySubnet
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestACLNamedHostsCanReachBySubnet:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestACLNamedHostsCanReachBySubnet
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestACLNamedHostsCanReachBySubnet$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestApiKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestApiKeyCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestApiKeyCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestApiKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthKeyLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestAuthKeyLogoutAndRelogin:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestAuthKeyLogoutAndRelogin
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthKeyLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestAuthWebFlowAuthenticationPingAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestAuthWebFlowAuthenticationPingAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestAuthWebFlowLogoutAndRelogin
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestAuthWebFlowLogoutAndRelogin:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestAuthWebFlowLogoutAndRelogin
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestAuthWebFlowLogoutAndRelogin$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestCreateTailscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestCreateTailscale:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestCreateTailscale
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestCreateTailscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestDERPServerScenario
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestDERPServerScenario:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestDERPServerScenario
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestDERPServerScenario$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnableDisableAutoApprovedRoute
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestEnableDisableAutoApprovedRoute:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestEnableDisableAutoApprovedRoute
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnableDisableAutoApprovedRoute$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEnablingRoutes
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestEnablingRoutes:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestEnablingRoutes
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEnablingRoutes$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestEphemeral:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestEphemeral
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestExpireNode
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestExpireNode:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestExpireNode
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestExpireNode$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHASubnetRouterFailover
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestHASubnetRouterFailover:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestHASubnetRouterFailover
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHASubnetRouterFailover$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestHeadscale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestHeadscale:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestHeadscale
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestHeadscale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeAdvertiseTagNoACLCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeAdvertiseTagNoACLCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeAdvertiseTagNoACLCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeAdvertiseTagNoACLCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeAdvertiseTagWithACLCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeAdvertiseTagWithACLCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeAdvertiseTagWithACLCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeAdvertiseTagWithACLCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeExpireCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeExpireCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeExpireCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeExpireCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeMoveCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeMoveCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeMoveCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeMoveCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeOnlineLastSeenStatus
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeOnlineLastSeenStatus:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeOnlineLastSeenStatus
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeOnlineLastSeenStatus$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeRenameCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeRenameCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeRenameCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeRenameCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestNodeTagCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestNodeTagCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestNodeTagCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestNodeTagCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCAuthenticationPingAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestOIDCAuthenticationPingAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestOIDCAuthenticationPingAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCAuthenticationPingAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestOIDCExpireNodesBasedOnTokenExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestOIDCExpireNodesBasedOnTokenExpiry:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestOIDCExpireNodesBasedOnTokenExpiry
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestOIDCExpireNodesBasedOnTokenExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

View File

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByHostname
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPingAllByHostname:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPingAllByHostname
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByHostname$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByIP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPingAllByIP:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPingAllByIP
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPingAllByIPPublicDERP
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPingAllByIPPublicDERP:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPingAllByIPPublicDERP
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPingAllByIPPublicDERP$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPreAuthKeyCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPreAuthKeyCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandReusableEphemeral
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPreAuthKeyCommandReusableEphemeral:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPreAuthKeyCommandReusableEphemeral
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandReusableEphemeral$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestPreAuthKeyCommandWithoutExpiry
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestPreAuthKeyCommandWithoutExpiry:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestPreAuthKeyCommandWithoutExpiry
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestPreAuthKeyCommandWithoutExpiry$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestResolveMagicDNS
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestResolveMagicDNS:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestResolveMagicDNS
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestResolveMagicDNS$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHIsBlockedInACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHIsBlockedInACL:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHIsBlockedInACL
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHIsBlockedInACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHMultipleUsersAllToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHMultipleUsersAllToAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHMultipleUsersAllToAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHMultipleUsersAllToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHNoSSHConfigured
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHNoSSHConfigured:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHNoSSHConfigured
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHNoSSHConfigured$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHOneUserToAll
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHOneUserToAll:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHOneUserToAll
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHOneUserToAll$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSSHUserOnlyIsolation
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSSHUserOnlyIsolation:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSSHUserOnlyIsolation
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSSHUserOnlyIsolation$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestSubnetRouteACL
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestSubnetRouteACL:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestSubnetRouteACL
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestSubnetRouteACL$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTaildrop
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestTaildrop:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestTaildrop
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTaildrop$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestTailscaleNodesJoiningHeadcale
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestTailscaleNodesJoiningHeadcale:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestTailscaleNodesJoiningHeadcale
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestTailscaleNodesJoiningHeadcale$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -0,0 +1,67 @@
# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - TestUserCommand
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
TestUserCommand:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run TestUserCommand
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^TestUserCommand$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"

@@ -1,114 +0,0 @@
name: integration
# To debug locally on a branch, and when needing secrets
# change this to include `push` so the build is ran on
# the main repository.
on: [pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
sqlite:
strategy:
fail-fast: false
matrix:
test:
- TestACLHostsInNetMapTable
- TestACLAllowUser80Dst
- TestACLDenyAllPort80
- TestACLAllowUserDst
- TestACLAllowStarDst
- TestACLNamedHostsCanReachBySubnet
- TestACLNamedHostsCanReach
- TestACLDevice1CanAccessDevice2
- TestPolicyUpdateWhileRunningWithCLIInDatabase
- TestACLAutogroupMember
- TestACLAutogroupTagged
- TestACLAutogroupSelf
- TestACLPolicyPropagationOverTime
- TestAPIAuthenticationBypass
- TestAPIAuthenticationBypassCurl
- TestGRPCAuthenticationBypass
- TestCLIWithConfigAuthenticationBypass
- TestAuthKeyLogoutAndReloginSameUser
- TestAuthKeyLogoutAndReloginNewUser
- TestAuthKeyLogoutAndReloginSameUserExpiredKey
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestOIDC024UserCreation
- TestOIDCAuthenticationWithPKCE
- TestOIDCReloginSameNodeNewUser
- TestOIDCFollowUpUrl
- TestOIDCMultipleOpenedLoginUrls
- TestOIDCReloginSameNodeSameUser
- TestOIDCExpiryAfterRestart
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndReloginSameUser
- TestAuthWebFlowLogoutAndReloginNewUser
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
- TestPreAuthKeyCommandReusableEphemeral
- TestPreAuthKeyCorrectUserLoggedInCommand
- TestApiKeyCommand
- TestNodeTagCommand
- TestNodeAdvertiseTagCommand
- TestNodeCommand
- TestNodeExpireCommand
- TestNodeRenameCommand
- TestNodeMoveCommand
- TestPolicyCommand
- TestPolicyBrokenConfigCommand
- TestDERPVerifyEndpoint
- TestResolveMagicDNS
- TestResolveMagicDNSExtraRecordsPath
- TestDERPServerScenario
- TestDERPServerWebsocketScenario
- TestPingAllByIP
- TestPingAllByIPPublicDERP
- TestEphemeral
- TestEphemeralInAlternateTimezone
- TestEphemeral2006DeletedTooQuickly
- TestPingAllByHostname
- TestTaildrop
- TestUpdateHostnameFromClient
- TestExpireNode
- TestSetNodeExpiryInFuture
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- Test2118DeletingOnlineNodePanics
- TestEnablingRoutes
- TestHASubnetRouterFailover
- TestSubnetRouteACL
- TestEnablingExitRoutes
- TestSubnetRouterMultiNetwork
- TestSubnetRouterMultiNetworkExitNode
- TestAutoApproveMultiNetwork
- TestSubnetRouteACLFiltering
- TestHeadscale
- TestTailscaleNodesJoiningHeadcale
- TestSSHOneUserToAll
- TestSSHMultipleUsersAllToAll
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
- TestSSHAutogroupSelf
uses: ./.github/workflows/integration-test-template.yml
with:
test: ${{ matrix.test }}
postgres_flag: "--postgres=0"
database_name: "sqlite"
postgres:
strategy:
fail-fast: false
matrix:
test:
- TestACLAllowUserDst
- TestPingAllByIP
- TestEphemeral2006DeletedTooQuickly
- TestPingAllByIPManyUpDown
- TestSubnetRouterMultiNetwork
uses: ./.github/workflows/integration-test-template.yml
with:
test: ${{ matrix.test }}
postgres_flag: "--postgres=1"
database_name: "postgres"

@@ -11,38 +11,26 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v3
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
uses: tj-actions/changed-files@v34
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- uses: DeterminateSystems/nix-installer-action@main
if: steps.changed-files.outputs.any_changed == 'true'
- uses: DeterminateSystems/magic-nix-cache-action@main
if: steps.changed-files.outputs.any_changed == 'true'
- name: Run tests
if: steps.changed-files.outputs.files == 'true'
env:
# As of 2025-01-06, these env vars were no longer set automatically,
# which breaks the initdb for postgres on
# some of the database migration tests.
LC_ALL: "en_US.UTF-8"
LC_CTYPE: "en_US.UTF-8"
run: nix develop --command -- gotestsum
if: steps.changed-files.outputs.any_changed == 'true'
run: nix develop --check

@@ -6,14 +6,13 @@ on:
jobs:
lockfile:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@v3
- name: Install Nix
uses: DeterminateSystems/nix-installer-action@21a544727d0c62386e78b4befe52d19ad12692e3 # v17
uses: DeterminateSystems/nix-installer-action@main
- name: Update flake.lock
uses: DeterminateSystems/update-flake-lock@428c2b58a4b7414dabd372acb6a03dba1084d3ab # v25
uses: DeterminateSystems/update-flake-lock@main
with:
pr-title: "Update flake.lock"

.gitignore

@@ -1,9 +1,6 @@
ignored/
tailscale/
.vscode/
.claude/
*.prof
# Binaries for programs and plugins
*.exe
@@ -23,9 +20,8 @@ vendor/
dist/
/headscale
config.json
config.yaml
config*.yaml
!config-example.yaml
derp.yaml
*.hujson
*.key
@@ -49,7 +45,3 @@ integration_test/etc/config.dump.yaml
/site
__debug_bin
node_modules/
package-lock.json
package.json

@@ -1,80 +1,77 @@
---
version: "2"
run:
timeout: 10m
build-tags:
- ts2019
issues:
skip-dirs:
- gen
linters:
default: all
enable-all: true
disable:
- cyclop
- depguard
- dupl
- exhaustruct
- funlen
- exhaustivestruct
- revive
- lll
- interfacer
- scopelint
- maligned
- golint
- gofmt
- gochecknoglobals
- gochecknoinits
- gocognit
- godox
- interfacebloat
- ireturn
- lll
- maintidx
- makezero
- musttag
- nestif
- nolintlint
- paralleltest
- revive
- funlen
- exhaustivestruct
- tagliatelle
- testpackage
- varnamelen
- wrapcheck
- wsl
settings:
gocritic:
disabled-checks:
- appendAssign
- ifElseChain
nlreturn:
block-size: 4
varnamelen:
ignore-names:
- err
- db
- id
- ip
- ok
- c
- tt
- tx
- rx
- sb
- wg
- pr
- p
- p2
ignore-type-assert-ok: true
ignore-map-index-ok: true
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
- gen
- godox
- ireturn
- execinquery
- exhaustruct
- nolintlint
- musttag # causes issues with imported libs
- depguard
formatters:
enable:
- gci
- gofmt
- gofumpt
- goimports
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$
- gen
# deprecated
- structcheck # replaced by unused
- ifshort # deprecated by the owner
- varcheck # replaced by unused
- nosnakecase # replaced by revive
- deadcode # replaced by unused
# We should strive to enable these:
- wrapcheck
- dupl
- makezero
- maintidx
# Limits the methods of an interface to 10. We have more in integration tests
- interfacebloat
# We might want to enable this, but it might be a lot of work
- cyclop
- nestif
- wsl # might be incompatible with gofumpt
- testpackage
- paralleltest
linters-settings:
varnamelen:
ignore-type-assert-ok: true
ignore-map-index-ok: true
ignore-names:
- err
- db
- id
- ip
- ok
- c
- tt
gocritic:
disabled-checks:
- appendAssign
# TODO(kradalby): Remove this
- ifElseChain

@@ -1,40 +1,11 @@
---
version: 2
before:
hooks:
- go mod tidy -compat=1.25
- go mod tidy -compat=1.22
- go mod vendor
release:
prerelease: auto
draft: true
header: |
## Upgrade
Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
**It's best to update from one stable version to the next** (e.g., 0.24.0 → 0.25.1 → 0.26.1) in case you are multiple releases behind. You should always pick the latest available patch release.
Be sure to check the changelog above for version-specific upgrade instructions and breaking changes.
### Backup Your Database
**Always backup your database before upgrading.** Here's how to backup a SQLite database:
```bash
# Stop headscale
systemctl stop headscale
# Backup sqlite database
cp /var/lib/headscale/db.sqlite /var/lib/headscale/db.sqlite.backup
# Backup sqlite WAL/SHM files (if they exist)
cp /var/lib/headscale/db.sqlite-wal /var/lib/headscale/db.sqlite-wal.backup
cp /var/lib/headscale/db.sqlite-shm /var/lib/headscale/db.sqlite-shm.backup
# Start headscale (migration will run automatically)
systemctl start headscale
```
builds:
- id: headscale
@@ -46,18 +17,23 @@ builds:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_386
- linux_amd64
- linux_arm64
- linux_arm_5
- linux_arm_6
- linux_arm_7
flags:
- -mod=readonly
ldflags:
- -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
tags:
- ts2019
archives:
- id: golang-cross
name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
formats:
- binary
format: binary
source:
enabled: true
@@ -76,22 +52,15 @@ nfpms:
# List file contents: dpkg -c dist/headscale...deb
# Package metadata: dpkg --info dist/headscale....deb
#
- ids:
- builds:
- headscale
package_name: headscale
priority: optional
vendor: headscale
maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
homepage: https://github.com/juanfont/headscale
description: |-
Open source implementation of the Tailscale control server.
Headscale aims to implement a self-hosted, open source alternative to the
Tailscale control server. Headscale's goal is to provide self-hosters and
hobbyists with an open-source server they can use for their projects and
labs. It implements a narrow scope, a single Tailscale network (tailnet),
suitable for a personal use, or a small open-source organisation.
license: BSD
bindir: /usr/bin
section: net
formats:
- deb
contents:
@@ -100,31 +69,19 @@ nfpms:
type: config|noreplace
file_info:
mode: 0644
- src: ./packaging/systemd/headscale.service
- src: ./docs/packaging/headscale.systemd.service
dst: /usr/lib/systemd/system/headscale.service
- dst: /var/lib/headscale
type: dir
- src: LICENSE
dst: /usr/share/doc/headscale/copyright
- dst: /var/run/headscale
type: dir
scripts:
postinstall: ./packaging/deb/postinst
postremove: ./packaging/deb/postrm
preremove: ./packaging/deb/prerm
deb:
lintian_overrides:
- no-changelog # Our CHANGELOG.md uses a different formatting
- no-manual-page
- statically-linked-binary
postinstall: ./docs/packaging/postinstall.sh
postremove: ./docs/packaging/postremove.sh
kos:
- id: ghcr
repositories:
- ghcr.io/juanfont/headscale
- headscale/headscale
# bare tells KO to only use the repository
# for tagging and naming the container.
bare: true
repository: ghcr.io/kradalby/headscale
base_image: gcr.io/distroless/base-debian12
build: headscale
main: ./cmd/headscale
@@ -132,53 +89,77 @@ kos:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
- "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
- latest
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "{{ .Major }}.{{ .Minor }}.{{ .Patch }}"
- "{{ .Major }}.{{ .Minor }}"
- "{{ .Major }}"
- "sha-{{ .ShortCommit }}"
creation_time: "{{.CommitTimestamp}}"
ko_data_creation_time: "{{.CommitTimestamp}}"
- "{{ if not .Prerelease }}stable{{ end }}"
- id: dockerhub
build: headscale
base_image: gcr.io/distroless/base-debian12
repository: kradalby
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- latest
- "{{ .Tag }}"
- "{{ .Major }}.{{ .Minor }}.{{ .Patch }}"
- "{{ .Major }}.{{ .Minor }}"
- "{{ .Major }}"
- "sha-{{ .ShortCommit }}"
- "{{ if not .Prerelease }}stable{{ end }}"
- id: ghcr-debug
repositories:
- ghcr.io/juanfont/headscale
- headscale/headscale
bare: true
base_image: gcr.io/distroless/base-debian12:debug
repository: ghcr.io/kradalby/headscale
base_image: "debian:12"
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
- "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
- "{{ if not .Prerelease }}stable-debug{{ else }}unstable-debug{{ end }}"
- latest
- "{{ .Tag }}-debug"
- '{{ trimprefix .Tag "v" }}-debug'
- "{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug"
- "{{ .Major }}.{{ .Minor }}-debug"
- "{{ .Major }}-debug"
- "sha-{{ .ShortCommit }}-debug"
- id: dockerhub-debug
build: headscale
base_image: "debian:12"
repository: kradalby
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- latest
- "{{ .Tag }}-debug"
- "{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug"
- "{{ .Major }}.{{ .Minor }}-debug"
- "{{ .Major }}-debug"
- "sha-{{ .ShortCommit }}-debug"
checksum:
name_template: "checksums.txt"
snapshot:
version_template: "{{ .Tag }}-next"
name_template: "{{ .Tag }}-next"
changelog:
sort: asc
filters:

@@ -1,48 +0,0 @@
{
"mcpServers": {
"claude-code-mcp": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@steipete/claude-code-mcp@latest"
],
"env": {}
},
"sequential-thinking": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-sequential-thinking"
],
"env": {}
},
"nixos": {
"type": "stdio",
"command": "uvx",
"args": [
"mcp-nixos"
],
"env": {}
},
"context7": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp"
],
"env": {}
},
"git": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@cyanheads/git-mcp-server"
],
"env": {}
}
}
}

@@ -1,5 +1 @@
.github/workflows/test-integration-v2*
docs/about/features.md
docs/ref/configuration.md
docs/ref/oidc.md
docs/ref/remote-cli.md

File diff suppressed because it is too large

CLAUDE.md

@@ -1,531 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Overview
Headscale is an open-source implementation of the Tailscale control server written in Go. It provides self-hosted coordination for Tailscale networks (tailnets), managing node registration, IP allocation, policy enforcement, and DERP routing.
## Development Commands
### Quick Setup
```bash
# Recommended: Use Nix for dependency management
nix develop
# Full development workflow
make dev # runs fmt + lint + test + build
```
### Essential Commands
```bash
# Build headscale binary
make build
# Run tests
make test
go test ./... # All unit tests
go test -race ./... # With race detection
# Run specific integration test
go run ./cmd/hi run "TestName" --postgres
# Code formatting and linting
make fmt # Format all code (Go, docs, proto)
make lint # Lint all code (Go, proto)
make fmt-go # Format Go code only
make lint-go # Lint Go code only
# Protocol buffer generation (after modifying proto/)
make generate
# Clean build artifacts
make clean
```
### Integration Testing
```bash
# Use the hi (Headscale Integration) test runner
go run ./cmd/hi doctor # Check system requirements
go run ./cmd/hi run "TestPattern" # Run specific test
go run ./cmd/hi run "TestPattern" --postgres # With PostgreSQL backend
# Test artifacts are saved to control_logs/ with logs and debug data
```
## Project Structure & Architecture
### Top-Level Organization
```
headscale/
├── cmd/ # Command-line applications
│ ├── headscale/ # Main headscale server binary
│ └── hi/ # Headscale Integration test runner
├── hscontrol/ # Core control plane logic
├── integration/ # End-to-end Docker-based tests
├── proto/ # Protocol buffer definitions
├── gen/ # Generated code (protobuf)
├── docs/ # Documentation
└── packaging/ # Distribution packaging
```
### Core Packages (`hscontrol/`)
**Main Server (`hscontrol/`)**
- `app.go`: Application setup, dependency injection, server lifecycle
- `handlers.go`: HTTP/gRPC API endpoints for management operations
- `grpcv1.go`: gRPC service implementation for headscale API
- `poll.go`: **Critical** - Handles Tailscale MapRequest/MapResponse protocol
- `noise.go`: Noise protocol implementation for secure client communication
- `auth.go`: Authentication flows (web, OIDC, command-line)
- `oidc.go`: OpenID Connect integration for user authentication
**State Management (`hscontrol/state/`)**
- `state.go`: Central coordinator for all subsystems (database, policy, IP allocation, DERP)
- `node_store.go`: **Performance-critical** - In-memory cache with copy-on-write semantics
- Thread-safe operations with deadlock detection
- Coordinates between database persistence and real-time operations
**Database Layer (`hscontrol/db/`)**
- `db.go`: Database abstraction, GORM setup, migration management
- `node.go`: Node lifecycle, registration, expiration, IP assignment
- `users.go`: User management, namespace isolation
- `api_key.go`: API authentication tokens
- `preauth_keys.go`: Pre-authentication keys for automated node registration
- `ip.go`: IP address allocation and management
- `policy.go`: Policy storage and retrieval
- Schema migrations in `schema.sql` with extensive test data coverage
**Policy Engine (`hscontrol/policy/`)**
- `policy.go`: Core ACL evaluation logic, HuJSON parsing
- `v2/`: Next-generation policy system with improved filtering
- `matcher/`: ACL rule matching and evaluation engine
- Determines peer visibility, route approval, and network access rules
- Supports both file-based and database-stored policies
**Network Management (`hscontrol/`)**
- `derp/`: DERP (Designated Encrypted Relay for Packets) server implementation
- NAT traversal when direct connections fail
- Fallback relay for firewall-restricted environments
- `mapper/`: Converts internal Headscale state to Tailscale's wire protocol format
- `tail.go`: Tailscale-specific data structure generation
- `routes/`: Subnet route management and primary route selection
- `dns/`: DNS record management and MagicDNS implementation
**Utilities & Support (`hscontrol/`)**
- `types/`: Core data structures, configuration, validation
- `util/`: Helper functions for networking, DNS, key management
- `templates/`: Client configuration templates (Apple, Windows, etc.)
- `notifier/`: Event notification system for real-time updates
- `metrics.go`: Prometheus metrics collection
- `capver/`: Tailscale capability version management
### Key Subsystem Interactions
**Node Registration Flow**
1. **Client Connection**: `noise.go` handles secure protocol handshake
2. **Authentication**: `auth.go` validates credentials (web/OIDC/preauth)
3. **State Creation**: `state.go` coordinates IP allocation via `db/ip.go`
4. **Storage**: `db/node.go` persists node, `NodeStore` caches in memory
5. **Network Setup**: `mapper/` generates initial Tailscale network map
**Ongoing Operations**
1. **Poll Requests**: `poll.go` receives periodic client updates
2. **State Updates**: `NodeStore` maintains real-time node information
3. **Policy Application**: `policy/` evaluates ACL rules for peer relationships
4. **Map Distribution**: `mapper/` sends network topology to all affected clients
**Route Management**
1. **Advertisement**: Clients announce routes via `poll.go` Hostinfo updates
2. **Storage**: `db/` persists routes, `NodeStore` caches for performance
3. **Approval**: `policy/` auto-approves routes based on ACL rules
4. **Distribution**: `routes/` selects primary routes, `mapper/` distributes to peers
### Command-Line Tools (`cmd/`)
**Main Server (`cmd/headscale/`)**
- `headscale.go`: CLI parsing, configuration loading, server startup
- Supports daemon mode, CLI operations (user/node management), database operations
**Integration Test Runner (`cmd/hi/`)**
- `main.go`: Test execution framework with Docker orchestration
- `run.go`: Individual test execution with artifact collection
- `doctor.go`: System requirements validation
- `docker.go`: Container lifecycle management
- Essential for validating changes against real Tailscale clients
### Generated & External Code
**Protocol Buffers (`proto/` → `gen/`)**
- Defines gRPC API for headscale management operations
- Client libraries can generate from these definitions
- Run `make generate` after modifying `.proto` files
**Integration Testing (`integration/`)**
- `scenario.go`: Docker test environment setup
- `tailscale.go`: Tailscale client container management
- Individual test files for specific functionality areas
- Real end-to-end validation with network isolation
### Critical Performance Paths
**High-Frequency Operations**
1. **MapRequest Processing** (`poll.go`): Every 15-60 seconds per client
2. **NodeStore Reads** (`node_store.go`): Every operation requiring node data
3. **Policy Evaluation** (`policy/`): On every peer relationship calculation
4. **Route Lookups** (`routes/`): During network map generation
**Database Write Patterns**
- **Frequent**: Node heartbeats, endpoint updates, route changes
- **Moderate**: User operations, policy updates, API key management
- **Rare**: Schema migrations, bulk operations
### Configuration & Deployment
**Configuration (`hscontrol/types/config.go`)**
- Database connection settings (SQLite/PostgreSQL)
- Network configuration (IP ranges, DNS settings)
- Policy mode (file vs database)
- DERP relay configuration
- OIDC provider settings
**Key Dependencies**
- **GORM**: Database ORM with migration support
- **Tailscale Libraries**: Core networking and protocol code
- **Zerolog**: Structured logging throughout the application
- **Buf**: Protocol buffer toolchain for code generation
### Development Workflow Integration
The architecture supports incremental development:
- **Unit Tests**: Focus on individual packages (`*_test.go` files)
- **Integration Tests**: Validate cross-component interactions
- **Database Tests**: Extensive migration and data integrity validation
- **Policy Tests**: ACL rule evaluation and edge cases
- **Performance Tests**: NodeStore and high-frequency operation validation
## Integration Testing System
### Overview
Headscale uses Docker-based integration tests with real Tailscale clients to validate end-to-end functionality. The integration test system is complex and requires specialized knowledge for effective execution and debugging.
### **MANDATORY: Use the headscale-integration-tester Agent**
**CRITICAL REQUIREMENT**: For ANY integration test execution, analysis, troubleshooting, or validation, you MUST use the `headscale-integration-tester` agent. This agent contains specialized knowledge about:
- Test execution strategies and timing requirements
- Infrastructure vs code issue distinction (99% vs 1% failure patterns)
- Security-critical debugging rules and forbidden practices
- Comprehensive artifact analysis workflows
- Real-world failure patterns from HA debugging experiences
### Quick Reference Commands
```bash
# Check system requirements (always run first)
go run ./cmd/hi doctor
# Run single test (recommended for development)
go run ./cmd/hi run "TestName"
# Use PostgreSQL for database-heavy tests
go run ./cmd/hi run "TestName" --postgres
# Pattern matching for related tests
go run ./cmd/hi run "TestPattern*"
```
**Critical Notes**:
- Only ONE test can run at a time (Docker port conflicts)
- Tests generate ~100MB of logs per run in `control_logs/`
- Clean environment before each test: `rm -rf control_logs/202507* && docker system prune -f`
### Test Artifacts Location
All test runs save comprehensive debugging artifacts to `control_logs/TIMESTAMP-ID/` including server logs, client logs, database dumps, MapResponse protocol data, and Prometheus metrics.
**For all integration test work, use the headscale-integration-tester agent - it contains the complete knowledge needed for effective testing and debugging.**
## NodeStore Implementation Details
**Key Insight from Recent Work**: The NodeStore is a critical performance optimization that caches node data in memory while ensuring consistency with the database. When working with route advertisements or node state changes:
1. **Timing Considerations**: Route advertisements need time to propagate from clients to the server. Use `require.EventuallyWithT()` patterns in tests instead of immediate assertions (see the sketch after this list).
2. **Synchronization Points**: NodeStore updates happen at specific points like `poll.go:420` after Hostinfo changes. Ensure these are maintained when modifying the polling logic.
3. **Peer Visibility**: The NodeStore's `peersFunc` determines which nodes are visible to each other. Policy-based filtering is separate from monitoring visibility - expired nodes should remain visible for debugging but marked as expired.
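A minimal sketch of the pattern referenced in point 1 above (the helper name and route counts mirror the examples later in this document and are illustrative only, not a definitive implementation):
```go
// Wait for an advertised route to reach the server instead of asserting immediately.
require.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes, 1)
    // announced, approved, subnet route counts - illustrative values
    requireNodeRouteCountWithCollect(c, nodes[0], 1, 1, 1)
}, 10*time.Second, 500*time.Millisecond, "advertised route should propagate to the server")
```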
## Testing Guidelines
### Integration Test Patterns
#### **CRITICAL: EventuallyWithT Pattern for External Calls**
**All external calls in integration tests MUST be wrapped in EventuallyWithT blocks** to handle eventual consistency in distributed systems. External calls include:
- `client.Status()` - Getting Tailscale client status
- `client.Curl()` - Making HTTP requests through clients
- `client.Traceroute()` - Running network diagnostics
- `headscale.ListNodes()` - Querying headscale server state
- Any other calls that interact with external systems or network operations
**Key Rules**:
1. **Never use bare `require.NoError(t, err)` with external calls** - Always wrap in EventuallyWithT
2. **Keep related assertions together** - If multiple assertions depend on the same external call, keep them in the same EventuallyWithT block
3. **Split unrelated external calls** - Different external calls should be in separate EventuallyWithT blocks
4. **Never nest EventuallyWithT calls** - Each EventuallyWithT should be at the same level
5. **Declare shared variables at function scope** - Variables used across multiple EventuallyWithT blocks must be declared before first use
**Examples**:
```go
// CORRECT: External call wrapped in EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// Related assertions using the same status call
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
assert.NotNil(c, peerStatus.PrimaryRoutes)
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedRoutes)
}
}, 5*time.Second, 200*time.Millisecond, "Verifying client status and routes")
// INCORRECT: Bare external call without EventuallyWithT
status, err := client.Status() // ❌ Will fail intermittently
require.NoError(t, err)
// CORRECT: Separate EventuallyWithT for different external calls
// First external call - headscale.ListNodes()
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route state changes should propagate to nodes")
// Second external call - client.Status()
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6()})
}
}, 10*time.Second, 500*time.Millisecond, "routes should be visible to client")
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes() // ❌ First external call
assert.NoError(c, err)
status, err := client.Status() // ❌ Different external call - should be separate
assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "mixed calls")
// CORRECT: Variable scoping for shared data
var (
srs1, srs2, srs3 *ipnstate.Status
clientStatus *ipnstate.Status
srs1PeerStatus *ipnstate.PeerStatus
)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
srs1 = subRouter1.MustStatus() // = not :=
srs2 = subRouter2.MustStatus()
clientStatus = client.MustStatus()
srs1PeerStatus = clientStatus.Peer[srs1.Self.PublicKey]
// assertions...
}, 5*time.Second, 200*time.Millisecond, "checking router status")
// CORRECT: Wrapping client operations
assert.EventuallyWithT(t, func(c *assert.CollectT) {
result, err := client.Curl(weburl)
assert.NoError(c, err)
assert.Len(c, result, 13)
}, 5*time.Second, 200*time.Millisecond, "Verifying HTTP connectivity")
assert.EventuallyWithT(t, func(c *assert.CollectT) {
tr, err := client.Traceroute(webip)
assert.NoError(c, err)
assertTracerouteViaIPWithCollect(c, tr, expectedRouter.MustIPv4())
}, 5*time.Second, 200*time.Millisecond, "Verifying network path")
```
**Helper Functions**:
- Use `requirePeerSubnetRoutesWithCollect` instead of `requirePeerSubnetRoutes` inside EventuallyWithT (see the sketch after the snippet below for the general helper shape)
- Use `requireNodeRouteCountWithCollect` instead of `requireNodeRouteCount` inside EventuallyWithT
- Use `assertTracerouteViaIPWithCollect` instead of `assertTracerouteViaIP` inside EventuallyWithT
```go
// Node route checking by actual node properties, not array position
var routeNode *v1.Node
for _, node := range nodes {
if nodeIDStr := fmt.Sprintf("%d", node.GetId()); expectedRoutes[nodeIDStr] != "" {
routeNode = node
break
}
}
```
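The `*WithCollect` variants referenced above are not defined in this file. Roughly, they differ from the plain helpers only in taking `*assert.CollectT` instead of `*testing.T`, so failed checks are reported back to EventuallyWithT and retried rather than aborting the test. A minimal illustrative stand-in (assuming testify's `assert` package and `net/netip`; not the actual helper from the integration package) might look like:
```go
// Illustrative stand-in for the *WithCollect helpers; the real implementations
// live in the integration package and may differ. The key detail is the
// *assert.CollectT parameter: assertions report into it, which lets
// EventuallyWithT retry until they pass or the timeout expires.
func requireSubnetRoutesWithCollect(c *assert.CollectT, expected, actual []netip.Prefix) {
    assert.Len(c, actual, len(expected), "unexpected number of subnet routes")
    assert.ElementsMatch(c, expected, actual, "subnet routes do not match the expected set")
}
```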
### Running Problematic Tests
- Some tests require significant time (e.g., `TestNodeOnlineStatus` runs for 12 minutes)
- Infrastructure issues like disk space can cause test failures unrelated to code changes
- Use `--postgres` flag when testing database-heavy scenarios
## Quality Assurance and Testing Requirements
### **MANDATORY: Always Use Specialized Testing Agents**
**CRITICAL REQUIREMENT**: For ANY task involving testing, quality assurance, review, or validation, you MUST use the appropriate specialized agent at the END of your task list. This ensures comprehensive quality validation and prevents regressions.
**Required Agents for Different Task Types**:
1. **Integration Testing**: Use `headscale-integration-tester` agent for:
- Running integration tests with `cmd/hi`
- Analyzing test failures and artifacts
- Troubleshooting Docker-based test infrastructure
- Validating end-to-end functionality changes
2. **Quality Control**: Use `quality-control-enforcer` agent for:
- Code review and validation
- Ensuring best practices compliance
- Preventing common pitfalls and anti-patterns
- Validating architectural decisions
**Agent Usage Pattern**: Always add the appropriate agent as the FINAL step in any task list to ensure quality validation occurs after all work is complete.
### Integration Test Debugging Reference
Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including:
- Headscale server logs (stderr/stdout)
- Tailscale client logs and status
- Database dumps and network captures
- MapResponse JSON files for protocol debugging
**For integration test issues, ALWAYS use the headscale-integration-tester agent - do not attempt manual debugging.**
## EventuallyWithT Pattern for Integration Tests
### Overview
EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Operations That Must NOT Be Wrapped
The following are **blocking operations** that modify state and should NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--exit-node`)
- Any command that changes configuration or state
- Use `client.MustStatus()` instead of `client.Status()` when you just need the ID for a blocking operation
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Blocking operation NOT wrapped
for _, client := range allClients {
status := client.MustStatus()
command := []string{
"tailscale",
"set",
"--advertise-routes=" + expectedRoutes[string(status.Self.ID)],
}
_, _, err = client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)
}
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err = headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")
// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Blocking operation wrapped in EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// This is a blocking operation - should NOT be in EventuallyWithT!
command := []string{
"tailscale",
"set",
"--advertise-routes=" + expectedRoutes[string(status.Self.ID)],
}
_, _, err = client.Execute(command)
assert.NoError(c, err)
}, 5*time.Second, 200*time.Millisecond, "wrong pattern")
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// First external call
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// Second unrelated external call - WRONG!
status, err := client.Status()
assert.NoError(c, err)
assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")
```
## Important Notes
- **Dependencies**: Use `nix develop` for consistent toolchain (Go, buf, protobuf tools, linting)
- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately (see the workflow sketch after this list)
- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting
- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing)
- **Integration Tests**: Require Docker and can consume significant disk space - use headscale-integration-tester agent
- **Performance**: NodeStore optimizations are critical for scale - be careful with changes to state management
- **Quality Assurance**: Always use appropriate specialized agents for testing and validation tasks
- **NEVER create gists in the user's name**: Do not use the `create_gist` tool - present information directly in the response instead
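As a hedged illustration of the Protocol Buffers note above, the regeneration workflow is roughly the following (the commit message and staging paths are illustrative; the `generate` target wraps `buf generate`/`go generate` depending on the Makefile version):
```bash
# Regenerate Go bindings after editing definitions under proto/
make generate

# Commit the regenerated code separately from hand-written changes
git add gen/ proto/
git commit -m "generate: regenerate protobuf bindings"
```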

View File

@@ -62,7 +62,7 @@ event.
Instances of abusive, harassing, or otherwise unacceptable behavior
may be reported to the community leaders responsible for enforcement
on our [Discord server](https://discord.gg/c84AZQhmpx). All complaints
at our Discord channel. All complaints
will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and

View File

@@ -1,34 +0,0 @@
# Contributing
Headscale is "Open Source, acknowledged contribution", this means that any contribution will have to be discussed with the maintainers before being added to the project.
This model has been chosen to reduce the risk of burnout by limiting the maintenance overhead of reviewing and validating third-party code.
## Why do we have this model?
Headscale has a small maintainer team that tries to balance working on the project, fixing bugs and reviewing contributions.
When we work on issues ourselves, we develop first hand knowledge of the code and it makes it possible for us to maintain and own the code as the project develops.
Code contributions are seen as a positive thing. People enjoy and engage with our project, but it also comes with some challenges; we have to understand the code, we have to understand the feature, we might have to become familiar with external libraries or services and we think about security implications. All those steps are required during the reviewing process. After the code has been merged, the feature has to be maintained. Any changes reliant on external services must be updated and expanded accordingly.
The review and day-1 maintenance adds a significant burden on the maintainers. Often we hope that the contributor will help out, but we found that most of the time, they disappear after their new feature was added.
This means that when someone contributes, we are mostly happy about it, but we do have to run it through a series of checks to establish if we actually can maintain this feature.
## What do we require?
A general description is provided here and an explicit list is provided in our pull request template.
All new features have to start out with a design document, which should be discussed on the issue tracker (not discord). It should include a use case for the feature, how it can be implemented, who will implement it and a plan for maintaining it.
All features have to be end-to-end tested (integration tests) and have good unit test coverage to ensure that they work as expected. This will also ensure that the feature continues to work as expected over time. If a change cannot be tested, a strong case for why this is not possible needs to be presented.
The contributor should help to maintain the feature over time. In case the feature is not maintained probably, the maintainers reserve themselves the right to remove features they redeem as unmaintainable. This should help to improve the quality of the software and keep it in a maintainable state.
## Bug fixes
Headscale is open to code contributions for bug fixes without discussion.
## Documentation
If you find mistakes in the documentation, please submit a fix to the documentation.

33
Dockerfile.debug Normal file
View File

@@ -0,0 +1,33 @@
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM docker.io/golang:1.22-bookworm AS build
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale
RUN test -e /go/bin/headscale
# Debug image
FROM docker.io/golang:1.22-bookworm
COPY --from=build /go/bin/headscale /bin/headscale
ENV TZ UTC
RUN apt-get update \
&& apt-get install --no-install-recommends --yes less jq \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
RUN mkdir -p /var/run/headscale
# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp
CMD ["headscale"]

View File

@@ -1,19 +0,0 @@
# For testing purposes only
FROM golang:alpine AS build-env
WORKDIR /go/src
RUN apk add --no-cache git
ARG VERSION_BRANCH=main
RUN git clone https://github.com/tailscale/tailscale.git --branch=$VERSION_BRANCH --depth=1
WORKDIR /go/src/tailscale
ARG TARGETARCH
RUN GOARCH=$TARGETARCH go install -v ./cmd/derper
FROM alpine:3.22
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/
ENTRYPOINT [ "/usr/local/bin/derper" ]

View File

@@ -1,29 +0,0 @@
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM docker.io/golang:1.25-trixie
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
RUN apt-get --update install --no-install-recommends --yes less jq sqlite3 dnsutils \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
RUN mkdir -p /var/run/headscale
# Install delve debugger
RUN go install github.com/go-delve/delve/cmd/dlv@latest
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
COPY . .
# Build debug binary with debug symbols for delve
RUN CGO_ENABLED=0 GOOS=linux go build -gcflags="all=-N -l" -o /go/bin/headscale ./cmd/headscale
# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp 40000/tcp
CMD ["/go/bin/dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/go/bin/headscale", "--"]

View File

@@ -1,45 +1,21 @@
# Copyright (c) Tailscale Inc & AUTHORS
# SPDX-License-Identifier: BSD-3-Clause
# This Dockerfile and the images produced are for testing headscale,
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
# This Dockerfile is more or less lifted from tailscale/tailscale
# to ensure a similar build process when testing the HEAD of tailscale.
FROM golang:latest
FROM golang:1.25-alpine AS build-env
RUN apt-get update \
&& apt-get install -y dnsutils git iptables ssh ca-certificates \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /go/src
RUN useradd --shell=/bin/bash --create-home ssh-it-user
RUN apk add --no-cache git
# Replace `RUN git...` with `COPY` and a local checked out version of Tailscale in `./tailscale`
# to test specific commits of the Tailscale client. This is useful when trying to find out why
# something specific broke between two versions of Tailscale with for example `git bisect`.
# COPY ./tailscale .
RUN git clone https://github.com/tailscale/tailscale.git
WORKDIR /go/src/tailscale
WORKDIR /go/tailscale
# see build_docker.sh
ARG VERSION_LONG=""
ENV VERSION_LONG=$VERSION_LONG
ARG VERSION_SHORT=""
ENV VERSION_SHORT=$VERSION_SHORT
ARG VERSION_GIT_HASH=""
ENV VERSION_GIT_HASH=$VERSION_GIT_HASH
ARG TARGETARCH
ARG BUILD_TAGS=""
RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\
-X tailscale.com/version.longStamp=$VERSION_LONG \
-X tailscale.com/version.shortStamp=$VERSION_SHORT \
-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
FROM alpine:3.22
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/
# For compat with the previous run.sh, although ideally you should be
# using build_docker.sh which sets an entrypoint for the image.
RUN mkdir /tailscale && ln -s /usr/local/bin/containerboot /tailscale/run.sh
RUN git checkout main \
&& sh build_dist.sh tailscale.com/cmd/tailscale \
&& sh build_dist.sh tailscale.com/cmd/tailscaled \
&& cp tailscale /usr/local/bin/ \
&& cp tailscaled /usr/local/bin/

152
Makefile
View File

@@ -1,129 +1,53 @@
# Headscale Makefile
# Modern Makefile following best practices
# Calculate version
version ?= $(shell git describe --always --tags --dirty)
# Version calculation
VERSION ?= $(shell git describe --always --tags --dirty)
rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
# Build configuration
# Determine if OS supports pie
GOOS ?= $(shell uname | tr '[:upper:]' '[:lower:]')
ifeq ($(filter $(GOOS), openbsd netbsd solaris plan9), )
PIE_FLAGS = -buildmode=pie
ifeq ($(filter $(GOOS), openbsd netbsd soloaris plan9), )
pieflags = -buildmode=pie
else
endif
# Tool availability check with nix warning
define check_tool
@command -v $(1) >/dev/null 2>&1 || { \
echo "Warning: $(1) not found. Run 'nix develop' to ensure all dependencies are available."; \
exit 1; \
}
endef
# Source file collections using shell find for better performance
GO_SOURCES := $(shell find . -name '*.go' -not -path './gen/*' -not -path './vendor/*')
PROTO_SOURCES := $(shell find . -name '*.proto' -not -path './gen/*' -not -path './vendor/*')
DOC_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')
# Default target
.PHONY: all
all: lint test build
# Dependency checking
.PHONY: check-deps
check-deps:
$(call check_tool,go)
$(call check_tool,golangci-lint)
$(call check_tool,gofumpt)
$(call check_tool,prettier)
$(call check_tool,clang-format)
$(call check_tool,buf)
# Build targets
.PHONY: build
build: check-deps $(GO_SOURCES) go.mod go.sum
@echo "Building headscale..."
go build $(PIE_FLAGS) -ldflags "-X main.version=$(VERSION)" -o headscale ./cmd/headscale
# Test targets
.PHONY: test
test: check-deps $(GO_SOURCES) go.mod go.sum
@echo "Running Go tests..."
go test -race ./...
# GO_SOURCES = $(wildcard *.go)
# PROTO_SOURCES = $(wildcard **/*.proto)
GO_SOURCES = $(call rwildcard,,*.go)
PROTO_SOURCES = $(call rwildcard,,*.proto)
# Formatting targets
.PHONY: fmt
fmt: fmt-go fmt-prettier fmt-proto
build:
nix build
.PHONY: fmt-go
fmt-go: check-deps $(GO_SOURCES)
@echo "Formatting Go code..."
gofumpt -l -w .
golangci-lint run --fix
dev: lint test build
.PHONY: fmt-prettier
fmt-prettier: check-deps $(DOC_SOURCES)
@echo "Formatting documentation and config files..."
prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}'
prettier --write --print-width 80 --prose-wrap always CHANGELOG.md
test:
gotestsum -- -short -coverprofile=coverage.out ./...
.PHONY: fmt-proto
fmt-proto: check-deps $(PROTO_SOURCES)
@echo "Formatting Protocol Buffer files..."
clang-format -i $(PROTO_SOURCES)
test_integration:
docker run \
-t --rm \
-v ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
-v $$PWD:$$PWD -w $$PWD/integration \
-v /var/run/docker.sock:/var/run/docker.sock \
golang:1 \
go run gotest.tools/gotestsum@latest -- -failfast ./... -timeout 120m -parallel 8
# Linting targets
.PHONY: lint
lint: lint-go lint-proto
lint:
golangci-lint run --fix --timeout 10m
.PHONY: lint-go
lint-go: check-deps $(GO_SOURCES) go.mod go.sum
@echo "Linting Go code..."
golangci-lint run --timeout 10m
fmt:
prettier --write '**/**.{ts,js,md,yaml,yml,sass,css,scss,html}'
golines --max-len=88 --base-formatter=gofumpt -w $(GO_SOURCES)
clang-format -style="{BasedOnStyle: Google, IndentWidth: 4, AlignConsecutiveDeclarations: true, AlignConsecutiveAssignments: true, ColumnLimit: 0}" -i $(PROTO_SOURCES)
.PHONY: lint-proto
lint-proto: check-deps $(PROTO_SOURCES)
@echo "Linting Protocol Buffer files..."
cd proto/ && buf lint
proto-lint:
cd proto/ && go run github.com/bufbuild/buf/cmd/buf lint
# Code generation
.PHONY: generate
generate: check-deps
@echo "Generating code..."
go generate ./...
compress: build
upx --brute headscale
# Clean targets
.PHONY: clean
clean:
rm -rf headscale gen
# Development workflow
.PHONY: dev
dev: fmt lint test build
# Help target
.PHONY: help
help:
@echo "Headscale Development Makefile"
@echo ""
@echo "Main targets:"
@echo " all - Run lint, test, and build (default)"
@echo " build - Build headscale binary"
@echo " test - Run Go tests"
@echo " fmt - Format all code (Go, docs, proto)"
@echo " lint - Lint all code (Go, proto)"
@echo " generate - Generate code from Protocol Buffers"
@echo " dev - Full development workflow (fmt + lint + test + build)"
@echo " clean - Clean build artifacts"
@echo ""
@echo "Specific targets:"
@echo " fmt-go - Format Go code only"
@echo " fmt-prettier - Format documentation only"
@echo " fmt-proto - Format Protocol Buffer files only"
@echo " lint-go - Lint Go code only"
@echo " lint-proto - Lint Protocol Buffer files only"
@echo ""
@echo "Dependencies:"
@echo " check-deps - Verify required tools are available"
@echo ""
@echo "Note: If not running in a nix shell, ensure dependencies are available:"
@echo " nix develop"
generate:
rm -rf gen
buf generate proto

1027
README.md

File diff suppressed because it is too large

View File

@@ -0,0 +1,173 @@
package main
//go:generate go run ./main.go
import (
"bytes"
"fmt"
"log"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"text/template"
)
var (
githubWorkflowPath = "../../.github/workflows/"
jobFileNameTemplate = `test-integration-v2-%s.yaml`
jobTemplate = template.Must(
template.New("jobTemplate").
Parse(`# DO NOT EDIT, generated with cmd/gh-action-integration-generator/main.go
# To regenerate, run "go generate" in cmd/gh-action-integration-generator/
name: Integration Test v2 - {{.Name}}
on: [pull_request]
concurrency:
group: {{ "${{ github.workflow }}-$${{ github.head_ref || github.run_id }}" }}
cancel-in-progress: true
jobs:
{{.Name}}:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 2
- uses: DeterminateSystems/nix-installer-action@main
- uses: DeterminateSystems/magic-nix-cache-action@main
- uses: satackey/action-docker-layer-caching@main
continue-on-error: true
- name: Get changed files
id: changed-files
uses: tj-actions/changed-files@v34
with:
files: |
*.nix
go.*
**/*.go
integration_test/
config-example.yaml
- name: Run {{.Name}}
uses: Wandalen/wretry.action@master
if: steps.changed-files.outputs.any_changed == 'true'
with:
attempt_limit: 5
command: |
nix develop --command -- docker run \
--tty --rm \
--volume ~/.cache/hs-integration-go:/go \
--name headscale-test-suite \
--volume $PWD:$PWD -w $PWD/integration \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume $PWD/control_logs:/tmp/control \
golang:1 \
go run gotest.tools/gotestsum@latest -- ./... \
-failfast \
-timeout 120m \
-parallel 1 \
-run "^{{.Name}}$"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: logs
path: "control_logs/*.log"
- uses: actions/upload-artifact@v3
if: always() && steps.changed-files.outputs.any_changed == 'true'
with:
name: pprof
path: "control_logs/*.pprof.tar"
`),
)
)
const workflowFilePerm = 0o600
func removeTests() {
glob := fmt.Sprintf(jobFileNameTemplate, "*")
files, err := filepath.Glob(filepath.Join(githubWorkflowPath, glob))
if err != nil {
log.Fatalf("failed to find test files")
}
for _, file := range files {
err := os.Remove(file)
if err != nil {
log.Printf("failed to remove: %s", err)
}
}
}
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
log.Fatalf("failed to find rg (ripgrep) binary")
}
args := []string{
"--regexp", "func (Test.+)\\(.*",
"../../integration/",
"--replace", "$1",
"--sort", "path",
"--no-line-number",
"--no-filename",
"--no-heading",
}
log.Printf("executing: %s %s", rgBin, strings.Join(args, " "))
ripgrep := exec.Command(
rgBin,
args...,
)
result, err := ripgrep.CombinedOutput()
if err != nil {
log.Printf("out: %s", result)
log.Fatalf("failed to run ripgrep: %s", err)
}
tests := strings.Split(string(result), "\n")
tests = tests[:len(tests)-1]
return tests
}
func main() {
type testConfig struct {
Name string
}
tests := findTests()
removeTests()
for _, test := range tests {
log.Printf("generating workflow for %s", test)
var content bytes.Buffer
if err := jobTemplate.Execute(&content, testConfig{
Name: test,
}); err != nil {
log.Fatalf("failed to render template: %s", err)
}
testPath := path.Join(githubWorkflowPath, fmt.Sprintf(jobFileNameTemplate, test))
err := os.WriteFile(testPath, content.Bytes(), workflowFilePerm)
if err != nil {
log.Fatalf("failed to write github job: %s", err)
}
}
}

View File

@@ -5,13 +5,14 @@ import (
"strconv"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/protobuf/types/known/timestamppb"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
)
const (
@@ -54,7 +55,7 @@ var listAPIKeys = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -67,10 +68,14 @@ var listAPIKeys = &cobra.Command{
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetApiKeys(), "", output)
return
}
tableData := pterm.TableData{
@@ -98,6 +103,8 @@ var listAPIKeys = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
@@ -113,6 +120,9 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
log.Trace().
Msg("Preparing to create ApiKey")
request := &v1.CreateApiKeyRequest{}
durationStr, _ := cmd.Flags().GetString("expiration")
@@ -124,13 +134,19 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
fmt.Sprintf("Could not parse duration: %s\n", err),
output,
)
return
}
expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace().
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -141,6 +157,8 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
fmt.Sprintf("Cannot create Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
@@ -161,9 +179,11 @@ var expireAPIKeyCmd = &cobra.Command{
fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -178,6 +198,8 @@ var expireAPIKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot expire Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response, "Key expired", output)
@@ -198,9 +220,11 @@ var deleteAPIKeyCmd = &cobra.Command{
fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -215,6 +239,8 @@ var deleteAPIKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot delete Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response, "Key deleted", output)

View File

@@ -14,7 +14,7 @@ var configTestCmd = &cobra.Command{
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
Run: func(cmd *cobra.Command, args []string) {
_, err := newHeadscaleServerWithConfig()
_, err := getHeadscaleApp()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error initializing")
}

View File

@@ -4,10 +4,10 @@ import (
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
"tailscale.com/types/key"
)
const (
@@ -64,9 +64,11 @@ var createNodeCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -77,24 +79,31 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting node from flag: %s", err),
output,
)
return
}
registrationID, err := cmd.Flags().GetString("key")
machineKey, err := cmd.Flags().GetString("key")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting key from flag: %s", err),
output,
)
return
}
_, err = types.RegistrationIDFromString(registrationID)
var mkey key.MachinePublic
err = mkey.UnmarshalText([]byte(machineKey))
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to parse machine key from flag: %s", err),
output,
)
return
}
routes, err := cmd.Flags().GetStringSlice("route")
@@ -104,10 +113,12 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
return
}
request := &v1.DebugCreateNodeRequest{
Key: registrationID,
Key: machineKey,
Name: name,
User: user,
Routes: routes,
@@ -117,9 +128,11 @@ var createNodeCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
"Cannot create node: "+status.Convert(err).Message(),
fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node created", output)

View File

@@ -1,29 +0,0 @@
package cli
import (
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(healthCmd)
}
var healthCmd = &cobra.Command{
Use: "health",
Short: "Check the health of the Headscale server",
Long: "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
response, err := client.Health(ctx, &v1.HealthRequest{})
if err != nil {
ErrorOutput(err, "Error checking health", output)
}
SuccessOutput(response, "", output)
},
}

View File

@@ -1,11 +1,8 @@
package cli
import (
"encoding/json"
"errors"
"fmt"
"net"
"net/http"
"os"
"strconv"
"time"
@@ -67,19 +64,6 @@ func mockOIDC() error {
accessTTL = newTTL
}
userStr := os.Getenv("MOCKOIDC_USERS")
if userStr == "" {
return errors.New("MOCKOIDC_USERS not defined")
}
var users []mockoidc.MockUser
err := json.Unmarshal([]byte(userStr), &users)
if err != nil {
return fmt.Errorf("unmarshalling users: %w", err)
}
log.Info().Interface("users", users).Msg("loading users from JSON")
log.Info().Msgf("Access token TTL: %s", accessTTL)
port, err := strconv.Atoi(portStr)
@@ -87,7 +71,7 @@ func mockOIDC() error {
return err
}
mock, err := getMockOIDC(clientID, clientSecret, users)
mock, err := getMockOIDC(clientID, clientSecret)
if err != nil {
return err
}
@@ -109,18 +93,12 @@ func mockOIDC() error {
return nil
}
func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser) (*mockoidc.MockOIDC, error) {
func getMockOIDC(clientID string, clientSecret string) (*mockoidc.MockOIDC, error) {
keypair, err := mockoidc.NewKeypair(nil)
if err != nil {
return nil, err
}
userQueue := mockoidc.UserQueue{}
for _, user := range users {
userQueue.Push(&user)
}
mock := mockoidc.MockOIDC{
ClientID: clientID,
ClientSecret: clientSecret,
@@ -129,19 +107,9 @@ func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser
CodeChallengeMethodsSupported: []string{"plain", "S256"},
Keypair: keypair,
SessionStore: mockoidc.NewSessionStore(),
UserQueue: &userQueue,
UserQueue: &mockoidc.UserQueue{},
ErrorQueue: &mockoidc.ErrorQueue{},
}
mock.AddMiddleware(func(h http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
log.Info().Msgf("Request: %+v", r)
h.ServeHTTP(w, r)
if r.Response != nil {
log.Info().Msgf("Response: %+v", r.Response)
}
})
})
return &mock, nil
}

View File

@@ -4,18 +4,16 @@ import (
"fmt"
"log"
"net/netip"
"slices"
"strconv"
"strings"
"time"
survey "github.com/AlecAivazis/survey/v2"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/samber/lo"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/timestamppb"
"tailscale.com/types/key"
)
@@ -28,10 +26,8 @@ func init() {
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
listNodesNamespaceFlag.Hidden = true
nodeCmd.AddCommand(listNodesCmd)
listNodeRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
nodeCmd.AddCommand(listNodeRoutesCmd)
nodeCmd.AddCommand(listNodesCmd)
registerNodeCmd.Flags().StringP("user", "u", "", "User")
@@ -42,34 +38,33 @@ func init() {
err := registerNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
err = registerNodeCmd.MarkFlagRequired("key")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(registerNodeCmd)
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
expireNodeCmd.Flags().StringP("expiry", "e", "", "Set expire to (RFC3339 format, e.g. 2025-08-27T10:00:00Z), or leave empty to expire immediately.")
err = expireNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(expireNodeCmd)
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = renameNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(renameNodeCmd)
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
err = deleteNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(deleteNodeCmd)
@@ -77,10 +72,10 @@ func init() {
err = moveNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
moveNodeCmd.Flags().Uint64P("user", "u", 0, "New user")
moveNodeCmd.Flags().StringP("user", "u", "", "New user")
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
@@ -89,21 +84,19 @@ func init() {
err = moveNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatal(err.Error())
log.Fatalf(err.Error())
}
nodeCmd.AddCommand(moveNodeCmd)
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
tagCmd.MarkFlagRequired("identifier")
tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
err = tagCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatalf(err.Error())
}
tagCmd.Flags().
StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
nodeCmd.AddCommand(tagCmd)
approveRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
approveRoutesCmd.MarkFlagRequired("identifier")
approveRoutesCmd.Flags().StringSliceP("routes", "r", []string{}, `List of routes that will be approved (comma-separated, e.g. "10.0.0.0/8,192.168.0.0/24" or empty string to remove all approved routes)`)
nodeCmd.AddCommand(approveRoutesCmd)
nodeCmd.AddCommand(backfillNodeIPsCmd)
}
var nodeCmd = &cobra.Command{
@@ -120,23 +113,27 @@ var registerNodeCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
registrationID, err := cmd.Flags().GetString("key")
machineKey, err := cmd.Flags().GetString("key")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting node key from flag: %s", err),
output,
)
return
}
request := &v1.RegisterNodeRequest{
Key: registrationID,
Key: machineKey,
User: user,
}
@@ -150,6 +147,8 @@ var registerNodeCmd = &cobra.Command{
),
output,
)
return
}
SuccessOutput(
@@ -167,13 +166,17 @@ var listNodesCmd = &cobra.Command{
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
showTags, err := cmd.Flags().GetBool("tags")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -185,18 +188,24 @@ var listNodesCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
"Cannot get nodes: "+status.Convert(err).Message(),
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetNodes(), "", output)
return
}
tableData, err := nodesToPtables(user, showTags, response.GetNodes())
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
@@ -206,70 +215,8 @@ var listNodesCmd = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
}
},
}
var listNodeRoutesCmd = &cobra.Command{
Use: "list-routes",
Short: "List routes available on nodes",
Aliases: []string{"lsr", "routes"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.ListNodesRequest{}
response, err := client.ListNodes(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot get nodes: "+status.Convert(err).Message(),
output,
)
}
if output != "" {
SuccessOutput(response.GetNodes(), "", output)
}
nodes := response.GetNodes()
if identifier != 0 {
for _, node := range response.GetNodes() {
if node.GetId() == identifier {
nodes = []*v1.Node{node}
break
}
}
}
nodes = lo.Filter(nodes, func(n *v1.Node, _ int) bool {
return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0)
})
tableData, err := nodeRoutesToPtables(nodes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
@@ -289,39 +236,16 @@ var expireNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
}
expiry, err := cmd.Flags().GetString("expiry")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting expiry to string: %s", err),
output,
)
return
}
expiryTime := time.Now()
if expiry != "" {
expiryTime, err = time.Parse(time.RFC3339, expiry)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting expiry to string: %s", err),
output,
)
return
}
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ExpireNodeRequest{
NodeId: identifier,
Expiry: timestamppb.New(expiryTime),
}
response, err := client.ExpireNode(ctx, request)
@@ -334,6 +258,8 @@ var expireNodeCmd = &cobra.Command{
),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node expired", output)
@@ -353,9 +279,11 @@ var renameNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -378,6 +306,8 @@ var renameNodeCmd = &cobra.Command{
),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node renamed", output)
@@ -398,9 +328,11 @@ var deleteNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -412,9 +344,14 @@ var deleteNodeCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
"Error getting node node: "+status.Convert(err).Message(),
fmt.Sprintf(
"Error getting node node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
deleteRequest := &v1.DeleteNodeRequest{
@@ -424,10 +361,16 @@ var deleteNodeCmd = &cobra.Command{
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo(fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetNode().GetName(),
))
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetNode().GetName(),
),
}
err = survey.AskOne(prompt, &confirm)
if err != nil {
return
}
}
if confirm || force {
@@ -440,9 +383,14 @@ var deleteNodeCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
"Error deleting node: "+status.Convert(err).Message(),
fmt.Sprintf(
"Error deleting node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(
map[string]string{"Result": "Node deleted"},
@@ -469,18 +417,22 @@ var moveNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
user, err := cmd.Flags().GetUint64("user")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting user: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -492,9 +444,14 @@ var moveNodeCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
"Error getting node: "+status.Convert(err).Message(),
fmt.Sprintf(
"Error getting node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
moveRequest := &v1.MoveNodeRequest{
@@ -506,59 +463,20 @@ var moveNodeCmd = &cobra.Command{
if err != nil {
ErrorOutput(
err,
"Error moving node: "+status.Convert(err).Message(),
fmt.Sprintf(
"Error moving node: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output)
},
}
var backfillNodeIPsCmd = &cobra.Command{
Use: "backfillips",
Short: "Backfill IPs missing from nodes",
Long: `
Backfill IPs can be used to add/remove IPs from nodes
based on the current configuration of Headscale.
If there are nodes that does not have IPv4 or IPv6
even if prefixes for both are configured in the config,
this command can be used to assign IPs of the sort to
all nodes that are missing.
If you remove IPv4 or IPv6 prefixes from the config,
it can be run to remove the IPs that should no longer
be assigned to nodes.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("Are you sure that you want to assign/remove IPs to/from nodes?")
}
if confirm || force {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm || force})
if err != nil {
ErrorOutput(
err,
"Error backfilling IPs: "+status.Convert(err).Message(),
output,
)
}
SuccessOutput(changes, "Node IPs backfilled successfully", output)
}
},
}
func nodesToPtables(
currentUser string,
showTags bool,
@@ -646,14 +564,14 @@ func nodesToPtables(
forcedTags = strings.TrimLeft(forcedTags, ",")
var invalidTags string
for _, tag := range node.GetInvalidTags() {
if !slices.Contains(node.GetForcedTags(), tag) {
if !contains(node.GetForcedTags(), tag) {
invalidTags += "," + pterm.LightRed(tag)
}
}
invalidTags = strings.TrimLeft(invalidTags, ",")
var validTags string
for _, tag := range node.GetValidTags() {
if !slices.Contains(node.GetForcedTags(), tag) {
if !contains(node.GetForcedTags(), tag) {
validTags += "," + pterm.LightGreen(tag)
}
}
@@ -703,42 +621,13 @@ func nodesToPtables(
return tableData, nil
}
func nodeRoutesToPtables(
nodes []*v1.Node,
) (pterm.TableData, error) {
tableHeader := []string{
"ID",
"Hostname",
"Approved",
"Available",
"Serving (Primary)",
}
tableData := pterm.TableData{tableHeader}
for _, node := range nodes {
nodeData := []string{
strconv.FormatUint(node.GetId(), util.Base10),
node.GetGivenName(),
strings.Join(node.GetApprovedRoutes(), ", "),
strings.Join(node.GetAvailableRoutes(), ", "),
strings.Join(node.GetSubnetRoutes(), ", "),
}
tableData = append(
tableData,
nodeData,
)
}
return tableData, nil
}
var tagCmd = &cobra.Command{
Use: "tag",
Short: "Manage the tags of a node",
Aliases: []string{"tags", "t"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -750,6 +639,8 @@ var tagCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
tagsToSet, err := cmd.Flags().GetStringSlice("tags")
if err != nil {
@@ -758,6 +649,8 @@ var tagCmd = &cobra.Command{
fmt.Sprintf("Error retrieving list of tags to add to node, %v", err),
output,
)
return
}
// Sending tags to node
@@ -772,57 +665,8 @@ var tagCmd = &cobra.Command{
fmt.Sprintf("Error while sending tags to headscale: %s", err),
output,
)
}
if resp != nil {
SuccessOutput(
resp.GetNode(),
"Node updated",
output,
)
}
},
}
var approveRoutesCmd = &cobra.Command{
Use: "approve-routes",
Short: "Manage the approved routes of a node",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
// retrieve flags from CLI
identifier, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
}
routes, err := cmd.Flags().GetStringSlice("routes")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error retrieving list of routes to add to node, %v", err),
output,
)
}
// Sending routes to node
request := &v1.SetApprovedRoutesRequest{
NodeId: identifier,
Routes: routes,
}
resp, err := client.SetApprovedRoutes(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error while sending routes to headscale: %s", err),
output,
)
return
}
if resp != nil {

View File

@@ -1,212 +0,0 @@
package cli
import (
"fmt"
"io"
"os"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"tailscale.com/types/views"
)
const (
bypassFlag = "bypass-grpc-and-access-database-directly"
)
func init() {
rootCmd.AddCommand(policyCmd)
getPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
policyCmd.AddCommand(getPolicy)
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
if err := setPolicy.MarkFlagRequired("file"); err != nil {
log.Fatal().Err(err).Msg("")
}
setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
policyCmd.AddCommand(setPolicy)
checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
if err := checkPolicy.MarkFlagRequired("file"); err != nil {
log.Fatal().Err(err).Msg("")
}
policyCmd.AddCommand(checkPolicy)
}
var policyCmd = &cobra.Command{
Use: "policy",
Short: "Manage the Headscale ACL Policy",
}
var getPolicy = &cobra.Command{
Use: "get",
Short: "Print the current ACL Policy",
Aliases: []string{"show", "view", "fetch"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
var policy string
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
}
if !confirm && !force {
ErrorOutput(nil, "Aborting command", output)
return
}
cfg, err := types.LoadServerConfig()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
}
d, err := db.NewHeadscaleDatabase(
cfg.Database,
cfg.BaseDomain,
nil,
)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
}
pol, err := d.GetPolicy()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading Policy from database: %s", err), output)
}
policy = pol.Data
} else {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.GetPolicyRequest{}
response, err := client.GetPolicy(ctx, request)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
}
policy = response.GetPolicy()
}
// TODO(pallabpain): Maybe print this better?
// This does not pass output as we dont support yaml, json or json-line
// output for this command. It is HuJSON already.
SuccessOutput("", policy, "")
},
}
var setPolicy = &cobra.Command{
Use: "set",
Short: "Updates the ACL Policy",
Long: `
Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
Aliases: []string{"put", "update"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
policyPath, _ := cmd.Flags().GetString("file")
f, err := os.Open(policyPath)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
}
defer f.Close()
policyBytes, err := io.ReadAll(f)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
}
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
}
if !confirm && !force {
ErrorOutput(nil, "Aborting command", output)
return
}
cfg, err := types.LoadServerConfig()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
}
d, err := db.NewHeadscaleDatabase(
cfg.Database,
cfg.BaseDomain,
nil,
)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
}
users, err := d.ListUsers()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to load users for policy validation: %s", err), output)
}
_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
return
}
_, err = d.SetPolicy(string(policyBytes))
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
} else {
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
if _, err := client.SetPolicy(ctx, request); err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
}
SuccessOutput(nil, "Policy updated.", "")
},
}
var checkPolicy = &cobra.Command{
Use: "check",
Short: "Check the Policy file for errors",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
policyPath, _ := cmd.Flags().GetString("file")
f, err := os.Open(policyPath)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
}
defer f.Close()
policyBytes, err := io.ReadAll(f)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
}
_, err = policy.NewPolicyManager(policyBytes, nil, views.Slice[types.NodeView]{})
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
}
SuccessOutput(nil, "Policy is valid", "")
},
}

View File

@@ -20,7 +20,7 @@ const (
func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
@@ -57,12 +57,14 @@ var listPreAuthKeys = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetUint64("user")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -83,6 +85,8 @@ var listPreAuthKeys = &cobra.Command{
if output != "" {
SuccessOutput(response.GetPreAuthKeys(), "", output)
return
}
tableData := pterm.TableData{
@@ -103,6 +107,13 @@ var listPreAuthKeys = &cobra.Command{
expiration = ColourTime(key.GetExpiration().AsTime())
}
var reusable string
if key.GetEphemeral() {
reusable = "N/A"
} else {
reusable = fmt.Sprintf("%v", key.GetReusable())
}
aclTags := ""
for _, tag := range key.GetAclTags() {
@@ -112,9 +123,9 @@ var listPreAuthKeys = &cobra.Command{
aclTags = strings.TrimLeft(aclTags, ",")
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), 10),
key.GetId(),
key.GetKey(),
strconv.FormatBool(key.GetReusable()),
reusable,
strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()),
expiration,
@@ -130,6 +141,8 @@ var listPreAuthKeys = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
@@ -141,15 +154,23 @@ var createPreAuthKeyCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetUint64("user")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
tags, _ := cmd.Flags().GetStringSlice("tags")
log.Trace().
Bool("reusable", reusable).
Bool("ephemeral", ephemeral).
Str("user", user).
Msg("Preparing to create preauthkey")
request := &v1.CreatePreAuthKeyRequest{
User: user,
Reusable: reusable,
@@ -166,6 +187,8 @@ var createPreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Could not parse duration: %s\n", err),
output,
)
return
}
expiration := time.Now().UTC().Add(time.Duration(duration))
@@ -176,7 +199,7 @@ var createPreAuthKeyCmd = &cobra.Command{
request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -187,6 +210,8 @@ var createPreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
output,
)
return
}
SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
@@ -206,12 +231,14 @@ var expirePreAuthKeyCmd = &cobra.Command{
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetUint64("user")
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -227,6 +254,8 @@ var expirePreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
output,
)
return
}
SuccessOutput(response, "Key expired", output)

View File

@@ -4,14 +4,11 @@ import (
"fmt"
"os"
"runtime"
"slices"
"strings"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/tcnksm/go-latest"
)
@@ -27,11 +24,6 @@ func init() {
return
}
if slices.Contains(os.Args, "policy") && slices.Contains(os.Args, "check") {
zerolog.SetGlobalLevel(zerolog.Disabled)
return
}
cobra.OnInitialize(initConfig)
rootCmd.PersistentFlags().
StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
@@ -57,79 +49,45 @@ func initConfig() {
}
}
cfg, err := types.GetHeadscaleConfig()
if err != nil {
log.Fatal().Caller().Err(err).Msg("Failed to get headscale configuration")
}
machineOutput := HasMachineOutputFlag()
zerolog.SetGlobalLevel(cfg.Log.Level)
// If the user has requested a "node" readable format,
// then disable login so the output remains valid.
if machineOutput {
zerolog.SetGlobalLevel(zerolog.Disabled)
}
logFormat := viper.GetString("log.format")
if logFormat == types.JSONLogFormat {
if cfg.Log.Format == types.JSONLogFormat {
log.Logger = log.Output(os.Stdout)
}
disableUpdateCheck := viper.GetBool("disable_check_updates")
if !disableUpdateCheck && !machineOutput {
versionInfo := types.GetVersionInfo()
if !cfg.DisableUpdateCheck && !machineOutput {
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
!versionInfo.Dirty {
Version != "dev" {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
Owner: "juanfont",
Repository: "headscale",
}
res, err := latest.Check(githubTag, versionInfo.Version)
res, err := latest.Check(githubTag, Version)
if err == nil && res.Outdated {
//nolint
log.Warn().Msgf(
fmt.Printf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
versionInfo.Version,
Version,
)
}
}
}
}
var prereleases = []string{"alpha", "beta", "rc", "dev"}
func isPreReleaseVersion(version string) bool {
for _, unstable := range prereleases {
if strings.Contains(version, unstable) {
return true
}
}
return false
}
// filterPreReleasesIfStable returns a function that filters out
// pre-release tags if the current version is stable.
// If the current version is a pre-release, it does not filter anything.
// versionFunc is a function that returns the current version string, it is
// a func for testability.
func filterPreReleasesIfStable(versionFunc func() string) func(string) bool {
return func(tag string) bool {
version := versionFunc()
// If we are on a pre-release version, then we do not filter anything
// as we want to recommend the user the latest pre-release.
if isPreReleaseVersion(version) {
return false
}
// If we are on a stable release, filter out pre-releases.
for _, ignore := range prereleases {
if strings.Contains(tag, ignore) {
return true
}
}
return false
}
}
var rootCmd = &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",

View File

@@ -1,293 +0,0 @@
package cli
import (
"testing"
)
func TestFilterPreReleasesIfStable(t *testing.T) {
tests := []struct {
name string
currentVersion string
tag string
expectedFilter bool
description string
}{
{
name: "stable version filters alpha tag",
currentVersion: "0.23.0",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "When on stable release, alpha tags should be filtered",
},
{
name: "stable version filters beta tag",
currentVersion: "0.23.0",
tag: "v0.24.0-beta.2",
expectedFilter: true,
description: "When on stable release, beta tags should be filtered",
},
{
name: "stable version filters rc tag",
currentVersion: "0.23.0",
tag: "v0.24.0-rc.1",
expectedFilter: true,
description: "When on stable release, rc tags should be filtered",
},
{
name: "stable version allows stable tag",
currentVersion: "0.23.0",
tag: "v0.24.0",
expectedFilter: false,
description: "When on stable release, stable tags should not be filtered",
},
{
name: "alpha version allows alpha tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-alpha.2",
expectedFilter: false,
description: "When on alpha release, alpha tags should not be filtered",
},
{
name: "alpha version allows beta tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on alpha release, beta tags should not be filtered",
},
{
name: "alpha version allows rc tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on alpha release, rc tags should not be filtered",
},
{
name: "alpha version allows stable tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on alpha release, stable tags should not be filtered",
},
{
name: "beta version allows alpha tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on beta release, alpha tags should not be filtered",
},
{
name: "beta version allows beta tag",
currentVersion: "0.23.0-beta.2",
tag: "v0.24.0-beta.3",
expectedFilter: false,
description: "When on beta release, beta tags should not be filtered",
},
{
name: "beta version allows rc tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on beta release, rc tags should not be filtered",
},
{
name: "beta version allows stable tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on beta release, stable tags should not be filtered",
},
{
name: "rc version allows alpha tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on rc release, alpha tags should not be filtered",
},
{
name: "rc version allows beta tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on rc release, beta tags should not be filtered",
},
{
name: "rc version allows rc tag",
currentVersion: "0.23.0-rc.2",
tag: "v0.24.0-rc.3",
expectedFilter: false,
description: "When on rc release, rc tags should not be filtered",
},
{
name: "rc version allows stable tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on rc release, stable tags should not be filtered",
},
{
name: "stable version with patch filters alpha",
currentVersion: "0.23.1",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "Stable version with patch number should filter alpha tags",
},
{
name: "stable version with patch allows stable",
currentVersion: "0.23.1",
tag: "v0.24.0",
expectedFilter: false,
description: "Stable version with patch number should allow stable tags",
},
{
name: "tag with alpha substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-alpha.1",
expectedFilter: true,
description: "Tags with alpha in version string should be filtered on stable",
},
{
name: "tag with beta substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-beta.1",
expectedFilter: true,
description: "Tags with beta in version string should be filtered on stable",
},
{
name: "tag with rc substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-rc.1",
expectedFilter: true,
description: "Tags with rc in version string should be filtered on stable",
},
{
name: "empty tag on stable version",
currentVersion: "0.23.0",
tag: "",
expectedFilter: false,
description: "Empty tags should not be filtered",
},
{
name: "dev version allows all tags",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "Dev versions should not filter any tags (pre-release allows all)",
},
{
name: "stable version filters dev tag",
currentVersion: "0.23.0",
tag: "v0.24.0-dev",
expectedFilter: true,
description: "When on stable release, dev tags should be filtered",
},
{
name: "dev version allows dev tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-dev.1",
expectedFilter: false,
description: "When on dev release, dev tags should not be filtered",
},
{
name: "dev version allows stable tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0",
expectedFilter: false,
description: "When on dev release, stable tags should not be filtered",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := filterPreReleasesIfStable(func() string { return tt.currentVersion })(tt.tag)
if result != tt.expectedFilter {
t.Errorf("%s: got %v, want %v\nDescription: %s\nCurrent version: %s, Tag: %s",
tt.name,
result,
tt.expectedFilter,
tt.description,
tt.currentVersion,
tt.tag,
)
}
})
}
}
func TestIsPreReleaseVersion(t *testing.T) {
tests := []struct {
name string
version string
expected bool
description string
}{
{
name: "stable version",
version: "0.23.0",
expected: false,
description: "Stable version should not be pre-release",
},
{
name: "alpha version",
version: "0.23.0-alpha.1",
expected: true,
description: "Alpha version should be pre-release",
},
{
name: "beta version",
version: "0.23.0-beta.1",
expected: true,
description: "Beta version should be pre-release",
},
{
name: "rc version",
version: "0.23.0-rc.1",
expected: true,
description: "RC version should be pre-release",
},
{
name: "version with alpha substring",
version: "0.23.0-alphabetical",
expected: true,
description: "Version containing 'alpha' should be pre-release",
},
{
name: "version with beta substring",
version: "0.23.0-betamax",
expected: true,
description: "Version containing 'beta' should be pre-release",
},
{
name: "dev version",
version: "0.23.0-dev",
expected: true,
description: "Dev version should be pre-release",
},
{
name: "empty version",
version: "",
expected: false,
description: "Empty version should not be pre-release",
},
{
name: "version with patch number",
version: "0.23.1",
expected: false,
description: "Stable version with patch should not be pre-release",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isPreReleaseVersion(tt.version)
if result != tt.expected {
t.Errorf("%s: got %v, want %v\nDescription: %s\nVersion: %s",
tt.name,
result,
tt.expected,
tt.description,
tt.version,
)
}
})
}
}

cmd/headscale/cli/routes.go (new file, 298 lines)

@@ -0,0 +1,298 @@
package cli
import (
"fmt"
"log"
"net/netip"
"strconv"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
const (
Base10 = 10
)
func init() {
rootCmd.AddCommand(routesCmd)
listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
routesCmd.AddCommand(listRoutesCmd)
enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err := enableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(enableRouteCmd)
disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = disableRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(disableRouteCmd)
deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
err = deleteRouteCmd.MarkFlagRequired("route")
if err != nil {
log.Fatalf(err.Error())
}
routesCmd.AddCommand(deleteRouteCmd)
}
var routesCmd = &cobra.Command{
Use: "routes",
Short: "Manage the routes of Headscale",
Aliases: []string{"r", "route"},
}
var listRoutesCmd = &cobra.Command{
Use: "list",
Short: "List all routes",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
machineID, err := cmd.Flags().GetUint64("identifier")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
var routes []*v1.Route
if machineID == 0 {
response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetRoutes(), "", output)
return
}
routes = response.GetRoutes()
} else {
response, err := client.GetNodeRoutes(ctx, &v1.GetNodeRoutesRequest{
NodeId: machineID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot get routes for node %d: %s", machineID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetRoutes(), "", output)
return
}
routes = response.GetRoutes()
}
tableData := routesToPtables(routes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
return
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var enableRouteCmd = &cobra.Command{
Use: "enable",
Short: "Set a route as enabled",
Long: `This command will make as enabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
var disableRouteCmd = &cobra.Command{
Use: "disable",
Short: "Set as disabled a given route",
Long: `This command will make as disabled a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
var deleteRouteCmd = &cobra.Command{
Use: "delete",
Short: "Delete a given route",
Long: `This command will delete a given route.`,
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
routeID, err := cmd.Flags().GetUint64("route")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting machine id from flag: %s", err),
output,
)
return
}
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
RouteId: routeID,
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response, "", output)
return
}
},
}
// routesToPtables converts the list of routes to a nice table.
func routesToPtables(routes []*v1.Route) pterm.TableData {
tableData := pterm.TableData{{"ID", "Node", "Prefix", "Advertised", "Enabled", "Primary"}}
for _, route := range routes {
var isPrimaryStr string
prefix, err := netip.ParsePrefix(route.GetPrefix())
if err != nil {
log.Printf("Error parsing prefix %s: %s", route.GetPrefix(), err)
continue
}
if prefix == types.ExitRouteV4 || prefix == types.ExitRouteV6 {
isPrimaryStr = "-"
} else {
isPrimaryStr = strconv.FormatBool(route.GetIsPrimary())
}
tableData = append(tableData,
[]string{
strconv.FormatUint(route.GetId(), Base10),
route.GetNode().GetGivenName(),
route.GetPrefix(),
strconv.FormatBool(route.GetAdvertised()),
strconv.FormatBool(route.GetEnabled()),
isPrimaryStr,
})
}
return tableData
}
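Illustrative usage (not part of the diff): a minimal sketch of rendering a route table the same way the list command does, assuming it runs inside package cli with the imports shown above; the route values are made up.

```go
// Sketch only: builds one made-up route and renders it with pterm.
func exampleRenderRoutes() {
	routes := []*v1.Route{
		{
			Id:         1,
			Prefix:     "10.0.0.0/24",
			Advertised: true,
			Enabled:    true,
			IsPrimary:  true,
			Node:       &v1.Node{GivenName: "router-1"},
		},
	}

	tableData := routesToPtables(routes)
	if err := pterm.DefaultTable.WithHasHeader().WithData(tableData).Render(); err != nil {
		log.Printf("Failed to render table: %s", err)
	}
}
```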


@@ -1,13 +1,8 @@
package cli
import (
"errors"
"fmt"
"net/http"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"github.com/tailscale/squibble"
)
func init() {
@@ -21,20 +16,14 @@ var serveCmd = &cobra.Command{
return nil
},
Run: func(cmd *cobra.Command, args []string) {
app, err := newHeadscaleServerWithConfig()
app, err := getHeadscaleApp()
if err != nil {
var squibbleErr squibble.ValidationError
if errors.As(err, &squibbleErr) {
fmt.Printf("SQLite schema failed to validate:\n")
fmt.Println(squibbleErr.Diff)
}
log.Fatal().Caller().Err(err).Msg("Error initializing")
}
err = app.Serve()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
log.Fatal().Caller().Err(err).Msg("Headscale ran into an error and had to shut down.")
if err != nil {
log.Fatal().Caller().Err(err).Msg("Error starting server")
}
},
}


@@ -3,54 +3,21 @@ package cli
import (
"errors"
"fmt"
"net/url"
"strconv"
survey "github.com/AlecAivazis/survey/v2"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
)
func usernameAndIDFlag(cmd *cobra.Command) {
cmd.Flags().Int64P("identifier", "i", -1, "User identifier (ID)")
cmd.Flags().StringP("name", "n", "", "Username")
}
// usernameAndIDFromFlag returns the username and ID from the flags of the command.
// If both are empty, it will exit the program with an error.
func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string) {
username, _ := cmd.Flags().GetString("name")
identifier, _ := cmd.Flags().GetInt64("identifier")
if username == "" && identifier < 0 {
err := errors.New("--name or --identifier flag is required")
ErrorOutput(
err,
"Cannot rename user: "+status.Convert(err).Message(),
"",
)
}
return uint64(identifier), username
}
func init() {
rootCmd.AddCommand(userCmd)
userCmd.AddCommand(createUserCmd)
createUserCmd.Flags().StringP("display-name", "d", "", "Display name")
createUserCmd.Flags().StringP("email", "e", "", "Email")
createUserCmd.Flags().StringP("picture-url", "p", "", "Profile picture URL")
userCmd.AddCommand(listUsersCmd)
usernameAndIDFlag(listUsersCmd)
listUsersCmd.Flags().StringP("email", "e", "", "Email")
userCmd.AddCommand(destroyUserCmd)
usernameAndIDFlag(destroyUserCmd)
userCmd.AddCommand(renameUserCmd)
usernameAndIDFlag(renameUserCmd)
renameUserCmd.Flags().StringP("new-name", "r", "", "New username")
renameNodeCmd.MarkFlagRequired("new-name")
}
var errMissingParameter = errors.New("missing parameters")
@@ -77,7 +44,7 @@ var createUserCmd = &cobra.Command{
userName := args[0]
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
@@ -85,36 +52,19 @@ var createUserCmd = &cobra.Command{
request := &v1.CreateUserRequest{Name: userName}
if displayName, _ := cmd.Flags().GetString("display-name"); displayName != "" {
request.DisplayName = displayName
}
if email, _ := cmd.Flags().GetString("email"); email != "" {
request.Email = email
}
if pictureURL, _ := cmd.Flags().GetString("picture-url"); pictureURL != "" {
if _, err := url.Parse(pictureURL); err != nil {
ErrorOutput(
err,
fmt.Sprintf(
"Invalid Picture URL: %s",
err,
),
output,
)
}
request.PictureUrl = pictureURL
}
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
response, err := client.CreateUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot create user: "+status.Convert(err).Message(),
fmt.Sprintf(
"Cannot create user: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(response.GetUser(), "User created", output)
@@ -122,61 +72,70 @@ var createUserCmd = &cobra.Command{
}
var destroyUserCmd = &cobra.Command{
Use: "destroy --identifier ID or --name NAME",
Use: "destroy NAME",
Short: "Destroys a user",
Aliases: []string{"delete"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
id, username := usernameAndIDFromFlag(cmd)
request := &v1.ListUsersRequest{
Name: username,
Id: id,
userName := args[0]
request := &v1.GetUserRequest{
Name: userName,
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
users, err := client.ListUsers(ctx, request)
_, err := client.GetUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
fmt.Sprintf("Error: %s", status.Convert(err).Message()),
output,
)
}
if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
return
}
user := users.GetUsers()[0]
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo(fmt.Sprintf(
"Do you want to remove the user %q (%d) and any associated preauthkeys?",
user.GetName(), user.GetId(),
))
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the user '%s' and any associated preauthkeys?",
userName,
),
}
err := survey.AskOne(prompt, &confirm)
if err != nil {
return
}
}
if confirm || force {
request := &v1.DeleteUserRequest{Id: user.GetId()}
request := &v1.DeleteUserRequest{Name: userName}
response, err := client.DeleteUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot destroy user: "+status.Convert(err).Message(),
fmt.Sprintf(
"Cannot destroy user: %s",
status.Convert(err).Message(),
),
output,
)
return
}
SuccessOutput(response, "User destroyed", output)
} else {
@@ -192,48 +151,36 @@ var listUsersCmd = &cobra.Command{
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
request := &v1.ListUsersRequest{}
id, _ := cmd.Flags().GetInt64("identifier")
username, _ := cmd.Flags().GetString("name")
email, _ := cmd.Flags().GetString("email")
// filter by one param at most
switch {
case id > 0:
request.Id = uint64(id)
case username != "":
request.Name = username
case email != "":
request.Email = email
}
response, err := client.ListUsers(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot get users: "+status.Convert(err).Message(),
fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
output,
)
return
}
if output != "" {
SuccessOutput(response.GetUsers(), "", output)
return
}
tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
tableData := pterm.TableData{{"ID", "Name", "Created"}}
for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
strconv.FormatUint(user.GetId(), 10),
user.GetDisplayName(),
user.GetId(),
user.GetName(),
user.GetEmail(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
@@ -245,59 +192,48 @@ var listUsersCmd = &cobra.Command{
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var renameUserCmd = &cobra.Command{
Use: "rename",
Use: "rename OLD_NAME NEW_NAME",
Short: "Renames a user",
Aliases: []string{"mv"},
Args: func(cmd *cobra.Command, args []string) error {
expectedArguments := 2
if len(args) < expectedArguments {
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
ctx, client, conn, cancel := getHeadscaleCLIClient()
defer cancel()
defer conn.Close()
id, username := usernameAndIDFromFlag(cmd)
listReq := &v1.ListUsersRequest{
Name: username,
Id: id,
request := &v1.RenameUserRequest{
OldName: args[0],
NewName: args[1],
}
users, err := client.ListUsers(ctx, listReq)
response, err := client.RenameUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
fmt.Sprintf(
"Cannot rename user: %s",
status.Convert(err).Message(),
),
output,
)
}
if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
}
newName, _ := cmd.Flags().GetString("new-name")
renameReq := &v1.RenameUserRequest{
OldId: id,
NewName: newName,
}
response, err := client.RenameUser(ctx, renameReq)
if err != nil {
ErrorOutput(
err,
"Cannot rename user: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(response.GetUser(), "User renamed", output)


@@ -6,9 +6,11 @@ import (
"encoding/json"
"fmt"
"os"
"reflect"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
@@ -23,25 +25,40 @@ const (
SocketWritePermissions = 0o666
)
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
cfg, err := types.LoadServerConfig()
func getHeadscaleApp() (*hscontrol.Headscale, error) {
cfg, err := types.GetHeadscaleConfig()
if err != nil {
return nil, fmt.Errorf(
"loading configuration: %w",
"failed to load configuration while creating headscale instance: %w",
err,
)
}
app, err := hscontrol.NewHeadscale(cfg)
if err != nil {
return nil, fmt.Errorf("creating new headscale: %w", err)
return nil, err
}
// We are doing this here, as in the future could be cool to have it also hot-reload
if cfg.ACL.PolicyPath != "" {
aclPath := util.AbsolutePathFromConfigPath(cfg.ACL.PolicyPath)
pol, err := policy.LoadACLPolicyFromPath(aclPath)
if err != nil {
log.Fatal().
Str("path", aclPath).
Err(err).
Msg("Could not load the ACL policy")
}
app.ACLPolicy = pol
}
return app, nil
}
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg, err := types.LoadCLIConfig()
func getHeadscaleCLIClient() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
cfg, err := types.GetHeadscaleConfig()
if err != nil {
log.Fatal().
Err(err).
@@ -72,7 +89,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
// Try to give the user better feedback if we cannot write to the headscale
// socket.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) // nolint
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
if err != nil {
if os.IsPermission(err) {
log.Fatal().
@@ -130,7 +147,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
return ctx, client, conn, cancel
}
func output(result interface{}, override string, outputFormat string) string {
func SuccessOutput(result interface{}, override string, outputFormat string) {
var jsonBytes []byte
var err error
switch outputFormat {
@@ -150,34 +167,22 @@ func output(result interface{}, override string, outputFormat string) string {
log.Fatal().Err(err).Msg("failed to unmarshal output")
}
default:
// nolint
return override
//nolint
fmt.Println(override)
return
}
return string(jsonBytes)
//nolint
fmt.Println(string(jsonBytes))
}
// SuccessOutput prints the result to stdout and exits with status code 0.
func SuccessOutput(result interface{}, override string, outputFormat string) {
fmt.Println(output(result, override, outputFormat))
os.Exit(0)
}
// ErrorOutput prints an error message to stderr and exits with status code 1.
func ErrorOutput(errResult error, override string, outputFormat string) {
type errOutput struct {
Error string `json:"error"`
}
var errorMessage string
if errResult != nil {
errorMessage = errResult.Error()
} else {
errorMessage = override
}
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errorMessage}, override, outputFormat))
os.Exit(1)
SuccessOutput(errOutput{errResult.Error()}, override, outputFormat)
}
func HasMachineOutputFlag() bool {
@@ -207,3 +212,13 @@ func (t tokenAuth) GetRequestMetadata(
func (tokenAuth) RequireTransportSecurity() bool {
return true
}
func contains[T string](ts []T, t T) bool {
for _, v := range ts {
if reflect.DeepEqual(v, t) {
return true
}
}
return false
}

Some files were not shown because too many files have changed in this diff.