Annotate magicsock as a start, and add localapi and debug handlers
with the Prometheus-format exporter.
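For illustration, a minimal sketch of an expvar-backed Prometheus-format handler; the handler path, metric name, and type mapping here are assumptions, not the exact exporter:

```go
package main

import (
	"expvar"
	"fmt"
	"log"
	"net/http"
)

// varzHandler dumps expvar-registered metrics in the Prometheus text
// exposition format. It assumes expvar.Int values are counters and
// expvar.Float values are gauges; a real exporter needs richer typing.
func varzHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; version=0.0.4")
	expvar.Do(func(kv expvar.KeyValue) {
		switch v := kv.Value.(type) {
		case *expvar.Int:
			fmt.Fprintf(w, "# TYPE %s counter\n%s %d\n", kv.Key, kv.Key, v.Value())
		case *expvar.Float:
			fmt.Fprintf(w, "# TYPE %s gauge\n%s %v\n", kv.Key, kv.Key, v.Value())
		}
	})
}

func main() {
	recvCount := expvar.NewInt("magicsock_recv_total") // hypothetical metric name
	recvCount.Add(1)
	http.HandleFunc("/debug/varz", varzHandler) // handler path is an assumption
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```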
Updates #3307
Change-Id: I47c5d535fe54424741df143d052760387248f8d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
github.com/go-multierror/multierror served us well.
But we need a few features from it (implementing Is),
and it's not worth maintaining a fork of such a small module.
Instead, I did a clean-room implementation inspired by its API.
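As a sketch of the shape such a clean-room package can take (the type name and error rendering here are illustrative, not necessarily the final API):

```go
package multierr

import (
	"errors"
	"strings"
)

// Error combines one or more underlying errors.
type Error struct {
	errs []error
}

// New returns nil for no errors, the sole error if there is exactly
// one, and an *Error combining them otherwise.
func New(errs ...error) error {
	switch len(errs) {
	case 0:
		return nil
	case 1:
		return errs[0]
	}
	return &Error{errs: append([]error(nil), errs...)}
}

func (e *Error) Error() string {
	var b strings.Builder
	b.WriteString("multiple errors:")
	for _, err := range e.errs {
		b.WriteString("\n\t")
		b.WriteString(err.Error())
	}
	return b.String()
}

// Is reports whether any underlying error matches target, which is
// the feature that makes errors.Is work through the combined error.
func (e *Error) Is(target error) bool {
	for _, err := range e.errs {
		if errors.Is(err, target) {
			return true
		}
	}
	return false
}
```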
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
testing.AllocsPerRun measures the total allocations performed
by the entire program while repeatedly executing a function f.
If some unrelated part of the rest of the program happens to
allocate a lot during that period, you end up with a test failure.
Ideally, the rest of the program would be silent while
testing.AllocsPerRun executes.
Realistically, that is often unachievable.
AllocsPerRun attempts to mitigate this by setting GOMAXPROCS to 1,
but that doesn't prevent other code from running;
it only makes it less likely.
You can also mitigate this by passing a large iteration count to
AllocsPerRun, but that is unreliable and needlessly expensive.
Unlike most of package testing, AllocsPerRun doesn't use any
toolchain magic, so we can just write a replacement.
One wild idea is to change how we count mallocs.
Instead of using runtime.MemStats, turn on memory profiling with a
memprofilerate of 1. Discard all samples from the profile whose stack
does not contain testing.AllocsPerRun. Count the remaining samples to
determine the number of mallocs.
That's fun, but overkill.
Instead, this change adds a simple API that attempts to get f to
run at least once with a target number of allocations.
This is useful when you know that f should allocate consistently.
We can then assume that any iterations with too many allocations
are probably due to one-time costs or background noise.
This suits most uses of AllocsPerRun.
Ratcheting tests tend to be significantly less flaky,
because they are biased towards success.
They can also be faster, because they can exit early
once success has been reached.
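A minimal sketch of such a ratcheting helper, using runtime.ReadMemStats directly; the package home, name, signature, and run cap are assumptions for illustration:

```go
package tstest // hypothetical home for the helper

import (
	"runtime"
	"testing"
)

// MinAllocsPerRun asserts that f performs at most target allocations
// in at least one run. Runs that allocate more are attributed to
// one-time costs or background noise and don't fail the test alone.
func MinAllocsPerRun(t *testing.T, target uint64, f func()) {
	t.Helper()
	// Set GOMAXPROCS to 1 to make interference less likely; the
	// deferred call restores the previous value on return.
	defer runtime.GOMAXPROCS(runtime.GOMAXPROCS(1))

	var ms runtime.MemStats
	var best uint64
	const maxRuns = 1000
	for i := 0; i < maxRuns; i++ {
		runtime.ReadMemStats(&ms)
		before := ms.Mallocs
		f()
		runtime.ReadMemStats(&ms)
		if n := ms.Mallocs - before; n <= target {
			return // success: exit early once the ratchet hits the target
		} else if i == 0 || n < best {
			best = n
		}
	}
	t.Errorf("allocs per run: best %d, want at most %d in at least one of %d runs", best, target, maxRuns)
}
```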
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
For the service, all we need to do is handle the `svc.SessionChange` command.
Upon receipt of a `windows.WTS_SESSION_UNLOCK` event, we fire off a goroutine to flush the DNS cache.
(Windows expects responses to service requests to be quick, so we don't want to do that synchronously.)
This is gated on an integral registry value named `FlushDNSOnSessionUnlock`,
whose value we obtain during service initialization.
(See [this link](https://docs.microsoft.com/en-us/windows/win32/api/winsvc/nc-winsvc-lphandler_function_ex) for information re: handling `SERVICE_CONTROL_SESSIONCHANGE`.)
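A sketch of the handler shape, assuming the golang.org/x/sys/windows/svc and golang.org/x/sys/windows/registry packages; the registry key path and helper names are illustrative:

```go
package main

import (
	"golang.org/x/sys/windows"
	"golang.org/x/sys/windows/registry"
	"golang.org/x/sys/windows/svc"
)

var flushDNSOnSessionUnlock bool // read once during service initialization

func initFlushFlag() {
	// The key path here is an assumption for illustration.
	k, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Tailscale IPN`, registry.READ)
	if err != nil {
		return
	}
	defer k.Close()
	if v, _, err := k.GetIntegerValue("FlushDNSOnSessionUnlock"); err == nil {
		flushDNSOnSessionUnlock = v != 0
	}
}

func flushDNSCache() {
	// Placeholder: invoke the actual DNS cache flush here.
}

type ipnService struct{}

func (s *ipnService) Execute(args []string, r <-chan svc.ChangeRequest, changes chan<- svc.Status) (bool, uint32) {
	changes <- svc.Status{
		State:   svc.Running,
		Accepts: svc.AcceptStop | svc.AcceptSessionChange, // opt in to session-change notifications
	}
	for c := range r {
		switch c.Cmd {
		case svc.Stop:
			changes <- svc.Status{State: svc.StopPending}
			return false, 0
		case svc.SessionChange:
			// Windows expects a quick response, so flush asynchronously.
			if flushDNSOnSessionUnlock && c.EventType == windows.WTS_SESSION_UNLOCK {
				go flushDNSCache()
			}
		}
	}
	return false, 0
}
```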
Fixes #2956
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
This adds support for tailscaled to be an HTTP proxy server.
It shares the same backend dialing code as the SOCKS5 server, but the
client protocol is HTTP (including CONNECT), rather than SOCKS.
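A minimal sketch of an HTTP CONNECT handler of this kind, with a pluggable dial function standing in for the shared backend dialer; the names and listen address are illustrative:

```go
package main

import (
	"context"
	"io"
	"log"
	"net"
	"net/http"
)

// dialBackend stands in for the dialer shared with the SOCKS5 server.
var dialBackend = func(ctx context.Context, network, addr string) (net.Conn, error) {
	return (&net.Dialer{}).DialContext(ctx, network, addr)
}

// proxyHandler implements the CONNECT half of an HTTP proxy: dial the
// target, reply 200, then splice bytes in both directions.
func proxyHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodConnect {
		http.Error(w, "only CONNECT is sketched here", http.StatusMethodNotAllowed)
		return
	}
	back, err := dialBackend(r.Context(), "tcp", r.Host)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer back.Close()
	hj, ok := w.(http.Hijacker)
	if !ok {
		http.Error(w, "cannot hijack connection", http.StatusInternalServerError)
		return
	}
	client, brw, err := hj.Hijack()
	if err != nil {
		return
	}
	defer client.Close()
	io.WriteString(client, "HTTP/1.1 200 OK\r\n\r\n")
	go io.Copy(back, brw) // client -> backend, draining any buffered bytes
	io.Copy(client, back) // backend -> client; returns when the backend closes
}

func main() {
	log.Fatal(http.ListenAndServe("localhost:3128", http.HandlerFunc(proxyHandler)))
}
```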
Fixes #2289
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Updates #2781 (might even fix it, but its real issue is that
SetPrivateKey starts a ReSTUN goroutine which then logs; that
bug and data race existed before MemLogger did)
* Revert "Revert "types/key: add MachinePrivate and MachinePublic.""
This reverts commit 61c3b98a24.
Signed-off-by: David Anderson <danderson@tailscale.com>
* types/key: add ControlPrivate, with custom serialization.
ControlPrivate is just a MachinePrivate that serializes differently
in JSON, to be compatible with how the Tailscale control plane
historically serialized its private key.
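Sketching the idea (the field names and the legacy wire format here are assumptions for illustration):

```go
package key

import (
	"encoding/hex"
	"encoding/json"
	"errors"
)

// MachinePrivate is the underlying private key.
type MachinePrivate struct {
	k [32]byte
}

// ControlPrivate is a MachinePrivate whose JSON form matches the
// control plane's historical serialization.
type ControlPrivate struct {
	mkey MachinePrivate
}

// MarshalJSON writes the legacy format: here, a bare hex string
// rather than MachinePrivate's own serialization.
func (p ControlPrivate) MarshalJSON() ([]byte, error) {
	return json.Marshal(hex.EncodeToString(p.mkey.k[:]))
}

// UnmarshalJSON reads the legacy format back.
func (p *ControlPrivate) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	raw, err := hex.DecodeString(s)
	if err != nil {
		return err
	}
	if len(raw) != len(p.mkey.k) {
		return errors.New("key: bad ControlPrivate length")
	}
	copy(p.mkey.k[:], raw)
	return nil
}
```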
Signed-off-by: David Anderson <danderson@tailscale.com>
Plumb throughout the codebase as a replacement for the mixed use of
tailcfg.MachineKey and wgkey.Private/Public.
Signed-off-by: David Anderson <danderson@tailscale.com>
I have seen this happen once in the VM test (caused by an EOF, I
believe on shutdown), and it didn't need to cause the test to fail.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
The tests build fine on other Unixes; they just can't run there.
But there is already a t.Skip by default, so `go test` ends up
working fine elsewhere and checks that the code compiles.
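The pattern looks roughly like this (the flag name and messages are illustrative):

```go
package vms

import (
	"flag"
	"runtime"
	"testing"
)

var runVMTests = flag.Bool("run-vm-tests", false, "run the QEMU-based VM tests")

func TestVMIntegration(t *testing.T) {
	if !*runVMTests {
		// Default path on every platform: compile, then skip.
		t.Skip("skipping VM test; set -run-vm-tests to enable")
	}
	if runtime.GOOS != "linux" {
		t.Fatalf("VM tests require linux, got %s", runtime.GOOS)
	}
	// ... boot the guest VMs and run the real test ...
}
```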
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
This uses a neat little tool to dump the output of DNS queries to
standard out. This is the first end-to-end test of DNS that runs against
actual Linux systems. The /etc/resolv.conf test may look superfluous;
however, it will help correlate system state if one of the DNS
tests fails.
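In spirit, the dumper is no more than this (standard library only; the real tool's name and output format aren't specified here):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"os"
)

func main() {
	// PreferGo forces the pure-Go resolver, which reads /etc/resolv.conf.
	res := &net.Resolver{PreferGo: true}
	for _, host := range os.Args[1:] {
		addrs, err := res.LookupHost(context.Background(), host)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", host, err)
			continue
		}
		for _, a := range addrs {
			fmt.Printf("%s\t%s\n", host, a)
		}
	}
}
```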
Signed-off-by: Christine Dodrill <xe@tailscale.com>
Fix a few test printing issues when tests fail.
QEMU console output is super useful when something is wrong in the
harness and we cannot even bring up the tests.
It's also useful for figuring out where all the time goes in tests.
It's a little noisy, but not too noisy as long as you're only running
one VM as part of the tests, which is my plan.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
Also remove extra distros for now.
We can bring them back later if useful.
Though our most important distros are the two Ubuntu releases kept here,
Debian stable, and Raspbian (not currently supported).
And before doing more Linux, we should do Windows.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
The VM test has two tailscaled instances running and interleaves the
logs. Without a prefix it is impossible to figure out what is going on.
It might be even better to include the [ABCD] node prefix here as well.
Unfortunately lots of interesting logs happen before tailscaled has a
node key, so it wouldn't be a replacement for a short ID.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
By default, httptest listens only on the loopback interface.
Instead, listen on the IP the user asked for.
The VM test needs this, as it wants to start DERP and STUN
servers on the host that can be reached by guest VMs.
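A sketch of the resulting pattern with stock httptest, where the caller supplies the bind address (the address below is an example):

```go
package main

import (
	"net"
	"net/http"
	"net/http/httptest"
)

// serveOn starts an httptest server bound to addr instead of loopback,
// e.g. "192.168.1.2:0" for an IP reachable by guest VMs.
func serveOn(addr string, h http.Handler) (*httptest.Server, error) {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, err
	}
	srv := httptest.NewUnstartedServer(h)
	srv.Listener.Close() // drop the default loopback listener
	srv.Listener = ln
	srv.Start()
	return srv, nil
}
```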
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
This is useful for manual performance testing
of networks with many nodes.
I imagine it'll grow more knobs over time.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>