This will spin up a few VMs and then try to make them connect to a
testcontrol server.
Updates #1988
Signed-off-by: Christine Dodrill <xe@tailscale.com>
When tailscaled starts up, these lines run:
    func run() error {
        // ...
        pol := logpolicy.New("tailnode.log.tailscale.io")
        pol.SetVerbosityLevel(args.verbose)
        // ...
    }
If there are old log entries present, they immediately start getting uploaded. This races with the call to pol.SetVerbosityLevel.
This manifested itself as a test failure in tailscale.com/tstest/integration
when run with -race:
WARNING: DATA RACE
Read at 0x00c0001bc970 by goroutine 24:
tailscale.com/logtail.(*Logger).Write()
/Users/josh/t/corp/oss/logtail/logtail.go:517 +0x27c
log.(*Logger).Output()
/Users/josh/go/ts/src/log/log.go:184 +0x2b8
log.Printf()
/Users/josh/go/ts/src/log/log.go:323 +0x94
tailscale.com/logpolicy.newLogtailTransport.func1()
/Users/josh/t/corp/oss/logpolicy/logpolicy.go:509 +0x36c
net/http.(*Transport).dial()
/Users/josh/go/ts/src/net/http/transport.go:1168 +0x238
net/http.(*Transport).dialConn()
/Users/josh/go/ts/src/net/http/transport.go:1606 +0x21d0
net/http.(*Transport).dialConnFor()
/Users/josh/go/ts/src/net/http/transport.go:1448 +0xe4
Previous write at 0x00c0001bc970 by main goroutine:
tailscale.com/logtail.(*Logger).SetVerbosityLevel()
/Users/josh/t/corp/oss/logtail/logtail.go:131 +0x98
tailscale.com/logpolicy.(*Policy).SetVerbosityLevel()
/Users/josh/t/corp/oss/logpolicy/logpolicy.go:463 +0x60
main.run()
/Users/josh/t/corp/oss/cmd/tailscaled/tailscaled.go:178 +0x50
main.main()
/Users/josh/t/corp/oss/cmd/tailscaled/tailscaled.go:163 +0x71c
Goroutine 24 (running) created at:
net/http.(*Transport).queueForDial()
/Users/josh/go/ts/src/net/http/transport.go:1417 +0x4d8
net/http.(*Transport).getConn()
/Users/josh/go/ts/src/net/http/transport.go:1371 +0x5b8
net/http.(*Transport).roundTrip()
/Users/josh/go/ts/src/net/http/transport.go:585 +0x7f4
net/http.(*Transport).RoundTrip()
/Users/josh/go/ts/src/net/http/roundtrip.go:17 +0x30
net/http.send()
/Users/josh/go/ts/src/net/http/client.go:251 +0x4f0
net/http.(*Client).send()
/Users/josh/go/ts/src/net/http/client.go:175 +0x148
net/http.(*Client).do()
/Users/josh/go/ts/src/net/http/client.go:717 +0x1d0
net/http.(*Client).Do()
/Users/josh/go/ts/src/net/http/client.go:585 +0x358
tailscale.com/logtail.(*Logger).upload()
/Users/josh/t/corp/oss/logtail/logtail.go:367 +0x334
tailscale.com/logtail.(*Logger).uploading()
/Users/josh/t/corp/oss/logtail/logtail.go:289 +0xec
Rather than complicate the logpolicy API,
allow the verbosity to be adjusted concurrently.
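A minimal sketch of that approach, assuming the level lives in an int64 field
on the logtail Logger (the field name is an assumption, not the actual code):

    package logtail

    import "sync/atomic"

    // Logger is trimmed to the one field relevant here.
    type Logger struct {
        verbose int64 // accessed atomically; higher is more verbose
    }

    // SetVerbosityLevel may run while the upload goroutine is already
    // reading the level in Write, so store it atomically.
    func (l *Logger) SetVerbosityLevel(v int) {
        atomic.StoreInt64(&l.verbose, int64(v))
    }

    // verbosityLevel is what Write consults before deciding to log.
    func (l *Logger) verbosityLevel() int {
        return int(atomic.LoadInt64(&l.verbose))
    }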
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Pull in the latest version of wireguard-windows.
Switch to upstream wireguard-go.
This requires reverting all of our import paths.
Unfortunately, both changes have to happen at the same time.
The wireguard-go change is very low risk,
as that commit matches our fork almost exactly.
(The only changes are import paths, CI files, and a go.mod entry.)
So if there are issues as a result of this commit,
the first place to look is wireguard-windows changes.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We repeat many peers each time we call SetPeers.
Instead of constructing strings for them from scratch every time,
keep strings alive across iterations.
name old time/op new time/op delta
SetPeers-8 3.58µs ± 1% 2.41µs ± 1% -32.60% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
SetPeers-8 2.53kB ± 0% 1.30kB ± 0% -48.73% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
SetPeers-8 99.0 ± 0% 16.0 ± 0% -83.84% (p=0.000 n=10+10)
We could reduce alloc/op 12% and allocs/op 23% if strs had
type map[string]strCache instead of map[string]*strCache,
but that wipes out the execution time impact.
Given that re-use is the most common scenario, let's optimize for it.
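A minimal sketch of the reuse idea, using the strCache and strs names from
above; the package, fields, and lookup shape are assumptions, not the actual code:

    package wgcfg

    // strCache holds the strings previously built for one peer.
    type strCache struct {
        lines []string // config strings built for this peer last time
        used  bool     // set when the peer is seen in the current SetPeers call
    }

    type peerStrings struct {
        strs map[string]*strCache // keyed by peer public key
    }

    // get returns the cached strings for key, building them only on a miss,
    // so repeated peers cost a map lookup instead of fresh allocations.
    func (p *peerStrings) get(key string, build func() []string) []string {
        c, ok := p.strs[key]
        if !ok {
            c = &strCache{lines: build()}
            p.strs[key] = c
        }
        c.used = true
        return c.lines
    }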
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
e66d4e4c81 added AppendTo methods
to some key types. Their marshaled form is longer than 64 bytes.
name old time/op new time/op delta
Hash-8 15.5µs ± 1% 14.8µs ± 1% -4.17% (p=0.000 n=9+9)
name old alloc/op new alloc/op delta
Hash-8 1.18kB ± 0% 0.47kB ± 0% -59.87% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Hash-8 12.0 ± 0% 6.0 ± 0% -50.00% (p=0.000 n=10+10)
This is still a bit worse than explicitly handling the types,
but much nicer.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Add MarshalText-like appending variants. Like:
https://pkg.go.dev/inet.af/netaddr#IP.AppendTo
To be used by @josharian's pending deephash optimizations.
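For illustration, a toy type (not the real key types) showing the shape of
such an appending variant:

    package main

    import "fmt"

    // nodeID is a stand-in type; the real implementations live in the
    // Tailscale key packages. AppendTo appends the same text that
    // MarshalText would produce, but into a caller-provided buffer,
    // avoiding an allocation per call.
    type nodeID uint64

    func (n nodeID) MarshalText() ([]byte, error) { return n.AppendTo(nil), nil }

    func (n nodeID) AppendTo(b []byte) []byte {
        return append(b, fmt.Sprintf("nodeid:%016x", uint64(n))...)
    }

    func main() {
        buf := make([]byte, 0, 64) // reuse buf across many keys
        buf = nodeID(42).AppendTo(buf)
        fmt.Println(string(buf))
    }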
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
All netaddr types that we are concerned with now implement AppendTo.
Use the AppendTo method if available, and remove all references to netaddr.
This is slower but cleaner, and more readily re-usable by others.
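A minimal sketch of the "use AppendTo if available" check; the interface name
and helper are assumptions, not the actual deephash code:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "hash"
    )

    // appenderTo matches any value that can append its marshaled form
    // to a byte slice, such as the netaddr and key types above.
    type appenderTo interface {
        AppendTo([]byte) []byte
    }

    // hashIfAppender reports whether v implemented AppendTo and was
    // hashed through it instead of the slower reflection-based walk.
    func hashIfAppender(h hash.Hash, v interface{}, scratch []byte) bool {
        a, ok := v.(appenderTo)
        if !ok {
            return false
        }
        h.Write(a.AppendTo(scratch[:0]))
        return true
    }

    // port is a toy type standing in for the real AppendTo implementers.
    type port uint16

    func (p port) AppendTo(b []byte) []byte { return append(b, byte(p>>8), byte(p)) }

    func main() {
        h := sha256.New()
        var scratch [64]byte
        fmt.Println(hashIfAppender(h, port(443), scratch[:]))
    }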
name old time/op new time/op delta
Hash-8 12.6µs ± 0% 14.8µs ± 1% +18.05% (p=0.000 n=8+10)
HashMapAcyclic-8 21.4µs ± 1% 21.9µs ± 1% +2.39% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
Hash-8 408B ± 0% 408B ± 0% ~ (p=1.000 n=10+10)
HashMapAcyclic-8 1.00B ± 0% 1.00B ± 0% ~ (all equal)
name old allocs/op new allocs/op delta
Hash-8 6.00 ± 0% 6.00 ± 0% ~ (all equal)
HashMapAcyclic-8 0.00 0.00 ~ (all equal)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
These exist so we can use the optimized MapIter APIs
while still working with released versions of Go.
They're pretty simple, but some docs won't hurt.
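A sketch of what one such wrapper might look like under the tailscale_go build
tag (the function name and package placement are assumptions); the non-tagged
sibling file would simply return iter.Key():

    //go:build tailscale_go

    package deephash

    import "reflect"

    // mapIterKey writes the iterator's current key into the reusable
    // scratch Value and returns it, avoiding the allocation that
    // iter.Key() would make for every map entry.
    func mapIterKey(iter *reflect.MapIter, scratch reflect.Value) reflect.Value {
        scratch.SetIterKey(iter)
        return scratch
    }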
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Reduce to just a single external endpoint.
Convert from a variadic number of interfaces to a slice there.
name old time/op new time/op delta
Hash-8 14.4µs ± 0% 14.0µs ± 1% -3.08% (p=0.000 n=9+9)
name old alloc/op new alloc/op delta
Hash-8 873B ± 0% 793B ± 0% -9.16% (p=0.000 n=9+6)
name old allocs/op new allocs/op delta
Hash-8 18.0 ± 0% 14.0 ± 0% -22.22% (p=0.000 n=10+10)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Slightly slower, but lots less garbage.
We will recover the speed lost in a follow-up commit.
name old time/op new time/op delta
Hash-8 13.5µs ± 1% 14.3µs ± 0% +5.84% (p=0.000 n=10+9)
name old alloc/op new alloc/op delta
Hash-8 1.46kB ± 0% 0.87kB ± 0% -40.10% (p=0.000 n=7+10)
name old allocs/op new allocs/op delta
Hash-8 43.0 ± 0% 18.0 ± 0% -58.14% (p=0.000 n=10+10)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This requires changes to the Go toolchain.
The changes are upstream at https://golang.org/cl/320929.
They haven't been pulled into our fork yet.
No need to allocate new iteration scratch values for every map.
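A sketch of the reuse, assuming a hypothetical per-hasher cache keyed by type;
the real field layout may differ:

    package deephash

    import "reflect"

    // scratchCache hands out one addressable scratch Value per type, so
    // iterating a second map with the same key or element type reuses the
    // first map's scratch instead of allocating a new one.
    type scratchCache map[reflect.Type]reflect.Value

    func (c scratchCache) get(t reflect.Type) reflect.Value {
        v, ok := c[t]
        if !ok {
            v = reflect.New(t).Elem()
            c[t] = v
        }
        return v
    }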
name old time/op new time/op delta
Hash-8 13.6µs ± 0% 13.5µs ± 0% -1.01% (p=0.008 n=5+5)
HashMapAcyclic-8 21.2µs ± 1% 21.1µs ± 2% ~ (p=0.310 n=5+5)
name old alloc/op new alloc/op delta
Hash-8 1.58kB ± 0% 1.46kB ± 0% -7.60% (p=0.008 n=5+5)
HashMapAcyclic-8 152B ± 0% 128B ± 0% -15.79% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Hash-8 49.0 ± 0% 43.0 ± 0% -12.24% (p=0.008 n=5+5)
HashMapAcyclic-8 4.00 ± 0% 2.00 ± 0% -50.00% (p=0.008 n=5+5)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Getting the benefit of this optimization requires help from the Go toolchain.
The changes are upstream at https://golang.org/cl/320929,
and have been pulled into the Tailscale fork at
728ecc58fd.
It also requires building with the build tag tailscale_go.
name old time/op new time/op delta
Hash-8 14.0µs ± 0% 13.6µs ± 0% -2.88% (p=0.008 n=5+5)
HashMapAcyclic-8 24.3µs ± 1% 21.2µs ± 1% -12.47% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
Hash-8 2.16kB ± 0% 1.58kB ± 0% -27.01% (p=0.008 n=5+5)
HashMapAcyclic-8 2.53kB ± 0% 0.15kB ± 0% -93.99% (p=0.008 n=5+5)
name old allocs/op new allocs/op delta
Hash-8 77.0 ± 0% 49.0 ± 0% -36.36% (p=0.008 n=5+5)
HashMapAcyclic-8 202 ± 0% 4 ± 0% -98.02% (p=0.008 n=5+5)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The acyclic map code interacts badly with netaddr.IPs.
One of the netaddr.IP fields is an *intern.Value,
and we use a few sentinel values.
Those sentinel values make many of the netaddr data structures appear cyclic.
One option would be to replace the cycle-detection code with
a Floyd-style (tortoise and hare) algorithm. The downside is that this will
take longer to detect cycles, particularly if the cycle is long.
This problem is exacerbated by the fact that the cycle-detection
code shares a single visited map for the entire data structure,
not just the subsection of the data structure localized to the map.
Unfortunately, the extra allocations and work (and code) to use per-map
visited maps make this option not viable.
Instead, continue to special-case netaddr data types.
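A minimal sketch of the special-casing, assuming a hypothetical helper;
hashing the packed 16-byte form keeps the *intern.Value field out of the
generic reflection walk and its cycle detection:

    package main

    import (
        "crypto/sha256"
        "fmt"
        "hash"

        "inet.af/netaddr"
    )

    // hashIP hashes a netaddr.IP by its 16-byte representation directly,
    // so the reflection-based walk never sees the *intern.Value inside it.
    func hashIP(h hash.Hash, ip netaddr.IP) {
        b := ip.As16()
        h.Write(b[:])
    }

    func main() {
        h := sha256.New()
        hashIP(h, netaddr.MustParseIP("100.64.0.1"))
        fmt.Printf("%x\n", h.Sum(nil))
    }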
name old time/op new time/op delta
Hash-8 22.4µs ± 0% 14.0µs ± 0% -37.59% (p=0.008 n=5+5)
HashMapAcyclic-8 23.8µs ± 0% 24.3µs ± 1% +1.75% (p=0.008 n=5+5)
name old alloc/op new alloc/op delta
Hash-8 2.49kB ± 0% 2.16kB ± 0% ~ (p=0.079 n=4+5)
HashMapAcyclic-8 2.53kB ± 0% 2.53kB ± 0% ~ (all equal)
name old allocs/op new allocs/op delta
Hash-8 86.0 ± 0% 77.0 ± 0% -10.47% (p=0.008 n=5+5)
HashMapAcyclic-8 202 ± 0% 202 ± 0% ~ (all equal)
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Hash and xor each entry instead, then write the final xor'ed result.
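A minimal, self-contained sketch of the idea (types simplified): hash each
entry separately, XOR the per-entry digests, and write the single combined
digest, so map iteration order no longer matters:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func hashMap(m map[string]string) [sha256.Size]byte {
        var sum [sha256.Size]byte
        for k, v := range m {
            // Hash one entry on its own...
            eh := sha256.Sum256([]byte(k + "=" + v))
            // ...then fold it in with XOR, which is order-independent.
            for i := range sum {
                sum[i] ^= eh[i]
            }
        }
        return sum
    }

    func main() {
        fmt.Printf("%x\n", hashMap(map[string]string{"a": "1", "b": "2"}))
    }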
name old time/op new time/op delta
Hash-4 33.6µs ± 4% 34.6µs ± 3% +3.03% (p=0.013 n=10+9)
name old alloc/op new alloc/op delta
Hash-4 1.86kB ± 0% 1.77kB ± 0% -5.10% (p=0.000 n=10+9)
name old allocs/op new allocs/op delta
Hash-4 51.0 ± 0% 49.0 ± 0% -3.92% (p=0.000 n=10+10)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
At the start of a dev cycle we'll upgrade all dependencies.
Done with:
$ for Dep in $(cat go.mod | perl -ne '/(\S+) v/ and print "$1\n"'); do go get $Dep@upgrade; done
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Our wireguard-go fork used different values from upstream for
package device's memory limits on iOS.
This was the last blocker to removing our fork.
These values are now vars rather than consts for iOS (c27ff9b9f6).
Adjust them on startup to our preferred values.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Typical maps in production are considerably larger.
This helps benchmarks more accurately reflect the costs per key
vs the costs per map in deephash.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
A couple of code paths in ipnserver use NewBackendServer with a nil
backend just to call the callback with an encapsulated error message.
This covers a panic case seen in logs.
For #1920
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
This leads to a cleaner separation of intent vs. implementation
(Routes is now the only place specifying who handles DNS requests),
and allows for cleaner expression of a configuration that creates
MagicDNS records without serving them to the OS.
Signed-off-by: David Anderson <danderson@tailscale.com>
* Added new Addresses / AllowedIPs fields to testcontrol when creating a new &tailcfg.Node (see the sketch after this change list)
Signed-off-by: Simeng He <simeng@tailscale.com>
* Added single node test to check Addresses and AllowedIPs
Signed-off-by: Simeng He <simeng@tailscale.com>
Co-authored-by: Simeng He <simeng@tailscale.com>
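A rough sketch of the shape (the addresses are illustrative and the
surrounding testcontrol plumbing is omitted):

    package main

    import (
        "fmt"

        "inet.af/netaddr"
        "tailscale.com/tailcfg"
    )

    func main() {
        pfx := netaddr.MustParseIPPrefix("100.64.0.1/32")
        node := &tailcfg.Node{
            Addresses:  []netaddr.IPPrefix{pfx}, // IPs assigned to the node
            AllowedIPs: []netaddr.IPPrefix{pfx}, // IPs the node may send from
        }
        fmt.Println(node.Addresses, node.AllowedIPs)
    }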
The script detects one of the supported OS/version combos, and issues
the right install instructions for it.
Co-authored-by: Christine Dodrill <xe@tailscale.com>
Signed-off-by: David Anderson <danderson@tailscale.com>
If --until-direct is set, the goal is to make a direct connection.
If we failed at that, say so, and exit with an error.
RELNOTE=tailscale ping --until-direct (the default) now exits with
a non-zero exit code if no direct connection was established.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This code path is very tricky since it was originally designed for the
"re-authenticate to refresh my keys" use case, which didn't want to
lose the original session even if the refresh cycle failed. This is why
it acts differently from the Logout(); Login(); case.
Maybe that's too fancy, considering that switching between users without
logging out first probably never quite worked at all. But it works now.
This was more invasive than I hoped, but the necessary fixes actually
removed several other suspicious BUG: lines from state_test.go, so I'm
pretty confident this is a significant net improvement.
Fixes tailscale/corp#1756.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
If the engine was shutting down from a previous session
(e.closing=true), it would return an error code when trying to get
status. In that case, ipnlocal would never unblock any callers that
were waiting on the status.
Not sure if this ever happened in real life, but I accidentally
triggered it while writing a test.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>