We had a long-standing bug in which our TUN events channel
was being received from simultaneously in two places.
The first is wireguard-go.
At wgengine/userspace.go:366, we pass e.tundev to wireguard-go,
which starts a goroutine (RoutineTUNEventReader)
that receives from that channel and uses events to adjust the MTU
and bring the device up/down.
At wgengine/userspace.go:374, we launch a goroutine that
receives from e.tundev, logs MTU changes, and triggers
state updates when up/down changes occur.
Events were getting delivered haphazardly between the two of them.
We don't really want wireguard-go to receive the up/down events;
we control the state of the device explicitly by calling device.Up.
And the userspace.go loop's MTU logging duplicates logging that
wireguard-go already does when it receives MTU updates.
So this change splits the single TUN events channel into up/down
and other (aka MTU), and sends them to the parties that ought
to receive them.
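Roughly, the split looks like the sketch below (illustrative only, not
the exact tstun API): one pump goroutine drains the device's Events
channel and forwards up/down events to the wgengine loop, and everything
else (i.e. MTU updates) to wireguard-go.

package sketch

import "golang.zx2c4.com/wireguard/tun"

// splitEvents is a sketch only: up/down events go to one channel
// (consumed by the wgengine loop, which calls device.Up/Down itself),
// and everything else (MTU updates) goes to the other, for wireguard-go.
func splitEvents(src <-chan tun.Event) (upDown, other chan tun.Event) {
	upDown = make(chan tun.Event)
	other = make(chan tun.Event)
	go func() {
		defer close(upDown)
		defer close(other)
		for e := range src {
			if e&(tun.EventUp|tun.EventDown) != 0 {
				upDown <- e
			}
			if e&^(tun.EventUp|tun.EventDown) != 0 {
				other <- e
			}
		}
	}()
	return upDown, other
}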
I'm actually a bit surprised that this hasn't caused more visible trouble.
If a down event went to wireguard-go but the subsequent up event
went to userspace.go, we could end up with the wireguard-go device disappearing.
I believe that this may also (somewhat accidentally) be a fix for #1790.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
This tries to generate traffic at a rate that will saturate the
receiver, without overdoing it, even in the event of packet loss. It's
unrealistically more aggressive than TCP (which will back off quickly
in case of packet loss) but less silly than a blind test that just
generates packets as fast as it can (which can cause all the CPU to be
absorbed by the transmitter, giving an incorrect impression of how much
capacity the total system has).
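The general shape is something like the following (hypothetical code,
not what the benchmark actually does): back off when a packet is seen
to be lost, and otherwise creep the rate back up, so the sender hovers
near the receiver's capacity.

package sketch

import "time"

// pace is a hypothetical pacing loop: send reports whether the packet
// was observed as lost. Loss doubles the inter-packet gap; success
// shrinks it by ~1%, bounded below at 1µs.
func pace(send func() (lost bool)) {
	gap := time.Millisecond
	for {
		if send() {
			gap *= 2
		} else if gap > time.Microsecond {
			gap -= gap / 100
		}
		time.Sleep(gap)
	}
}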
Initial indications are that a syscall about every 10 packets (TCP bulk
delivery) is roughly the same speed as sending every packet through a
channel. A syscall per packet is about 5x-10x slower than that.
The whole tailscale wireguard-go + magicsock + packet filter
combination is about 4x slower again, which is better than I thought
we'd do, but probably has room for improvement.
Note that in "full" tailscale, there is also a tundev read/write for
every packet, effectively doubling the syscall overhead per packet.
Given these numbers, it seems like read/write syscalls are only 25-40%
of the total CPU time used in tailscale proper, so we do have
significant non-syscall optimization work to do too.
Sample output:
$ GOMAXPROCS=2 go test -bench . -benchtime 5s ./cmd/tailbench
goos: linux
goarch: amd64
pkg: tailscale.com/cmd/tailbench
cpu: Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz
BenchmarkTrivialNoAlloc/32-2 56340248 93.85 ns/op 340.98 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivialNoAlloc/124-2 57527490 99.27 ns/op 1249.10 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivialNoAlloc/1024-2 52537773 111.3 ns/op 9200.39 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/32-2 41878063 135.6 ns/op 236.04 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/124-2 41270439 138.4 ns/op 896.02 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/1024-2 36337252 154.3 ns/op 6635.30 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkBlockingChannel/32-2 12171654 494.3 ns/op 64.74 MB/s 0 %lost 1791 B/op 0 allocs/op
BenchmarkBlockingChannel/124-2 12149956 507.8 ns/op 244.17 MB/s 0 %lost 1792 B/op 1 allocs/op
BenchmarkBlockingChannel/1024-2 11034754 528.8 ns/op 1936.42 MB/s 0 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/32-2 8960622 2195 ns/op 14.58 MB/s 8.825 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/124-2 3014614 2224 ns/op 55.75 MB/s 11.18 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/1024-2 3234915 1688 ns/op 606.53 MB/s 3.765 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/32-2 8457559 764.1 ns/op 41.88 MB/s 5.945 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/124-2 5497726 1030 ns/op 120.38 MB/s 12.14 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/1024-2 7985656 1360 ns/op 752.86 MB/s 13.57 %lost 1792 B/op 1 allocs/op
BenchmarkUDP/32-2 1652134 3695 ns/op 8.66 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkUDP/124-2 1621024 3765 ns/op 32.94 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkUDP/1024-2 1553750 3825 ns/op 267.72 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkTCP/32-2 11056336 503.2 ns/op 63.60 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTCP/124-2 11074869 533.7 ns/op 232.32 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTCP/1024-2 8934968 671.4 ns/op 1525.20 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkWireGuardTest/32-2 1403702 4547 ns/op 7.04 MB/s 14.37 %lost 467 B/op 3 allocs/op
BenchmarkWireGuardTest/124-2 780645 7927 ns/op 15.64 MB/s 1.537 %lost 420 B/op 3 allocs/op
BenchmarkWireGuardTest/1024-2 512671 11791 ns/op 86.85 MB/s 0.5206 %lost 411 B/op 3 allocs/op
PASS
ok tailscale.com/wgengine/bench 195.724s
Updates #414.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
NetworkManager fixed the bug that forced us to use NetworkManager
if it's programming systemd-resolved, and in the same release also
made NetworkManager ignore DNS settings provided for unmanaged
interfaces, which breaks what we used to do. So, with versions
1.26.6 and above, we MUST NOT use NetworkManager to indirectly
program systemd-resolved, but thankfully we can talk to resolved
directly and get the right outcome.
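For reference, "talking to resolved directly" means D-Bus calls against
org.freedesktop.resolve1, along these lines (a godbus sketch, IPv4 only;
not the code in this change):

package sketch

import "github.com/godbus/dbus/v5"

// setLinkDNS is a sketch of pointing systemd-resolved at DNS servers
// for one link, without going through NetworkManager. ifindex is the
// tailscale interface's index; addrs are IPv4 DNS server addresses.
func setLinkDNS(ifindex int32, addrs ...[4]byte) error {
	conn, err := dbus.SystemBus()
	if err != nil {
		return err
	}
	type server struct {
		Family  int32  // AF_INET
		Address []byte // 4 bytes for IPv4
	}
	var servers []server
	for _, a := range addrs {
		a := a // copy so a[:] doesn't alias the loop variable
		servers = append(servers, server{Family: 2, Address: a[:]})
	}
	obj := conn.Object("org.freedesktop.resolve1", dbus.ObjectPath("/org/freedesktop/resolve1"))
	return obj.Call("org.freedesktop.resolve1.Manager.SetLinkDNS", 0, ifindex, servers).Err
}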
Fixes #1788
Signed-off-by: David Anderson <danderson@tailscale.com>
Clear the LLMNR and mDNS flags, update the reasoning for our settings,
and set our override priority more aggressively than before when we
want to be the primary resolver.
Signed-off-by: David Anderson <danderson@tailscale.com>
Debian resolvconf is not legacy; it's alive and well,
it just historically predates the other implementations.
Signed-off-by: David Anderson <danderson@tailscale.com>
This allows split-DNS configurations to not break clients on OSes that
haven't yet been ported to understand split DNS, by falling back to quad-9
as a global resolver when handed an "impossible to implement"
split-DNS config.
Part of #953. Needs to be removed before shipping 1.8.
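The fallback is just a config rewrite, roughly like this (hypothetical
types for illustration only, not tailscale's real DNS config structs):

package sketch

// Config is a stand-in for illustration only.
type Config struct {
	DefaultResolvers []string            // global resolvers
	Routes           map[string][]string // domain suffix -> resolvers (split DNS)
}

// withFallback collapses an impossible-to-implement split-DNS config
// into a global quad-9 resolver on OSes that can't route by domain.
func withFallback(cfg Config, osSupportsSplitDNS bool) Config {
	if len(cfg.Routes) > 0 && !osSupportsSplitDNS {
		cfg.Routes = nil
		cfg.DefaultResolvers = []string{"9.9.9.9"}
	}
	return cfg
}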
Signed-off-by: David Anderson <danderson@tailscale.com>
With this change, all OSes can sort-of do split DNS, except that the
default upstream is hardcoded to 8.8.8.8 pending further plumbing.
Additionally, Windows 8-10 can do split DNS fully correctly, without
the 8.8.8.8 hack.
Part of #953.
Signed-off-by: David Anderson <danderson@tailscale.com>
It seems that all the setups that support split DNS understand
this distinction, and it's an important one when translating
high-level configuration.
Part of #953.
Signed-off-by: David Anderson <danderson@tailscale.com>
Correctly reports that Win7 cannot do split DNS, and has a helper to
discover the "base" resolvers for the system.
Part of #953.
Signed-off-by: David Anderson <danderson@tailscale.com>
OS implementations are going to support split DNS soon.
Until they're all in place, hardcode Primary=true to get
the old behavior.
Signed-off-by: David Anderson <danderson@tailscale.com>
This is usually the same as the requested interface, but on some
Unixes it can vary based on device number allocation, and on Windows
it's the GUID instead of the pretty name, since everything relating
to configuration wants the GUID.
Signed-off-by: David Anderson <danderson@tailscale.com>
wgengine/router.CallbackRouter needs to support both the Router
and OSConfigurator interfaces, so the setters can't both be called
Set.
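Concretely it's just Go method sets: one type can't have two Set
methods with different signatures, so the DNS setter needs its own
name. Approximate shapes (not the exact signatures):

package sketch

// Approximate shapes of the two interfaces CallbackRouter must satisfy.
type Router interface {
	Set(cfg *RouteConfig) error
}

type OSConfigurator interface {
	SetDNS(cfg DNSConfig) error // would clash with Router.Set if also named Set
}

type RouteConfig struct{ /* routes, addresses, ... */ }
type DNSConfig struct{ /* resolvers, search domains, ... */ }

type CallbackRouter struct{ /* callbacks into the OS-specific layer */ }

func (r *CallbackRouter) Set(cfg *RouteConfig) error { return nil }
func (r *CallbackRouter) SetDNS(cfg DNSConfig) error { return nil }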
Signed-off-by: David Anderson <danderson@tailscale.com>
They need some rework to do the right thing; in the meantime the direct
and resolvconf managers will suffice.
The resolved implementation was never selected due to control-side settings.
The networkmanager implementation mostly doesn't get selected due to
unforeseen interactions with `resolvconf` on many platforms.
Both implementations also need rework to support the various routing modes
they're capable of.
Signed-off-by: David Anderson <danderson@tailscale.com>
It's only used to skip some optional initialization during cleanup,
but that work is very minor anyway, and is about to change drastically.
Signed-off-by: David Anderson <danderson@tailscale.com>
It's currently unused, and no longer makes sense with the upcoming
DNS infrastructure. Keep it in tailcfg for now, since we need protocol
compat for a bit longer.
Signed-off-by: David Anderson <danderson@tailscale.com>
The resolver still only supports a single upstream config, and
ipn/wgengine still have to split up the DNS config, but this moves
closer to unifying the DNS configs.
As a handy side-effect of the refactor, IPv6 MagicDNS records exist
now.
Signed-off-by: David Anderson <danderson@tailscale.com>
They're only used internally and in tests, and have surprising
semantics in that they only resolve MagicDNS names, not upstream
resolver queries.
Signed-off-by: David Anderson <danderson@tailscale.com>
Work around https://github.com/google/gvisor/issues/5732
by trying to read /proc/net/route with a larger bufsize if
it fails the first time.
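Roughly (a sketch, not the exact interfaces_linux.go code; it assumes
the whole pseudo-file comes back in one read once the buffer is large
enough):

package sketch

import "os"

// readProcNetRoute retries /proc/net/route reads with progressively
// larger buffers, since gVisor can fail the read when the buffer is
// too small (google/gvisor#5732).
func readProcNetRoute() ([]byte, error) {
	var lastErr error
	for bufsize := 8 << 10; bufsize <= 1<<20; bufsize *= 4 {
		f, err := os.Open("/proc/net/route")
		if err != nil {
			return nil, err
		}
		buf := make([]byte, bufsize)
		n, err := f.Read(buf)
		f.Close()
		if err == nil {
			return buf[:n], nil
		}
		lastErr = err
	}
	return nil, lastErr
}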
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
IPv6 Unique Local Addresses are sometimes used with Network
Prefix Translation to reach the Internet. In that respect
their use is similar to the private IPv4 address ranges
10/8, 172.16/12, and 192.168/16.
Treat them as sufficient for AnyInterfaceUp(), but specifically
exclude Tailscale's own IPv6 ULA prefix to avoid mistakenly
trying to bootstrap Tailscale using Tailscale.
This helps in supporting Google Cloud Run, where the addresses
are 169.254.8.1/32 and fddf:3978:feb1:d745::c001/128 on eth1.
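The check is just a pair of prefix tests, roughly like this sketch (not
the actual interfaces package code; fd7a:115c:a1e0::/48 is Tailscale's
IPv6 range):

package sketch

import "net/netip"

var (
	ula          = netip.MustParsePrefix("fc00::/7")            // IPv6 Unique Local Addresses
	tailscaleULA = netip.MustParsePrefix("fd7a:115c:a1e0::/48") // Tailscale's own IPv6 range
)

// usableULA reports whether ip is a ULA we should treat as a usable
// interface address, excluding Tailscale's own range so we don't try
// to bootstrap Tailscale over Tailscale.
func usableULA(ip netip.Addr) bool {
	return ula.Contains(ip) && !tailscaleULA.Contains(ip)
}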
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
For discovery when an explicit hostname/IP is known. We'll still
also send it via control for finding peers by a list.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>