Because it showed up on hello profiles.
Cycle through some moderate-sized sets of peers.
This should cover the "small tweaks to netmap"
and the "up/down cycle" cases.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Yes, it printed, but that was an implementation detail for hashing.
And a coming optimization will make it print even less.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
On benchmark completion, we shut down the wgengine.
If we happen to poll for status during shutdown,
we get an "engine closing" error.
It doesn't hurt anything; ignore it.
Fixes tailscale/corp#1776
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Without any synchronization here, the "first packet" callback can
be delayed indefinitely, while other work continues.
Since the callback starts the benchmark timer, this could skew results.
Worse, if the benchmark manages to complete before the benchmark timer begins,
it'll cause a data race with the benchmark shutdown performed by package testing.
That is what is reported in #1881.
This is a bit unfortunate, in that it means that users of TrafficGen have
to be careful to keep this callback speedy and lightweight and to avoid deadlocks.
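A rough sketch of the fix's shape (field and method names here are
illustrative, not the actual TrafficGen API): fire the callback
synchronously, in the same goroutine that sends the first packet, so
the timer is guaranteed to be running before any traffic is counted.

    type TrafficGen struct {
        firstPacket sync.Once
        onFirstPkt  func() // e.g. starts the benchmark timer
    }

    func (t *TrafficGen) generate(send func([]byte)) {
        pkt := make([]byte, 1024)
        // Run the callback inline before the first packet goes out.
        // Because it runs synchronously, it must be speedy, lightweight,
        // and deadlock-free.
        t.firstPacket.Do(func() {
            if t.onFirstPkt != nil {
                t.onFirstPkt()
            }
        })
        send(pkt)
    }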
Fixes #1881
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It is possible to get multiple status callbacks from an Engine.
We need to wait for at least one from each Engine.
Without limiting to one per Engine,
wait.Wait can exit early or can panic due to a negative counter.
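A minimal sketch of the counting pattern (the callback signature is
assumed; the real setup lives in the benchmark code):

    func waitForFirstStatus(engines []wgengine.Engine) {
        var wait sync.WaitGroup
        wait.Add(len(engines))
        for _, e := range engines {
            var once sync.Once
            e.SetStatusCallback(func(*wgengine.Status, error) {
                // An Engine may report status many times; count only the
                // first, so wait.Wait neither returns early nor panics on
                // a negative counter.
                once.Do(wait.Done)
            })
        }
        wait.Wait()
    }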
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This reduces the speed with which these benchmarks exhaust their supply of fds.
Not to zero, unfortunately, but it's still helpful when doing long runs.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Field renames only.
Part of the general effort to make our code agnostic about endpoint formatting.
It's just a name, but it will soon be a misleading one; be more generic.
Do this as a separate commit because it generates a lot of whitespace changes.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Upstream wireguard-go renamed the interface method
from CreateEndpoint to ParseEndpoint.
I missed some comments. Fix them.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
By using conn.NewDefaultBind, this test requires that our endpoints
be comprehensible to wireguard-go. Instead, use a no-op bind that
treats endpoints as opaque strings.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Legacy endpoints (addrSet) currently reconstruct their dst string when requested.
Instead, store the dst string we were given to begin with.
In addition to being simpler and cheaper, this makes less code
aware of how to interpret endpoint strings.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Prefer the error from the actual wireguard-go device method call,
not {To,From}UAPI, as those tend to be less interesting I/O errors.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
When wireguard-go's UAPI interface fails with an error, ReconfigDevice hangs.
Fix that by buffering the channel and closing the writer after the call.
The code now matches the corresponding code in DeviceConfig, where I got it right.
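The fixed pattern, roughly (a sketch, not the wgcfg code; apply stands
in for the wireguard-go UAPI call and writeUAPI for ToUAPI):

    func reconfig(apply func(io.Reader) error, writeUAPI func(io.Writer) error) error {
        r, w := io.Pipe()
        // Buffered, so the goroutine can always deliver its result even
        // if we reach the receive late.
        errc := make(chan error, 1)
        go func() {
            errc <- apply(r)
            r.Close()
        }()
        uapiErr := writeUAPI(w)
        // Close the writer after the call so the device side sees EOF
        // and returns instead of hanging.
        w.Close()
        if devErr := <-errc; devErr != nil {
            return devErr // prefer the device's error over the UAPI I/O error
        }
        return uapiErr
    }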
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It is unused, and has been since early Feb 2021 (Tailscale 1.6).
We can't delete the DeviceOptions entirely yet;
first #1831 and #1839 need to go in, along with some wireguard-go changes.
Deleting this chunk of code now will make the later commits more clearly correct.
Pingers can now go too.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The earlier eb06ec172f fixed
the flaky SSH issue (tailscale/corp#1725) by making sure that packets
addressed to Tailscale IPs in hybrid netstack mode weren't delivered
to netstack, but another issue remained:
All traffic handled by netstack was also potentially being handled by
the host networking stack, as the filter hook returned "Accept", which
made it keep processing. This could lead to various random racey chaos
as a function of OS/firewalls/routes/etc.
Instead, once we inject into netstack, stop our caller's packet
processing.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Prior to wireguard-go using printf-style logging,
all wireguard-go logging occurred using format string "%s".
We fixed that but continued to use %s when we rewrote
peer identifiers into Tailscale style.
This commit removes that %s, which makes rate limiting work correctly.
As a happy side-benefit, it should generate less garbage.
Instead of replacing all wireguard-go peer identifiers
that might occur anywhere in a fully formatted log string,
assume that they only come from args.
Check all args for things that look like *device.Peers
and replace them with appropriately reformatted strings.
There is a variety of ways that this could go wrong
(unusual format verbs or modifiers, peer identifiers
occurring as part of a larger printed object, future API changes),
but none of them occur now, are likely to be added,
or would be hard to work around if they did.
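Roughly the shape of the new rewriting, as a sketch (the wrapper and
the reformatting function here are illustrative):

    func rewritePeerArgs(logf logger.Logf, format func(*device.Peer) string) logger.Logf {
        return func(f string, args ...interface{}) {
            for i, a := range args {
                // Replace peer arguments before formatting, so the format
                // string stays stable and rate limiting keys on it correctly.
                if p, ok := a.(*device.Peer); ok {
                    args[i] = format(p)
                }
            }
            logf(f, args...)
        }
    }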
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
The "stop phrases" we use all occur in wireguard-go in the format string.
We can avoid doing a bunch of fmt.Sprintf work when they appear.
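A sketch of the short-circuit (wrapper name is illustrative):

    func filteredLogf(logf logger.Logf, stopPhrases []string) logger.Logf {
        return func(format string, args ...interface{}) {
            for _, phrase := range stopPhrases {
                // The phrases all appear in the format string itself, so we
                // can drop the line before paying for any fmt.Sprintf work.
                if strings.Contains(format, phrase) {
                    return
                }
            }
            logf(format, args...)
        }
    }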
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
For historical reasons, we ended up with two near-duplicate
copies of curve25519 key types, one in the wireguard-go module
(wgcfg) and one in the tailscale module (types/wgkey).
Then we moved wgcfg to the tailscale module.
We can now remove the wgcfg key type in favor of wgkey.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
One of the consequences of the bind refactoring in 6f23087175
is that attempting to bind an IPv6 socket will always
result in c.pconn6.pconn being non-nil.
If the bind fails, it'll be set to a placeholder packet conn
that blocks forever.
As a result, we can always run ReceiveIPv6 and health check it.
This removes IPv4/IPv6 asymmetry and also will allow health checks
to detect any IPv6 receive func failures.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
It must be an IP address; enforce that at the type level.
Suggested-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
We had two separate code paths for the initial UDP listener bind
and any subsequent rebinds.
IPv6 got left out of the rebind code.
Rather than duplicate it there, unify the two code paths.
Then improve the resulting code:
* Rebind had nested listen attempts to try the user-specified port first,
and then fall back to :0 if that failed. Convert that into a loop.
* Initial bind tried only the user-specified port.
Rebind tried the user-specified port and 0.
But there are actually three ports of interest:
The one the user specified, the most recent port in use, and 0.
We now try all three in order, as appropriate.
* In the extremely rare case in which binding to port 0 fails,
use a dummy net.PacketConn whose reads block until close.
This will keep the wireguard-go receive func goroutine alive.
As a pleasant side-effect of this, if we decide that
we need to resuscitate #1796, it will now be much easier.
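A sketch of the resulting shape (names are illustrative; the real code
is in magicsock): try the ports in order, and fall back to a conn whose
reads block until close.

    func bindSocket(listen func(port uint16) (net.PacketConn, error), userPort, lastPort uint16) net.PacketConn {
        // Try the user's port, then the port most recently in use, then 0.
        // Trying the same port twice is harmless.
        for _, port := range []uint16{userPort, lastPort, 0} {
            if pc, err := listen(port); err == nil {
                return pc
            }
        }
        // Even :0 failed; keep wireguard-go's receive goroutine alive anyway.
        return newBlockForeverConn()
    }

    type blockForeverConn struct {
        closeOnce sync.Once
        closed    chan struct{}
    }

    func newBlockForeverConn() *blockForeverConn {
        return &blockForeverConn{closed: make(chan struct{})}
    }

    func (c *blockForeverConn) ReadFrom(p []byte) (int, net.Addr, error) {
        <-c.closed // block until Close
        return 0, nil, net.ErrClosed
    }

    func (c *blockForeverConn) WriteTo(p []byte, addr net.Addr) (int, error) {
        return len(p), nil // pretend the write worked
    }

    func (c *blockForeverConn) Close() error {
        c.closeOnce.Do(func() { close(c.closed) })
        return nil
    }

    func (c *blockForeverConn) LocalAddr() net.Addr                { return &net.UDPAddr{} }
    func (c *blockForeverConn) SetDeadline(t time.Time) error      { return nil }
    func (c *blockForeverConn) SetReadDeadline(t time.Time) error  { return nil }
    func (c *blockForeverConn) SetWriteDeadline(t time.Time) error { return nil }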
Fixes #1799
Co-authored-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
Assume it'll stay at 0 forever, so hard-code it
and delete code conditional on it being non-0.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
It was set to context.Background by all callers, for the same reasons.
Set it locally instead, to simplify call sites.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
The old implementation knew too much about how wireguard-go worked.
As a result, it missed genuine problems that occurred due to unrelated bugs.
This fourth attempt to fix the health checks takes a black box approach.
A receive func is healthy if one (or both) of these conditions holds:
* It is currently running and blocked.
* It has been executed recently.
The second condition is required because receive functions
are not continuously executing. wireguard-go calls them and then
processes their results before calling them again.
There is a theoretical false positive if wireguard-go takes
longer than one minute to process the results of a receive func execution.
If that happens, we have other problems.
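A sketch of the model (names assumed; the real accounting lives
alongside the receive funcs):

    type receiveFuncHealth struct {
        mu       sync.Mutex
        inCall   bool      // currently running, i.e. blocked in a receive call
        lastDone time.Time // when it last finished an execution
    }

    func (r *receiveFuncHealth) enter() {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.inCall = true
    }

    func (r *receiveFuncHealth) exit() {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.inCall = false
        r.lastDone = time.Now()
    }

    // healthy reports whether the receive func is currently running and
    // blocked, or has executed recently.
    func (r *receiveFuncHealth) healthy(now time.Time) bool {
        r.mu.Lock()
        defer r.mu.Unlock()
        return r.inCall || now.Sub(r.lastDone) < time.Minute
    }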
Updates #1790
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
They were not doing their job.
They need yet another conceptual re-think.
Start by clearing the decks.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
We had a long-standing bug in which our TUN events channel
was being received from simultaneously in two places.
The first is wireguard-go.
At wgengine/userspace.go:366, we pass e.tundev to wireguard-go,
which starts a goroutine (RoutineTUNEventReader)
that receives from that channel and uses events to adjust the MTU
and bring the device up/down.
At wgengine/userspace.go:374, we launch a goroutine that
receives from e.tundev, logs MTU changes, and triggers
state updates when up/down changes occur.
Events were getting delivered haphazardly between the two of them.
We don't really want wireguard-go to receive the up/down events;
we control the state of the device explicitly by calling device.Up.
And the userspace.go loop's MTU logging duplicates the logging that
wireguard-go does when it receives MTU updates.
So this change splits the single TUN events channel into up/down
and other (aka MTU), and sends them to the parties that ought
to receive them.
I'm actually a bit surprised that this hasn't caused more visible trouble.
If a down event went to wireguard-go but the subsequent up event
went to userspace.go, we could end up with the wireguard-go device disappearing.
I believe that this may also (somewhat accidentally) be a fix for #1790.
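A sketch of the split (event plumbing simplified; the event constants
are wireguard-go's tun package events):

    func splitTUNEvents(src <-chan tun.Event) (mtuOnly, upDownOnly chan tun.Event) {
        mtuOnly = make(chan tun.Event)    // goes to wireguard-go
        upDownOnly = make(chan tun.Event) // goes to the wgengine loop
        go func() {
            defer close(mtuOnly)
            defer close(upDownOnly)
            for ev := range src {
                if ev&tun.EventMTUUpdate != 0 {
                    mtuOnly <- ev // wireguard-go adjusts (and logs) the MTU
                }
                if ev&(tun.EventUp|tun.EventDown) != 0 {
                    upDownOnly <- ev // we trigger state updates ourselves
                }
            }
        }()
        return mtuOnly, upDownOnly
    }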
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
The old decay-based one took a while to converge. This new one (based
very loosely on TCP BBR) seems to converge quickly on what seems to be
the best speed.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
This tries to generate traffic at a rate that will saturate the
receiver, without overdoing it, even in the event of packet loss. It's
unrealistically more aggressive than TCP (which will back off quickly
in case of packet loss) but less silly than a blind test that just
generates packets as fast as it can (which can cause all the CPU to be
absorbed by the transmitter, giving an incorrect impression of how much
capacity the total system has).
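The pacing idea, very loosely sketched (constants and thresholds are
invented for illustration; the real adjustment is more nuanced):

    func adjustRate(bytesPerSec, lossFraction float64) float64 {
        const (
            probeGain = 1.25 // push harder while the receiver keeps up
            backoff   = 0.8  // ease off once loss shows up
        )
        if lossFraction > 0.01 {
            return bytesPerSec * backoff
        }
        return bytesPerSec * probeGain
    }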
Initial indications are that a syscall about every 10 packets (TCP bulk
delivery) is roughly the same speed as sending every packet through a
channel. A syscall per packet is about 5x-10x slower than that.
The whole tailscale wireguard-go + magicsock + packet filter
combination is about 4x slower again, which is better than I thought
we'd do, but probably has room for improvement.
Note that in "full" tailscale, there is also a tundev read/write for
every packet, effectively doubling the syscall overhead per packet.
Given these numbers, it seems like read/write syscalls are only 25-40%
of the total CPU time used in tailscale proper, so we do have
significant non-syscall optimization work to do too.
Sample output:
$ GOMAXPROCS=2 go test -bench . -benchtime 5s ./cmd/tailbench
goos: linux
goarch: amd64
pkg: tailscale.com/cmd/tailbench
cpu: Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz
BenchmarkTrivialNoAlloc/32-2 56340248 93.85 ns/op 340.98 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivialNoAlloc/124-2 57527490 99.27 ns/op 1249.10 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivialNoAlloc/1024-2 52537773 111.3 ns/op 9200.39 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/32-2 41878063 135.6 ns/op 236.04 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/124-2 41270439 138.4 ns/op 896.02 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/1024-2 36337252 154.3 ns/op 6635.30 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkBlockingChannel/32-2 12171654 494.3 ns/op 64.74 MB/s 0 %lost 1791 B/op 0 allocs/op
BenchmarkBlockingChannel/124-2 12149956 507.8 ns/op 244.17 MB/s 0 %lost 1792 B/op 1 allocs/op
BenchmarkBlockingChannel/1024-2 11034754 528.8 ns/op 1936.42 MB/s 0 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/32-2 8960622 2195 ns/op 14.58 MB/s 8.825 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/124-2 3014614 2224 ns/op 55.75 MB/s 11.18 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/1024-2 3234915 1688 ns/op 606.53 MB/s 3.765 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/32-2 8457559 764.1 ns/op 41.88 MB/s 5.945 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/124-2 5497726 1030 ns/op 120.38 MB/s 12.14 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/1024-2 7985656 1360 ns/op 752.86 MB/s 13.57 %lost 1792 B/op 1 allocs/op
BenchmarkUDP/32-2 1652134 3695 ns/op 8.66 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkUDP/124-2 1621024 3765 ns/op 32.94 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkUDP/1024-2 1553750 3825 ns/op 267.72 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkTCP/32-2 11056336 503.2 ns/op 63.60 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTCP/124-2 11074869 533.7 ns/op 232.32 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTCP/1024-2 8934968 671.4 ns/op 1525.20 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkWireGuardTest/32-2 1403702 4547 ns/op 7.04 MB/s 14.37 %lost 467 B/op 3 allocs/op
BenchmarkWireGuardTest/124-2 780645 7927 ns/op 15.64 MB/s 1.537 %lost 420 B/op 3 allocs/op
BenchmarkWireGuardTest/1024-2 512671 11791 ns/op 86.85 MB/s 0.5206 %lost 411 B/op 3 allocs/op
PASS
ok tailscale.com/wgengine/bench 195.724s
Updates #414.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
The existing implementation was completely, embarrassingly conceptually broken.
We aren't able to see whether wireguard-go's receive function goroutines
are running or not. All we can do is model that based on what we have done.
This commit fixes that model.
Fixes #1781
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
Avery reported a sub-ms health transition from "receiveIPv4 not running" to "ok".
To avoid these transient false-positives, be more precise about
the expected lifetime of receive funcs. The problematic case is one in which
they were started but exited prior to a call to connBind.Close.
Explicitly represent started vs running state, taking care with the order of updates.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
The connection failure diagnostic code was never updated enough for
exit nodes, so disable its misleading output when the node it picks
(incorrectly) to diagnose is only an exit node.
Fixes#1754
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I've spent two days searching for a theoretical wireguard-go bug
around receive functions exiting early.
I've found many bugs, but none of the flavor we're looking for.
Restore wireguard-go's logging around starting and stopping receive functions,
so that we can definitively rule in or out this particular theory.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
I see a bunch of these in some logs I'm looking at,
separated only by a few seconds.
Log the error so we can tell what's going on here.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
These were getting rate-limited for nodes with many peers.
Consolidate the output into single lines, which are nicer anyway.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
We were accidentally logging oldPort -> oldPort.
Log oldPort as well as c.port; if we failed to get the preferred port
in a previous rebind, oldPort might differ from c.port.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
Track endpoints internally with a new tailcfg.Endpoint type that
includes a typed netaddr.IPPort (instead of just a string) and
includes a type for how that endpoint was discovered (STUN, local,
etc).
Use []tailcfg.Endpoint instead of []string internally.
At the last second, send it to the control server as the existing
[]string for endpoints, but also include a new parallel
MapRequest.EndpointType []tailcfg.EndpointType, so the control server
can start filtering out less-important endpoint changes from
new-enough clients. Notably, STUN-discovered endpoints can be filtered
out from 1.6+ clients, as they can discover them amongst each other
via CallMeMaybe disco exchanges started over DERP. And STUN endpoints
change a lot, causing a lot of MapResponse updates. But portmapped
endpoints are worth keeping for now, as they work right away
without requiring the extra RTT of the firewall traversal dance.
The end result will be less control->client bandwidth, despite a
negligible increase in client->control bandwidth.
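The shape of the new types, roughly (a sketch; see tailcfg for the
real definitions):

    type EndpointType int

    const (
        EndpointUnknownType EndpointType = iota
        EndpointLocal                    // local interface address
        EndpointSTUN                     // discovered via STUN
        EndpointPortmapped               // discovered via a port mapping
    )

    type Endpoint struct {
        Addr netaddr.IPPort
        Type EndpointType
    }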
Updates tailscale/corp#1543
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
On FreeBSD, we add the interface IP as a /48 to work around a kernel
bug, so we mustn't then try to add a /48 route to the Tailscale ULA,
since that will fail as a dupe.
Signed-off-by: David Anderson <danderson@tailscale.com>
It existed to work around the frequent opening and closing
of the conn.Bind done by wireguard-go.
The preceding commit removed that behavior,
so we can simply close the connections
when we are done with them.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We don't use the port that wireguard-go passes to us (via magicsock.connBind.Open).
We ignore it entirely and use the port we selected.
When we tell wireguard-go that we're changing the listen_port,
it calls connBind.Close and then connBind.Open.
And in the meantime, it stops calling the receive functions,
which means that we stop receiving and processing UDP and DERP packets.
And that is Very Bad.
That was never a problem prior to b3ceca1dd7,
because we passed the SkipBindUpdate flag to our wireguard-go fork,
which told wireguard-go not to re-bind on listen_port changes.
That commit eliminated the SkipBindUpdate flag.
We could write a bunch of code to work around the gap.
We could add background readers that process UDP and DERP packets when wireguard-go isn't.
But it's simpler to never create the conditions in which wireguard-go rebinds.
The other scenario in which wireguard-go re-binds is device.Down.
Conveniently, we never call device.Down. We go from device.Up to device.Close,
and the latter only when we're shutting down a magicsock.Conn completely.
Rubber-ducked-by: Avery Pennarun <apenwarr@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The shim implements both network and DNS configurators,
and feeds both into a single callback that receives
both configs.
Signed-off-by: David Anderson <danderson@tailscale.com>
Upstream wireguard-go has changed its receive model.
NewDevice now accepts a conn.Bind interface.
The conn.Bind is stateless; magicsock.Conns are stateful.
To work around this, we add a connBind type that supports
cheap teardown and bring-up, backed by a Conn.
The new conn.Bind allows us to specify a set of receive functions,
rather than having to shoehorn everything into ReceiveIPv4 and ReceiveIPv6.
This lets us plumb DERP messages directly into wireguard-go,
instead of having to mux them via ReceiveIPv4.
One consequence of the new conn.Bind layer is that
closing the wireguard-go device is now indistinguishable
from the routine bring-up and tear-down normally experienced
by a conn.Bind. We thus have to explicitly close the magicsock.Conn
when we close the wireguard-go device.
One downside of this change is that we are reliant on wireguard-go
to call receiveDERP to process DERP messages. This is fine for now,
but is perhaps something we should fix in the future.
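A sketch of the connBind idea (method set abridged and signatures
approximate):

    type connBind struct {
        conn   *Conn // the long-lived magicsock.Conn
        mu     sync.Mutex
        closed bool
    }

    // Open is cheap bring-up: hand wireguard-go the receive functions to
    // run, for UDPv4, UDPv6, and DERP.
    func (b *connBind) Open(port uint16) ([]conn.ReceiveFunc, uint16, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        b.closed = false
        fns := []conn.ReceiveFunc{
            b.conn.receiveIPv4,
            b.conn.receiveIPv6,
            b.conn.receiveDERP, // DERP packets flow straight into wireguard-go
        }
        return fns, b.conn.LocalPort(), nil
    }

    // Close is cheap teardown: it does not close the underlying Conn.
    // The Conn is closed explicitly when the wireguard-go device is closed.
    func (b *connBind) Close() error {
        b.mu.Lock()
        defer b.mu.Unlock()
        b.closed = true
        return nil
    }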
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The code is not obviously better or worse, but this makes the little warning
triangle in my editor go away, and the distraction removal is worth it.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Google Cloud Run does not implement NETLINK_ROUTE RTMGRP.
If initialization of the netlink socket or group membership
fails, fall back to a polling implementation.
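The fallback, in sketch form (constructor names assumed; the real
logic is in wgengine/monitor):

    func newOSMon(logf logger.Logf) (osMon, error) {
        m, err := newNetlinkMon(logf)
        if err != nil {
            // e.g. Google Cloud Run: no NETLINK_ROUTE RTMGRP support.
            logf("netlink monitor unavailable (%v); falling back to polling", err)
            return newPollingMon(logf), nil
        }
        return m, nil
    }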
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
The resolver still only supports a single upstream config, and
ipn/wgengine still have to split up the DNS config, but this moves
closer to unifying the DNS configs.
As a handy side-effect of the refactor, IPv6 MagicDNS records exist
now.
Signed-off-by: David Anderson <danderson@tailscale.com>
The call to appendEndpoint updates cpeer.Endpoints.
Then it is overwritten in the next line.
The only errors from appendEndpoint occur when
the host/port pair is malformed, but that cannot happen.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It can end up executing on a new goroutine,
at which point instead of immediately stopping test execution, it hangs.
Since this is unexpected anyway, panic instead.
As a bonus, it makes call sites nicer and removes a kludge comment.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
So we have a documented & tested way to check whether we're in
netstack mode. To be used by future commits.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
For discovery when an explicit hostname/IP is known. We'll still
also send it via control for finding peers by a list.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
"Fake" doesn't mean a lot any more, given that many components
of the engine can be faked out, including in valid production
configurations like userspace-networking.
Signed-off-by: David Anderson <danderson@tailscale.com>
This makes setup more explicit in prod codepaths, without
requiring a bunch of arguments or helpers for tests and
userspace mode.
Signed-off-by: David Anderson <danderson@tailscale.com>
This works around the close syscall being slow.
We can revert this if we find a fix or if Apple makes close fast again.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The tstun package contains both constructors for generic tun
Devices, and a wrapper that provides additional functionality.
Signed-off-by: David Anderson <danderson@tailscale.com>
Now callers (wgengine/monitor) don't need to mutate the state to remove
boring interfaces before calling State.Equal. Instead, the methods
to remove boring interfaces from the State are removed, as is
the reflect-using Equal method itself, and in their place is
a new EqualFiltered method that takes a func predicate to match
interfaces to compare.
And then the FilterInteresting predicate is added for use
with EqualFiltered to do the job that that wgengine/monitor
previously wanted.
Now wgengine/monitor can keep the full interface state around,
including the "boring" interfaces, which we'll need for peerapi on
macOS/iOS to bind to the interface index of the utunN device.
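A self-contained sketch of the idea (types simplified; the real ones
live in net/interfaces):

    type Interface struct {
        Name string
        IsUp bool
    }

    type State struct {
        Interfaces map[string]Interface
    }

    // EqualFiltered compares only the interfaces that the predicate admits,
    // so callers no longer strip "boring" interfaces out of the State first.
    func (s *State) EqualFiltered(s2 *State, use func(Interface) bool) bool {
        for name, i := range s.Interfaces {
            if !use(i) {
                continue
            }
            if j, ok := s2.Interfaces[name]; !ok || i != j {
                return false
            }
        }
        for name, j := range s2.Interfaces {
            if !use(j) {
                continue
            }
            if _, ok := s.Interfaces[name]; !ok {
                return false
            }
        }
        return true
    }

    // FilterInteresting, in spirit: ignore loopback and Tailscale's own tunnels.
    func FilterInteresting(i Interface) bool {
        return i.Name != "lo" &&
            !strings.HasPrefix(i.Name, "utun") &&
            !strings.HasPrefix(i.Name, "tailscale")
    }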
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
e.g.
$ tailscale ping 1.1.1.1
exit node found but not enabled
$ tailscale ping 10.2.200.2
node "tsbfvlan2" found, but not using its 10.2.200.0/24 route
$ sudo tailscale up --accept-routes
$ tailscale ping 10.2.200.2
pong from tsbfvlan2 (100.124.196.94) via 10.2.200.34:41641 in 1ms
$ tailscale ping mon.ts.tailscale.com
pong from monitoring (100.88.178.64) via DERP(sfo) in 83ms
pong from monitoring (100.88.178.64) via DERP(sfo) in 21ms
pong from monitoring (100.88.178.64) via [2604:a880:4:d1::37:d001]:41641 in 22ms
This necessarily moves code up from magicsock to wgengine, so we can
look at the actual wireguard config.
Fixes #1564
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Add proto to flowtrack.Tuple.
Add types/ipproto leaf package to break a cycle.
Server-side ACL work remains.
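The shape of the change, sketched (see types/ipproto and net/flowtrack
for the real definitions):

    // Proto lives in its own leaf package so both the packet filter and
    // flowtrack can import it without creating a cycle.
    type Proto uint8

    const (
        TCP Proto = 6
        UDP Proto = 17
    )

    // Tuple keys a tracked flow. With Proto included, a TCP flow and a UDP
    // flow between the same address pairs are tracked separately.
    type Tuple struct {
        Proto Proto
        Src   netaddr.IPPort
        Dst   netaddr.IPPort
    }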
Updates #1516
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This reverts the revert commit 84aba349d9,
and changes us to use inet.af/netstack.
Updates #1518
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Should help iOS battery life on NEProvider.wake/skip events
with useless route updates that shouldn't cause re-STUNs.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
gVisor fixed their google/gvisor#1446 so we can include gVisor mode
on 32-bit machines.
A few minor upstream API changes, as normal.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>