* move probing out of netcheck into new net/portmapper package
* use the PCP ANNOUNCE opcode for PCP discovery, rather than causing
short-lived (sub-second) side effects with a 1-second-expiring map +
delete (see the sketch below)
* track when we heard things from the router so we can be less wasteful
in querying the router's port mapping services in the future
* use portmapper from magicsock to map a public port
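
As a rough illustration of the ANNOUNCE-based probe (a sketch only, not
the portmapper code; the packet layout follows RFC 6887 §7.1):

package sketch

import "net"

// pcpAnnounceRequest builds a PCP ANNOUNCE request: a 24-byte header
// with version 2, opcode 0 (ANNOUNCE), a zero requested lifetime, and
// the client's IP in 16-byte form. Sending this to the gateway's UDP
// port 5351 elicits a response without creating or deleting mappings.
func pcpAnnounceRequest(clientIP net.IP) []byte {
    pkt := make([]byte, 24)
    pkt[0] = 2 // PCP version 2
    pkt[1] = 0 // R bit 0 (request) | opcode ANNOUNCE (0)
    // bytes 2-3 reserved, bytes 4-7 requested lifetime (zero)
    copy(pkt[8:24], clientIP.To16())
    return pkt
}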
Fixes #1298
Fixes #1080
Fixes #1001
Updates #864
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
When a handshake race occurs, a queued data packet can get lost.
TestTwoDevicePing expected that the very first data packet would arrive.
This caused occasional flakes.
Change TestTwoDevicePing to repeatedly re-send packets
and succeed when one of them makes it through.
This is acceptable (vs making WireGuard not drop the packets)
because this only affects communication with extremely old clients.
And those extremely old clients will eventually connect,
because the kernel will retry sends on timeout.
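
The retry loop in the test looks roughly like this (a sketch with
hypothetical sendPacket/received plumbing, not the exact test code):

package sketch

import (
    "testing"
    "time"
)

// waitForOnePacket keeps re-sending until any one packet is delivered,
// so an early drop caused by a handshake race doesn't fail the test.
// sendPacket and received are hypothetical stand-ins for the real
// TestTwoDevicePing plumbing.
func waitForOnePacket(t *testing.T, sendPacket func(), received <-chan struct{}) {
    deadline := time.After(10 * time.Second)
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    for {
        sendPacket() // may be lost if it races the handshake
        select {
        case <-received: // one delivered packet is success
            return
        case <-ticker.C: // re-send and try again
        case <-deadline:
            t.Fatal("no packet made it through")
        }
    }
}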
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We modified the standard net package to not allocate a *net.UDPAddr
during a call to (*net.UDPConn).ReadFromUDP if the caller's use
of the *net.UDPAddr does not cause it to escape.
That is https://golang.org/cl/291390.
This is the companion change to magicsock.
There are two changes required.
First, call ReadFromUDP instead of ReadFrom, if possible.
ReadFrom returns a net.Addr, which is an interface, which always allocates.
Second, reduce the lifetime of the returned *net.UDPAddr.
We do this by immediately converting it into a netaddr.IPPort.
We left the existing RebindingUDPConn.ReadFrom method in place,
as it is required to satisfy the net.PacketConn interface.
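
In rough form, the receive path now does something like this (a
simplified sketch, not the exact magicsock code):

package sketch

import (
    "fmt"
    "net"

    "inet.af/netaddr"
)

// readOne reads one packet using ReadFromUDP (a concrete *net.UDPAddr,
// so no interface boxing) and immediately converts the address to a
// netaddr.IPPort, keeping the *net.UDPAddr short-lived so it can stay
// on the stack with the patched net package.
func readOne(c *net.UDPConn, buf []byte) (int, netaddr.IPPort, error) {
    n, ua, err := c.ReadFromUDP(buf)
    if err != nil {
        return 0, netaddr.IPPort{}, err
    }
    ipp, ok := netaddr.FromStdAddr(ua.IP, ua.Port, ua.Zone)
    if !ok {
        return 0, netaddr.IPPort{}, fmt.Errorf("invalid address %v", ua)
    }
    return n, ipp, nil
}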
With the upstream change and both of these fixes in place,
we have removed one large allocation per packet received.
name            old time/op    new time/op    delta
ReceiveFrom-8   16.7µs ± 5%    16.4µs ± 8%    ~        (p=0.310 n=5+5)

name            old alloc/op   new alloc/op   delta
ReceiveFrom-8   112B ± 0%      64B ± 0%       -42.86%  (p=0.008 n=5+5)

name            old allocs/op  new allocs/op  delta
ReceiveFrom-8   3.00 ± 0%      2.00 ± 0%      -33.33%  (p=0.008 n=5+5)
Co-authored-by: Sonia Appasamy <sonia@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
addrSet maintained duplicate lists of netaddr.IPPorts and net.UDPAddrs.
Unify to use the netaddr type only.
This makes (*Conn).ReceiveIPvN a bit uglier,
but that'll be cleaned up in a subsequent commit.
This is preparatory work to remove an allocation from ReceiveIPv4.
Co-authored-by: Sonia Appasamy <sonia@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
I based my estimate of the required timeout on locally observed
behavior. But CI machines are worse than my local machine.
16s was enough to reduce flakiness but not eliminate it. Bump it up again.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We removed the "fast retry" code from our wireguard-go fork.
As a result, pings can take longer to transit when retries are required.
Allow that.
Fixes #1277
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The fix can make this test run unconditionally.
This moves code from 5c619882bc for
testability but doesn't fix it yet. The #1282 problem remains (when I
wrote its wake-up mechanism, I forgot there were N DERP readers
funneling into 1 UDP reader, and the code just isn't correct at all
for that case).
Also factor out some test helper code from BenchmarkReceiveFrom.
The refactoring in magicsock.go for testability should have no
behavior change.
Magicsock started dropping all traffic internally when Tailscale is
shut down, to avoid spurious wireguard logspam. This made the benchmark
not receive anything. Setting a dummy private key is sufficient to get
magicsock to pass traffic for benchmarking purposes.
Fixes #1270.
Signed-off-by: David Anderson <danderson@tailscale.com>
Use tb.Cleanup to simplify both the API and the implementation.
One behavior change: When the number of goroutines shrinks, don't log.
I've never found these logs to be useful, and they frequently add noise.
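
The tb.Cleanup shape is roughly this (a simplified sketch of the
helper, not the exact code):

package sketch

import (
    "runtime"
    "testing"
)

// checkGoroutines records the goroutine count at the start of a test
// and registers a cleanup that flags growth. Shrinkage is deliberately
// not logged; those messages were only ever noise.
func checkGoroutines(tb testing.TB) {
    before := runtime.NumGoroutine()
    tb.Cleanup(func() {
        after := runtime.NumGoroutine()
        if after > before {
            tb.Errorf("goroutine count grew during test: %d -> %d", before, after)
        }
    })
}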
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is mostly code movement from the wireguard-go repo.
Most of the new wgcfg package corresponds to the wireguard-go wgcfg package.
wgengine/wgcfg/device{_test}.go was device/config{_test}.go.
There were substantive but simple changes to device_test.go to remove
internal package device references.
The API of device.Config (now wgcfg.DeviceConfig) grew an error return;
we previously logged the error and threw it away.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Rewrite log lines on the fly, based on the set of known peers.
This enables us to use upstream wireguard-go logging,
but maintain the Tailscale-style peer public key identifiers
that the rest of our systems (and people) expect.
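
Conceptually, the rewriting is a logf wrapper that substitutes
Tailscale-style identifiers for the peer strings wireguard-go prints;
a much-simplified sketch (the real code keeps the mapping in sync as
the peer set changes):

package sketch

import (
    "fmt"
    "strings"
)

type logf func(format string, args ...interface{})

// rewritingLogf wraps base and rewrites any known wireguard-go peer
// identifier in the formatted message into the Tailscale-style public
// key form. peerStrings is a hypothetical mapping from wireguard-go's
// identifier for a peer to ours.
func rewritingLogf(base logf, peerStrings map[string]string) logf {
    return func(format string, args ...interface{}) {
        msg := fmt.Sprintf(format, args...)
        for wg, ts := range peerStrings {
            msg = strings.ReplaceAll(msg, wg, ts)
        }
        base("%s", msg)
    }
}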
Fixes #1183
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
22507adf54 stopped relying on
our fork of wireguard-go's UpdateDst callback.
As a result, we can unwind that code,
and the extra return value of ReceiveIPv{4,6}.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
TwoDevicePing explicitly tests the behavior of the legacy codepath;
everything else is happy to assume that code no longer exists.
Signed-off-by: David Anderson <danderson@tailscale.com>
Previously, this benchmark relied on behavior of the legacy
receive codepath, which I changed in 22507adf. With this
change, the benchmark instead relies on the new active discovery
path.
Signed-off-by: David Anderson <danderson@tailscale.com>
This eliminates a dependency on wgcfg.Endpoint,
as part of the effort to eliminate our wireguard-go fork.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This makes connectivity between ancient and new tailscale nodes slightly
worse in some cases, but only in cases where the ancient version would
likely have failed to get connectivity anyway.
Signed-off-by: David Anderson <danderson@tailscale.com>
In sendDiscoMessage there is a check of whether the connection is
closed, which is not being reliably exercised by other tests.
This shows up in code coverage reports: the lines of code in
sendDiscoMessage are alternately added to and subtracted from
code coverage.
Add a test to specifically exercise and verify this code path.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
In derpWriteChanOfAddr, when we call derphttp.NewRegionClient(),
there is a check of whether the connection is already errored; if so,
it returns before grabbing the lock, which might already be held and
would therefore deadlock.
This corner case is not reliably exercised by other tests.
This shows up in code coverage reports: the lines of code in
derpWriteChanOfAddr are alternately added to and subtracted from
code coverage.
Add a test to specifically exercise this code path, and verify that
it doesn't deadlock.
This is the best tradeoff I could come up with:
+ the moment the code calls Err() to check for an error, we grab the
lock, so that if the code then tries to grab the lock itself it will
deadlock (and the test will catch it).
+ if a new call to Err() is added in this code path, only the first
one will be covered; the rest will not be tested.
+ this test doesn't verify whether the code checks Err() in the right
place, which ideally I guess it would.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
This is a replacement for the key-related parts
of the wireguard-go wgcfg package.
This is almost a straight copy/paste from the wgcfg package.
I have slightly changed some of the exported functions and types
to avoid stutter, added and tweaked some comments,
and removed some now-unused code.
To avoid having wireguard-go depend on this new package,
wgcfg will keep its key types.
We translate into and out of those types at the last minute.
These few remaining uses will be eliminated alongside
the rest of the wgcfg package.
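
The last-minute translation is just an array conversion, since both
key types are 32-byte arrays underneath; roughly (the package paths
and type names here are an assumption for illustration):

package sketch

import (
    "github.com/tailscale/wireguard-go/wgcfg"

    "tailscale.com/types/wgkey"
)

// Both wgcfg.Key and wgkey.Key are [32]byte underneath, so crossing
// the boundary is a plain type conversion. The exact package paths and
// type names here are an assumption for illustration.
func toWGCfg(k wgkey.Key) wgcfg.Key { return wgcfg.Key(k) }

func fromWGCfg(k wgcfg.Key) wgkey.Key { return wgkey.Key(k) }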
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We still use the packet.* alloc-free types in the data path, but
the compilation from netaddr to packet happens within the filter
package.
Signed-off-by: David Anderson <danderson@tailscale.com>
Updates #654. See that issue for a discussion of why
this timeout reduces flakiness, and what next steps are.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
1) we weren't waking up a discoEndpoint that once existed and
went idle for 5 minutes and then got a disco message again.
2) userspaceEngine.noteReceiveActivity had a buggy check; fixed
and added a test
There is a race in natlab where we might start shutdown while natlab is still running
a goroutine or two to deliver packets. This adds a small grace period
to try to receive them before continuing shutdown.
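
The grace period is just a short bounded drain before teardown,
roughly (with a hypothetical packets channel standing in for natlab's
delivery path):

package sketch

import "time"

// drainStragglers receives any packets still in flight from natlab's
// delivery goroutines for a short bounded time, then lets shutdown
// proceed. The packets channel is a hypothetical stand-in for the
// real delivery path.
func drainStragglers(packets <-chan []byte) {
    for {
        select {
        case <-packets:
            // straggler delivered during shutdown; keep draining
        case <-time.After(50 * time.Millisecond):
            return // grace period elapsed
        }
    }
}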
Signed-off-by: David Anderson <danderson@tailscale.com>
The first packet may take several seconds to transit, because setup
rates in wgengine may cause the initial WireGuard handshake init to
be dropped. So, we have to wait at least long enough for a
retransmit to correct the fault.
Signed-off-by: David Anderson <danderson@tailscale.com>
Active discovery lets us introspect the state of the network stack precisely
enough that it's unnecessary, and dropping the initial DERP packets greatly
slows down tests. Additionally, it's unrealistic since our production network
will never deliver _only_ discovery packets; it'll be all or nothing.
Signed-off-by: David Anderson <danderson@tailscale.com>
Uses natlab only, because the point of this active discovery test is
that it should get through a lot of obstacles.
Signed-off-by: David Anderson <danderson@tailscale.com>
wireguard-go uses 3 goroutines per peer (with reasonably large stacks
& buffers).
Rather than tell wireguard-go about all our peers, only tell it about
peers we're actively communicating with. That means we need hooks into
magicsock's packet receiving path and tstun's packet sending path to
lazily create a wireguard peer on demand from the network map.
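
The hook is conceptually tiny (a heavily simplified sketch with
hypothetical names; the real code lives in the engine and coalesces
reconfigs off the packet path):

package sketch

import "sync"

// lazyPeers tracks which peers wireguard-go has been told about.
// noteActivity is called from magicsock's receive path and tstun's
// send path; the first packet seen for an unknown peer triggers a
// reconfig that adds that peer from the network map. Names here are
// hypothetical, not the real engine API.
type lazyPeers struct {
    mu         sync.Mutex
    configured map[string]bool // peer public key -> known to wireguard-go?
    reconfig   func()          // re-programs wireguard-go from the netmap
}

func (l *lazyPeers) noteActivity(peerKey string) {
    l.mu.Lock()
    defer l.mu.Unlock()
    if l.configured[peerKey] {
        return // wireguard-go already has this peer
    }
    l.configured[peerKey] = true
    go l.reconfig() // the real code coalesces this off the hot path
}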
This frees up lots of memory for iOS (where we have almost nothing
left for larger domains with many users).
We should ideally do this in wireguard-go itself one day, but that'd
be a pretty big change.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The new interface lets implementors more precisely distinguish
local traffic from forwarded traffic, and applies different
forwarding logic within Machines for each type. This allows
Machines to be packet forwarders, which didn't quite work
with the implementation of Inject.
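
A hypothetical shape for such an interface (names invented for
illustration; not natlab's actual API):

package sketch

import "inet.af/netaddr"

// packetHandler is a hypothetical shape, invented for illustration
// only; it is not natlab's actual API. The point is that a Machine's
// handler can treat traffic addressed to the Machine differently from
// traffic it is merely asked to forward, which is what lets a Machine
// act as a packet forwarder.
type packetHandler interface {
    // handleLocal handles a packet destined for this Machine itself.
    handleLocal(payload []byte, src, dst netaddr.IPPort)
    // handleForward handles a packet this Machine should forward on.
    handleForward(payload []byte, src, dst netaddr.IPPort)
}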
Signed-off-by: David Anderson <danderson@tailscale.com>
The test demonstrates that magicsock can traverse two stateful
firewalls facing each other, that each require localhost to
initiate connections.
Signed-off-by: David Anderson <danderson@tailscale.com>
This adds a new magicsock endpoint type only used when both sides
support discovery (that is, are advertising a discovery
key). Otherwise the old code is used.
So far the new code only communicates over DERP as proof that the new
code paths are wired up. None of the actual discovery messaging is
implemented yet.
Support for discovery (generating and advertising a key) is still
behind an environment variable for now.
Updates #483
And track known peers.
Doesn't do anything with the messages yet (nor does it send any).
This is the start of docs on the message format; more will come in
subsequent changes.
Updates #483
This allows tailscaled's own traffic to bypass Tailscale-managed routes,
so that things like tailscale-provided default routes don't break
tailscaled itself.
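
On Linux, this kind of bypass is typically done by marking
tailscaled's own sockets and pairing that with a routing rule that
exempts marked traffic; a sketch of the socket-marking half (the mark
value and the corresponding ip rule are illustrative, not necessarily
the exact ones used):

package sketch

import (
    "syscall"

    "golang.org/x/sys/unix"
)

// markSocket sets SO_MARK on a socket so that policy routing (an
// `ip rule` matching this fwmark) can send tailscaled's own traffic
// via the main routing table, bypassing Tailscale-managed routes.
// Linux-only; the mark value passed in is illustrative.
func markSocket(c syscall.RawConn, mark int) error {
    var serr error
    err := c.Control(func(fd uintptr) {
        serr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_MARK, mark)
    })
    if err != nil {
        return err
    }
    return serr
}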
Progress on #144.
Signed-off-by: David Anderson <danderson@tailscale.com>
Instead of hard-coding the DERP map (except for cmd/tailscale netcheck
for now), get it from the control server at runtime.
And make the DERP map support multiple nodes per region with clients
picking the first one that's available. (The server will vary the
order presented to clients for load balancing.)
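
The region/node shape is roughly the following (abbreviated; the real
definitions live in the tailcfg package):

package sketch

// Abbreviated sketch of the DERP map shape; the real definitions live
// in the tailcfg package. A client tries a region's nodes in the
// order given and uses the first one that's available; the server can
// reorder Nodes to spread load.
type DERPMap struct {
    Regions map[int]*DERPRegion
}

type DERPRegion struct {
    RegionID int
    Nodes    []*DERPNode
}

type DERPNode struct {
    Name     string
    RegionID int
    HostName string
}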
This deletes the stunner package, merging it into the netcheck package
instead, to minimize all the config hooks that would've been
required.
Also fix some test flakes & races.
Fixes #387 (Don't hard-code the DERP map)
Updates #388 (Add DERP region support)
Fixes #399 (wgengine: flaky tests)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This didn't catch anything yet, but it's good practice for detecting
goroutine leaks that we might not find otherwise.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
If a test calls log.Printf, 'go test' horrifyingly rearranges the
output to no longer be in chronological order, which makes debugging
virtually impossible. Let's stop that from happening by making
log.Printf panic if called from any module, no matter how deep, during
tests.
This required us to change the default error handler in at least one
http.Server, as well as plumbing a bunch of logf functions around,
especially in magicsock and wgengine, but also in logtail and backoff.
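
Mechanically, the enforcement amounts to pointing the standard logger
at a writer that panics for the duration of the tests; a sketch (not
the exact helper):

package sketch

import (
    "log"
    "os"
    "testing"
)

// panicWriter panics on any write, so a stray log.Printf from any
// package, no matter how deep, fails the test immediately with a
// stack trace pointing at the offender.
type panicWriter struct{}

func (panicWriter) Write(p []byte) (int, error) {
    panic("unexpected log.Printf during test: " + string(p))
}

func TestMain(m *testing.M) {
    log.SetOutput(panicWriter{})
    os.Exit(m.Run())
}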
To add insult to injury, 'go test' also rearranges the output when a
parent test has multiple sub-tests (all the sub-tests' t.Logf output is
printed after all the parent test's t.Logf output), so we need to screw
around with a special Logf that can point at the "current" t
(current_t.Logf) in some places. Probably our entire way of using
subtests is wrong, since 'go test' would probably like to run them all
in parallel if you called t.Parallel(), but it definitely can't because
they're all manipulating the shared state created by the parent test.
They should probably all be separate toplevel tests instead, with
common setup/teardown logic. But that's a job for another time.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
Right now, filtering and packet injection in wgengine depend
on a patch to wireguard-go that probably isn't suitable for upstreaming.
This need not be the case: wireguard-go/tun.Device is an interface.
For example, faketun.go implements it to mock a TUN device for testing.
This patch implements the same interface to provide filtering
and packet injection at the tunnel device level,
at which point the wireguard-go patch should no longer be necessary.
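
The wrapper is conceptually a tun.Device that delegates to the real
device and interposes on reads and writes; a rough sketch (the
Read/Write signatures with an offset parameter match wireguard-go's
tun.Device of this era, the fork's import path is an assumption, and
the filter hooks are placeholders):

package sketch

import "github.com/tailscale/wireguard-go/tun"

// filteringTUN wraps a real tun.Device, passing most methods through
// via embedding while interposing on Read (outbound packets from the
// OS) and Write (inbound packets from WireGuard) to filter traffic
// and to inject packets. The filter hooks are placeholders.
type filteringTUN struct {
    tun.Device // the real device; other methods pass through

    injected chan []byte // packets injected as if read from the OS
}

func (t *filteringTUN) Read(buf []byte, offset int) (int, error) {
    select {
    case pkt := <-t.injected:
        return copy(buf[offset:], pkt), nil
    default:
    }
    n, err := t.Device.Read(buf, offset)
    // ...run the outbound filter over buf[offset:offset+n] here...
    return n, err
}

func (t *filteringTUN) Write(buf []byte, offset int) (int, error) {
    // ...run the inbound filter over buf[offset:] here, dropping the
    // packet (and reporting success) if the filter rejects it...
    return t.Device.Write(buf, offset)
}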
This patch has the following performance impact on i7-7500U @ 2.70GHz,
tested in the following namespace configuration:
┌────────────────┐ ┌─────────────────────────────────┐ ┌────────────────┐
│ $ns1 │ │ $ns0 │ │ $ns2 │
│ client0 │ │ tailcontrol, logcatcher │ │ client1 │
│ ┌─────┐ │ │ ┌──────┐ ┌──────┐ │ │ ┌─────┐ │
│ │vethc│───────┼────┼──│vethrc│ │vethrs│──────┼─────┼──│veths│ │
│ ├─────┴─────┐ │ │ ├──────┴────┐ ├──────┴────┐ │ │ ├─────┴─────┐ │
│ │10.0.0.2/24│ │ │ │10.0.0.1/24│ │10.0.1.1/24│ │ │ │10.0.1.2/24│ │
│ └───────────┘ │ │ └───────────┘ └───────────┘ │ │ └───────────┘ │
└────────────────┘ └─────────────────────────────────┘ └────────────────┘
Before:
---------------------------------------------------
| TCP send | UDP send |
|------------------------|------------------------|
| 557.0 (±8.5) Mbits/sec | 3.03 (±0.02) Gbits/sec |
---------------------------------------------------
After:
---------------------------------------------------
| TCP send | UDP send |
|------------------------|------------------------|
| 544.8 (±1.6) Mbits/sec | 3.13 (±0.02) Gbits/sec |
---------------------------------------------------
The impact on receive performance is similar.
Signed-off-by: Dmytro Shynkevych <dmytro@tailscale.com>
It used to make assumptions based on having Anycast IPs that are super
near. Now we're intentionally going to a bunch of different distant
IPs to measure latency.
Also, optimize how the hairpin detection works. No need to STUN on
that socket. Just use that separate socket for sending, once we know
the other UDP4 socket's endpoint. The trick is: make our test probe
also a STUN packet, so it fits through magicsock's existing STUN
routing.
This drops netcheck from ~5 seconds to ~250-500ms.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Failure to do this leads to fd exhaustion at -count=10000,
and increasingly poor execution north of -count=100.
Signed-off-by: David Anderson <danderson@tailscale.com>
Failure to do so triggers either a data race or a panic
in the testing package, due to racy use of t.Logf.
Signed-off-by: David Anderson <danderson@tailscale.com>