Uses natlab only, because the point of this active discovery test is
that it should get through a lot of obstacles.
Signed-off-by: David Anderson <danderson@tailscale.com>
wireguard-go uses 3 goroutines per peer (with reasonably large stacks
& buffers).
Rather than tell wireguard-go about all our peers, only tell it about
peers we're actively communicating with. That means we need hooks into
magicsock's packet receiving path and tstun's packet sending path to
lazily create a wireguard peer on demand from the network map.
This frees up lots of memory for iOS (where we have almost nothing
left for larger domains with many users).
We should ideally do this in wireguard-go itself one day, but that'd
be a pretty big change.
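Roughly, the idea looks like this (illustrative names, not the exact
magicsock/tstun hooks):

    // ensurePeerConfigured is called from the receive (magicsock) and
    // send (tstun) paths. The first time we actually exchange traffic
    // with a peer, it gets added to wireguard-go's config from the
    // network map.
    func (e *userspaceEngine) ensurePeerConfigured(k key.Public) {
        e.mu.Lock()
        defer e.mu.Unlock()
        if e.wgConfigured[k] {
            return
        }
        e.wgConfigured[k] = true
        e.pushPeerToWireGuard(k) // reconfig with just this peer added
    }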
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The new interface lets implementors distinguish local traffic from
forwarded traffic more precisely, and lets Machines apply different
forwarding logic to each type. This allows Machines to act as packet
forwarders, which didn't quite work with the Inject-based
implementation.
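The shape of it is roughly (illustrative names, not the exact natlab
API):

    // PacketHandler processes packets arriving on a Machine's
    // interfaces, distinguishing the two kinds of traffic.
    type PacketHandler interface {
        // HandleLocal handles a packet addressed to this Machine.
        HandleLocal(p *Packet)
        // HandleForward handles a packet this Machine should
        // forward on toward another destination.
        HandleForward(p *Packet)
    }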
Signed-off-by: David Anderson <danderson@tailscale.com>
The test demonstrates that magicsock can traverse two stateful
firewalls facing each other, that each require localhost to
initiate connections.
Signed-off-by: David Anderson <danderson@tailscale.com>
This adds a new magicsock endpoint type only used when both sides
support discovery (that is, are advertising a discovery
key). Otherwise the old code is used.
So far the new code only communicates over DERP, as proof that the new
code paths are wired up. None of the actual discovery messaging is
implemented yet.
Support for discovery (generating and advertising a key) is still
behind an environment variable for now.
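The selection is roughly (illustrative names, not the exact magicsock
code):

    // endpointFor picks the wireguard endpoint implementation for a
    // peer. The new discovery path is used only when discovery is
    // enabled locally and the peer advertises a discovery key.
    func (c *Conn) endpointFor(peer *tailcfg.Node) endpoint {
        if c.discoEnabled && !peer.DiscoKey.IsZero() {
            return c.newDiscoEndpoint(peer) // new code path
        }
        return c.legacyEndpoint(peer) // old code path
    }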
Updates #483
And track known peers.
Doesn't yet do anything with the messages (nor does it send any yet).
Start of docs on the message format. More will come in subsequent changes.
Updates #483
This allows tailscaled's own traffic to bypass Tailscale-managed routes,
so that things like tailscale-provided default routes don't break
tailscaled itself.
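On Linux, one way to do this is to mark tailscaled's own sockets and
add an "ip rule" that routes marked traffic around the
Tailscale-managed routes; a sketch (the mark value and helper names
are illustrative):

    // Mark the socket so a fwmark routing rule can send tailscaled's
    // own traffic via the main routing table.
    raw, err := pconn.SyscallConn()
    if err == nil {
        raw.Control(func(fd uintptr) {
            unix.SetsockoptInt(int(fd), unix.SOL_SOCKET,
                unix.SO_MARK, bypassMark)
        })
    }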
Progress on #144.
Signed-off-by: David Anderson <danderson@tailscale.com>
Instead of hard-coding the DERP map (except for cmd/tailscale netcheck
for now), get it from the control server at runtime.
And make the DERP map support multiple nodes per region, with clients
picking the first one that's available. (The server will vary the
order presented to clients for load balancing.)
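Node selection within a region is then roughly (illustrative, not the
exact client code):

    // firstAvailableNode returns the first node in the region we can
    // use (the availability check is illustrative).
    func firstAvailableNode(r *tailcfg.DERPRegion) *tailcfg.DERPNode {
        for _, n := range r.Nodes {
            if nodeAvailable(n) {
                return n
            }
        }
        return nil
    }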
This deletes the stunner package, merging it into the netcheck package
instead, to minimize all the config hooks that would've been
required.
Also fix some test flakes & races.
Fixes #387 (Don't hard-code the DERP map)
Updates #388 (Add DERP region support)
Fixes #399 (wgengine: flaky tests)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This didn't catch anything yet, but it's good practice for detecting
goroutine leaks that we might not find otherwise.
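A minimal version of the check looks like this (a real one should
compare stack traces, not just counts):

    func TestNoLeaks(t *testing.T) {
        before := runtime.NumGoroutine()
        // ... exercise the code and shut everything down ...
        if after := runtime.NumGoroutine(); after > before {
            t.Errorf("goroutine leak: %d before, %d after",
                before, after)
        }
    }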
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
If a test calls log.Printf, 'go test' horrifyingly rearranges the
output to no longer be in chronological order, which makes debugging
virtually impossible. Let's stop that from happening by making
log.Printf panic if called from any module, no matter how deep, during
tests.
This required us to change the default error handler in at least one
http.Server, as well as plumbing a bunch of logf functions around,
especially in magicsock and wgengine, but also in logtail and backoff.
To add insult to injury, 'go test' also rearranges the output when a
parent test has multiple sub-tests (all the sub-tests' t.Logf output is
printed after all the parent test's t.Logf output), so we need to screw
around with a special Logf that can point at the "current" t
(current_t.Logf) in some places. Probably our entire way of using
subtests is wrong, since 'go test' would probably like to run them all
in parallel if you called t.Parallel(), but it definitely can't because
they're all manipulating the shared state created by the parent test.
They should probably all be separate toplevel tests instead, with
common setup/teardown logic. But that's a job for another time.
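The mechanism itself is simple; a sketch (names illustrative):

    // panicWriter makes any stray log.Printf fail loudly with a
    // stack trace pointing at the offender.
    type panicWriter struct{}

    func (panicWriter) Write(p []byte) (int, error) {
        panic("unexpected log.Printf during test: " + string(p))
    }

    func TestMain(m *testing.M) {
        log.SetOutput(panicWriter{})
        os.Exit(m.Run())
    }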
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
Right now, filtering and packet injection in wgengine depend
on a patch to wireguard-go that probably isn't suitable for upstreaming.
This need not be the case: wireguard-go/tun.Device is an interface.
For example, faketun.go implements it to mock a TUN device for testing.
This patch implements the same interface to provide filtering
and packet injection at the tunnel device level,
at which point the wireguard-go patch should no longer be necessary.
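A sketch of the wrapper (the Read signature matches wireguard-go's
tun.Device of this era; the filter hook is illustrative):

    // filteredDevice wraps a tun.Device; methods not shown are
    // forwarded via the embedded Device.
    type filteredDevice struct {
        tun.Device
        allow func(pkt []byte) bool // filter verdict
    }

    func (d filteredDevice) Read(buf []byte, offset int) (int, error) {
        for {
            n, err := d.Device.Read(buf, offset)
            if err != nil || d.allow(buf[offset:offset+n]) {
                return n, err
            }
            // drop the rejected packet and read the next one
        }
    }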
This patch has the following performance impact on i7-7500U @ 2.70GHz,
tested in the following namespace configuration:
┌────────────────┐ ┌─────────────────────────────────┐ ┌────────────────┐
│ $ns1 │ │ $ns0 │ │ $ns2 │
│ client0 │ │ tailcontrol, logcatcher │ │ client1 │
│ ┌─────┐ │ │ ┌──────┐ ┌──────┐ │ │ ┌─────┐ │
│ │vethc│───────┼────┼──│vethrc│ │vethrs│──────┼─────┼──│veths│ │
│ ├─────┴─────┐ │ │ ├──────┴────┐ ├──────┴────┐ │ │ ├─────┴─────┐ │
│ │10.0.0.2/24│ │ │ │10.0.0.1/24│ │10.0.1.1/24│ │ │ │10.0.1.2/24│ │
│ └───────────┘ │ │ └───────────┘ └───────────┘ │ │ └───────────┘ │
└────────────────┘ └─────────────────────────────────┘ └────────────────┘
Before:
---------------------------------------------------
| TCP send | UDP send |
|------------------------|------------------------|
| 557.0 (±8.5) Mbits/sec | 3.03 (±0.02) Gbits/sec |
---------------------------------------------------
After:
---------------------------------------------------
| TCP send | UDP send |
|------------------------|------------------------|
| 544.8 (±1.6) Mbits/sec | 3.13 (±0.02) Gbits/sec |
---------------------------------------------------
The impact on receive performance is similar.
Signed-off-by: Dmytro Shynkevych <dmytro@tailscale.com>
It used to make assumptions based on having Anycast IPs that are super
near. Now we're intentionally going to a bunch of different distant
IPs to measure latency.
Also, optimize how the hairpin detection works. No need to STUN on
that socket. Just use that separate socket for sending, once we know
the other UDP4 socket's endpoint. The trick is: make our test probe
also a STUN packet, so it fits through magicsock's existing STUN
routing.
This drops netcheck from ~5 seconds to ~250-500ms.
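The probe itself is tiny (assuming the stun package's NewTxID/Request
helpers; hairpinConn and publicUDP4Addr are illustrative):

    // Send a STUN binding request from the separate hairpin socket to
    // our own public UDP4 endpoint. If it arrives back on the main
    // socket, it rides the existing STUN demux path and we know
    // hairpinning works.
    txID := stun.NewTxID()
    hairpinConn.WriteTo(stun.Request(txID), publicUDP4Addr)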
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Failure to do this leads to fd exhaustion at -count=10000,
and increasingly poor execution north of -count=100.
Signed-off-by: David Anderson <danderson@tailscale.com>
Failure to do so triggers either a data race or a panic
in the testing package, due to racy use of t.Logf.
Signed-off-by: David Anderson <danderson@tailscale.com>
wireguard-go closes magicsock, and expects this to unblock reads
so that its internal goroutines can wind down. We were incorrectly
blocking the read indefinitely and breaking this contract.
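The fix follows the usual pattern (names illustrative):

    func (d *fakeTUN) Read(buf []byte, offset int) (int, error) {
        select {
        case pkt := <-d.inbound:
            return copy(buf[offset:], pkt), nil
        case <-d.closeCh: // closed by Close(), unblocking reads
            return 0, io.EOF
        }
    }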
Signed-off-by: David Anderson <danderson@tailscale.com>
It's extremely flaky in several dimensions, as well as very slow.
It's making the CI completely red all the time without telling us
useful information.
Set RUN_CURSED_TESTS=1 to run locally.
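The gate is just a skip at the top of the test (test name
illustrative):

    func TestTwoNodes(t *testing.T) {
        if os.Getenv("RUN_CURSED_TESTS") == "" {
            t.Skip("flaky; set RUN_CURSED_TESTS=1 to run")
        }
        // ...
    }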
This change just alters the semantics of the one flaky test, without
trying to speed up timeouts on the others. Empirically, speeding up
the timeouts causes _more_ flakes right now :(
The remaining flake occurs due to a mysterious packet loss. This
doesn't affect normal tailscaled operations, so until I track down
where the loss occurs and fix it, the flaky test is going to be
lenient about packet loss (but not about whether the spray logic
worked).
Signed-off-by: David Anderson <danderson@tailscale.com>
It previously passed incorrectly due to bugs. With those fixed,
it becomes flaky for two reasons. One of them is the wireguard handshake
race, which can eat the first sprayed packet and prevent roamAddr
discovery. This change fixes that failure, by spreading the test
traffic out enough that additional spraying occurs.
Signed-off-by: David Anderson <danderson@tailscale.com>