Consider:
Hard NAT (A) <---> Hard NAT w/ mapped port (B)
If A sends a packet to B's mapped port, A can disco ping B directly,
with low latency and without DERP.
But B could not establish a path back to A and had to use DERP, even
though it was already logging about A's endpoint and adding a mapping
for it for other purposes (the wireguard conn.Endpoint lookup also
needed it).
This adds that tracking to discoEndpoint too, so it can be used to
find a path back.
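
A sketch of what that tracking amounts to, with hypothetical names and a
made-up freshness window rather than the real magicsock types: remember
where we last heard from the peer and prefer that address when replying.

    package sketch // hypothetical illustration, not the real magicsock code

    import (
        "net/netip"
        "sync"
        "time"
    )

    // discoEndpointSketch is a simplified stand-in for magicsock's
    // discoEndpoint, showing only the return-path tracking.
    type discoEndpointSketch struct {
        mu           sync.Mutex
        lastRecvFrom netip.AddrPort // source of the most recent packet from the peer
        lastRecvAt   time.Time
    }

    // noteRecvFrom records where a packet from the peer arrived from, so
    // replies can be sent straight to the peer's NAT mapping.
    func (de *discoEndpointSketch) noteRecvFrom(src netip.AddrPort) {
        de.mu.Lock()
        defer de.mu.Unlock()
        de.lastRecvFrom, de.lastRecvAt = src, time.Now()
    }

    // replyAddr returns a direct address to send to, or ok=false meaning
    // the caller should fall back to DERP.
    func (de *discoEndpointSketch) replyAddr() (ap netip.AddrPort, ok bool) {
        de.mu.Lock()
        defer de.mu.Unlock()
        if de.lastRecvFrom.IsValid() && time.Since(de.lastRecvAt) < 45*time.Second {
            return de.lastRecvFrom, true
        }
        return netip.AddrPort{}, false
    }
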
Fixes tailscale/corp#556
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
For example:
$ tailscale ping -h
USAGE
  ping <hostname-or-IP>

FLAGS
  -c 10                    max number of pings to send
  -stop-once-direct true   stop once a direct path is established
  -verbose false           verbose output
$ tailscale ping mon.ts.tailscale.com
pong from monitoring (100.88.178.64) via DERP(sfo) in 65ms
pong from monitoring (100.88.178.64) via DERP(sfo) in 252ms
pong from monitoring (100.88.178.64) via [2604:a880:2:d1::36:d001]:41641 in 33ms
Fixes #661
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
1) We weren't waking up a discoEndpoint that once existed, went idle
for 5 minutes, and then got a disco message again.
2) userspaceEngine.noteReceiveActivity had a buggy check; this fixes
it and adds a test.
This removes the atomic bool that tried to track whether we needed to acquire
the lock on a future recursive call back into magicsock. Unfortunately that
hack doesn't work because we also had a lock ordering issue between magicsock
and userspaceEngine (see issue). This documents that too.
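
To illustrate the ordering hazard with made-up types (not the real
magicsock or userspaceEngine code): if one path takes the engine lock and
then the magicsock lock while another path takes them in the opposite
order, the two goroutines can deadlock, which is why an atomic flag on the
recursive path can't rescue the hack.

    package sketch // hypothetical illustration of a lock-order inversion

    import "sync"

    type engine struct{ wgLock sync.Mutex }
    type conn struct{ mu sync.Mutex }

    // Path 1: engine work that calls into magicsock while holding wgLock.
    func (e *engine) reconfig(c *conn) {
        e.wgLock.Lock()
        defer e.wgLock.Unlock()
        c.rebind() // acquires c.mu while e.wgLock is held
    }

    func (c *conn) rebind() {
        c.mu.Lock()
        defer c.mu.Unlock()
    }

    // Path 2: magicsock work that calls back into the engine while holding c.mu.
    func (c *conn) noteActivity(e *engine) {
        c.mu.Lock()
        defer c.mu.Unlock()
        e.noteReceiveActivity() // acquires e.wgLock while c.mu is held
    }

    func (e *engine) noteReceiveActivity() {
        e.wgLock.Lock() // deadlocks if Path 1 is waiting in rebind
        defer e.wgLock.Unlock()
    }
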
Fixes #644
If a node is behind a hard NAT and is using an explicit local port
number, assume it might have manually mapped that port and add its
public IPv4 address with the local tailscaled's port number as a
candidate endpoint.
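
As a hypothetical helper (the real change plumbs this through the endpoint
lists with different names), the heuristic is roughly:

    package sketch // hypothetical illustration

    import "net/netip"

    // guessPortMappedEndpoint: if a node is behind a hard NAT and uses a
    // fixed local port, guess that the same port may be forwarded on its
    // public IPv4 and offer that pairing as a candidate endpoint.
    func guessPortMappedEndpoint(publicIPv4 netip.Addr, localPort uint16, hardNAT bool, candidates []netip.AddrPort) []netip.AddrPort {
        if !hardNAT || localPort == 0 || !publicIPv4.Is4() {
            return candidates
        }
        return append(candidates, netip.AddrPortFrom(publicIPv4, localPort))
    }
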
A comparison operator was backwards.
The bad case went:
* device A sends a packet to B at t=1s
* B gets added to A's wireguard config
* B gets packet
(5 minutes pass)
* some other activity happens, causing B to expire and
  be removed from A's network map, since it's been over
  5 minutes since any sent or received activity
* device A sends a packet to B at t=5m1s
* normally, B would get added back, but the old send
time was not zero (we sent earlier!) and the time
comparison was backwards, so we never regenerated
the wireguard config.
This also refactors the code for legibility and moves constants up
top, with comments.
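
The shape of the corrected check, with made-up names and threshold (not
the actual wgengine code):

    package sketch // hypothetical illustration

    import "time"

    // lazyPeerIdleTimeout is a stand-in for the real trim threshold.
    const lazyPeerIdleTimeout = 5 * time.Minute

    // peerWantedInConfig reports whether a peer belongs in the generated
    // wireguard config: yes if it was active recently enough. The bug was
    // a comparison pointing the wrong way, so a non-zero old send time
    // kept the peer trimmed even after new traffic for it arrived.
    func peerWantedInConfig(lastSend, lastRecv, now time.Time) bool {
        last := lastSend
        if lastRecv.After(last) {
            last = lastRecv
        }
        return !last.IsZero() && now.Sub(last) < lazyPeerIdleTimeout
    }
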
The OS tries to send these but we drop them. No need to worry the
user with log spam saying that we're dropping them.
Fixes #402
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Starting with fe68841dc7, some e2e tests
got flaky. Rather than debug them (they're gnarly), just revert to the old
behavior as far as those tests are concerned. The tests were somehow
using magicsock without a private key and expecting it to do ... something.
My goal with fe68841dc7 was to stop log spam
and unnecessary work I saw on the iOS app when stopping the app.
Instead, only stop doing that work on any transition from
once-had-a-private-key to no-longer-have-a-private-key. That fixes
what I wanted to fix while still making the mysterious e2e tests
happy.
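
A sketch of that edge-triggered behavior, with hypothetical types standing
in for the real magicsock fields:

    package sketch // hypothetical illustration

    type privateKey [32]byte

    func (k privateKey) isZero() bool { return k == privateKey{} }

    type connSketch struct {
        privateKey privateKey
    }

    // setPrivateKey runs the teardown work only on the transition from
    // "had a private key" to "no longer has one", so code paths (and
    // tests) that never had a key in the first place are left alone.
    func (c *connSketch) setPrivateKey(newKey privateKey) {
        hadKey := !c.privateKey.isZero()
        c.privateKey = newKey
        if hadKey && newKey.isZero() {
            c.onLostPrivateKey() // hypothetical hook for the cleanup work
        }
    }

    func (c *connSketch) onLostPrivateKey() {
        // stop periodic work, quiet the logging, etc.
    }
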
There is a race in natlab where we might start shutdown while natlab is still running
a goroutine or two to deliver packets. This adds a small grace period to try
to receive them before continuing shutdown.
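
The grace period is essentially a bounded receive before teardown, as in
this hypothetical sketch:

    package sketch // hypothetical illustration

    import "time"

    // drainWithGrace gives natlab's in-flight delivery goroutines a short
    // window to hand over any last packet, then continues shutdown regardless.
    func drainWithGrace(packets <-chan []byte, grace time.Duration) {
        t := time.NewTimer(grace)
        defer t.Stop()
        select {
        case <-packets:
            // received the straggler; safe to continue shutdown
        case <-t.C:
            // nothing arrived within the grace period; continue anyway
        }
    }
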
Signed-off-by: David Anderson <danderson@tailscale.com>
The first packet to transit may take several seconds to do so, because
setup rates in wgengine may cause the initial WireGuard handshake
init to be dropped. So, we have to wait at least long enough for a
retransmit to correct the fault.
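
In test terms, the wait for the first packet has to exceed WireGuard's
handshake retransmit interval (on the order of five seconds); a
hypothetical helper might look like:

    package sketch // hypothetical illustration

    import (
        "context"
        "time"
    )

    // waitForFirstPacket keeps sending a probe until a reply arrives,
    // allowing enough total time for WireGuard to retransmit a handshake
    // init that was dropped before setup finished.
    func waitForFirstPacket(ctx context.Context, send func() error, gotReply <-chan struct{}) error {
        ctx, cancel := context.WithTimeout(ctx, 15*time.Second) // > one retransmit interval
        defer cancel()
        for {
            if err := send(); err != nil {
                return err
            }
            select {
            case <-gotReply:
                return nil
            case <-time.After(time.Second):
                // no reply yet; send another probe
            case <-ctx.Done():
                return ctx.Err()
            }
        }
    }
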
Signed-off-by: David Anderson <danderson@tailscale.com>
Active discovery lets us introspect the state of the network stack
precisely enough that dropping the initial DERP packets is unnecessary,
and doing so greatly slows down tests. Additionally, it's unrealistic,
since our production network will never deliver _only_ discovery
packets; it'll be all or nothing.
Signed-off-by: David Anderson <danderson@tailscale.com>
Uses natlab only, because the point of this active discovery test is going to be
that it should get through a lot of obstacles.
Signed-off-by: David Anderson <danderson@tailscale.com>
The deadlock was:
* Conn.Close was called, which acquired c.mu
* Then this goroutine got scheduled:
    if firstDerp {
        startGate = c.derpStarted
        go func() {
            dc.Connect(ctx)
            close(c.derpStarted)
        }()
    }
* The getRegion hook for that derphttp.Client then ran, which also
tried to acquire c.mu.
This change makes that hook first check whether we're already in a
closing state; if so, it pretends the region doesn't exist.
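
One way that check could look, with made-up names and an atomic flag
standing in for however the closing state is actually recorded:

    package sketch // hypothetical illustration

    import (
        "sync"
        "sync/atomic"
    )

    type derpRegionSketch struct{ ID int }

    type connCloseSketch struct {
        mu      sync.Mutex
        closing atomic.Bool // set by Close before it takes any other action
    }

    // getRegion is what the derphttp region hook might do after the fix:
    // if the Conn is already closing, report "no such region" without
    // touching c.mu, so a DERP connect racing with Close can't block on
    // the lock that Close is holding.
    func (c *connCloseSketch) getRegion(regionID int) *derpRegionSketch {
        if c.closing.Load() {
            return nil // pretend the region doesn't exist
        }
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.lookupRegionLocked(regionID)
    }

    func (c *connCloseSketch) lookupRegionLocked(regionID int) *derpRegionSketch {
        return &derpRegionSketch{ID: regionID}
    }
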