If a node is behind a hard NAT and is using an explicit local port
number, assume the user may have manually mapped that port and add the
node's public IPv4 address with the local tailscaled's port number as a
candidate endpoint.
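
A rough sketch of the heuristic, with hypothetical names (hardNAT,
localPort, publicV4, and the candidates slice are illustrations, not the
actual magicsock fields), using net/netip:

    // addLikelyPortMappedEndpoint guesses that a user behind a hard NAT
    // who configured an explicit local port has also forwarded that port,
    // and offers public-IPv4:local-port as one more candidate endpoint.
    func addLikelyPortMappedEndpoint(candidates []netip.AddrPort,
            hardNAT bool, localPort uint16, publicV4 netip.Addr) []netip.AddrPort {
        if hardNAT && localPort != 0 && publicV4.IsValid() {
            candidates = append(candidates, netip.AddrPortFrom(publicV4, localPort))
        }
        return candidates
    }
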
Empty NetworkMap text diffs were previously used to short-circuit
calls to magicsock's SetNetworkMap (via Engine.SetNetworkMap), but
that short-circuit went away in c7582dc2 (0.100.0-230).
Prior to c7582dc2 (notably, in 0.100.0-225 and below, down to
0.100.0), a change in only the disco key (as when a node restarts)
without endpoint changes (as would happen for a client not behind a
NAT with random ports) could result in a "netmap diff: (none)" being
printed and Engine.SetNetworkMap being skipped, leading to broken
discovery endpoints.
c7582dc2 fixed the Engine.SetNetworkMap skippage.
This change fixes the "netmap diff: (none)" print so we'll actually see when a peer
restarts with identical endpoints but a new discovery key.
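
Roughly, the per-peer comparison feeding that printed diff needs to look
at the disco key as well as the endpoints. A sketch with made-up types
(not the real netmap structures):

    type peer struct {
        DiscoKey  string
        Endpoints []string
    }

    // peerChanged reports whether a peer differs in a way worth showing in
    // the netmap diff, including a new disco key with identical endpoints.
    func peerChanged(a, b *peer) bool {
        if a.DiscoKey != b.DiscoKey {
            return true
        }
        if len(a.Endpoints) != len(b.Endpoints) {
            return true
        }
        for i := range a.Endpoints {
            if a.Endpoints[i] != b.Endpoints[i] {
                return true
            }
        }
        return false
    }
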
SIGPIPE can be generated when CLIs disconnect from tailscaled. This
should not terminate the process.
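
A minimal illustration of the intended behavior (the signal list and
handler shape are assumptions, not tailscaled's actual main loop): shut
down on SIGINT/SIGTERM, but swallow SIGPIPE.

    package main

    import (
        "log"
        "os"
        "os/signal"
        "syscall"
    )

    func main() {
        sigs := make(chan os.Signal, 1)
        signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM, syscall.SIGPIPE)
        for sig := range sigs {
            if sig == syscall.SIGPIPE {
                // A CLI client went away mid-write; not fatal.
                continue
            }
            log.Printf("shutting down on %v", sig)
            return
        }
    }
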
Signed-off-by: David Anderson <danderson@tailscale.com>
tailscaled receives a SIGPIPE when CLIs disconnect from it. We shouldn't
shut down in that case.
This reverts commit 43b271cb26.
Signed-off-by: David Anderson <danderson@tailscale.com>
Order of operations to trigger a problem:
- Start an already authed tailscaled, verify you can ping stuff.
- Run `tailscale up`. Notice you can no longer ping stuff.
The problem is that `tailscale up` stops the IPN state machine before
restarting it, which zeros out the packet filter but _not_ the packet
filter hash. Then, upon restarting IPN, the uncleared hash incorrectly
makes the code conclude that the filter doesn't need updating, and so
we stay with a zero filter (reject everything) forever.
The fix is simply to update the filterHash correctly in all cases,
so that running -> stopped -> running correctly changes the filter
at every transition.
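
In sketch form (engine, rule, hashRules, and installFilter are
hypothetical stand-ins, not the real wgengine types): the stored hash
must track whatever filter is actually installed, including the zero
"reject everything" one.

    // setFilter installs a new packet filter, skipping work only when the
    // new rules hash identically to what is already installed. Clearing
    // the filter updates the stored hash too, so a later restore is not
    // mistaken for "no change".
    func (e *engine) setFilter(rules []rule) {
        h := hashRules(rules) // hash of the new rule set, even if empty
        if h == e.filterHash {
            return
        }
        e.filterHash = h
        e.installFilter(rules)
    }
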
Signed-off-by: David Anderson <danderson@tailscale.com>
A comparison operator was backwards.
The bad case went:
* device A sends a packet to B at t=1s
* B gets added to A's wireguard config
* B gets packet
(5 minutes pass)
* some other activity happens, causing B to expire and
be removed from A's network map, since it's been over
5 minutes since any send or receive activity
* device A sends packet to B at t=5m1s
* normally, B would get added back, but the old send
time was not zero (we sent earlier!) and the time
comparison was backwards, so we never regenerated
the wireguard config.
This also refactors the code for legibility and moves constants up
top, with comments.
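
The fixed check looks roughly like this (names and the constant are
illustrative, not the actual wgengine code):

    // peerInactivityTimeout is how long a peer may be idle (no packets
    // sent or received) before it's dropped from the wireguard config.
    const peerInactivityTimeout = 5 * time.Minute

    // needsWireguardPeer reports whether a peer we're about to send to
    // must be (re)added to the wireguard config: either we've never
    // talked to it, or its last activity is too long ago.
    func needsWireguardPeer(lastActivity, now time.Time) bool {
        return lastActivity.IsZero() || now.Sub(lastActivity) > peerInactivityTimeout
    }
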
It appears that systemd has sensible defaults for limiting
crash loops:
DefaultStartLimitIntervalSec=10s
DefaultStartLimitBurst=5
Remove our insta-restart configuration so that systemd's defaults can
take effect.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
f81233524f changed a use of package 'path' to 'filepath'.
Restore it to 'path', with a comment.
Also, use the os.Executable-based fallback name in the case where the
binary itself doesn't have Go module information. That was overlooked in
the original code.
What I was probably actually hitting was exe caching issues where the
binary was updated on an SMB shared drive and I tried to run it with
the GUI exe still open, so Windows blends the two pages together and
causes all sorts of random corruption. I didn't know about that at the time.
Now, just call tryFixLogStateLocation unconditionally. The func itself will
bail out early on non-applicable OSes. (And rearrange it to return even a bit
earlier.)
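
Very roughly (the GOOS list below is a guess for illustration, not
necessarily the platforms the real func checks):

    func tryFixLogStateLocation(dir, cmdname string) {
        switch runtime.GOOS {
        case "linux", "freebsd", "openbsd":
            // Only these have a legacy log-state location worth migrating.
        default:
            // Non-applicable OS: nothing to do, bail out immediately.
            return
        }
        // ... migration logic ...
    }
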
We need to emit Prefs when it *has* changed, not when it hasn't.
A test is added separately in our e2e tests.
Fixes: #620
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
We were using the Go 'path' package, which doesn't handle backslashes
as path separators; path/filepath does.
However, the main bug turned out to be that we were not calling .Base()
on the path if version.ReadExe() fails, which it seems to do at least
on Windows 7. As a result, our logfile persistence was not working on
Windows, and logids would be regenerated on every restart.
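
A small demonstration of the difference (runnable as-is; the example
path is made up):

    package main

    import (
        "fmt"
        "path"
        "path/filepath"
    )

    func main() {
        exe := `C:\Program Files\Tailscale\tailscaled.exe`
        // package path only splits on '/', so Base returns the whole string:
        fmt.Println(path.Base(exe))
        // package filepath uses the OS separator, so on Windows this prints
        // "tailscaled.exe" (on other OSes it behaves like path.Base):
        fmt.Println(filepath.Base(exe))
    }
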
Affects: #620
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
So a backend in serve-an-error state (as used by Windows) can try to
create a new Engine again each time somebody re-connects (e.g. by
relaunching the GUI app).
(The proper fix is actually fixing the Windows issues, but this makes
things better in the short term.)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The OS tries to send these but we drop them. No need to worry the
user with log spam saying we're dropping them.
Fixes #402
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Starting with fe68841dc7, some e2e tests
got flaky. Rather than debug them (they're gnarly), just revert to the old
behavior as far as those tests are concerned. The tests were somehow
using magicsock without a private key and expecting it to do ... something.
My goal with fe68841dc7 was to stop log spam
and unnecessary work I saw on the iOS app when stopping the app.
Instead, only stop doing that work on any transition from
once-had-a-private-key to no-longer-have-a-private-key. That fixes
what I wanted to fix while still making the mysterious e2e tests
happy.
There is a race in natlab where we might start shutdown while natlab is still running
a goroutine or two to deliver packets. This adds a small grace period to try
to receive them before continuing shutdown.
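
The grace period is essentially a bounded wait, something like this
sketch (the channel and duration are illustrative, not natlab's actual
API):

    // drainWithGrace receives at most one in-flight packet, waiting no
    // longer than the grace period before letting shutdown continue.
    func drainWithGrace(incoming <-chan []byte, grace time.Duration) {
        select {
        case <-incoming:
            // got the straggler packet; safe to continue shutdown
        case <-time.After(grace):
            // nothing arrived within the grace period; shut down anyway
        }
    }
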
Signed-off-by: David Anderson <danderson@tailscale.com>
The first packet to transit may take several seconds to do so, because
setup rates in wgengine may result in the initial WireGuard handshake
init getting dropped. So, we have to wait at least long enough for a
retransmit to correct the fault.
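
In test terms that means a retry loop with a generous deadline rather
than a single send, along these lines (the ping callback, the deadline,
and the retry interval are placeholders):

    // waitForFirstPacket keeps probing until the peer answers or the
    // deadline (long enough to cover a dropped handshake init plus its
    // retransmit) expires. Imports: "errors", "time".
    func waitForFirstPacket(ping func() error, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := ping(); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return errors.New("no reply before deadline")
    }
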
Signed-off-by: David Anderson <danderson@tailscale.com>