Instead of iterating over the map to determine the preferred forwarder
on every packet (which could happen concurrently with map mutations),
store it separately in an atomic variable.
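Sketched below (with illustrative names, not the actual magicsock
types) is the shape of the change: mutations still recompute the
preferred forwarder under the lock, but the per-packet path is a
single atomic load.

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    type fwd struct{ name string }

    type multiForwarder struct {
        mu   sync.Mutex          // guards all
        all  map[*fwd]uint8      // forwarder -> priority index
        best atomic.Pointer[fwd] // preferred forwarder; lock-free reads (Go 1.19+)
    }

    // recomputeLocked picks the lowest-index forwarder and publishes it.
    func (m *multiForwarder) recomputeLocked() {
        var b *fwd
        bi := ^uint8(0)
        for f, i := range m.all {
            if i < bi {
                b, bi = f, i
            }
        }
        m.best.Store(b)
    }

    func (m *multiForwarder) add(f *fwd, idx uint8) {
        m.mu.Lock()
        defer m.mu.Unlock()
        m.all[f] = idx
        m.recomputeLocked()
    }

    // forward is the per-packet hot path: one atomic load, no map access.
    func (m *multiForwarder) forward(pkt []byte) {
        if f := m.best.Load(); f != nil {
            fmt.Printf("forwarding %d bytes via %s\n", len(pkt), f.name)
        }
    }

    func main() {
        m := &multiForwarder{all: map[*fwd]uint8{}}
        m.add(&fwd{name: "derp-1a"}, 0)
        m.forward([]byte("hi"))
    }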
Fixes #6445
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
The io/ioutil package has been deprecated as of Go 1.16 [1]. This commit
replaces the existing io/ioutil functions with their new definitions in
io and os packages.
Reference: https://golang.org/doc/go1.16#ioutil
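For reference, the mechanical mapping looks like this (an
illustrative snippet, not lines from the diff):

    package main

    import (
        "io"
        "log"
        "os"
        "strings"
    )

    func main() {
        // ioutil.ReadAll -> io.ReadAll
        data, err := io.ReadAll(strings.NewReader("hello"))
        if err != nil {
            log.Fatal(err)
        }
        // ioutil.WriteFile -> os.WriteFile
        if err := os.WriteFile("out.txt", data, 0644); err != nil {
            log.Fatal(err)
        }
        // ioutil.ReadFile -> os.ReadFile
        if _, err := os.ReadFile("out.txt"); err != nil {
            log.Fatal(err)
        }
        // ioutil.TempDir -> os.MkdirTemp; ioutil.Discard -> io.Discard
        dir, err := os.MkdirTemp("", "demo")
        if err != nil {
            log.Fatal(err)
        }
        defer os.RemoveAll(dir)
        io.Discard.Write(data)
    }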
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
In prep for a future change to have clients ping DERP connections
when their state is questionable, rather than aggressively tearing
them down and doing a heavy reconnect when their state is unknown.
We already support ping/pong in the other direction (servers probing
clients) so we already had the two frame types, but I'd never finished
this direction.
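A simplified sketch of the symmetry (toy frame types, not the real
derp wire constants): whichever side receives a ping echoes the
8-byte payload back in a pong.

    package main

    import "fmt"

    type frameType byte

    const (
        framePing frameType = iota // carries an 8-byte opaque payload
        framePong                  // echoes the ping payload back
    )

    type frame struct {
        typ     frameType
        payload [8]byte
    }

    // handleFrame replies to pings; send writes a frame to the peer.
    func handleFrame(f frame, send func(frame)) {
        switch f.typ {
        case framePing:
            send(frame{typ: framePong, payload: f.payload})
        case framePong:
            // match payload against an outstanding ping to gauge health
        }
    }

    func main() {
        handleFrame(frame{typ: framePing, payload: [8]byte{1}}, func(f frame) {
            fmt.Printf("pong reply: %v\n", f.payload)
        })
    }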
Updates #3619
Change-Id: I024b815d9db1bc57c20f82f80f95fb55fc9e2fcc
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
On about 1 out of 500 runs, TestSendFreeze failed:
derp_test.go:416: bob: unexpected message type derp.PeerGoneMessage
Closing alice before bob created a race.
If bob closed promptly, the test passed.
If bob closed slowly, and alice's disappearance caused
bob to receive a PeerGoneMessage before closing, the test failed.
Deflake the test by closing bob first.
With this fix, the test passed 12,000 times locally.
Fixes #2668
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
A public key should have at most one connection to a given
DERP node (or really: one connection to a node in a region).
But if people clone their machine keys (e.g. clone their VM, Raspberry
Pi SD card, etc), then we can get into a situation where a public key
is connected multiple times.
Originally, the DERP server handled this by just kicking out any
prior connection whenever a new one came. But this led to reconnect fights
where 2+ nodes were in hard loops trying to reconnect and kicking out
their peer.
Then a909d37a59 tried to add rate
limiting to how often that dup-kicking can happen, but empirically it
just doesn't work: it effectively leaks a bunch of goroutines and TCP
connections, tying them up for an hour or more while more and more
accumulate and waste memory, mostly because we were doing a time.Sleep
forever while not reading from their TCP connections.
Instead, just accept multiple connections per public key but track
which is the most recent. And if both are writing back and forth,
then optionally disable them both. That last part is only enabled in
tests for now. The current default policy is just last-sender-wins
while we gather the next round of stats.
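A much-simplified sketch of that policy (types hypothetical, not the
server's actual structures): keep every connection for a key, treat
the last sender as the active one, and only flag the set for
disabling when two connections keep alternating as sender.

    package main

    import "fmt"

    type conn struct{ id int }

    // dupClientSet tracks all live connections for one public key.
    type dupClientSet struct {
        conns       []*conn
        lastSender  *conn   // last-sender-wins: packets route via this conn
        sendHistory []*conn // recent senders, used to detect fighting
    }

    // noteSend records that c sent data; c becomes the active conn.
    func (s *dupClientSet) noteSend(c *conn) {
        s.lastSender = c
        s.sendHistory = append(s.sendHistory, c)
    }

    // fighting reports whether two conns keep alternating as sender,
    // the pattern that cloned machine keys produce.
    func (s *dupClientSet) fighting() bool {
        h := s.sendHistory
        for i := 1; i+1 < len(h); i++ {
            if h[i] != h[i-1] && h[i+1] == h[i-1] {
                return true
            }
        }
        return false
    }

    func main() {
        a, b := &conn{1}, &conn{2}
        s := &dupClientSet{conns: []*conn{a, b}}
        s.noteSend(a)
        s.noteSend(b)
        s.noteSend(a)
        fmt.Println(s.fighting()) // true: disable both (tests only, for now)
    }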
Updates #2751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
If a peer is connected to multiple nodes in a region (so
multiForwarder is in use) and then a node restarts and re-sends all
its additions, this bug in checking whether an element is already in
the multiForwarder could cause a one-time flip in which peer node we
forward to. Not a huge deal, but not written as intended.
Thanks to @lewgun for the bug report in #2141.
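The intended behavior, reduced to a toy model (names hypothetical):
re-adding a forwarder that is already present must be a no-op, not a
re-ranking of existing entries.

    package main

    import "fmt"

    type multiForwarder struct {
        fwd map[string]uint8 // forwarder -> index; lowest index wins
        idx uint8            // next index to hand out
    }

    func (f *multiForwarder) add(name string) {
        if _, ok := f.fwd[name]; ok {
            // Already tracked: a restarted node re-sending its
            // additions must not re-rank existing forwarders.
            return
        }
        f.fwd[name] = f.idx
        f.idx++
    }

    func (f *multiForwarder) preferred() string {
        best, bestIdx := "", ^uint8(0)
        for name, i := range f.fwd {
            if i < bestIdx {
                best, bestIdx = name, i
            }
        }
        return best
    }

    func main() {
        f := &multiForwarder{fwd: map[string]uint8{}}
        f.add("node-a")
        f.add("node-b")
        f.add("node-a") // restart replay: preferred stays node-a
        fmt.Println(f.preferred())
    }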
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This adds a handler on the DERP server for logging bytes sent and
received by clients of the server. It holds a connection open, tracks
each client's sent and received byte counts, and writes out a
JSON-marshalled object whenever those counts increase.
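Roughly the shape of the handler (the endpoint path and the JSON
field names here are assumptions, not taken from the change):

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type clientBytes struct {
        Key  string `json:"key"`
        Sent int64  `json:"sent"`
        Recv int64  `json:"recv"`
    }

    // counters would be fed by the DERP server's per-client stats;
    // stubbed here.
    func counters() []clientBytes {
        return []clientBytes{{Key: "nodekey:...", Sent: 42, Recv: 7}}
    }

    func serveTraffic(w http.ResponseWriter, r *http.Request) {
        enc := json.NewEncoder(w)
        last := map[string]clientBytes{}
        for {
            for _, c := range counters() {
                prev := last[c.Key]
                if c.Sent > prev.Sent || c.Recv > prev.Recv {
                    enc.Encode(c) // one JSON object per increase
                    if f, ok := w.(http.Flusher); ok {
                        f.Flush()
                    }
                    last[c.Key] = c
                }
            }
            select {
            case <-r.Context().Done():
                return
            case <-time.After(time.Second):
            }
        }
    }

    func main() {
        http.HandleFunc("/debug/traffic", serveTraffic)
        fmt.Println(http.ListenAndServe("localhost:8080", nil))
    }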
Signed-off-by: julianknodt <julianknodt@gmail.com>
No server support yet, but we want Tailscale 1.6 clients to be able to respond
to them when the server can do it.
Updates #1310
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
* advertise server's DERP public key following its ServerHello
* have client look for that DERP public key in the response
PeerCertificates
* let client advertise it's going into a "fast start" mode
if it finds it
* modify server to support that fast start mode by just not
sending the HTTP response header
Cuts down another round trip, bringing the latency of being able to
write our first DERP frame from SF to Bangalore from ~725ms
(3 RTT) to ~481ms (2 RTT: TCP and TLS).
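The client-side control flow, sketched (the exact encoding of the key
in the certificate is an assumption here; only the flow matters):

    package main

    import (
        "crypto/tls"
        "fmt"
        "strings"
    )

    // keyPrefix is an assumption for this sketch, not the real encoding.
    const keyPrefix = "derpkey"

    // serverKeyFromTLS scans the presented certs for one whose subject
    // carries the server's DERP public key.
    func serverKeyFromTLS(cs tls.ConnectionState) (hexKey string, ok bool) {
        for _, cert := range cs.PeerCertificates {
            if cn := cert.Subject.CommonName; strings.HasPrefix(cn, keyPrefix) {
                return strings.TrimPrefix(cn, keyPrefix), true
            }
        }
        return "", false
    }

    func main() {
        var cs tls.ConnectionState // with a real conn: tlsConn.ConnectionState()
        if key, ok := serverKeyFromTLS(cs); ok {
            fmt.Println("fast start: server DERP key is", key)
            // advertise fast start in the upgrade request, then write
            // DERP frames without waiting for the HTTP response header
        }
    }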
Fixes #693
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It just has a version number in it and it's not really needed.
Instead just return it as a normal Recv message type for those
that care (currently only tests).
Updates #150 (in that it shares the same goal: initial DERP latency)
Updates #199 (in that it removes some DERP versioning)
These aren't particularly performance critical,
but since I have an optimization pending for them,
it's worth having a corresponding benchmark.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
This benchmark is far from perfect: It mixes together
client and server. Still, it provides a starting point
for easy profiling.
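A self-contained analogue of that caveat (not the actual benchmark):
both endpoints run in one process over net.Pipe, so client and server
costs blend together, but it still gives a profiler something to chew on.

    package main

    import (
        "fmt"
        "io"
        "net"
        "testing"
    )

    func benchRoundTrip(b *testing.B) {
        c1, c2 := net.Pipe()
        defer c1.Close()
        defer c2.Close()
        go io.Copy(c2, c2) // "server": echo every byte back
        buf := make([]byte, 64)
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            if _, err := c1.Write(buf); err != nil {
                b.Fatal(err)
            }
            if _, err := io.ReadFull(c1, buf); err != nil {
                b.Fatal(err)
            }
        }
    }

    func main() {
        fmt.Println(testing.Benchmark(benchRoundTrip))
    }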
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
The magicsock derpReader was holding onto 65KB for each DERP
connection forever, just in case.
Make the derp{,http}.Client be in charge of memory instead. It can
reuse its bufio.Reader buffer space.
(The NewMeshClient constructor I added recently was, in retrospect,
gross at call sites, especially when it wasn't obvious that an empty
meshKey string meant a regular client.)
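A toy model of the ownership change (not the derp.Client API): the
client owns one reusable buffer, and Recv returns a slice into it
that is valid only until the next call, so idle connections no longer
pin a second 65KB copy elsewhere.

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "strings"
    )

    type client struct {
        br  *bufio.Reader
        buf [64 << 10]byte // reused across Recv calls
    }

    // Recv reads one toy length-prefixed frame. The returned slice
    // aliases c.buf and is only valid until the next Recv; callers
    // copy if they need to retain it.
    func (c *client) Recv() ([]byte, error) {
        n, err := c.br.ReadByte() // 1-byte length prefix in this toy protocol
        if err != nil {
            return nil, err
        }
        b := c.buf[:int(n)]
        if _, err := io.ReadFull(c.br, b); err != nil {
            return nil, err
        }
        return b, nil
    }

    func main() {
        c := &client{br: bufio.NewReader(strings.NewReader("\x05hello"))}
        msg, err := c.Recv()
        fmt.Printf("%q %v\n", msg, err)
    }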
This lets a trusted DERP client that knows a pre-shared key subscribe
to the connection list. Upon subscribing, they get the current set
of connected public keys, and then all changes over time.
This lets a set of DERP server peers within a region all stay connected to
each other and know which clients are connected to which nodes.
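Schematically (simplified, not the real frame types): a new watcher
first gets a "present" event replayed for every currently connected
key, then live deltas, so its view starts complete and stays consistent.

    package main

    import "fmt"

    type event struct {
        Key     string
        Present bool // true = connected, false = gone
    }

    type server struct {
        connected map[string]bool
        watchers  []chan event
    }

    // watch registers ch and synchronously replays the current peer set.
    func (s *server) watch(ch chan event) {
        for k := range s.connected {
            ch <- event{Key: k, Present: true}
        }
        s.watchers = append(s.watchers, ch)
    }

    // note records a connect/disconnect and fans it out to watchers.
    func (s *server) note(k string, present bool) {
        if present {
            s.connected[k] = true
        } else {
            delete(s.connected, k)
        }
        for _, ch := range s.watchers {
            ch <- event{Key: k, Present: present}
        }
    }

    func main() {
        s := &server{connected: map[string]bool{"key1": true}}
        ch := make(chan event, 16)
        s.watch(ch)
        s.note("key2", true)
        fmt.Println(<-ch, <-ch) // key1 present (replay), then key2 present
    }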
Updates #388
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Without the recent write deadline introduction, this test fails.
They still do interfere, but the interference is now bound by
the write deadline. Many improvements are possible.
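The mechanism the test exercises, in isolation (illustrative, not
from the test itself): a write to a stalled peer fails once the
deadline passes instead of blocking forever, so one frozen client can
only delay the others by the deadline.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        c1, c2 := net.Pipe() // c2 never reads: a frozen client
        defer c1.Close()
        defer c2.Close()
        c1.SetWriteDeadline(time.Now().Add(100 * time.Millisecond))
        _, err := c1.Write(make([]byte, 1))
        fmt.Println(err) // i/o timeout: the write is bounded, not stuck forever
    }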
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
I broke an invariant in 11048b8932 (it was even nicely
documented then).
Also clean up the test a bit from when I was debugging it.
Fixes #84
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>