This benchmark is far from perfect: It mixes together
client and server. Still, it provides a starting point
for easy profiling.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
This will make it easier for a human to tell what
version is deployed, for (say) correlating line numbers
in profiles or panics to corresponding source code.
It'll also let us observe version changes in Prometheus.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Active discovery lets us introspect the state of the network stack precisely
enough that it's unnecessary, and dropping the initial DERP packets greatly
slows down tests. Additionally, it's unrealistic since our production network
will never deliver _only_ discovery packets; it'll be all or nothing.
Signed-off-by: David Anderson <danderson@tailscale.com>
For various reasons (mostly during rollouts or config changes on our
side), nodes may end up connecting to a fallback DERP node in a
region, rather than the primary one we tell them about in the DERP
map.
Connecting to the "wrong" node is fine, but it's in our best interest
for all nodes in a domain to connect to the same node, to reduce
intra-region packet forwarding.
This adds a privileged frame type, used by the control system, that can
kick off a client connection when it's connected to the wrong node in a
region. The client then hopefully reconnects immediately to the correct
location. (If not, we can leave it alone and stop closing it.)
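
A rough sketch of the mechanism, purely illustrative (the names, frame
value, and types below are hypothetical, not the real derp wire protocol):

    // Hypothetical sketch; not the actual derp package types or frame numbering.
    package derpsketch

    import "errors"

    type frameType byte

    // frameClosePeer is privileged: only a trusted (control/mesh) connection may
    // send it, naming a public key whose connection this server should close.
    const frameClosePeer frameType = 0xF0 // value is illustrative

    type conn interface{ Close() error }

    type server struct {
        clients map[string]conn // active connection per public key (hex), illustrative
    }

    // handleClosePeer closes the named client's connection so the client redials
    // and, ideally, lands on the node its DERP map points it at.
    func (s *server) handleClosePeer(senderTrusted bool, targetKey string) error {
        if !senderTrusted {
            return errors.New("closePeer frame from untrusted sender")
        }
        if c, ok := s.clients[targetKey]; ok {
            return c.Close()
        }
        return nil // not connected here; nothing to do
    }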
Updates tailscale/corp#372
The magicsock derpReader was holding onto 65KB for each DERP
connection forever, just in case.
Make the derp{,http}.Client be in charge of memory instead. It can
reuse its bufio.Reader buffer space.
(The NewMeshClient constructor I added recently was gross in
retrospect at call sites, especially when it wasn't obvious that an
empty meshKey string meant a regular client.)
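
Roughly, the ownership change looks like this sketch (made-up names, not
the real derp.Client API): the client owns one receive buffer and hands
callers a slice into it that's only valid until the next receive.

    // Illustrative only.
    package derpsketch

    import "io"

    const maxPacketSize = 64 << 10

    type client struct {
        r   io.Reader
        buf []byte // allocated lazily, reused across receives
    }

    // recvPacket reads the next n-byte packet into the client's own buffer and
    // returns a slice into it. The slice is invalidated by the next recvPacket
    // call, so callers that need the data longer must copy it.
    func (c *client) recvPacket(n int) ([]byte, error) {
        if c.buf == nil {
            c.buf = make([]byte, maxPacketSize)
        }
        if n > len(c.buf) {
            return nil, io.ErrShortBuffer
        }
        if _, err := io.ReadFull(c.r, c.buf[:n]); err != nil {
            return nil, err
        }
        return c.buf[:n], nil
    }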
This lets a trusted DERP client that knows a pre-shared key subscribe
to the connection list. Upon subscribing, they get the current set
of connected public keys, and then all changes over time.
This lets a set of DERP server peers within a region all stay connected to
each other and know which clients are connected to which nodes.
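
As a sketch of what a subscriber does with that stream (hypothetical
types, not the real derp API): it applies an initial snapshot and then
incremental present/gone updates to a local map.

    // Illustrative only.
    package derpsketch

    // peerEvent says that a peer (by public key) is now present on, or gone
    // from, the watched server.
    type peerEvent struct {
        key     string
        present bool
    }

    // trackConnections maintains the set of connected peers from the event
    // stream: the initial snapshot arrives as a burst of present events,
    // followed by changes over time. It returns the final state when done closes.
    func trackConnections(events <-chan peerEvent, done <-chan struct{}) map[string]bool {
        connected := make(map[string]bool)
        for {
            select {
            case ev := <-events:
                if ev.present {
                    connected[ev.key] = true
                } else {
                    delete(connected, ev.key)
                }
            case <-done:
                return connected
            }
        }
    }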
Updates #388
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
My earlier 3fa58303d0 tried to implement
the net/http.Transport.DialTLSContext hook, but I didn't return a
*tls.Conn, so we ended up sending a plaintext HTTP request to an HTTPS
port. The response ended up being Go telling us as much, not the
/derp/latency-check handler's response (which is currently still a
404). But we didn't even get the 404.
This happened to work well enough because Go's built-in error response
was still a valid HTTP response that we can measure for timing
purposes, but it's not a great answer. Notably, it means we wouldn't
be able to get a future handler to run server-side and count those
latency requests.
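
For reference, a hedged sketch of the corrected hook (dialer and TLS
config details are illustrative): the important part is that
DialTLSContext hands back a connection that already speaks TLS, such as
a *tls.Conn, so the request is actually encrypted.

    package main

    import (
        "context"
        "crypto/tls"
        "net"
        "net/http"
    )

    func newClient() *http.Client {
        tr := &http.Transport{
            DialTLSContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                raw, err := (&net.Dialer{}).DialContext(ctx, network, addr)
                if err != nil {
                    return nil, err
                }
                host, _, err := net.SplitHostPort(addr)
                if err != nil {
                    raw.Close()
                    return nil, err
                }
                tc := tls.Client(raw, &tls.Config{ServerName: host})
                if err := tc.Handshake(); err != nil {
                    raw.Close()
                    return nil, err
                }
                return tc, nil // a *tls.Conn, not the plaintext raw conn
            },
        }
        return &http.Client{Transport: tr}
    }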
This allows tailscaled's own traffic to bypass Tailscale-managed routes,
so that things like tailscale-provided default routes don't break
tailscaled itself.
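
One common Linux-only way to do that, sketched below (not necessarily
exactly what tailscaled does; the mark value and the corresponding ip
rule are assumptions): mark tailscaled's own sockets and let a policy
routing rule such as `ip rule add fwmark 0x80000 table main` keep
marked traffic on the untouched main table.

    // Linux-only sketch; setting SO_MARK requires CAP_NET_ADMIN.
    package main

    import (
        "net"
        "syscall"

        "golang.org/x/sys/unix"
    )

    const bypassMark = 0x80000 // hypothetical fwmark reserved for tailscaled's own traffic

    // markedDialer returns a dialer whose sockets carry bypassMark, so a policy
    // routing rule can steer them around Tailscale-managed routes.
    func markedDialer() *net.Dialer {
        return &net.Dialer{
            Control: func(network, address string, c syscall.RawConn) error {
                var serr error
                err := c.Control(func(fd uintptr) {
                    serr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_MARK, bypassMark)
                })
                if err != nil {
                    return err
                }
                return serr
            },
        }
    }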
Progress on #144.
Signed-off-by: David Anderson <danderson@tailscale.com>
Instead of hard-coding the DERP map (except for cmd/tailscale netcheck
for now), get it from the control server at runtime.
And make the DERP map support multiple nodes per region, with clients
picking the first one that's available. (The server will vary the
order presented to clients for load balancing.)
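
The shape of the data, roughly (field names are illustrative, not the
exact tailcfg types):

    package main

    type DERPNode struct {
        Name     string
        HostName string // e.g. a node's DNS name; illustrative
    }

    type DERPRegion struct {
        RegionID int
        Nodes    []*DERPNode // in the order the server presented them
    }

    type DERPMap struct {
        Regions map[int]*DERPRegion
    }

    // firstUsableNode returns the first node in the region that the caller's
    // reachability probe accepts, or nil if none respond.
    func firstUsableNode(r *DERPRegion, reachable func(*DERPNode) bool) *DERPNode {
        for _, n := range r.Nodes {
            if reachable(n) {
                return n
            }
        }
        return nil
    }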
This deletes the stunner package, merging it into the netcheck package
instead, to minimize all the config hooks that would've been
required.
Also fix some test flakes & races.
Fixes #387 (Don't hard-code the DERP map)
Updates #388 (Add DERP region support)
Fixes #399 (wgengine: flaky tests)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We don't want those extra dependencies on iOS, at least yet.
Especially since there's no way to set the relevant environment
variables so it's just bloat with no benefits. Perhaps we'll need to
do SOCKS on iOS later, but probably differently if/when so.
Updates #227
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
When unregistering a replaced client connection, move the
still-connected peers to the current client connection. Inform
the peers that we've gone only when unregistering the active
client connection.
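
As a sketch of that bookkeeping (hypothetical types; a real server
tracks more state): on register, a replaced connection's peers move to
the new connection; on unregister, peers are told we're gone only if
the departing connection is still the active one.

    package main

    type clientConn struct {
        key     string          // this client's public key (illustrative representation)
        sendsTo map[string]bool // peers this connection has exchanged packets with
    }

    type server struct {
        active map[string]*clientConn // current connection per public key
    }

    func (s *server) register(c *clientConn) {
        if c.sendsTo == nil {
            c.sendsTo = make(map[string]bool)
        }
        if old, ok := s.active[c.key]; ok {
            // Replaced connection: adopt its peers rather than telling them we left.
            for p := range old.sendsTo {
                c.sendsTo[p] = true
            }
        }
        s.active[c.key] = c
    }

    func (s *server) unregister(c *clientConn, notifyGone func(peer string)) {
        if s.active[c.key] != c {
            return // a replaced connection; its peers were already moved
        }
        delete(s.active, c.key)
        for p := range c.sendsTo {
            notifyGone(p)
        }
    }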
Signed-off-by: Dmitry Adamushko <da@stablebits.net>
I saw a test flake due to the sender goroutine logging (ultimately to
t.Logf) after the server was closed.
This makes sure that all goroutines are cleaned up before Server.Close
returns.
This is mostly prep for a few future CLs, making sure we always have a
close-on-dead done channel available to select on when doing other
channel operations.
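
The pattern, in miniature (names are hypothetical): every blocking
channel operation in the server also selects on a done channel that
Close closes, so no goroutine can outlive the server.

    package main

    type sender struct {
        done chan struct{} // closed by Server.Close
        out  chan []byte   // this client's outgoing packet queue
    }

    // send blocks until the packet is queued or the server is shutting down.
    func (s *sender) send(pkt []byte) bool {
        select {
        case s.out <- pkt:
            return true
        case <-s.done:
            return false // server closed; give up instead of blocking forever
        }
    }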
This avoids the server blocking on misbehaving or heavily contended
clients. We attempt to drop from the head of the queue to keep
overall queueing time lower.
Also:
- fixes server->client keepalives, which weren't happening.
- removes read rate-limiter, deferring instead to kernel-level
global limiter/fair queuer.
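
A small sketch of the head-drop policy described above (sizes and types
are illustrative; a real server would serialize access to each queue):

    package main

    // enqueue places pkt on a client's bounded send queue. If the queue is full,
    // the oldest queued packet is dropped first, so a slow or stuck reader bounds
    // how long any one packet waits instead of growing the backlog.
    func enqueue(q chan []byte, pkt []byte) {
        for {
            select {
            case q <- pkt:
                return
            default:
            }
            // Queue full: drop from the head, then retry the send.
            select {
            case <-q:
            default:
            }
        }
    }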
Signed-off-by: David Anderson <dave@natulte.net>
Without the recent write deadline introduction, this test fails.
They still do interfere, but the interference is now bounded by
the write deadline. Many improvements are possible.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
If Alice attempts to send a packet to Bob and the DERP server
encounters an error on the socket to Bob, we should not disconnect
Alice for that.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
I started to write a full DNS caching resolver and I realized it was
overkill and wouldn't work on Windows even in Go 1.14 yet, so I'm
doing this tiny one instead for now, just for all our netcheck STUN
derp lookups, and connections to DERP servers. (This will be caching
exactly 8 DNS entries, all ours.)
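
A toy sketch of the idea (not the real package's API): remember the
answers for a small, fixed set of hostnames and refresh an entry only
once it's older than some TTL, serving stale data if a refresh fails.

    package main

    import (
        "context"
        "net"
        "sync"
        "time"
    )

    type entry struct {
        ips     []net.IP
        fetched time.Time
    }

    // Resolver caches lookups for the handful of hostnames we own.
    type Resolver struct {
        TTL time.Duration // assumed refresh policy

        mu    sync.Mutex
        cache map[string]entry
    }

    func (r *Resolver) LookupIP(ctx context.Context, host string) ([]net.IP, error) {
        r.mu.Lock()
        e, ok := r.cache[host]
        r.mu.Unlock()
        if ok && time.Since(e.fetched) < r.TTL {
            return e.ips, nil // fresh enough; no DNS traffic
        }
        addrs, err := net.DefaultResolver.LookupIPAddr(ctx, host)
        if err != nil {
            if ok {
                return e.ips, nil // refresh failed; better stale than nothing
            }
            return nil, err
        }
        ips := make([]net.IP, 0, len(addrs))
        for _, a := range addrs {
            ips = append(ips, a.IP)
        }
        r.mu.Lock()
        if r.cache == nil {
            r.cache = make(map[string]entry)
        }
        r.cache[host] = entry{ips: ips, fetched: time.Now()}
        r.mu.Unlock()
        return ips, nil
    }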
Fixes #145 (can be better later, of course)
I broke an invariant in 11048b8932 (it was even nicely
documented then).
Also clean up the test a bit from when I was debugging it.
Fixes #84
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>