* advertise server's DERP public key following its ServerHello
* have client look for that DERP public key in the response
PeerCertificates
* let client advertise it's going into a "fast start" mode
if it finds it
* modify server to support that fast start mode by just not
sending the HTTP response header
Cuts down another round trip, bringing the latency of being able to
write our first DERP frame from SF to Bangalore from ~725ms
(3 RTT) to ~481ms (2 RTT: TCP and TLS).
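A rough sketch of the client side of that handshake, assuming the server
advertises its DERP key via an extra "derpkey..." certificate in the TLS
chain and that the fast-start signal is a request header (both details
are illustrative here, not the exact derphttp code):

    package sketch

    import (
        "bufio"
        "crypto/tls"
        "net"
        "net/http"
        "strings"
    )

    // dialFastStart sketches the client-side fast-start path: if the
    // server's DERP key shows up in the TLS peer certificates, don't wait
    // for the HTTP upgrade response; start writing DERP frames right away.
    func dialFastStart(host string) (net.Conn, *bufio.ReadWriter, error) {
        tc, err := tls.Dial("tcp", net.JoinHostPort(host, "443"), &tls.Config{ServerName: host})
        if err != nil {
            return nil, nil, err
        }
        serverKeyKnown := false
        for _, cert := range tc.ConnectionState().PeerCertificates {
            if strings.HasPrefix(cert.Subject.CommonName, "derpkey") { // assumed advertisement format
                serverKeyKnown = true
                break
            }
        }
        brw := bufio.NewReadWriter(bufio.NewReader(tc), bufio.NewWriter(tc))
        req, err := http.NewRequest("GET", "https://"+host+"/derp", nil)
        if err != nil {
            return nil, nil, err
        }
        req.Header.Set("Upgrade", "DERP")
        req.Header.Set("Connection", "Upgrade")
        if serverKeyKnown {
            // Ask the server to skip its HTTP response entirely.
            req.Header.Set("Derp-Fast-Start", "1") // header name is an assumption
        }
        if err := req.Write(brw); err != nil {
            return nil, nil, err
        }
        if err := brw.Flush(); err != nil {
            return nil, nil, err
        }
        if !serverKeyKnown {
            // Slow path: wait for the 101 Switching Protocols response.
            if _, err := http.ReadResponse(brw.Reader, req); err != nil {
                return nil, nil, err
            }
        }
        return tc, brw, nil
    }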
Fixes #693
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It just has a version number in it and it's not really needed.
Instead just return it as a normal Recv message type for those
that care (currently only tests).
Updates #150 (in that it shares the same goal: initial DERP latency)
Updates #199 (in that it removes some DERP versioning)
We're beginning to reference DERP region names in the admin UI, so it's
best to consolidate this information in our DERP map.
Signed-off-by: Ross Zurowski <ross@rosszurowski.com>
Strictly speaking, we don't know that it's a wireguard packet, just that
it doesn't look like a disco packet.
Signed-off-by: David Anderson <danderson@tailscale.com>
These aren't particularly performance critical,
but since I have an optimization pending for them,
it's worth having a corresponding benchmark.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
This benchmark is far from perfect: It mixes together
client and server. Still, it provides a starting point
for easy profiling.
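The derp-specific setup isn't reproduced here, but as a self-contained
stand-in, a round-trip benchmark over net.Pipe shows the intended shape:
both ends run in one process, so "client" and "server" cost get measured
together, which is fine for first-pass profiling.

    package sketch // in a _test.go file

    import (
        "io"
        "net"
        "testing"
    )

    // BenchmarkRoundTrip is an illustrative stand-in: it times a 1KB
    // message being echoed over an in-process pipe.
    func BenchmarkRoundTrip(b *testing.B) {
        c1, c2 := net.Pipe()
        defer c1.Close()
        defer c2.Close()
        go io.Copy(c2, c2) // the "server" half: echo everything back

        msg := make([]byte, 1024)
        got := make([]byte, 1024)
        b.ReportAllocs()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            if _, err := c1.Write(msg); err != nil {
                b.Fatal(err)
            }
            if _, err := io.ReadFull(c1, got); err != nil {
                b.Fatal(err)
            }
        }
    }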
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
This will make it easier for a human to tell what
version is deployed, for (say) correlating line numbers
in profiles or panics to corresponding source code.
It'll also let us observe version changes in Prometheus.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Active discovery lets us introspect the state of the network stack precisely
enough that it's unnecessary, and dropping the initial DERP packets greatly
slows down tests. Additionally, it's unrealistic, since our production network
will never deliver _only_ discovery packets; it'll be all or nothing.
Signed-off-by: David Anderson <danderson@tailscale.com>
For various reasons (mostly during rollouts or config changes on our
side), nodes may end up connecting to a fallback DERP node in a
region, rather than the primary one we tell them about in the DERP
map.
Connecting to the "wrong" node is fine, but it's in our best interest
for all nodes in a domain to connect to the same node, to reduce
intra-region packet forwarding.
This adds a privileged frame type, used by the control system, that can
close a client's connection when it's connected to the wrong node in a
region. The client then hopefully reconnects immediately to the correct
location. (If not, we can leave it alone and stop closing it.)
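Sketching what the server side of such a frame might look like (the frame
name, number, and wire layout here are illustrative, not the real DERP
protocol constants):

    package sketch

    import (
        "errors"
        "io"
        "net"
    )

    type frameType byte

    // frameClosePeer is a hypothetical privileged frame whose body is the
    // 32-byte public key of the client connection to close.
    const frameClosePeer frameType = 0x10

    type peerKey [32]byte

    type server struct {
        clients map[peerKey]net.Conn // currently connected clients, by public key
    }

    // handleClosePeer closes the target client's connection so that it
    // redials and (hopefully) lands on the node the DERP map advertises.
    func (s *server) handleClosePeer(br io.Reader, callerIsTrusted bool) error {
        if !callerIsTrusted {
            // Only the control system (authenticated via a pre-shared key)
            // may send this frame.
            return errors.New("closePeer: permission denied")
        }
        var target peerKey
        if _, err := io.ReadFull(br, target[:]); err != nil {
            return err
        }
        if nc, ok := s.clients[target]; ok {
            nc.Close()
        }
        return nil
    }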
Updates tailscale/corp#372
The magicsock derpReader was holding onto 65KB for each DERP
connection forever, just in case.
Make the derp{,http}.Client be in charge of memory instead. It can
reuse its bufio.Reader buffer space.
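Roughly, the resulting calling convention (signatures approximate the
derp/derphttp API after this change): callers no longer pass in a 64KB
buffer, and the returned packet data is only valid until the next Recv,
so anything kept longer must be copied.

    package sketch

    import (
        "tailscale.com/derp"
        "tailscale.com/derp/derphttp"
    )

    // forwardOne sketches the new ownership rule: the client reuses its
    // own bufio.Reader-backed buffer, so callers copy only what they keep.
    func forwardOne(c *derphttp.Client, out chan<- []byte) error {
        m, err := c.Recv() // no caller-supplied 64KB buffer anymore
        if err != nil {
            return err
        }
        if pkt, ok := m.(derp.ReceivedPacket); ok {
            // pkt.Data is only valid until the next Recv; copy to keep it.
            out <- append([]byte(nil), pkt.Data...)
        }
        return nil
    }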
(The NewMeshClient constructor I added recently was gross in
retrospect at call sites, especially when it wasn't obvious that an
empty meshKey string meant a regular client.)
This lets a trusted DERP client that knows a pre-shared key subscribe
to the connection list. Upon subscribing, they get the current set
of connected public keys, and then all changes over time.
This lets a set of DERP server peers within a region all stay connected to
each other and know which clients are connected to which nodes.
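A sketch of how a trusted mesh peer might consume that stream (method and
message names approximate the derp package; treat the details as
assumptions):

    package sketch

    import (
        "log"

        "tailscale.com/derp"
    )

    // watchPeers subscribes to the server's connection list and logs
    // changes. The server first replays the currently connected keys,
    // then streams connects/disconnects as they happen.
    func watchPeers(c *derp.Client) error {
        if err := c.WatchConnectionChanges(); err != nil {
            return err
        }
        for {
            m, err := c.Recv()
            if err != nil {
                return err
            }
            switch m := m.(type) {
            case derp.PeerPresentMessage:
                log.Printf("peer connected: %v", m)
            case derp.PeerGoneMessage:
                log.Printf("peer gone: %v", m)
            }
        }
    }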
Updates #388
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
My earlier 3fa58303d0 tried to implement
the net/http.Transport.DialTLSContext hook, but I didn't return a
*tls.Conn, so we ended up sending a plaintext HTTP request to an HTTPS
port. The response ended up being Go telling us as much, not the
/derp/latency-check handler's response (which is currently still a
404). But we didn't even get the 404.
This happened to work well enough because Go's built-in error response
was still a valid HTTP response that we could measure for timing
purposes, but it's not a great answer. Notably, it means we wouldn't
be able to get a future handler to run server-side and count those
latency requests.
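For reference, a minimal sketch of a DialTLSContext hook that does return
a *tls.Conn with the handshake completed, which is the shape of the fix
(dialer and TLS config details are illustrative, not the exact derphttp
code):

    package sketch

    import (
        "context"
        "crypto/tls"
        "net"
        "net/http"
    )

    // newTransport hands the Transport a real *tls.Conn, so the HTTPS
    // port gets TLS rather than a plaintext HTTP request.
    func newTransport() *http.Transport {
        return &http.Transport{
            DialTLSContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                var d net.Dialer
                nc, err := d.DialContext(ctx, network, addr)
                if err != nil {
                    return nil, err
                }
                host, _, err := net.SplitHostPort(addr)
                if err != nil {
                    nc.Close()
                    return nil, err
                }
                tc := tls.Client(nc, &tls.Config{ServerName: host})
                if err := tc.Handshake(); err != nil {
                    nc.Close()
                    return nil, err
                }
                return tc, nil // returning the *tls.Conn is the important part
            },
        }
    }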
This allows tailscaled's own traffic to bypass Tailscale-managed routes,
so that things like tailscale-provided default routes don't break
tailscaled itself.
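One common Linux mechanism for this, sketched below: tag tailscaled's own
sockets with a firewall mark that an "ip rule" exempts from the
Tailscale-managed routing table. The mark value and rule setup are
assumptions for illustration, not necessarily what tailscaled does.

    package sketch

    import (
        "net"
        "syscall"

        "golang.org/x/sys/unix"
    )

    // bypassMark is a hypothetical fwmark matched by an
    // "ip rule ... fwmark ... lookup main" exception.
    const bypassMark = 0x80000

    // markedDialer returns a dialer whose sockets carry the bypass mark,
    // so tailscaled's own traffic skips Tailscale-managed routes.
    func markedDialer() *net.Dialer {
        return &net.Dialer{
            Control: func(network, address string, c syscall.RawConn) error {
                var serr error
                err := c.Control(func(fd uintptr) {
                    serr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_MARK, bypassMark)
                })
                if err != nil {
                    return err
                }
                return serr
            },
        }
    }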
Progress on #144.
Signed-off-by: David Anderson <danderson@tailscale.com>
Instead of hard-coding the DERP map (except for cmd/tailscale netcheck
for now), get it from the control server at runtime.
And make the DERP map support multiple nodes per region with clients
picking the first one that's available. (The server will balance the
order presented to clients for load balancing.)
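As an example of the new shape, a region with two nodes might look like
this (field names follow the tailcfg types; region and hostnames here are
made up):

    package sketch

    import "tailscale.com/tailcfg"

    // exampleDERPMap sketches one region with two nodes. Clients try the
    // Nodes in order and keep the first that works; the control server
    // can reorder them per client to spread load.
    func exampleDERPMap() *tailcfg.DERPMap {
        return &tailcfg.DERPMap{
            Regions: map[int]*tailcfg.DERPRegion{
                1: {
                    RegionID:   1,
                    RegionCode: "nyc",
                    Nodes: []*tailcfg.DERPNode{
                        {Name: "1a", RegionID: 1, HostName: "derp1a.example.com"},
                        {Name: "1b", RegionID: 1, HostName: "derp1b.example.com"},
                    },
                },
            },
        }
    }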
This deletes the stunner package, merging it into the netcheck package
instead, to minimize all the config hooks that would've been
required.
Also fix some test flakes & races.
Fixes #387 (Don't hard-code the DERP map)
Updates #388 (Add DERP region support)
Fixes #399 (wgengine: flaky tests)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We don't want those extra dependencies on iOS, at least yet.
Especially since there's no way to set the relevant environment
variables, so it's just bloat with no benefit. Perhaps we'll need to
do SOCKS on iOS later, but probably differently if/when we do.
Updates #227
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>