This doesn't change any behaviour for now, other than maybe running a
full netcheck more often. The intent is to start gathering data on
captive portals; additionally, seeing this in the 'tailscale netcheck'
output should give users a bit more information.
Updates #1634
Change-Id: I6ba08f9c584dc0200619fa97f9fde1a319f25c76
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
The io/ioutil package has been deprecated as of Go 1.16 [1]. This commit
replaces the existing io/ioutil functions with their equivalents in the
io and os packages.
Reference: https://golang.org/doc/go1.16#ioutil
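For reference, the typical one-for-one replacements look like this (a generic
sketch of the migration, not the exact diff in this commit):

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

func main() {
	// ioutil.ReadFile -> os.ReadFile
	data, err := os.ReadFile("go.mod") // file name is just an example
	if err == nil {
		fmt.Println(len(data), "bytes")
	}

	// ioutil.ReadAll -> io.ReadAll
	b, _ := io.ReadAll(strings.NewReader("hello"))
	fmt.Println(string(b))

	// ioutil.TempDir -> os.MkdirTemp
	dir, err := os.MkdirTemp("", "example")
	if err == nil {
		defer os.RemoveAll(dir)
		fmt.Println("temp dir:", dir)
	}
}
```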
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Convert ParseResponse and Response to use netip.AddrPort instead of
net.IP and separate port.
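As a rough illustration of the shape of the change (the type and field below
are stand-ins, not the actual stun package API):

```go
package main

import (
	"fmt"
	"net/netip"
)

// response is a hypothetical stand-in for the STUN Response type: instead of
// carrying a net.IP plus a separate uint16 port, it holds a single
// netip.AddrPort value.
type response struct {
	MappedAddr netip.AddrPort
}

func main() {
	r := response{MappedAddr: netip.MustParseAddrPort("203.0.113.7:41641")}
	fmt.Println(r.MappedAddr.Addr(), r.MappedAddr.Port()) // 203.0.113.7 41641
}
```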
Fixes #5281
Signed-off-by: Kris Brandow <kris.brandow@gmail.com>
This lets us distinguish "no IPv6 because the device's ISP doesn't
offer IPv6" from "IPv6 is unavailable/disabled in the OS".
Signed-off-by: David Anderson <danderson@tailscale.com>
Well, goimports actually (which applies the import grouping order we normally use)
Change-Id: I0ce1b1c03185f3741aad67c14a7ec91a838de389
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In cases where tailscale is operating behind a MITM proxy, we need to consider
that a lot more of the internals of our HTTP requests are visible and may be
used as part of authorization checks. As such, we need to 'behave' as closely
as possible to ideal.
- Some proxies do authorization or consistency checks based on the Host header
or HTTP URI, instead of just the IP/hostname/SNI. As such, we need to
construct a `*http.Request` with a valid URI every time HTTP is going to be
used on the wire, even if it's over TLS.
Aside from the single instance in net/netcheck, I couldn't find anywhere else
an http.Request was constructed incorrectly.
- Some proxies may deny requests, typically by returning a 403 status code. We
should not count such responses as valid latency measurements, so netcheck now
treats status codes above 299 as a failed probe (see the sketch below).
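A sketch of both points, using a placeholder probe URL rather than the real
netcheck plumbing:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func probe(ctx context.Context) (time.Duration, error) {
	// Build the request from a full, valid URL so the Host header and request
	// URI are populated even when the request rides over TLS through a MITM
	// proxy. (The URL is a placeholder for illustration.)
	req, err := http.NewRequestWithContext(ctx, "GET", "https://derp.example.com/derp/latency-check", nil)
	if err != nil {
		return 0, err
	}

	start := time.Now()
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	// A proxy may answer with 403 (or similar); don't count that as a
	// successful latency measurement.
	if resp.StatusCode > 299 {
		return 0, fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return time.Since(start), nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if d, err := probe(ctx); err == nil {
		fmt.Println("latency:", d)
	}
}
```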
Signed-off-by: Tom DNetto <tom@tailscale.com>
A new package can also later record/report which knobs are checked and
set. It also makes the code cleaner & easier to grep for env knobs.
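A minimal sketch of what such a helper can look like (the name, signature, and
behavior here are illustrative, not the actual package API):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// boolKnob reports the boolean value of the named environment variable,
// treating unset or unparsable values as false. Centralizing this in one
// helper makes knob usage easy to grep for and lets us later record which
// knobs were consulted.
func boolKnob(name string) bool {
	v := os.Getenv(name)
	if v == "" {
		return false
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return false
	}
	return b
}

func main() {
	fmt.Println("debug enabled:", boolKnob("TS_DEBUG_EXAMPLE")) // hypothetical knob name
}
```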
Change-Id: Id8a123ab7539f1fadbd27e0cbeac79c2e4f09751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Treat UDP send EPERM errors as a lost UDP packet, not something super
fatal. That's just the Linux firewall preventing it from going out.
And add a leaf package net/neterror for that (and future) policy that
all three packages can share, with tests.
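Roughly the policy being shared, as a sketch (the real helper's name and scope
may differ):

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
)

// treatAsLostUDP reports whether a UDP send error should be treated as if the
// packet were simply dropped (e.g. EPERM from a Linux firewall rule) rather
// than as a fatal socket error.
func treatAsLostUDP(err error) bool {
	if err == nil {
		return false
	}
	var errno syscall.Errno
	if errors.As(err, &errno) {
		return errno == syscall.EPERM
	}
	return false
}

func main() {
	err := &net.OpError{Op: "write", Err: syscall.EPERM}
	fmt.Println(treatAsLostUDP(err)) // true: count it as a lost packet and keep going
}
```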
Updates #3619
Change-Id: Ibdb838c43ee9efe70f4f25f7fc7fdf4607ba9c1d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
And the derper change to add a CORS endpoint for latency measurement.
And a little magicsock change to cut down some log spam on js/wasm.
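The CORS side of a latency-check endpoint can be as small as this (a sketch,
not the actual derper handler):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/derp/latency-check", func(w http.ResponseWriter, r *http.Request) {
		// Allow browser (js/wasm) clients from any origin to time this
		// request; the response body doesn't matter for latency measurement.
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil)) // listen address is illustrative
}
```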
Updates #3157
Change-Id: I5fd9e6f5098c815116ddc8ac90cbcd0602098a48
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
"skipping portmap; gateway range likely lacks support" is really
spammy on cloud systems, and not very useful in debugging.
Fixes https://github.com/tailscale/tailscale/issues/3034
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
On iOS (and possibly other platforms), sometimes our UDP socket would
get stuck in a state where it was bound to an invalid interface (or no
interface) after a network reconfiguration. We can detect this by
actually checking the error codes from sending our STUN packets.
If we completely fail to send any STUN packets, we know something is
very broken. So on the next STUN attempt, let's rebind the UDP socket
to try to correct any problems.
This fixes a problem where iOS would sometimes get stuck using DERP
instead of direct connections until the backend was restarted.
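Roughly, the detection looks like this (a simplified sketch with hypothetical
names, not magicsock's actual code):

```go
package main

import (
	"fmt"
	"net"
)

// stunRound sends one STUN probe per endpoint and reports whether every send
// failed. If so, the socket is likely bound to a stale or invalid interface
// and should be rebound before the next attempt.
func stunRound(conn *net.UDPConn, endpoints []*net.UDPAddr, payload []byte) bool {
	failures := 0
	for _, ep := range endpoints {
		if _, err := conn.WriteToUDP(payload, ep); err != nil {
			failures++
		}
	}
	return len(endpoints) > 0 && failures == len(endpoints)
}

func main() {
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	endpoints := []*net.UDPAddr{{IP: net.ParseIP("192.0.2.1"), Port: 3478}} // placeholder STUN server
	if stunRound(conn, endpoints, []byte("probe")) {
		fmt.Println("all STUN sends failed; rebind the UDP socket on the next attempt")
	}
}
```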
Fixes #2994
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
Split out of Denton's #2164, to make that diff smaller to review.
This change has no behavior changes.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
So a region can be used if needed, but won't be STUN-probed or used as
its home.
This gives us another possible debugging mechanism for #1310, or can
be used as a short-term measure against DERP flip-flops for people
equidistant between regions if our hysteresis still isn't good enough.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
* move probing out of netcheck into new net/portmapper package
* use the PCP ANNOUNCE opcode for PCP discovery (sketched below), rather than
causing short-lived (sub-second) side effects with a 1-second-expiring map +
delete.
* track when we heard things from the router so we can be less wasteful
in querying the router's port mapping services in the future
* use portmapper from magicsock to map a public port
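For reference, a PCP ANNOUNCE request per RFC 6887 is a 24-byte header with
opcode 0 and no mapping payload; a sketch of building one (not the actual
portmapper code):

```go
package main

import (
	"fmt"
	"net/netip"
)

// pcpAnnounceRequest builds a PCP (RFC 6887) ANNOUNCE request. ANNOUNCE
// (opcode 0) lets us learn whether the gateway speaks PCP without creating
// any mapping at all.
func pcpAnnounceRequest(clientIP netip.Addr) []byte {
	pkt := make([]byte, 24)
	pkt[0] = 2 // PCP version 2
	pkt[1] = 0 // R=0 (request), opcode 0 (ANNOUNCE)
	// bytes 2-3: reserved; bytes 4-7: requested lifetime (0 for ANNOUNCE)
	ip16 := clientIP.As16() // IPv4 addresses become IPv4-mapped IPv6
	copy(pkt[8:], ip16[:])
	return pkt
}

func main() {
	pkt := pcpAnnounceRequest(netip.MustParseAddr("192.168.1.50")) // example client IP
	fmt.Printf("% x\n", pkt)
}
```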
Fixes #1298
Fixes #1080
Fixes #1001
Updates #864
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Users in Amsterdam (as one example) were flipping back and forth
between equidistant London & Frankfurt relays too much.
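The general shape of the added hysteresis (the threshold below is an assumed
illustration, not the shipped tuning):

```go
package main

import (
	"fmt"
	"time"
)

// shouldSwitchHome reports whether to move our DERP home region. To avoid
// flip-flopping between near-equidistant regions, the new region must be
// meaningfully faster than the current one, not just marginally faster.
func shouldSwitchHome(currentLatency, newLatency time.Duration) bool {
	return newLatency < currentLatency*2/3
}

func main() {
	fmt.Println(shouldSwitchHome(11*time.Millisecond, 10*time.Millisecond)) // false: too close to bother
	fmt.Println(shouldSwitchHome(30*time.Millisecond, 10*time.Millisecond)) // true: clearly better
}
```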
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
At least the Apple Airport Extreme doesn't allow hairpin
sends from a private socket until it's seen traffic from
that src IP:port to something else out on the internet.
See https://github.com/tailscale/tailscale/issues/188#issuecomment-600728643
And it seems that even sending to a likely-filtered RFC 5737
documentation-only IPv4 range is enough to set up the mapping.
So do that for now. In the future we might want to classify networks that do
and don't require this separately, but for now this helps.
I've confirmed that this is enough to fix the hairpin check on Avery's
home network, even using the RFC 5737 IP.
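Roughly, the priming send looks like this (a sketch; the exact destination
address and payload are illustrative):

```go
package main

import (
	"log"
	"net"
)

func main() {
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Send a throwaway packet from our private socket to a documentation-only
	// IPv4 address (RFC 5737). The packet is almost certainly filtered, but
	// the outbound send is enough to make routers like the Airport Extreme
	// create the NAT mapping needed for the later hairpin check.
	dst := &net.UDPAddr{IP: net.ParseIP("203.0.113.1"), Port: 12345} // illustrative address/port
	if _, err := conn.WriteToUDP([]byte{0}, dst); err != nil {
		log.Printf("priming send failed: %v", err)
	}
}
```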
Fixes#188
My earlier 3fa58303d0 tried to implement
the net/http.Transport.DialTLSContext hook, but I didn't return a
*tls.Conn, so we ended up sending a plaintext HTTP request to an HTTPS
port. The response ended up being Go telling us as much, not the
/derp/latency-check handler's response (which is currently still a
404). But we didn't even get that 404.
This happened to work well enough because Go's built-in error response
was still a valid HTTP response that we can measure for timing
purposes, but it's not a great answer. Notably, it means we wouldn't
be able to get a future handler to run server-side and count those
latency requests.
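The fix is essentially to have the hook perform the TLS handshake and return a
*tls.Conn; a simplified sketch of such a hook (with a plain dialer standing in
for the real one):

```go
package main

import (
	"context"
	"crypto/tls"
	"net"
	"net/http"
)

func newClient() *http.Client {
	tr := &http.Transport{
		// DialTLSContext must return a connection that already speaks TLS.
		// Returning the raw TCP conn here is the bug being fixed: the
		// Transport would then write plaintext HTTP to an HTTPS port.
		DialTLSContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
			d := &net.Dialer{}
			raw, err := d.DialContext(ctx, network, addr)
			if err != nil {
				return nil, err
			}
			host, _, _ := net.SplitHostPort(addr)
			tc := tls.Client(raw, &tls.Config{ServerName: host})
			if err := tc.HandshakeContext(ctx); err != nil {
				raw.Close()
				return nil, err
			}
			return tc, nil
		},
	}
	return &http.Client{Transport: tr}
}

func main() {
	_ = newClient()
}
```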
This allows tailscaled's own traffic to bypass Tailscale-managed routes,
so that things like tailscale-provided default routes don't break
tailscaled itself.
Progress on #144.
Signed-off-by: David Anderson <danderson@tailscale.com>
Also:
* add -verbose flag to cmd/tailscale netcheck
* remove some API from the interfaces package
* convert some of the interfaces package to netaddr.IP
* don't even send IPv4 probes on machines with no IPv4 (or only v4
loopback)
* and once three regions have replied, stop waiting for the remaining probes
after 2x the slowest reply so far (sketched below).
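That early-exit rule, simplified and with hypothetical plumbing:

```go
package main

import (
	"fmt"
	"time"
)

// probeDeadline returns how much longer to wait for outstanding probes.
// Once three regions have replied, we only wait up to 2x the slowest reply
// seen so far, instead of the full probe timeout.
func probeDeadline(replies []time.Duration, fullTimeout time.Duration) time.Duration {
	if len(replies) < 3 {
		return fullTimeout
	}
	slowest := time.Duration(0)
	for _, d := range replies {
		if d > slowest {
			slowest = d
		}
	}
	return 2 * slowest
}

func main() {
	got := []time.Duration{8 * time.Millisecond, 15 * time.Millisecond, 40 * time.Millisecond}
	fmt.Println(probeDeadline(got, 5*time.Second))
	// -> 80ms: no point waiting seconds for distant regions once three have answered.
}
```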
Updates #376
Under some conditions, code would try to look things up in the maps
before the first call to updateLatency. I don't see any reason to delay
initialization of the maps, so let's just init them right away when
creating the Report instance.
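In other words, something along these lines (field names here are illustrative,
not the exact Report struct):

```go
package main

import (
	"fmt"
	"time"
)

// report is a stand-in for netcheck's Report: every map is allocated up front
// so callers can write entries before the first updateLatency call without
// hitting a nil-map panic.
type report struct {
	RegionLatency   map[int]time.Duration
	RegionV4Latency map[int]time.Duration
	RegionV6Latency map[int]time.Duration
}

func newReport() *report {
	return &report{
		RegionLatency:   map[int]time.Duration{},
		RegionV4Latency: map[int]time.Duration{},
		RegionV6Latency: map[int]time.Duration{},
	}
}

func main() {
	r := newReport()
	r.RegionLatency[1] = 23 * time.Millisecond // safe even before any latency update
	fmt.Println(r.RegionLatency)
}
```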
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>