This is temporary while we work to upstream performance work in
https://github.com/WireGuard/wireguard-go/pull/64. A replace directive
is less ideal, as it breaks dependent code unless the directive is
duplicated there.
Signed-off-by: Jordan Whited <jordan@tailscale.com>
This commit updates the wireguard-go dependency and implements the
necessary changes to the tun.Device and conn.Bind implementations to
support passing vectors of packets in tailscaled. This significantly
improves throughput performance on Linux.
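For illustration only, a minimal sketch of the shape of that change, using
made-up interface names (the real wireguard-go tun.Device and conn.Bind
signatures differ in detail):

```go
// Sketch only: "one packet per call" becomes "a batch of packets per call".
package sketch

// singlePacketDevice is the pre-vector style: one packet per Read/Write,
// and therefore one syscall's worth of fixed overhead per packet.
type singlePacketDevice interface {
	Read(buf []byte, offset int) (int, error)
	Write(buf []byte, offset int) (int, error)
}

// vectorDevice passes a vector of packets per call, amortizing the
// per-call overhead across the whole batch.
type vectorDevice interface {
	// Read fills bufs with up to len(bufs) packets, records each
	// packet's length in sizes, and reports how many were read.
	Read(bufs [][]byte, sizes []int, offset int) (int, error)
	// Write sends the packets in bufs and reports how many were written.
	Write(bufs [][]byte, offset int) (int, error)
}
```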
Updates #414
Signed-off-by: Jordan Whited <jordan@tailscale.com>
Signed-off-by: James Tucker <james@tailscale.com>
Co-authored-by: James Tucker <james@tailscale.com>
We would replace the existing real implementation of nettype.PacketConn
with a blockForeverConn, but that violates the contract of atomic.Value
(where the type cannot change). Fix by switching to a pointer value
(atomic.Pointer[nettype.PacketConn]).
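A minimal sketch of the pattern, with net.PacketConn standing in for
nettype.PacketConn:

```go
package sketch

import (
	"net"
	"sync/atomic"
)

// PacketConn stands in for nettype.PacketConn in this sketch.
type PacketConn = net.PacketConn

// An atomic.Value panics if the concrete type of the stored value ever
// changes (e.g. a real *net.UDPConn being replaced by a blockForeverConn).
// An atomic.Pointer to the interface type avoids that, because the stored
// type is always *PacketConn.
type conn struct {
	pconn atomic.Pointer[PacketConn]
}

func (c *conn) setConn(p PacketConn) {
	c.pconn.Store(&p)
}

func (c *conn) currentConn() PacketConn {
	if p := c.pconn.Load(); p != nil {
		return *p
	}
	return nil
}
```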
This is a longstanding issue, but it became more prevalent when we
started binding
connections to interfaces on macOS and iOS (#6566), which could lead to
the bind call failing if the interface was no longer available.
Fixes #6641
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Previously, tstun.Wrapper and magicsock.Conn managed their
own statistics data structure and relied on an external call to
Extract to extract (and reset) the statistics.
This makes it difficult to ensure a maximum size on the statistics
as the caller has no introspection into whether the number
of unique connections is getting too large.
Invert the control flow such that a *connstats.Statistics
is registered with tstun.Wrapper and magicsock.Conn.
Methods on non-nil *connstats.Statistics are called for every packet.
This allows the implementation of connstats.Statistics (in the future)
to better control when it needs to flush to ensure
bounds on maximum sizes.
The value registered into tstun.Wrapper and magicsock.Conn could
be an interface, but that has two performance detriments:
1. Method calls on interface values are more expensive since
they must go through a virtual method dispatch.
2. The implementation would need a sync.Mutex to protect the
statistics value instead of using an atomic.Pointer.
Given that methods on connstats.Statistics are called for every packet,
we want to reduce the CPU cost on this hot path.
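A rough sketch of the inverted flow, with hypothetical method names (the
real connstats API may differ):

```go
package sketch

import "sync/atomic"

// Statistics stands in for connstats.Statistics; UpdateTxVirtual is a
// hypothetical per-packet hook used only for illustration.
type Statistics struct {
	// counters, flush logic, and size bounds live here
}

func (s *Statistics) UpdateTxVirtual(packet []byte) {
	// count the packet against its connection
}

// Wrapper stands in for tstun.Wrapper (magicsock.Conn is analogous).
type Wrapper struct {
	stats atomic.Pointer[Statistics]
}

// SetStatistics registers the sink; passing nil unregisters it.
func (w *Wrapper) SetStatistics(s *Statistics) { w.stats.Store(s) }

// send is the hot path: one atomic load and one concrete method call
// per packet, rather than an interface dispatch guarded by a mutex.
func (w *Wrapper) send(packet []byte) {
	if s := w.stats.Load(); s != nil {
		s.UpdateTxVirtual(packet)
	}
	// ... actually transmit the packet ...
}
```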
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
There aren't any in the wild, other than one we ran on purpose to keep
us honest, but we can bump that one forward to 0.100.
Change-Id: I129e70724b2d3f8edf3b496dc01eba3ac5a2a907
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This renames canP2P in magicsock to canP2PLocked to reflect the
expectation that the mutex is held, fixes a race we discovered in the
meantime,
and updates the current stats.
Co-authored-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Signed-off-by: Jenny Zhang <jz@tailscale.com>
There is utility in logging the traffic statistics that occur at the physical layer.
That is, in order to send packets virtually to a particular Tailscale IP address,
what physical endpoints did we need to communicate with?
This functionality logs IP addresses identical to
what had always been logged in magicsock prior to #5823,
so there is no increase in PII being logged.
ExtractStatistics returns a mapping of connections to counts.
The source is always a Tailscale IP address (without port),
while the destination is some endpoint reachable on WAN or LAN.
As a special case, traffic routed through DERP will use 127.3.3.40
as the destination address with the port being the DERP region.
This entire feature is only enabled if data-plane audit logging
is enabled on the tailnet (by default it is disabled).
Example of type of information logged:
------------------------------------ Tx[P/s] Tx[B/s] Rx[P/s] Rx[B/s]
PhysicalTraffic: 25.80 3.39Ki 38.80 5.57Ki
100.1.2.3 -> 143.11.22.33:41641 15.40 2.00Ki 23.20 3.37Ki
100.4.5.6 -> 192.168.0.100:41641 10.20 1.38Ki 15.60 2.20Ki
100.7.8.9 -> 127.3.3.40:2 0.20 6.40 0.00 0.00
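For illustration, a sketch of the shape of data ExtractStatistics could
return; the type and field names here are made up and the real connstats
types may differ:

```go
package sketch

import "net/netip"

// Connection and Counts are illustrative shapes for the statistics
// described above; the real types and names may differ.
type Connection struct {
	Src netip.Addr     // Tailscale IP address, no port
	Dst netip.AddrPort // WAN/LAN endpoint, or 127.3.3.40:<derp-region>
}

type Counts struct {
	TxPackets, TxBytes uint64
	RxPackets, RxBytes uint64
}

// derpDst encodes DERP-relayed traffic as described above: the fixed
// address 127.3.3.40 with the DERP region ID in the port field.
func derpDst(region uint16) netip.AddrPort {
	return netip.AddrPortFrom(netip.MustParseAddr("127.3.3.40"), region)
}

// ExtractStatistics then returns a snapshot shaped like:
//
//	map[Connection]Counts
```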
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We had previously added this to the netcheck report in #5087 but never
copied it into the NetInfo struct. Additionally, add it to log lines so
it's visible to support.
Change-Id: Ib6266f7c6aeb2eb2a28922aeafd950fe1bf5627e
Signed-off-by: Andrew Dunham <andrew@tailscale.com>
Sets up new file for separate silent disco goroutine, tentatively named
pathfinder for now.
Updates #540
Co-authored-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Signed-off-by: Jenny Zhang <jz@tailscale.com>
During development of silent disco (#540), an alternate send policy
for magicsock that doesn't wake up the radio frequently with
heartbeats, we want the old & new policies to coexist, like we did
previously pre- and post-disco.
We started to do that earlier in 5c42990c2f but only set up the
env+control knob plumbing to set a bool about which path should be
used.
This starts to add a way for the silent disco code to update the send
path from a separate goroutine. (Part of the effort is going to
de-state-machinify the event based soup that is the current disco
code and make it more Go synchronous style.)
So far this does nothing. (It does add an atomic load on each send
but that should be noise in the grand scheme of things, and an even
rarer atomic store of nil on node config changes.)
Baby steps.
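For illustration, a minimal sketch of that shape; all names here are
hypothetical:

```go
package sketch

import "sync/atomic"

// sendFunc is a hypothetical send path; the silent disco goroutine can
// install or clear it without coordinating with concurrent senders.
type sendFunc func(packet []byte) error

type endpoint struct {
	// sendPath is nil when the default (heartbeat-based) policy is in use.
	sendPath atomic.Pointer[sendFunc]
}

// updateSendPath is called from the separate pathfinder goroutine, and
// the pointer is also reset to nil on node config changes.
func (e *endpoint) updateSendPath(f sendFunc) {
	if f == nil {
		e.sendPath.Store(nil)
		return
	}
	e.sendPath.Store(&f)
}

// send pays one atomic load per packet, the cost mentioned above.
func (e *endpoint) send(packet []byte) error {
	if f := e.sendPath.Load(); f != nil {
		return (*f)(packet)
	}
	return e.defaultSend(packet)
}

func (e *endpoint) defaultSend(packet []byte) error { return nil }
```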
Updates #540
Co-authored-by: Jenny Zhang <jz@tailscale.com>
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The wireguard-go code unfortunately calls this unconditionally
even when verbose logging is disabled.
Partial revert of #5911.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This field seems seldom used and the documentation is wrong.
It is simpler to just derive its original value dynamically
when endpoint.DstToString is called.
This method is potentially used by wireguard-go,
but not in any code path that is performance sensitive.
All calls to it use it in conjunction with fmt.Printf,
which is going to be slow anyway since it uses Go reflection.
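A hypothetical sketch of the idea; the fields and formatting are made up,
and only the "derive on call instead of caching a field" point matters:

```go
package sketch

import "fmt"

// endpoint is a trimmed stand-in; both fields are hypothetical.
type endpoint struct {
	udpAddr    string // current best UDP endpoint, if any
	derpRegion int    // DERP home region otherwise
}

// DstToString derives its value on each call instead of keeping a
// rarely-read cached field in sync. Callers pair it with fmt.Printf
// anyway, so the extra formatting work is irrelevant.
func (e *endpoint) DstToString() string {
	if e.udpAddr != "" {
		return e.udpAddr
	}
	return fmt.Sprintf("derp-%d", e.derpRegion)
}
```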
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
- At high data rates more buffer space is required in order to avoid
packet loss during any cause of delay.
- On slower machines more buffer space is required in order to avoid
packet loss while decryption & tun writing is underway.
- On higher latency network paths more buffer space is required in order
to overcome BDP.
- On Linux, set with SO_*BUFFORCE to bypass net.core.{r,w}mem_max (see
the sketch after this list).
- 7MB is the current default maximum on macOS 12.6.
- Windows test is omitted, as Windows does not support getsockopt for
these options.
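A rough Linux-only sketch of the BUFFORCE approach (helper name made up;
the FORCE variants need CAP_NET_ADMIN, otherwise the code falls back to
the plain, kernel-clamped options):

```go
package sketch

import (
	"net"

	"golang.org/x/sys/unix"
)

// trySetLargeBuffers (hypothetical name) attempts to raise a UDP
// socket's buffers to bufSize bytes with SO_{RCV,SND}BUFFORCE, which
// bypasses net.core.{r,w}mem_max when the process has CAP_NET_ADMIN.
// Failures are non-fatal.
func trySetLargeBuffers(c *net.UDPConn, bufSize int) {
	rc, err := c.SyscallConn()
	if err != nil {
		return
	}
	rc.Control(func(fd uintptr) {
		if err := unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_RCVBUFFORCE, bufSize); err != nil {
			unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_RCVBUF, bufSize)
		}
		if err := unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_SNDBUFFORCE, bufSize); err != nil {
			unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_SNDBUF, bufSize)
		}
	})
}
```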
Signed-off-by: James Tucker <james@tailscale.com>
And add a CLI/localapi and c2n mechanism to enable it for a fixed
amount of time.
Updates #1548
Change-Id: I71674aaf959a9c6761ff33bbf4a417ffd42195a7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This fixes a race condition which caused `c.muCond.Broadcast()` to
never fire in the `firstDerp` if block. It resulted in `Close()`
hanging forever.
Signed-off-by: Kyle Carberry <kyle@carberry.com>
From 5c42990c2f, not yet released in a stable build.
Caught by existing tests.
Fixes #5685
Change-Id: Ia76bb328809d9644e8b96910767facf627830600
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Baby steps towards turning off heartbeat pings entirely as per #540.
This doesn't change any current magicsock functionality and requires additional
changes to send/disco paths before the flag can be turned on.
Updates #540
Change-Id: Idc9a72748e74145b068d67e6dd4a4ffe3932efd0
Signed-off-by: Jenny Zhang <jz@tailscale.com>
Signed-off-by: Jenny Zhang <jz@tailscale.com>
Incoming disco packets are now dropped unless they match one of the
currently bound ports, or have a zero port*.
The BPF filter passes all packets with a disco header to the raw packet
sockets regardless of destination port (in order to avoid needing to
reconfigure BPF on rebind).
If a BPF-enabled node has just rebound, due to restart or rebind, it may
receive and reply to disco ping packets destined for ports other than
those which are presently bound. If the pong is accepted, the pinging
node will then assume that it can send WireGuard traffic to the pinged
port; such traffic will not reach the node, as it is not destined for a
bound port.
*A zero port, if received, is exempted from the check. This is a
speculative defense; a zero port would indicate a problem in the
receive path or the BPF filter. The condition is allowed to pass as it
may enable traffic to flow, however it will also permit problems with
the same symptoms this patch otherwise fixes.
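A minimal sketch of the acceptance check described above (names
hypothetical):

```go
package sketch

// acceptDiscoPort sketches the check: a disco packet read from the raw
// (BPF) socket is only processed if its destination port is one we
// currently have bound. Port 0 should never occur; it is allowed
// through as a speculative defense, with the caveats noted above.
func acceptDiscoPort(dstPort uint16, boundPorts map[uint16]bool) bool {
	if dstPort == 0 {
		return true
	}
	return boundPorts[dstPort]
}
```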
Fixes #5536
Signed-off-by: James Tucker <james@tailscale.com>
1f959edeb0 introduced a regression for JS where the initial bind no
longer occurred at all.
The condition is moved deeper in the call tree to avoid a proliferation
of higher-level conditions.
Updates #5537
Signed-off-by: James Tucker <james@tailscale.com>
Both RebindingUDPConns now always exist. The initial bind (which now
just calls rebind) ensures that bind is called for both, such that
they both at least contain a blockForeverConn. Calling code no longer
needs to assert their state.
Signed-off-by: James Tucker <james@tailscale.com>
This is entirely optional (i.e. failing in this code is non-fatal) and
only enabled on Linux for now. Additionally, this new behaviour can be
disabled by setting the TS_DEBUG_DISABLE_AF_PACKET environment variable.
Updates #3824
Replaces #5474
Co-authored-by: Andrew Dunham <andrew@du.nham.ca>
Signed-off-by: David Anderson <danderson@tailscale.com>
The Start method was removed in 4c27e2fa22, but the comment on NewConn
still mentioned it doesn't do anything until this method is called.
Signed-off-by: Kris Brandow <kris.brandow@gmail.com>
This adds a lighter mechanism for endpoint updates from control.
Change-Id: If169c26becb76d683e9877dc48cfb35f90cc5f24
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
A new package can also later record/report which knobs are checked and
set. It also makes the code cleaner & easier to grep for env knobs.
Change-Id: Id8a123ab7539f1fadbd27e0cbeac79c2e4f09751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This fixes a deadlock on shutdown.
One goroutine is waiting to send on c.derpRecvCh before unlocking c.mu.
The other goroutine is waiting to lock c.mu before receiving from c.derpRecvCh.
#3736 has a more detailed explanation of the sequence of events.
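A stripped-down illustration of the deadlock shape, with stand-in names:

```go
package sketch

import "sync"

// The two goroutines:
//
//	A: holds c.mu, blocks sending on c.derpRecvCh
//	B: blocks locking c.mu, and only then would receive from c.derpRecvCh
type conn struct {
	mu         sync.Mutex
	derpRecvCh chan int // unbuffered
}

func (c *conn) goroutineA() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.derpRecvCh <- 1 // blocks forever: B never gets to the receive
}

func (c *conn) goroutineB() {
	c.mu.Lock() // blocks forever: A still holds mu
	defer c.mu.Unlock()
	<-c.derpRecvCh
}
```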
Fixes #3736
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Turning this on at the beginning of the 1.21.x dev cycle, for 1.22.
Updates #150
Change-Id: I1de567cfe0be3df5227087de196ab88e60c9eb56
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The blockForeverConn was only using its sync.Cond on one side. Looks
like it was just forgotten.
Fixes#3671
Change-Id: I4ed0191982cdd0bfd451f133139428a4fa48238c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Bigger changes coming later, but this should improve things a bit in
the meantime.
Rationale:
* 2 minutes -> 45 seconds: 2 minutes was overkill and never considered
phones/battery at the time. It was totally arbitrary. 45 seconds is
also arbitrary but is less than 2 minutes.
* heartbeat from 2 seconds to 3 seconds: in practice this meant two
packets per second (2 pings and 2 pongs every 2 seconds) because the
other side was also pinging us every 2 seconds on their own.
That's just overkill. (see #540 too)
So in the worst case before: when we sent a single packet (say: a DNS
packet), we ended up sending 61 packets over 2 minutes: the 1 DNS
query and then 60 disco pings (2 minutes / 2 seconds) & received
the same (1 DNS response + 60 pongs). Now it's 15. In 1.22 we plan to
remove this whole timer-based heartbeat mechanism entirely.
The 5 seconds to 6.5 seconds change is just stretching out that
interval so you can still miss two heartbeats (otherwise 3 + 3 seconds
would be greater than 5 seconds). This means that if your peer moves
without telling you, you can have a path out for 6.5 seconds
now instead of 5 seconds before disco finds a new one. That will also
improve in 1.22 when we start doing UDP+DERP at the same time
when confidence starts to go down on a UDP path.
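The timing knobs above, summarized as a sketch with hypothetical constant
names (the values are the ones described in this message):

```go
package sketch

import "time"

const (
	// How long we keep heartbeating a path after the last activity.
	trailingActivityWindowOld = 2 * time.Minute
	trailingActivityWindowNew = 45 * time.Second

	// Disco ping interval on an active path.
	heartbeatIntervalOld = 2 * time.Second
	heartbeatIntervalNew = 3 * time.Second

	// A path is considered stale after missing two heartbeats.
	pathStaleAfterOld = 5 * time.Second         // too tight for 3s heartbeats
	pathStaleAfterNew = 6500 * time.Millisecond // tolerates two missed 3s heartbeats
)
```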
Updates #3363
Change-Id: Ic2314bbdaf42edcdd7103014b775db9cf4facb47
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Treat UDP send EPERM errors as lost UDP packets, not something super
fatal. That's just the Linux firewall preventing them from going out.
And add a leaf package net/neterror for that (and future) policy that
all three packages can share, with tests.
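A sketch of the kind of helper such a package could expose; the real
net/neterror API may differ:

```go
package sketch

import (
	"errors"
	"syscall"
)

// treatAsLostUDP sketches the policy: an EPERM from a UDP send
// (typically the local firewall rejecting the packet) is treated like
// a lost packet rather than a fatal error.
func treatAsLostUDP(err error) bool {
	if err == nil {
		return false
	}
	var errno syscall.Errno
	return errors.As(err, &errno) && errno == syscall.EPERM
}
```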
Updates #3619
Change-Id: Ibdb838c43ee9efe70f4f25f7fc7fdf4607ba9c1d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Only if the source address isn't on the currently active interface or
a ping of the DERP server fails.
Updates #3619
Change-Id: I6bf06503cff4d781f518b437c8744ac29577acc8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We only tracked the transport type (UDP vs DERP), not what they were.
Change-Id: Ia4430c1c53afd4634e2d9893d96751a885d77955
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It's been a bunch of releases now since the TailscaleIPs slice
replacement was added.
Change-Id: I3bd80e1466b3d9e4a4ac5bedba8b4d3d3e430a03
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
endpoint.discoKey is protected by endpoint.mu.
endpoint.sendDiscoMessage was reading it without holding the lock.
This showed up in a CI failure and is readily reproducible locally.
The fix is in two parts.
First, for Conn.enqueueCallMeMaybe, eliminate the one-line helper method endpoint.sendDiscoMessage; call Conn.sendDiscoMessage directly.
This makes it more natural to read endpoint.discoKey in a context
in which endpoint.mu is already held.
Second, for endpoint.sendDiscoPing, explicitly pass the disco key
as an argument. Again, this makes it easier to read endpoint.discoKey
in a context in which endpoint.mu is already held.
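A sketch of that second part, with hypothetical names; the point is that
the key is read while the lock is held and then passed down explicitly:

```go
package sketch

import "sync"

type discoKey [32]byte

type endpoint struct {
	mu       sync.Mutex // guards discoKey (and more)
	discoKey discoKey
}

// startPingLocked is called with de.mu held. It reads de.discoKey under
// the lock and passes it down explicitly, so the lower layers never
// touch the field without the lock (the race described above).
func (de *endpoint) startPingLocked() {
	dk := de.discoKey
	de.sendDiscoPing(dk)
}

func (de *endpoint) sendDiscoPing(dk discoKey) {
	// build and send the disco ping using dk; no access to de.discoKey here
	_ = dk
}
```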
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We can do the "maybe delete" check unconditionally:
In the case of an insert, both oldDiscoKey
and ep.discoKey will be the zero value.
And since we don't use pi again, we can skip
giving it a name, which makes scoping clearer.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
And annotate magicsock as a start.
And add localapi and debug handlers with the Prometheus-format
exporter.
Updates #3307
Change-Id: I47c5d535fe54424741df143d052760387248f8d3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
And the derper change to add a CORS endpoint for latency measurement.
And a little magicsock change to cut down some log spam on js/wasm.
Updates #3157
Change-Id: I5fd9e6f5098c815116ddc8ac90cbcd0602098a48
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Be DERP-only for now. (WebRTC can come later :))
Updates #3157
Change-Id: I56ebb3d914e37e8f4ab651306fd705b817ca381c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Now that peerMap tracks the set of nodes for a DiscoKey.
Updates #3088
Change-Id: I927bf2bdfd2b8126475f6b6acc44bc799fcb489f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Continuation of 2aa5df7ac1: remove a nil check because the value can
never be nil. (It previously could be nil.)
Change-Id: I59cd9ad611dbdcbfba680ed9b22e841b00c9d5e6
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This adds new fields (currently unused) to discoInfo to track the
last verified (unambiguous) NodeKey that a DiscoKey mapped to, and
when.
Then on CallMeMaybe, Pong and on most Pings, we update the mapping
from DiscoKey to the current NodeKey for that DiscoKey.
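A sketch of the new fields; the names here are illustrative, not the
actual field names:

```go
package sketch

import "time"

type nodeKey [32]byte

// discoInfo sketch: the two new (for now unused) fields record the
// last NodeKey a DiscoKey was unambiguously verified to map to, and
// when that mapping was last confirmed.
type discoInfo struct {
	// ... existing per-DiscoKey state ...

	lastNodeKey     nodeKey   // last verified NodeKey for this DiscoKey
	lastNodeKeyTime time.Time // when that mapping was last confirmed
}
```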
Updates #3088
Change-Id: Idc4261972084dec71cf8ec7f9861fb9178eb0a4d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This lets clients quickly (sub-millisecond within a local LAN) map
from an ambiguous disco key to a node key without waiting for a
CallMeMaybe (over relatively high latency DERP).
Updates #3088
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>