Tangentially related to #987, #177, #594, #925, #505
Motivated by rebooting a launchd-controlled tailscaled: it went into
SetNetworkUp(false) mode immediately because there really is no
network up at system boot, but then it got stuck in that paused state
forever, because there was no monitor implementation to wake it back up.
The interfaces.State logging tried to log only interfaces that had
interesting IPs, but the what-is-interesting checks differed between
the code that gathered the interface names to print and the code that
printed their addresses.
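The shape of the fix is to route both decisions through one shared
predicate. A minimal sketch, assuming hypothetical helper names
(isInterestingIP, logInterfaces) that stand in for the real code:

import (
	"log"
	"net"
)

// isInterestingIP is a stand-in for the shared check. Both the
// "which interfaces appear at all" decision and the "which addresses
// get printed" decision call the same function, so they can no
// longer disagree.
func isInterestingIP(ip net.IP) bool {
	return !ip.IsLoopback() && !ip.IsLinkLocalUnicast()
}

func logInterfaces(ifaces map[string][]net.IP) {
	for name, addrs := range ifaces {
		var keep []net.IP
		for _, ip := range addrs {
			if isInterestingIP(ip) {
				keep = append(keep, ip)
			}
		}
		if len(keep) == 0 {
			continue // the same predicate decides whether the interface is listed at all
		}
		log.Printf("%s: %v", name, keep)
	}
}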
When a handshake race occurs, a queued data packet can get lost.
TestTwoDevicePing expected that the very first data packet would arrive.
This caused occasional flakes.
Change TestTwoDevicePing to repeatedly re-send packets
and succeed when one of them makes it through.
This is acceptable (vs making WireGuard not drop the packets)
because this only affects communication with extremely old clients.
And those extremely old clients will eventually connect,
because the kernel will retry sends on timeout.
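The retry loop has roughly this shape; send and got are hypothetical
stand-ins for the test's send path and delivery notification, and the
durations are illustrative:

import (
	"testing"
	"time"
)

// waitForFirstDelivery re-sends a probe until one copy arrives or the
// overall deadline passes. Early sends may be dropped by a handshake
// race; a single successful delivery is enough for the test to pass.
func waitForFirstDelivery(t *testing.T, send func(), got <-chan struct{}) {
	t.Helper()
	deadline := time.After(30 * time.Second)
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-deadline:
			t.Fatal("no packet ever made it through")
		case <-tick.C:
			send()
		case <-got:
			return
		}
	}
}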
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We modified the standard net package to not allocate a *net.UDPAddr
during a call to (*net.UDPConn).ReadFromUDP if the caller's use
of the *net.UDPAddr does not cause it to escape.
That is https://golang.org/cl/291390.
This is the companion change to magicsock.
There are two changes required.
First, call ReadFromUDP instead of ReadFrom, if possible.
ReadFrom returns a net.Addr, which is an interface, which always allocates.
Second, reduce the lifetime of the returned *net.UDPAddr.
We do this by immediately converting it into a netaddr.IPPort.
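Put together, the receive path looks roughly like this. This is a
sketch, not the real RebindingUDPConn code (which also handles
rebinding under a lock); netaddr.FromStdAddr is the inet.af/netaddr
conversion helper of that era:

import (
	"fmt"
	"net"

	"inet.af/netaddr"
)

func readFromNetaddr(c *net.UDPConn, buf []byte) (int, netaddr.IPPort, error) {
	// ReadFromUDP returns a concrete *net.UDPAddr rather than a
	// net.Addr interface value, so no interface-boxing allocation
	// happens here.
	n, ua, err := c.ReadFromUDP(buf)
	if err != nil {
		return 0, netaddr.IPPort{}, err
	}
	// Convert immediately so the *net.UDPAddr has a short lifetime
	// and, with the patched net package, needn't be heap-allocated.
	ipp, ok := netaddr.FromStdAddr(ua.IP, ua.Port, ua.Zone)
	if !ok {
		return 0, netaddr.IPPort{}, fmt.Errorf("invalid UDP addr %v", ua)
	}
	return n, ipp, nil
}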
We left the existing RebindingUDPConn.ReadFrom method in place,
as it is required to satisfy the net.PacketConn interface.
With the upstream change and both of these fixes in place,
we have removed one large allocation per packet received.
name           old time/op    new time/op    delta
ReceiveFrom-8  16.7µs ± 5%    16.4µs ± 8%    ~        (p=0.310 n=5+5)

name           old alloc/op   new alloc/op   delta
ReceiveFrom-8  112B ± 0%      64B ± 0%       -42.86%  (p=0.008 n=5+5)

name           old allocs/op  new allocs/op  delta
ReceiveFrom-8  3.00 ± 0%      2.00 ± 0%      -33.33%  (p=0.008 n=5+5)
Co-authored-by: Sonia Appasamy <sonia@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
addrSet maintained duplicate lists of netaddr.IPPorts and net.UDPAddrs.
Unify to use the netaddr type only.
This makes (*Conn).ReceiveIPvN a bit uglier,
but that'll be cleaned up in a subsequent commit.
This is preparatory work to remove an allocation from ReceiveIPv4.
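The pattern is to keep the single netaddr.IPPort representation inside
addrSet and convert to the standard type only at the boundary where a
stdlib API needs it. A sketch; treat the exact netaddr field and method
names here as the API of that era:

import (
	"net"

	"inet.af/netaddr"
)

// toStdUDPAddr converts at the edge, instead of addrSet carrying a
// parallel []*net.UDPAddr that must be kept in sync with its
// []netaddr.IPPort.
func toStdUDPAddr(ipp netaddr.IPPort) *net.UDPAddr {
	return &net.UDPAddr{
		IP:   ipp.IP.IPAddr().IP, // netaddr.IP -> net.IP
		Port: int(ipp.Port),
	}
}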
Co-authored-by: Sonia Appasamy <sonia@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
I based my estimate of the required timeout on locally observed
behavior. But CI machines are worse than my local machine.
16s was enough to reduce flakiness but not eliminate it. Bump it up again.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Build tags have been updated to build native Apple M1 binaries; the existing
build tags for iOS have been changed from darwin,arm64 to ios,arm64.
With this change, running go build cmd/tailscale{,d}/tailscale{,d}.go on an Apple
machine with the new processor works, and the resulting binaries show the expected
architecture, e.g. tailscale: Mach-O 64-bit executable arm64.
Tested using go version go1.16beta1 darwin/arm64.
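For illustration, the shape of the tag change on an affected file is
as follows (representative, not an exact diff from this change):

// Before: darwin,arm64 matched iOS devices and M1 Macs alike.
// +build darwin,arm64

// After: Go 1.16's ios GOOS gives iOS its own build tag, so
// darwin/arm64 (M1) builds fall through to the desktop code path.
// +build ios,arm64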
Updates #943
Signed-off-by: moncho <50428+moncho@users.noreply.github.com>
It only affects 'go install ./...', etc, and only on darwin/arm64 (M1 Macs) where
the go-ole package doesn't compile.
No need to build it.
Updates #943
This was in place because retrieving allowed_ips was very expensive.
Upstream changed the data structure to make them cheaper to compute.
This commit is an experiment to find out whether they're now cheap enough.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We removed the "fast retry" code from our wireguard-go fork.
As a result, pings can take longer to transit when retries are required.
Allow that.
Fixes #1277
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The fix can make this test run unconditionally.
This moves code from 5c619882bc for
testability but doesn't fix it yet. The #1282 problem remains (when I
wrote its wake-up mechanism, I forgot there were N DERP readers
funneling into 1 UDP reader, and the code just isn't correct at all
for that case).
Also factor out some test helper code from BenchmarkReceiveFrom.
The refactoring in magicsock.go for testability should have no
behavior change.
If no exit node is specified, the filter must still run to remove
offered default routes from all peers.
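A minimal sketch of that behavior, with hypothetical types and field
names standing in for the netmap structures the real filter uses:

// peerRoutes is a stand-in for a peer's offered routes in the netmap.
type peerRoutes struct {
	PeerID string
	Routes []string // CIDRs, e.g. "10.0.0.0/24", "0.0.0.0/0"
}

func isDefaultRoute(cidr string) bool {
	return cidr == "0.0.0.0/0" || cidr == "::/0"
}

// filterDefaultRoutes drops offered default routes from every peer
// except the selected exit node. With exitNode == "" (none selected),
// every default route is removed, which is the case this change makes
// sure still runs.
func filterDefaultRoutes(peers []peerRoutes, exitNode string) []peerRoutes {
	out := make([]peerRoutes, 0, len(peers))
	for _, p := range peers {
		var kept []string
		for _, r := range p.Routes {
			if isDefaultRoute(r) && p.PeerID != exitNode {
				continue
			}
			kept = append(kept, r)
		}
		out = append(out, peerRoutes{PeerID: p.PeerID, Routes: kept})
	}
	return out
}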
Signed-off-by: David Anderson <danderson@tailscale.com>
This one alone doesn't modify the global dependency map much
(depaware.txt, if anything, looks slightly worse), but it leaves
NetworkMap as ipn's only remaining reference to controlclient:
bradfitz@tsdev:~/src/tailscale.com/ipn$ grep -F "controlclient." *.go
backend.go: NetMap *controlclient.NetworkMap // new netmap received
fake_test.go: b.notify(Notify{NetMap: &controlclient.NetworkMap{}})
fake_test.go: b.notify(Notify{NetMap: &controlclient.NetworkMap{}})
handle.go: netmapCache *controlclient.NetworkMap
handle.go:func (h *Handle) NetMap() *controlclient.NetworkMap {
Once that moves into a leaf package, ipn won't depend on
controlclient at all, and then the client gets smaller.
Updates #1278
Upstream wireguard-go decided to use errors.Is(err, net.ErrClosed)
instead of checking the error string.
It also provided an unsafe linknamed version of net.ErrClosed
for clients running Go 1.15. Switch to that.
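The difference, in miniature (the wrapper names here are hypothetical):

import (
	"errors"
	"net"
	"strings"
)

// Old approach: match on the error text, which is brittle.
func isClosedErrOld(err error) bool {
	return err != nil && strings.Contains(err.Error(), "use of closed network connection")
}

// New approach: compare against the sentinel. net.ErrClosed is
// exported in Go 1.16; on Go 1.15, wireguard-go's linknamed copy of
// the same internal value is used instead.
func isClosedErr(err error) bool {
	return errors.Is(err, net.ErrClosed)
}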
This reduces the time required for the wgengine/magicsock tests
on my machine from ~35s back to the ~13s it was before
456cf8a376.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>