This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
01b90df2fa added SCTP support before
(with explicit parsing for ports) and
69de3bf7bf tried to add support for
arbitrary IP protocols (as long as the ACL permitted a port of "*",
since we might not know how to find ports from an arbitrary IP
protocol, if it even has such a concept). But apparently that latter
commit wasn't tested end-to-end enough. It had a lot of tests, but the
tests made assumptions about layering that either weren't true, or
regressed since 1.20. Notably, it didn't remove the (*Filter).pre
bidirectional filter that dropped all "unknown" protocol packets both
leaving and entering, even if there were explicit protocol matches
allowing them in.
Also, don't map all unknown protocols to 0. Keep their IP protocol
number parsed so it's matchable by later layers. Only reject illegal
things.
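As a sketch of the intended matching behavior (names here are hypothetical, not the actual wgengine/filter code), keeping the parsed protocol number lets a later match allow it explicitly:

package main

import "fmt"

// ipProto is a parsed IP protocol number (6 = TCP, 132 = SCTP, 47 = GRE, ...).
type ipProto uint8

// protoMatch is a hypothetical ACL rule that permits arbitrary IP protocols.
type protoMatch struct {
    protos []ipProto // protocols explicitly allowed
}

// allows reports whether the packet's protocol is explicitly permitted,
// rather than dropping everything that isn't TCP/UDP/ICMP up front.
func (m protoMatch) allows(p ipProto) bool {
    for _, want := range m.protos {
        if want == p {
            return true
        }
    }
    return false
}

func main() {
    m := protoMatch{protos: []ipProto{47}}   // e.g. GRE allowed by the ACL
    fmt.Println(m.allows(47), m.allows(132)) // true false
}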
Fixes #6423
Updates #2162
Updates #2163
Change-Id: I9659b3ece86f4db51d644f9b34df78821758842c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The Tailscale logging service has a hard limit on the maximum
log message size that can be accepted.
We want to ensure that netlog messages never exceed
this limit; otherwise a client cannot transmit logs.
Move the goroutine for periodically dumping netlog messages
from wgengine/netlog to net/connstats.
This allows net/connstats to manage when it dumps messages,
either based on time or by size.
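A rough sketch of that flush policy (the names and the tooBig signal are assumptions, not the net/connstats API):

package main

import (
    "context"
    "time"
)

// flushLoop dumps statistics when the period elapses, or sooner when the
// caller signals that the pending message would exceed the logging limit.
func flushLoop(ctx context.Context, period time.Duration, tooBig <-chan struct{}, flush func()) {
    tick := time.NewTicker(period)
    defer tick.Stop()
    for {
        select {
        case <-ctx.Done():
            flush() // final flush on shutdown
            return
        case <-tick.C:
            flush() // time-based flush
        case <-tooBig:
            flush() // size-based flush keeps each message under the limit
        }
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    defer cancel()
    flushLoop(ctx, 100*time.Millisecond, make(chan struct{}), func() {})
}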
Updates tailscale/corp#8427
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This is temporary while we work to upstream performance work in
https://github.com/WireGuard/wireguard-go/pull/64. A replace directive
is less ideal as it breaks dependent code without duplication of the
directive.
Signed-off-by: Jordan Whited <jordan@tailscale.com>
We would call parsedPacketPool.Get() once per packet received in Read/Write.
This was wasteful and unnecessary; instead, fetch a single *packet.Parsed
and reuse it for all packets.
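Roughly the shape of the change (illustrative types; the real code uses tailscale.com/net/packet and the wrapper's parsedPacketPool):

package sketch

import "sync"

// parsed stands in for packet.Parsed; decode overwrites all of its fields.
type parsed struct{ proto uint8 }

func decode(p *parsed, pkt []byte) { /* overwrite p from pkt */ }

var parsedPool = sync.Pool{New: func() any { return new(parsed) }}

// handleBatch fetches one parsed value for the whole batch instead of one
// pool Get/Put per packet.
func handleBatch(pkts [][]byte, handle func(*parsed)) {
    p := parsedPool.Get().(*parsed)
    defer parsedPool.Put(p)
    for _, pkt := range pkts {
        decode(p, pkt)
        handle(p)
    }
}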
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This commit updates the wireguard-go dependency and implements the
necessary changes to the tun.Device and conn.Bind implementations to
support passing vectors of packets in tailscaled. This significantly
improves throughput performance on Linux.
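The gist of the vectorized shape (illustrative interface only; the authoritative signatures are wireguard-go's tun.Device and conn.Bind):

package sketch

// vectorDevice moves many packets per call rather than one, cutting the
// per-packet syscall and handoff overhead.
type vectorDevice interface {
    // Read fills bufs with up to len(bufs) packets, records each packet's
    // length in sizes, and returns how many packets were read.
    Read(bufs [][]byte, sizes []int, offset int) (int, error)
    // Write sends len(bufs) packets in a single call.
    Write(bufs [][]byte, offset int) (int, error)
    // BatchSize reports how many packets the device can move per call.
    BatchSize() int
}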
Updates #414
Signed-off-by: Jordan Whited <jordan@tailscale.com>
Signed-off-by: James Tucker <james@tailscale.com>
Co-authored-by: James Tucker <james@tailscale.com>
Previously, tstun.Wrapper and magicsock.Conn managed their
own statistics data structure and relied on an external call to
Extract to extract (and reset) the statistics.
This makes it difficult to ensure a maximum size on the statistics
as the caller has no introspection into whether the number
of unique connections is getting too large.
Invert the control flow such that a *connstats.Statistics
is registered with tstun.Wrapper and magicsock.Conn.
Methods on non-nil *connstats.Statistics are called for every packet.
This allows the implementation of connstats.Statistics (in the future)
to better control when it needs to flush to ensure
bounds on maximum sizes.
The value registered into tstun.Wrapper and magicsock.Conn could
be an interface, but that has two performance detriments:
1. Method calls on interface values are more expensive since
they must go through a virtual method dispatch.
2. The implementation would need a sync.Mutex to protect the
statistics value instead of using an atomic.Pointer.
Given that methods on connstats.Statistics are called for every packet,
we want to reduce the CPU cost on this hot path.
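A sketch of the resulting hot-path check (illustrative; the real type is *connstats.Statistics):

package sketch

import "sync/atomic"

// Statistics stands in for connstats.Statistics.
type Statistics struct{ /* per-connection counters */ }

func (s *Statistics) UpdateTx(pkt []byte) { /* count pkt */ }

type Wrapper struct {
    stats atomic.Pointer[Statistics] // nil when stats are not registered
}

// SetStatistics registers (or, with nil, unregisters) the sink.
func (w *Wrapper) SetStatistics(s *Statistics) { w.stats.Store(s) }

// noteTx runs for every packet: a concrete pointer plus an atomic load is
// cheaper than an interface value guarded by a mutex.
func (w *Wrapper) noteTx(pkt []byte) {
    if s := w.stats.Load(); s != nil {
        s.UpdateTx(pkt)
    }
}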
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The //go:build syntax was introduced in Go 1.17:
https://go.dev/doc/go1.17#build-lines
gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.
Done with:
perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')
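For reference, a file header after this change keeps only the //go:build form; the old line mentioned in the comment is what the perl pass deleted:

//go:build linux && !android

// Package buildtagdemo is an illustrative file header. The deleted
// old-style equivalent of the constraint above was "+build linux,!android".
package buildtagdemo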
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The netlog.Message type is useful to depend on from other packages,
but doing so would transitively cause gvisor and other large packages
to be linked in.
Avoid this problem by moving all network logging types to a single package.
We also update staticcheck to take in:
003d277bcf
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Logger type manages a logtail.Logger for extracting
statistics from a tstun.Wrapper.
So long as Shutdown is called, it ensures that logtail
and statistics-gathering resources are properly cleaned up.
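A minimal sketch of the lifecycle contract described here (constructor and field names are made up):

package sketch

import "context"

type Logger struct {
    stop context.CancelFunc
    done chan struct{}
}

func newLogger() *Logger {
    ctx, cancel := context.WithCancel(context.Background())
    lg := &Logger{stop: cancel, done: make(chan struct{})}
    go func() {
        defer close(lg.done)
        <-ctx.Done() // the real goroutine gathers stats and feeds logtail until canceled
    }()
    return lg
}

// Shutdown stops statistics gathering and waits for the background
// goroutine; callers must call it or the goroutine (and logtail) leak.
func (l *Logger) Shutdown(ctx context.Context) error {
    l.stop()
    select {
    case <-l.done:
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}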
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rename StatisticsEnable as SetStatisticsEnabled to be consistent
with other similarly named methods.
Rename StatisticsExtract as ExtractStatistics to follow
the convention where methods start with a verb.
It was originally named with Statistics as a prefix so that
statistics related methods would sort well in godoc,
but that property no longer holds.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
If statistics are enabled via Wrapper.StatisticsEnable,
then per-connection counters are maintained.
While enabled, Wrapper.StatisticsExtract must be called periodically;
otherwise there is unbounded memory growth.
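A sketch of the required polling (the interface is a stand-in for tstun.Wrapper; key/value types are illustrative):

package sketch

import (
    "context"
    "time"
)

// statsSource stands in for tstun.Wrapper: once enabled, Extract must be
// called periodically or the per-connection map grows without bound.
type statsSource interface {
    StatisticsEnable(bool)
    StatisticsExtract() map[string]uint64
}

func pollStats(ctx context.Context, src statsSource, every time.Duration, record func(map[string]uint64)) {
    src.StatisticsEnable(true)
    defer src.StatisticsEnable(false)
    tick := time.NewTicker(every)
    defer tick.Stop()
    for {
        select {
        case <-ctx.Done():
            record(src.StatisticsExtract()) // final drain
            return
        case <-tick.C:
            record(src.StatisticsExtract()) // extract-and-reset bounds memory
        }
    }
}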
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We were marking them as gauges, but they are only ever incremented,
thus counter is more appropriate.
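For contrast, using tailscale.com/util/clientmetric (metric names here are made up): a counter only ever goes up, while a gauge can be set to any value:

package sketch

import "tailscale.com/util/clientmetric"

var (
    packetsDropped = clientmetric.NewCounter("sketch_packets_dropped") // only Add
    numPeers       = clientmetric.NewGauge("sketch_num_peers")         // Set up or down
)

func onDrop()       { packetsDropped.Add(1) }
func onPeers(n int) { numPeers.Set(int64(n)) }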
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
These were intended to be pushed with #4408, but in my excitement I
forgot to git push :/ Better late than never.
Signed-off-by: Tom DNetto <tom@tailscale.com>
This change wires netstack with a hook for traffic coming from the host
into the tun, allowing interception and handling of traffic to quad-100.
With this hook wired, magicDNS queries over UDP are now handled within
netstack. The existing logic in wgengine to handle magicDNS remains for now,
but its hook operates after the netstack hook so the netstack implementation
takes precedence. This is done in case we need to support platforms without
netstack for longer than expected.
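The interception idea, sketched with a hypothetical hook signature (100.100.100.100:53 is the MagicDNS quad-100 address):

package sketch

import "net/netip"

var quad100DNS = netip.AddrPortFrom(netip.MustParseAddr("100.100.100.100"), 53)

// hookFromHost stands in for the netstack hook on traffic leaving the host
// for the tun: DNS queries to quad-100 are answered in-process, everything
// else continues down the normal path.
func hookFromHost(dst netip.AddrPort, payload []byte, handleDNS func([]byte)) (handled bool) {
    if dst == quad100DNS {
        handleDNS(payload)
        return true
    }
    return false
}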
Signed-off-by: Tom DNetto <tom@tailscale.com>
A subsequent commit implements handling of magicDNS traffic via netstack.
Implementing this requires a hook for traffic originating from the host and
hitting the tun, so we make another hook to support this.
Signed-off-by: Tom DNetto <tom@tailscale.com>
Plumb the outbound injection path to allow passing netstack
PacketBuffers down to the tun Read, where they are decref'd to enable
buffer re-use. This removes one packet alloc & copy, and reduces GC
pressure by pooling outbound injected packets.
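A sketch of the ref-counted handoff (illustrative types; the real buffers are gVisor PacketBuffers with DecRef):

package sketch

// packetBuf stands in for a ref-counted netstack PacketBuffer.
type packetBuf struct {
    data   []byte
    decref func() // returns the buffer to its pool
}

// injectOutbound queues a netstack packet for the tun without copying it.
func injectOutbound(outbound chan<- *packetBuf, pb *packetBuf) {
    outbound <- pb
}

// tunRead copies the injected packet into the caller's buffer and only then
// releases the PacketBuffer so its memory can be reused.
func tunRead(outbound <-chan *packetBuf, buf []byte) int {
    pb := <-outbound
    n := copy(buf, pb.data)
    pb.decref()
    return n
}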
Fixes #2741
Signed-off-by: James Tucker <james@tailscale.com>
This TODO was both added and fixed in 506c727e3.
As I recall, I wasn't originally going to do it because it seemed
annoying, so I wrote the TODO, but then I felt bad about it and just
did it, but forgot to remove the TODO.
Change-Id: I8f3514809ad69b447c62bfeb0a703678c1aec9a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Now that Go 1.17 has module graph pruning
(https://go.dev/doc/go1.17#go-command), we should be able to use
upstream netstack without breaking our private repo's build
that then depends on the tailscale.com Go module.
This is that experiment.
Updates #1518 (the original bug to break out netstack to own module)
Updates #2642 (this updates netstack, but doesn't remove workaround)
Change-Id: I27a252c74a517053462e5250db09f379de8ac8ff
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
A new package can also later record/report which knobs are checked and
set. It also makes the code cleaner & easier to grep for env knobs.
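Typical usage after the change looks roughly like this (the knob name is made up):

package sketch

import "tailscale.com/envknob"

// Knobs are read through one package instead of ad-hoc os.Getenv parsing,
// so the package can also record which knobs were consulted.
var debugExample = envknob.Bool("TS_DEBUG_EXAMPLE")

func useKnob() bool { return debugExample }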
Change-Id: Id8a123ab7539f1fadbd27e0cbeac79c2e4f09751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Fixes #3660
RELNOTE=MagicDNS now works over IPv6 when CGNAT IPv4 is disabled.
Change-Id: I001e983df5feeb65289abe5012dedd177b841b45
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
When this happens, it is incredibly noisy in the logs.
It accounts for about a third of all remaining
"unexpected" log lines from a recent investigation.
It's not clear that we know how to fix this;
we have a functioning workaround,
and we now have a (cheap and efficient) metric for this
that we can use for measurements.
So reduce the logging to approximately once per minute.
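A minimal sketch of that kind of throttling (not the actual logging helper used):

package sketch

import (
    "sync"
    "time"
)

// rateLimitedLogf wraps logf so the wrapped function emits at most one line
// per period and silently drops the rest.
func rateLimitedLogf(logf func(string, ...any), period time.Duration) func(string, ...any) {
    var (
        mu   sync.Mutex
        last time.Time
    )
    return func(format string, args ...any) {
        mu.Lock()
        defer mu.Unlock()
        if now := time.Now(); now.Sub(last) >= period {
            last = now
            logf(format, args...)
        }
    }
}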
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
There are a few remaining uses of testing.AllocsPerRun:
two in which we only log the number of allocations,
and one in which we dynamically calculate the allocation
target based on a different AllocsPerRun run.
This also allows us to tighten the "no allocs"
test in wgengine/filter.
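The tightened check looks roughly like this (the hot-path closure is a stand-in for the filter's per-packet path):

package sketch

import "testing"

func TestNoAllocs(t *testing.T) {
    hotPath := func() {} // stand-in for filtering one packet
    if n := testing.AllocsPerRun(1000, hotPath); n > 0 {
        t.Errorf("got %v allocs per run, want 0", n)
    }
}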
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It was in the wrong filter direction before, per CPU profiles
we now have.
Updates #1526 (maybe fixes? time will tell)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Mostly so the Linux one can use Linux-specific stuff in package
syscall and not use os/exec for uname for portability.
But also it helps deps a tiny bit on iOS.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Still very much a prototype (hard-coded IPs, etc) but should be
non-invasive enough to submit at this point and iterate from here.
Updates #2589
Co-authored-by: David Crawshaw <crawshaw@tailscale.com>
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
There's a call to Now once per packet.
Move to mono.Now.
Though the current implementation provides high precision,
we document it to be coarse, to preserve the ability
to switch to a coarse monotonic time later.
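Illustrative shape of the swap (tailscale.com/tstime/mono provides mono.Now and mono.Time):

package sketch

import "tailscale.com/tstime/mono"

// lastActivity is updated once per packet, so the clock read must be cheap.
var lastActivity mono.Time

func notePacket() {
    // Monotonic, and documented as possibly coarse, which keeps the door
    // open for switching to a cheaper coarse clock later.
    lastActivity = mono.Now()
}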
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The handoff between tstun.Wrap's Read and poll methods
is one of the per-packet hotspots. It shows up in pprof.
Making outbound buffered increases throughput.
It is hard to measure exactly how much, because the numbers
are highly variable, but I'd estimate it at about 1%,
using the best observed max throughput across three runs.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
To remove some multi-case selects, we intentionally allowed
sends on closed channels (cc23049cd2).
However, we also introduced concurrent sends and closes,
which is a data race.
This commit fixes the data race. The mutexes here are uncontended,
and thus very cheap.
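The shape of the fix, roughly (names are illustrative; the real channel is buffered, so sends under the lock don't block for long):

package sketch

import "sync"

// sendBuffer serializes sends and close so they can no longer race: once
// closed, further sends are silently dropped instead of racing or panicking.
type sendBuffer struct {
    mu     sync.Mutex
    ch     chan []byte
    closed bool
}

func (b *sendBuffer) send(pkt []byte) {
    b.mu.Lock()
    defer b.mu.Unlock()
    if !b.closed {
        b.ch <- pkt
    }
}

func (b *sendBuffer) close() {
    b.mu.Lock()
    defer b.mu.Unlock()
    if !b.closed {
        b.closed = true
        close(b.ch)
    }
}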
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Calculate whether the packet is injected directly,
rather than via an else branch.
Unify the exit paths. It is easier here than duplicating them.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Every TUN Read went through several multi-case selects.
We know from past experience with wireguard-go that these are slow
and cause scheduler churn.
The selects served two purposes: they separated errors from data and
gracefully handled shutdown. The first is fairly easy to replace by sending
errors and data over a single channel. The second, less so.
We considered a few approaches: Intricate webs of channels,
global condition variables. They all get ugly fast.
Instead, let's embrace the ugly and handle shutdown ungracefully.
It's horrible, but the horror is simple and localized.
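The single-channel half of that, sketched with illustrative types:

package sketch

// tunReadResult carries either a packet or an error, so the reader needs a
// single channel receive instead of a multi-case select.
type tunReadResult struct {
    data []byte
    err  error
}

func nextPacket(outbound <-chan tunReadResult) ([]byte, error) {
    res := <-outbound // one receive: no select, no scheduler churn
    if res.err != nil {
        return nil, res.err
    }
    return res.data, nil
}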
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Pull in the latest version of wireguard-windows.
Switch to upstream wireguard-go.
This requires reverting all of our import paths.
Unfortunately, this has to happen at the same time.
The wireguard-go change is very low risk,
as that commit matches our fork almost exactly.
(The only changes are import paths, CI files, and a go.mod entry.)
So if there are issues as a result of this commit,
the first place to look is wireguard-windows changes.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Whenever we dropped a packet due to ACLs, wireguard-go was logging:
Failed to write packet to TUN device: packet dropped by filter
Instead, just lie to wireguard-go and pretend everything is okay.
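Sketched, the lie looks like this (illustrative types; the real code is the tstun wrapper's TUN write path):

package sketch

type device interface{ Write(pkt []byte) (int, error) }

// wrapper stands in for the filtering layer between wireguard-go and the
// real TUN device.
type wrapper struct {
    dev          device
    filterAllows func(pkt []byte) bool
}

// Write drops packets the filter rejects but still reports success, so
// wireguard-go doesn't log every intentional drop as a TUN write error.
func (w *wrapper) Write(pkt []byte) (int, error) {
    if !w.filterAllows(pkt) {
        return len(pkt), nil // lie: the drop is deliberate, not a failure
    }
    return w.dev.Write(pkt)
}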
Fixes #1229
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We had a long-standing bug in which our TUN events channel
was being received from simultaneously in two places.
The first is wireguard-go.
At wgengine/userspace.go:366, we pass e.tundev to wireguard-go,
which starts a goroutine (RoutineTUNEventReader)
that receives from that channel and uses events to adjust the MTU
and bring the device up/down.
At wgengine/userspace.go:374, we launch a goroutine that
receives from e.tundev, logs MTU changes, and triggers
state updates when up/down changes occur.
Events were getting delivered haphazardly between the two of them.
We don't really want wireguard-go to receive the up/down events;
we control the state of the device explicitly by calling device.Up.
And the userspace.go loop's MTU logging duplicates the logging that
wireguard-go does when it receives MTU updates.
So this change splits the single TUN events channel into up/down
and other (aka MTU), and sends them to the parties that ought
to receive them.
I'm actually a bit surprised that this hasn't caused more visible trouble.
If a down event went to wireguard-go but the subsequent up event
went to userspace.go, we could end up with the wireguard-go device disappearing.
I believe that this may also (somewhat accidentally) be a fix for #1790.
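A sketch of the fan-out (tun.EventUp, tun.EventDown, and tun.EventMTUUpdate are wireguard-go's event bits; the pump itself is illustrative):

package sketch

import "golang.zx2c4.com/wireguard/tun"

// pumpEvents splits the wrapped device's single event stream: wireguard-go
// sees only MTU updates, while up/down changes go to the engine, which
// controls device state explicitly via device.Up.
func pumpEvents(src <-chan tun.Event, toWireguard, toEngine chan<- tun.Event) {
    defer close(toWireguard)
    defer close(toEngine)
    for ev := range src {
        if ev&tun.EventMTUUpdate != 0 {
            toWireguard <- tun.EventMTUUpdate
        }
        if ev&(tun.EventUp|tun.EventDown) != 0 {
            toEngine <- ev &^ tun.EventMTUUpdate
        }
    }
}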
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
This tries to generate traffic at a rate that will saturate the
receiver, without overdoing it, even in the event of packet loss. It's
unrealistically more aggressive than TCP (which will back off quickly
in case of packet loss) but less silly than a blind test that just
generates packets as fast as it can (which can cause all the CPU to be
absorbed by the transmitter, giving an incorrect impression of how much
capacity the total system has).
Initial indications are that a syscall about every 10 packets (TCP bulk
delivery) is roughly the same speed as sending every packet through a
channel. A syscall per packet is about 5x-10x slower than that.
The whole tailscale wireguard-go + magicsock + packet filter
combination is about 4x slower again, which is better than I thought
we'd do, but probably has room for improvement.
Note that in "full" tailscale, there is also a tundev read/write for
every packet, effectively doubling the syscall overhead per packet.
Given these numbers, it seems like read/write syscalls are only 25-40%
of the total CPU time used in tailscale proper, so we do have
significant non-syscall optimization work to do too.
Sample output:
$ GOMAXPROCS=2 go test -bench . -benchtime 5s ./cmd/tailbench
goos: linux
goarch: amd64
pkg: tailscale.com/cmd/tailbench
cpu: Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz
BenchmarkTrivialNoAlloc/32-2 56340248 93.85 ns/op 340.98 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivialNoAlloc/124-2 57527490 99.27 ns/op 1249.10 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivialNoAlloc/1024-2 52537773 111.3 ns/op 9200.39 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/32-2 41878063 135.6 ns/op 236.04 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/124-2 41270439 138.4 ns/op 896.02 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTrivial/1024-2 36337252 154.3 ns/op 6635.30 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkBlockingChannel/32-2 12171654 494.3 ns/op 64.74 MB/s 0 %lost 1791 B/op 0 allocs/op
BenchmarkBlockingChannel/124-2 12149956 507.8 ns/op 244.17 MB/s 0 %lost 1792 B/op 1 allocs/op
BenchmarkBlockingChannel/1024-2 11034754 528.8 ns/op 1936.42 MB/s 0 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/32-2 8960622 2195 ns/op 14.58 MB/s 8.825 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/124-2 3014614 2224 ns/op 55.75 MB/s 11.18 %lost 1792 B/op 1 allocs/op
BenchmarkNonlockingChannel/1024-2 3234915 1688 ns/op 606.53 MB/s 3.765 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/32-2 8457559 764.1 ns/op 41.88 MB/s 5.945 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/124-2 5497726 1030 ns/op 120.38 MB/s 12.14 %lost 1792 B/op 1 allocs/op
BenchmarkDoubleChannel/1024-2 7985656 1360 ns/op 752.86 MB/s 13.57 %lost 1792 B/op 1 allocs/op
BenchmarkUDP/32-2 1652134 3695 ns/op 8.66 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkUDP/124-2 1621024 3765 ns/op 32.94 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkUDP/1024-2 1553750 3825 ns/op 267.72 MB/s 0 %lost 176 B/op 3 allocs/op
BenchmarkTCP/32-2 11056336 503.2 ns/op 63.60 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTCP/124-2 11074869 533.7 ns/op 232.32 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkTCP/1024-2 8934968 671.4 ns/op 1525.20 MB/s 0 %lost 0 B/op 0 allocs/op
BenchmarkWireGuardTest/32-2 1403702 4547 ns/op 7.04 MB/s 14.37 %lost 467 B/op 3 allocs/op
BenchmarkWireGuardTest/124-2 780645 7927 ns/op 15.64 MB/s 1.537 %lost 420 B/op 3 allocs/op
BenchmarkWireGuardTest/1024-2 512671 11791 ns/op 86.85 MB/s 0.5206 %lost 411 B/op 3 allocs/op
PASS
ok tailscale.com/wgengine/bench 195.724s
Updates #414.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
This is usually the same as the requested interface, but on some
unixes can vary based on device number allocation, and on Windows
it's the GUID instead of the pretty name, since everything relating
to configuration wants the GUID.
Signed-off-by: David Anderson <danderson@tailscale.com>
For discovery when an explicit hostname/IP is known. We'll still
also send it via control for finding peers by a list.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The tstun package contains both constructors for generic tun
Devices, and a wrapper that provides additional functionality.
Signed-off-by: David Anderson <danderson@tailscale.com>