The netlog.Message type is useful to depend on from other packages,
but doing so transitively links in gvisor and other large packages.
Avoid this problem by moving all network logging types to a single package.
We also update staticcheck to pick up:
003d277bcf
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Logger type manages a logtail.Logger for extracting
statistics from a tstun.Wrapper.
So long as Shutdown is called, it ensures that logtail
and statistics-gathering resources are properly cleaned up.
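A minimal sketch of the lifecycle this describes (the type's internals below are illustrative, not the actual netlog implementation):

    package main

    import (
        "context"
        "log"
        "time"
    )

    // Logger is an illustrative stand-in: it owns a background
    // goroutine that periodically gathers statistics, and Shutdown
    // tears everything down.
    type Logger struct {
        cancel context.CancelFunc
        done   chan struct{}
    }

    func NewLogger() *Logger {
        ctx, cancel := context.WithCancel(context.Background())
        l := &Logger{cancel: cancel, done: make(chan struct{})}
        go func() {
            defer close(l.done)
            tick := time.NewTicker(5 * time.Second)
            defer tick.Stop()
            for {
                select {
                case <-ctx.Done():
                    return
                case <-tick.C:
                    // Extract statistics and feed them to logtail here.
                }
            }
        }()
        return l
    }

    // Shutdown stops statistics gathering and waits for the background
    // goroutine to exit, so all resources are released.
    func (l *Logger) Shutdown(ctx context.Context) error {
        l.cancel()
        select {
        case <-l.done:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func main() {
        l := NewLogger()
        defer l.Shutdown(context.Background())
        log.Println("gathering statistics")
    }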
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rename StatisticsEnable to SetStatisticsEnabled to be consistent
with other similarly named methods.
Rename StatisticsExtract to ExtractStatistics to follow
the convention that methods start with a verb.
It was originally named with Statistics as a prefix so that
statistics related methods would sort well in godoc,
but that property no longer holds.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
If Wrapper.StatisticsEnable is enabled,
then per-connection counters are maintained.
While enabled, Wrapper.StatisticsExtract must be called periodically;
otherwise, memory grows without bound.
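A sketch of the implied usage pattern (the types and the 10s interval are assumptions; the real API lives on tstun.Wrapper):

    package main

    import (
        "fmt"
        "time"
    )

    // Conn and Counts stand in for the per-connection key and
    // counters that the wrapper maintains internally.
    type Conn struct{ Src, Dst string }
    type Counts struct{ TxBytes, RxBytes uint64 }

    // statsWrapper mimics the relevant slice of the Wrapper API:
    // once enabled, counters accumulate until extracted.
    type statsWrapper struct {
        enabled bool
        m       map[Conn]Counts
    }

    func (w *statsWrapper) StatisticsEnable(v bool) { w.enabled = v }

    // StatisticsExtract returns and resets the accumulated counters.
    // If it is never called while enabled, the map grows without
    // bound as new connections appear.
    func (w *statsWrapper) StatisticsExtract() map[Conn]Counts {
        out := w.m
        w.m = make(map[Conn]Counts)
        return out
    }

    func main() {
        w := &statsWrapper{m: make(map[Conn]Counts)}
        w.StatisticsEnable(true)
        tick := time.NewTicker(10 * time.Second) // assumed interval
        defer tick.Stop()
        for range tick.C {
            for conn, counts := range w.StatisticsExtract() {
                fmt.Println(conn, counts)
            }
        }
    }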
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We were marking them as gauges, but they are only ever incremented,
so counter is the more appropriate type.
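For illustration, the difference in metric type, sketched with tailscale.com/util/clientmetric (the metric names here are made up):

    package main

    import "tailscale.com/util/clientmetric"

    var (
        // A counter is monotonically non-decreasing: the right type
        // for values that are only ever incremented.
        packetsDropped = clientmetric.NewCounter("magicsock_packets_dropped")

        // A gauge can move both ways: the right type for a current
        // level, such as the number of open connections.
        connsOpen = clientmetric.NewGauge("magicsock_conns_open")
    )

    func main() {
        packetsDropped.Add(1)
        connsOpen.Add(1)  // connection opened
        connsOpen.Add(-1) // connection closed
    }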
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
These were intended to be pushed to #4408, but in my excitement I
forgot to git push :/ Better late than never.
Signed-off-by: Tom DNetto <tom@tailscale.com>
This change wires netstack with a hook for traffic coming from the host
into the tun, allowing interception and handling of traffic to quad-100.
With this hook wired up, MagicDNS queries over UDP are now handled within
netstack. The existing logic in wgengine to handle MagicDNS remains for now,
but its hook operates after the netstack hook, so the netstack implementation
takes precedence. This is done in case we need to support platforms without
netstack for longer than expected.
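A rough sketch of the hook's shape (the names, signature, and packet type below are illustrative, not the actual wiring):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // quad100 is the Tailscale service IP that MagicDNS listens on.
    var quad100 = netip.MustParseAddr("100.100.100.100")

    // parsedPacket stands in for the parsed packet type that the tun
    // wrapper hands to its hooks.
    type parsedPacket struct {
        Dst netip.AddrPort
        UDP bool
    }

    // handleLocalPackets sketches the netstack hook: it claims UDP
    // packets from the host destined for quad-100:53 so netstack can
    // serve the MagicDNS query, and lets everything else fall through
    // to the existing wgengine path.
    func handleLocalPackets(p *parsedPacket) (handled bool) {
        return p.UDP && p.Dst.Addr() == quad100 && p.Dst.Port() == 53
    }

    func main() {
        p := &parsedPacket{Dst: netip.AddrPortFrom(quad100, 53), UDP: true}
        fmt.Println("handled by netstack:", handleLocalPackets(p))
    }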
Signed-off-by: Tom DNetto <tom@tailscale.com>
A subsequent commit implements handling of MagicDNS traffic via netstack.
Implementing this requires a hook for traffic originating from the host and
hitting the tun, so we add another hook to support this.
Signed-off-by: Tom DNetto <tom@tailscale.com>
Plumb the outbound injection path to allow passing netstack
PacketBuffers down to the tun Read, where they are decref'd to enable
buffer re-use. This removes one packet alloc & copy, and reduces GC
pressure by pooling outbound injected packets.
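A sketch of the decref-on-read pattern (the packetBuffer type only loosely mirrors gvisor's reference-counted stack.PacketBuffer):

    package main

    import "fmt"

    // packetBuffer loosely mirrors gvisor's stack.PacketBuffer:
    // DecRef returns it to a pool for reuse.
    type packetBuffer struct {
        data []byte
        refs int
    }

    func (pb *packetBuffer) DecRef() {
        pb.refs--
        if pb.refs == 0 {
            // In the real code the buffer goes back to a pool here,
            // avoiding a fresh allocation and copy for the next
            // injected packet, and reducing GC pressure.
        }
    }

    // read sketches the tun Read path for an outbound injected packet:
    // copy the payload into the caller's buffer, then DecRef so the
    // PacketBuffer can be reused rather than garbage collected.
    func read(buf []byte, injected <-chan *packetBuffer) int {
        pb := <-injected
        n := copy(buf, pb.data)
        pb.DecRef()
        return n
    }

    func main() {
        ch := make(chan *packetBuffer, 1)
        ch <- &packetBuffer{data: []byte("packet"), refs: 1}
        buf := make([]byte, 1500)
        fmt.Println("read bytes:", read(buf, ch))
    }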
Fixes #2741
Signed-off-by: James Tucker <james@tailscale.com>
This TODO was both added and fixed in 506c727e3.
As I recall, I wasn't originally going to do it because it seemed
annoying, so I wrote the TODO, but then I felt bad about it and just
did it, but forgot to remove the TODO.
Change-Id: I8f3514809ad69b447c62bfeb0a703678c1aec9a3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Now that Go 1.17 has module graph pruning
(https://go.dev/doc/go1.17#go-command), we should be able to use
upstream netstack without breaking our private repo's build
that then depends on the tailscale.com Go module.
This is that experiment.
Updates #1518 (the original bug to break out netstack to own module)
Updates #2642 (this updates netstack, but doesn't remove workaround)
Change-Id: I27a252c74a517053462e5250db09f379de8ac8ff
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
A new package can also later record/report which knobs are checked and
set. It also makes the code cleaner & easier to grep for env knobs.
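A minimal sketch of the shape of such a package (the real one records every knob that is consulted; only the lookup shape is shown, and the helper below is simplified):

    package envknob

    import (
        "log"
        "os"
        "strconv"
        "sync"
    )

    var (
        mu  sync.Mutex
        set = map[string]string{} // knobs that were checked and set
    )

    // Bool reports the boolean value of the named environment knob,
    // recording that it was set so it can be reported later.
    func Bool(name string) bool {
        v := os.Getenv(name)
        if v == "" {
            return false
        }
        b, err := strconv.ParseBool(v)
        if err != nil {
            log.Fatalf("invalid boolean environment variable %s value %q", name, v)
        }
        mu.Lock()
        defer mu.Unlock()
        set[name] = v
        return b
    }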
Change-Id: Id8a123ab7539f1fadbd27e0cbeac79c2e4f09751
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Fixes #3660
RELNOTE=MagicDNS now works over IPv6 when CGNAT IPv4 is disabled.
Change-Id: I001e983df5feeb65289abe5012dedd177b841b45
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
When this happens, it is incredibly noisy in the logs.
It accounts for about a third of all remaining
"unexpected" log lines from a recent investigation.
It's not clear that we know how to fix this;
we have a functioning workaround,
and we now have a (cheap and efficient) metric for this
that we can use for measurements.
So reduce the logging to approximately once per minute.
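Tailscale has log rate-limiting helpers already; this standalone sketch just shows the once-per-interval shape (the helper name is made up):

    package main

    import (
        "log"
        "sync"
        "time"
    )

    // rateLimitedLogf returns a logf-like function that emits at most
    // one line per interval and drops the rest; dropped occurrences
    // remain visible via the metric.
    func rateLimitedLogf(interval time.Duration, logf func(string, ...interface{})) func(string, ...interface{}) {
        var (
            mu   sync.Mutex
            last time.Time
        )
        return func(format string, args ...interface{}) {
            mu.Lock()
            defer mu.Unlock()
            if now := time.Now(); now.Sub(last) >= interval {
                last = now
                logf(format, args...)
            }
        }
    }

    func main() {
        logf := rateLimitedLogf(time.Minute, log.Printf)
        for i := 0; i < 1000; i++ {
            logf("unexpected packet state: %d", i) // emits roughly once per minute
        }
    }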
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
There are a few remaining uses of testing.AllocsPerRun:
Two in which we only log the number of allocations,
and one in which we dynamically calculate the allocation
target based on a different AllocsPerRun run.
This also allows us to tighten the "no allocs"
test in wgengine/filter.
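A sketch of the tightened shape in wgengine/filter (processPacket is a placeholder for the real hot path):

    package filter_test

    import "testing"

    func TestNoAllocs(t *testing.T) {
        processPacket := func() {} // placeholder for the code under test
        if n := testing.AllocsPerRun(1000, processPacket); n > 0 {
            t.Errorf("got %v allocs per run, want 0", n)
        }
    }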
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It was in the wrong filter direction before, per CPU profiles
we now have.
Updates #1526 (maybe fixes? time will tell)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Mostly so the Linux one can use Linux-specific stuff in package
syscall and, for portability, avoid shelling out to uname via os/exec.
It also helps deps a tiny bit on iOS.
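For example, on Linux the uname fields are available directly from package syscall; a sketch (assuming linux/amd64, where the Utsname fields are [65]int8):

    //go:build linux

    package main

    import (
        "fmt"
        "syscall"
    )

    // utsField converts a fixed-size Utsname field to a Go string.
    func utsField(f [65]int8) string {
        b := make([]byte, 0, len(f))
        for _, c := range f {
            if c == 0 {
                break
            }
            b = append(b, byte(c))
        }
        return string(b)
    }

    func main() {
        var u syscall.Utsname
        if err := syscall.Uname(&u); err != nil {
            panic(err)
        }
        // Equivalent to shelling out to `uname -sr`, without os/exec.
        fmt.Println(utsField(u.Sysname), utsField(u.Release))
    }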
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Still very much a prototype (hard-coded IPs, etc) but should be
non-invasive enough to submit at this point and iterate from here.
Updates #2589
Co-authored-by: David Crawshaw <crawshaw@tailscale.com>
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
There's a call to Now once per packet.
Move to mono.Now.
Though the current implementation provides high precision,
we document it as coarse, to preserve the ability
to switch to a coarse monotonic clock later.
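The change in spirit (mono is tailscale.com/tstime/mono; the surrounding code is illustrative):

    package main

    import (
        "fmt"

        "tailscale.com/tstime/mono"
    )

    func main() {
        // mono.Now is cheaper than time.Now for a per-packet
        // timestamp, and is documented as coarse even though the
        // current implementation happens to be high precision.
        start := mono.Now()
        // ... handle a packet ...
        elapsed := mono.Since(start)
        fmt.Println(elapsed)
    }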
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The handoff between tstun.Wrap's Read and poll methods
is one of the per-packet hotspots. It shows up in pprof.
Making outbound buffered increases throughput.
It is hard to measure exactly how much, because the numbers
are highly variable, but I'd estimate it at about 1%,
using the best observed max throughput across three runs.
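The change in spirit (tunReadResult is a stand-in name for the element type):

    package tstun

    // tunReadResult stands in for whatever poll hands to Read.
    type tunReadResult struct {
        data []byte
        err  error
    }

    // Before, the handoff channel was unbuffered, forcing a
    // synchronous rendezvous between poll and Read on every packet:
    //
    //	outbound := make(chan tunReadResult)
    //
    // Buffering it lets poll queue the next packet while Read is
    // still draining the previous one:
    var outbound = make(chan tunReadResult, 1)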
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
To remove some multi-case selects, we intentionally allowed
sends on closed channels (cc23049cd2).
However, we also introduced concurrent sends and closes,
which is a data race.
This commit fixes the data race. The mutexes here are uncontended,
and thus very cheap.
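A standalone sketch of the fixed pattern (the names are illustrative):

    package main

    import "sync"

    // closeGuard serializes sends with close so a send never races
    // a concurrent close. The mutex is uncontended in practice, so
    // the lock is nearly free.
    type closeGuard struct {
        mu     sync.Mutex
        closed bool
        ch     chan int
    }

    // send reports whether v was delivered; after Close it quietly
    // drops the value instead of panicking on a closed channel.
    func (g *closeGuard) send(v int) bool {
        g.mu.Lock()
        defer g.mu.Unlock()
        if g.closed {
            return false
        }
        g.ch <- v
        return true
    }

    func (g *closeGuard) Close() {
        g.mu.Lock()
        defer g.mu.Unlock()
        if !g.closed {
            g.closed = true
            close(g.ch)
        }
    }

    func main() {
        g := &closeGuard{ch: make(chan int, 1)}
        g.send(1)
        g.Close()
        g.send(2) // dropped, no panic
    }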
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Calculate directly whether the packet is injected,
rather than via an else branch.
Unify the exit paths; it is easier here than duplicating them.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Every TUN Read went through several multi-case selects.
We know from past experience with wireguard-go that these are slow
and cause scheduler churn.
The selects served two purposes: they separated errors from data and
gracefully handled shutdown. The first is fairly easy to replace by sending
errors and data over a single channel. The second, less so.
We considered a few approaches: intricate webs of channels,
global condition variables. They all get ugly fast.
Instead, let's embrace the ugly and handle shutdown ungracefully.
It's horrible, but the horror is simple and localized.
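A sketch of the single-channel shape (the names are illustrative):

    package tstun

    import "errors"

    // tunReadResult carries either a packet or an error over one
    // channel, replacing a two-channel multi-case select in Read.
    type tunReadResult struct {
        data []byte
        err  error
    }

    var errClosed = errors.New("device closed")

    func read(results <-chan tunReadResult) ([]byte, error) {
        // A single-case receive: no select, no scheduler churn.
        res, ok := <-results
        if !ok {
            // The ungraceful shutdown path: the channel was simply
            // closed out from under us.
            return nil, errClosed
        }
        return res.data, res.err
    }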
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>