Right now, filtering and packet injection in wgengine depend
on a patch to wireguard-go that probably isn't suitable for upstreaming.
This need not be the case: wireguard-go/tun.Device is an interface.
For example, faketun.go implements it to mock a TUN device for testing.
This patch implements the same interface to provide filtering
and packet injection at the tunnel device level,
at which point the wireguard-go patch should no longer be necessary.
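Roughly, the shape of it (a minimal sketch: the Device interface here is
trimmed down, since the real wireguard-go/tun.Device also has File, Flush,
MTU, Name, and Events methods and offset-based Read/Write signatures;
FilterDevice and InjectRead are illustrative names, not the actual ones):

    package tundev

    import "errors"

    // Device is a trimmed-down stand-in for wireguard-go/tun.Device.
    type Device interface {
        Read(buf []byte) (int, error)  // next packet from the device
        Write(buf []byte) (int, error) // send a packet to the device
        Close() error
    }

    // FilterDevice wraps an inner Device; WireGuard talks to it exactly
    // as it would to the real TUN device, so no wireguard-go patch is
    // needed.
    type FilterDevice struct {
        inner    Device
        allow    func(pkt []byte) bool // packet filter hook
        injected chan []byte           // packets injected by the engine
    }

    func NewFilterDevice(inner Device, allow func([]byte) bool) *FilterDevice {
        return &FilterDevice{
            inner:    inner,
            allow:    allow,
            injected: make(chan []byte, 16),
        }
    }

    // Read returns an injected packet if one is queued; otherwise it
    // reads from the real device, dropping packets the filter rejects.
    func (d *FilterDevice) Read(buf []byte) (int, error) {
        select {
        case pkt := <-d.injected:
            return copy(buf, pkt), nil
        default:
        }
        for {
            n, err := d.inner.Read(buf)
            if err != nil || d.allow(buf[:n]) {
                return n, err
            }
            // Filtered out: drop it and read the next packet.
        }
    }

    // Write drops rejected packets and forwards the rest to the device.
    func (d *FilterDevice) Write(buf []byte) (int, error) {
        if !d.allow(buf) {
            return len(buf), nil // swallow silently
        }
        return d.inner.Write(buf)
    }

    // InjectRead queues a packet to be returned by a later Read, as if
    // it had arrived on the tunnel device.
    func (d *FilterDevice) InjectRead(pkt []byte) error {
        select {
        case d.injected <- append([]byte(nil), pkt...):
            return nil
        default:
            return errors.New("inject queue full")
        }
    }

    func (d *FilterDevice) Close() error { return d.inner.Close() }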
This patch has the following performance impact on an i7-7500U @ 2.70GHz,
tested in this namespace configuration:
┌────────────────┐ ┌─────────────────────────────────┐ ┌────────────────┐
│ $ns1 │ │ $ns0 │ │ $ns2 │
│ client0 │ │ tailcontrol, logcatcher │ │ client1 │
│ ┌─────┐ │ │ ┌──────┐ ┌──────┐ │ │ ┌─────┐ │
│ │vethc│───────┼────┼──│vethrc│ │vethrs│──────┼─────┼──│veths│ │
│ ├─────┴─────┐ │ │ ├──────┴────┐ ├──────┴────┐ │ │ ├─────┴─────┐ │
│ │10.0.0.2/24│ │ │ │10.0.0.1/24│ │10.0.1.1/24│ │ │ │10.0.1.2/24│ │
│ └───────────┘ │ │ └───────────┘ └───────────┘ │ │ └───────────┘ │
└────────────────┘ └─────────────────────────────────┘ └────────────────┘
Before:
---------------------------------------------------
| TCP send | UDP send |
|------------------------|------------------------|
| 557.0 (±8.5) Mbits/sec | 3.03 (±0.02) Gbits/sec |
---------------------------------------------------
After:
---------------------------------------------------
| TCP send | UDP send |
|------------------------|------------------------|
| 544.8 (±1.6) Mbits/sec | 3.13 (±0.02) Gbits/sec |
---------------------------------------------------
The impact on receive performance is similar.
Signed-off-by: Dmytro Shynkevych <dmytro@tailscale.com>
This saves a layer of translation, and saves us from having to
pass extra bits and pieces of the netmap and prefs into
wgengine. Now it gets one WireGuard config, and one OS
network stack config.
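Schematically (type and field names here are invented for illustration,
not the actual Tailscale types):

    package wgengine

    // WGConfig is the "one WireGuard config": keys and peers.
    type WGConfig struct {
        PrivateKey string
        Peers      []Peer
    }

    type Peer struct {
        PublicKey  string
        AllowedIPs []string
    }

    // RouterConfig is the "one OS network stack config": addresses,
    // routes, and DNS, with no netmap or prefs leaking through.
    type RouterConfig struct {
        LocalAddrs []string
        Routes     []string
        DNS        []string
    }

    type Engine interface {
        // Reconfig swaps in both configs in one call.
        Reconfig(wg *WGConfig, router *RouterConfig) error
    }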
Signed-off-by: David Anderson <danderson@tailscale.com>
Defensive programming against #368 in environments other than Docker,
e.g. if you try using Tailscale directly on Alpine Linux, sans
container.
Signed-off-by: David Anderson <danderson@tailscale.com>
The iptables package we use doesn't include command output in its
errors, so most of the time we're left guessing what went wrong. This
will at least narrow things down to which operation failed.
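The idea, sketched with the coreos/go-iptables API (the chain name and
rule spec are illustrative, not the actual Tailscale rules):

    package main

    import (
        "fmt"
        "log"

        "github.com/coreos/go-iptables/iptables"
    )

    // addBaseRules wraps each netfilter operation with enough context
    // to identify it, since the underlying error alone doesn't say
    // which rule failed.
    func addBaseRules(ipt *iptables.IPTables) error {
        if err := ipt.AppendUnique("filter", "ts-input",
            "-i", "tailscale0", "-j", "ACCEPT"); err != nil {
            return fmt.Errorf("appending ACCEPT rule to filter/ts-input: %w", err)
        }
        return nil
    }

    func main() {
        ipt, err := iptables.New()
        if err != nil {
            log.Fatalf("creating iptables handle: %v", err)
        }
        if err := addBaseRules(ipt); err != nil {
            log.Fatal(err) // now names the failing operation
        }
    }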
Signed-off-by: David Anderson <danderson@tailscale.com>
For "tailscale status" on macOS (from separately downloaded
cmd/tailscale binary against App Store IPNExtension).
(This isn't all of it, but I've had this sitting around uncommitted.)
staticcheck used to fail on macOS (and presumably Windows) due to a
variable declared in a file common to all platforms but used only by
the Linux build, which would prevent `redo pr` from passing on a Mac.
Moved the variable declaration from the common file to the
Linux-specific one to resolve the staticcheck complaint.
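The shape of the fix (file and variable names invented for
illustration):

    // Before: declared in foo.go, which builds on every platform, but
    // referenced only from foo_linux.go, so staticcheck reports it as
    // unused (U1000) on darwin and windows.
    //
    // After: the declaration lives in the Linux-only file, next to its
    // only user.

    // foo_linux.go
    // +build linux

    package foo

    var debugNetfilter = false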
Signed-off-by: Wendi Yu <wendi.yu@yahoo.ca>
Implement rate limiting on log messages
Addresses issue #317, where logs can get spammed with the same message
nonstop. Created a rate-limiting closure over the logging functions,
which limits the number of messages logged per second, keyed by format
string. To keep memory usage as constant as possible, the previous
periodic purging of the whole cache has been replaced by an LRU that
evicts the least recently used format string when the cache reaches
capacity.
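A sketch of the mechanism (the real code lives in our logger package;
names and limits here are illustrative):

    package logger

    import (
        "container/list"
        "sync"
        "time"
    )

    type Logf func(format string, args ...interface{})

    // entry tracks recent usage of one format string.
    type entry struct {
        format string
        count  int       // messages seen in the current window
        start  time.Time // start of the current one-second window
    }

    // RateLimited wraps logf, allowing at most maxPerSecond messages
    // per second per format string. Per-format state lives in an LRU
    // of fixed size maxCache, so memory use stays bounded: when the
    // cache is full, the least recently used format string is evicted
    // instead of purging the whole cache on a timer.
    func RateLimited(logf Logf, maxPerSecond, maxCache int) Logf {
        var (
            mu    sync.Mutex
            order = list.New()                  // front = most recently used
            index = map[string]*list.Element{}  // format -> element
        )
        return func(format string, args ...interface{}) {
            mu.Lock()
            el, ok := index[format]
            if !ok {
                if order.Len() >= maxCache {
                    oldest := order.Back() // least recently used
                    order.Remove(oldest)
                    delete(index, oldest.Value.(*entry).format)
                }
                el = order.PushFront(&entry{format: format, start: time.Now()})
                index[format] = el
            } else {
                order.MoveToFront(el)
            }
            e := el.Value.(*entry)
            if time.Since(e.start) >= time.Second {
                e.start, e.count = time.Now(), 0
            }
            e.count++
            allowed := e.count <= maxPerSecond
            mu.Unlock()
            if allowed {
                logf(format, args...)
            }
        }
    }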
Signed-off-by: Wendi Yu <wendi.yu@yahoo.ca>
Instead, pass in only the configuration pieces that the OS
network stack actually cares about.
Signed-off-by: David Anderson <danderson@tailscale.com>
New logic installs precise filters for subnet routes,
plays nice with other users of netfilter, and lays the
groundwork for fixing routing loops via policy routing.
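For example, the subnet-route rules can live in a chain we own, one
precise ACCEPT per advertised route (a sketch with go-iptables; the
chain name and rule spec are illustrative):

    package router

    import (
        "fmt"

        "github.com/coreos/go-iptables/iptables"
    )

    // addSubnetRouteRules installs one ACCEPT per subnet route inside
    // a Tailscale-owned chain, instead of a blanket rule in FORWARD,
    // leaving other users' netfilter chains untouched.
    func addSubnetRouteRules(ipt *iptables.IPTables, routes []string) error {
        const chain = "ts-forward"
        // ClearChain creates the chain if needed and flushes it.
        if err := ipt.ClearChain("filter", chain); err != nil {
            return fmt.Errorf("clearing filter/%s: %w", chain, err)
        }
        for _, cidr := range routes {
            if err := ipt.AppendUnique("filter", chain,
                "-i", "tailscale0", "-d", cidr, "-j", "ACCEPT"); err != nil {
                return fmt.Errorf("allowing %s in filter/%s: %w", cidr, chain, err)
            }
        }
        return nil
    }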
Signed-off-by: David Anderson <danderson@tailscale.com>
If Australia's far away and not going to be used, it's still going to
be far away a minute later. No need to send backup
just-in-case-UDP-gets-lost STUN packets to the known-far-away
destinations. Those are the ones most likely to trigger retries due to
delay anyway (currently a random 50-250ms). But we'll keep sending one
packet to them, just in case our airplane landed.
Likewise, be less aggressive with IPv6. The main point is just to see
whether IPv6 works. No need to send up to 10 packets every round. Max
two is enough (except for the first round). This does mean our STUN
traffic graphs for IPv4-vs-IPv6 will change shape. Oh well. It was a
weird eyeball metric for IPv6 connectivity anyway and we have better
metrics.
We can tweak this policy over time. It's factored out and has tests
now.
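The policy, roughly (an illustrative sketch; the factored-out code and
its constants differ):

    package netcheck

    import (
        "math/rand"
        "time"
    )

    // probeCount decides how many STUN packets a DERP region gets this
    // round, per the two rules above.
    func probeCount(knownFarAway, isIPv6, firstRound bool) int {
        if knownFarAway {
            return 1 // just in case our airplane landed
        }
        if isIPv6 && !firstRound {
            return 2 // enough to prove IPv6 works
        }
        return 3 // illustrative default for regions we might use
    }

    // retryJitter spaces out the backup packets; far-away regions are
    // the ones most likely to exceed it and trigger retries anyway.
    func retryJitter() time.Duration {
        return 50*time.Millisecond +
            time.Duration(rand.Int63n(int64(200*time.Millisecond)))
    }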
No tidy, because it doesn't work for me:
    $ go mod tidy
    go: finding module for package tailscale.io/control
    go: finding module for package tailscale.io/control/cfgdb
    tailscale.com/control/controlclient tested by
        tailscale.com/control/controlclient.test imports
        tailscale.io/control: cannot find module providing package tailscale.io/control: unrecognized import path "tailscale.io/control": parse https://tailscale.io/control?go-get=1: no go-import meta tags (meta tag tailscale.com did not match import path tailscale.io/control)
    tailscale.com/control/controlclient tested by
        tailscale.com/control/controlclient.test imports
        tailscale.io/control/cfgdb: cannot find module providing package tailscale.io/control/cfgdb: unrecognized import path "tailscale.io/control/cfgdb": parse https://tailscale.io/control/cfgdb?go-get=1: no go-import meta tags (meta tag tailscale.com did not match import path tailscale.io/control/cfgdb)
Signed-off-by: Elias Naur <mail@eliasnaur.com>