We now allow some more ICMP errors to flow, specifically:
- ICMP Parameter Problem in both IPv4 and IPv6 (corrupt headers)
- ICMP Packet Too Big (for IPv6 PMTU)
Updates #311
Updates #8102
Updates #11002
Signed-off-by: James Tucker <james@tailscale.com>
MSS clamping for nftables was mostly not run, because an earlier rule in the FORWARD chain issued an accept verdict.
This commit places the clamping rule into a chain of its own to ensure that it gets run.
Updates tailscale/tailscale#11002
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This is based on empirical testing using actual log data.
FastestCompression only incurs a marginal <1% compression ratio hit
for a 2.25x reduction in memory use for small payloads
(which are common if log uploads happen at a decently high frequency).
The memory savings for large payloads are much smaller
(less than a 1.1x reduction).
LowMemory only incurs a marginal <5% hit on performance
for a 1.6-2.0x reduction in memory use for small or large payloads.
The memory gains for both settings justify the losses,
which are arguably minimal.
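For illustration, a hedged sketch of roughly how such settings map onto
the github.com/klauspost/compress/zstd options (the mapping is an
assumption, not a description of the actual wiring):

```go
package example

import "github.com/klauspost/compress/zstd"

// newEncoder is a hedged sketch: mapping FastestCompression and LowMemory
// onto these specific options is an assumption for illustration.
func newEncoder(fastest, lowMemory bool) (*zstd.Encoder, error) {
	var opts []zstd.EOption
	if fastest {
		opts = append(opts, zstd.WithEncoderLevel(zstd.SpeedFastest)) // FastestCompression
	}
	if lowMemory {
		opts = append(opts, zstd.WithLowerEncoderMem(true)) // LowMemory
	}
	return zstd.NewWriter(nil, opts...)
}
```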
tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We're tracking down a new instance of excessive memory usage, and the extra
memory used by sockstats is definitely not going to help with debugging, so
disable it by default on mobile.
Updates tailscale/corp#18514
Signed-off-by: James Tucker <james@tailscale.com>
When both muxes match, and one of them is a wildcard "/" pattern (which
is common in browser muxes), choose the more specific pattern.
If both are non-wildcard matches, there is a pattern overlap, so return
an error.
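A minimal sketch of that selection logic (names hypothetical, not the
exact safeweb code):

```go
package example

import (
	"fmt"
	"net/http"
)

// pickHandler is a hedged sketch of the dispatch described above: when both
// muxes match, prefer the non-wildcard pattern; if neither matched pattern
// is the wildcard "/", the muxes overlap and that is a configuration error.
func pickHandler(browserH, apiH http.Handler, browserPat, apiPat string) (http.Handler, error) {
	switch {
	case browserPat == "/" && apiPat != "/":
		return apiH, nil
	case apiPat == "/" && browserPat != "/":
		return browserH, nil
	default:
		return nil, fmt.Errorf("patterns %q and %q overlap", browserPat, apiPat)
	}
}
```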
Updates https://github.com/tailscale/corp/issues/8027
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
We have hosts that support IPv6, but not IPv6 firewall configuration
in iptables mode.
We also have hosts that have some support for IPv6 firewall
configuration in iptables mode, but do not have an iptables filter table.
We should:
- configure ip rules for all hosts that support IPv6
- only configure firewall rules in iptables mode if the host
has an iptables filter table (see the sketch below).
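A minimal sketch of that gating (the predicates are hypothetical stand-ins
for the real host-capability checks):

```go
package example

// configureIPv6 is a hedged sketch of the gating described above.
func configureIPv6(supportsIPv6, iptablesMode, hasFilterTable bool) {
	if !supportsIPv6 {
		return
	}
	configureIPRules() // all IPv6-capable hosts get ip rules
	if iptablesMode && hasFilterTable {
		configureFirewallRules() // only with an iptables filter table
	}
}

func configureIPRules()       {}
func configureFirewallRules() {}
```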
Updates tailscale/tailscale#11540
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Allow the use of inline styles with safeweb via an opt-in configuration
item. This will append `style-src 'self' 'unsafe-inline'` to the default
CSP. The `style-src` directive will be used in lieu of the fallback
`default-src 'self'` directive.
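A hedged sketch of assembling the resulting policy (the flag name is
illustrative, not safeweb's actual configuration field):

```go
package example

// cspHeader sketches the policy construction described above. Browsers
// consult style-src in preference to the default-src fallback.
func cspHeader(inlineStyles bool) string {
	csp := "default-src 'self'"
	if inlineStyles {
		csp += "; style-src 'self' 'unsafe-inline'"
	}
	return csp
}
```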
Updates tailscale/corp#8027
Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
Some of our labels contain UTF-8 and get mojibaked in the browser
right now.
Updates tailscale/corp#18687
Change-Id: I6069cffd6cc8813df415f06bb308bc2fc3ab65c4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The initial control client request can get stuck in the event that a
connection is established but then lost part way through, without any
ICMP or RST. Ensure that the control client will be restarted by timing
out that initial request as well.
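A hedged sketch of the shape of the fix (the timeout value and names are
illustrative, not the control client's actual code):

```go
package example

import (
	"context"
	"io"
	"net/http"
	"time"
)

// fetchInitial bounds the initial request with a timeout so a half-dead
// TCP connection (no RST, no ICMP) cannot wedge the control client forever.
func fetchInitial(ctx context.Context, c *http.Client, req *http.Request) ([]byte, error) {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()
	resp, err := c.Do(req.WithContext(ctx))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```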
Fixes #11542
Signed-off-by: James Tucker <james@tailscale.com>
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.
Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config, and it dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
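A minimal sketch of the handler shape, assuming the github.com/miekg/dns
API (the port, TTL, and records map are illustrative, not the actual
cmd/k8s-nameserver code):

```go
package main

import (
	"net"
	"sync"

	"github.com/miekg/dns"
)

var (
	mu      sync.RWMutex
	records map[string]net.IP // FQDN -> IP, repopulated from the ConfigMap
)

// handleTSNet answers A queries for ts.net names from the in-memory records.
func handleTSNet(w dns.ResponseWriter, r *dns.Msg) {
	if len(r.Question) == 0 {
		return
	}
	m := new(dns.Msg)
	q := r.Question[0]
	mu.RLock()
	ip, ok := records[q.Name]
	mu.RUnlock()
	if q.Qtype == dns.TypeA && ok {
		m.SetReply(r)
		m.Answer = append(m.Answer, &dns.A{
			Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
			A:   ip,
		})
	} else {
		m.SetRcode(r, dns.RcodeNameError) // NXDOMAIN
	}
	w.WriteMsg(m)
}

func main() {
	dns.HandleFunc("ts.net.", handleTSNet)
	go func() { (&dns.Server{Addr: ":1053", Net: "tcp"}).ListenAndServe() }()
	(&dns.Server{Addr: ":1053", Net: "udp"}).ListenAndServe()
}
```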
The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition
DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources
Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating nameserver's records.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Updates #cleanup
Change the return type of the safeweb.RedirectHTTP method to a handler
that can be passed directly to http.Serve without needing an
http.HandlerFunc wrapper.
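A hedged usage sketch (RedirectHTTP's parameters are elided behind an
interface here, since only the new return type matters):

```go
package example

import (
	"log"
	"net"
	"net/http"
)

// serveRedirects shows that the returned value now satisfies http.Handler
// directly, so no http.HandlerFunc wrapping is needed.
func serveRedirects(srv interface{ RedirectHTTP() http.Handler }) {
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(ln, srv.RedirectHTTP()))
}
```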
Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
This CONNECT client doesn't match what Go's net/http.Transport does.
This change makes the two match.
This is all pretty unspecified but most clients & doc examples show
these matching. And some proxy implementations (such as Zscaler) care.
Updates tailscale/corp#18716
Change-Id: I135c5facbbcec9276faa772facbde1bb0feb2d26
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Updates https://github.com/tailscale/corp/issues/8027
Safeweb is a wrapper around http.Server & tsnet that encodes some
application security defaults.
Safeweb asks developers to split their HTTP routes into two
http.ServeMuxes for serving browser- and API-facing endpoints
respectively. It then wraps these HTTP routes with the
context-appropriate security controls.
safeweb.Server#Serve will serve the HTTP muxes over the provided
listener. Callers are responsible for creating and tearing down their
application's listeners. Applications being served over HTTPS that wish
to implement HTTP redirects can use the Server#HTTPRedirect handler to
do so.
Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.
Updates tailscale/corp#18202
Signed-off-by: Percy Wegmann <percy@tailscale.com>
For example, if we get a 404 when downloading a file, we'll report access.
Also, to reduce the verbosity of logs, this elides zero-length files.
Updates tailscale/corp#17818
Signed-off-by: Percy Wegmann <percy@tailscale.com>
This change introduces some basic logging into the access and share
pathways for tailfs.
Updates tailscale/corp#17818
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
Rather than pass around a scratch buffer, put it on the Logger.
This is a baby step towards removing the background uploading
goroutine and starting it as needed.
Updates tailscale/corp#18514 (insofar as it led me to look at this code)
Change-Id: I6fd94581c28bde40fdb9fca788eb9590bcedae1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Including the double quotes (`"`) around the value made it appear as if the Helm chart expected a string value for `installCRDs`.
Signed-off-by: Chris Milson-Tokunaga <chris.w.milson@gmail.com>
This implementation uses less memory than tempfork/device,
which helps avoid OOM conditions in the iOS VPN extension when
switching to a Tailnet with ExitNode routing enabled.
Updates tailscale/corp#18514
Signed-off-by: Percy Wegmann <percy@tailscale.com>
There's a vulnerability https://pkg.go.dev/vuln/GO-2024-2659 that
govulncheck flags, even though it's only reachable from tests and
cmd/sync-containers and cannot be exploited there.
Updates #cleanup
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
First we had Capabilities []string. Then
https://tailscale.com/blog/acl-grants (#4217) brought CapMap, a
superset of Capabilities. Except we never really finished the
transition inside the codebase to go all-in on CapMap. This does so.
Notably, this converts Capabilities on the wire early to CapMap
internally so the code can only deal in CapMap, even against an old
control server.
In the process, this removes PeerChange.Capabilities support, which no
known control plane sent anyway. They can and should use
PeerChange.CapMap instead.
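The early conversion is roughly of this shape (a hedged sketch, not the
exact code):

```go
package example

import "tailscale.com/tailcfg"

// normalizeCaps folds legacy Capabilities into CapMap so the rest of the
// codebase only ever consults CapMap, even against an old control server.
func normalizeCaps(n *tailcfg.Node) {
	if len(n.Capabilities) == 0 {
		return
	}
	if n.CapMap == nil {
		n.CapMap = make(tailcfg.NodeCapMap, len(n.Capabilities))
	}
	for _, c := range n.Capabilities {
		if _, ok := n.CapMap[c]; !ok {
			n.CapMap[c] = nil // present, with no associated values
		}
	}
	n.Capabilities = nil
}
```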
Updates #11508
Updates #4217
Change-Id: I872074e226b873f9a578d9603897b831d50b25d9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This stems from a problem we hit with how badger registers expvars; it
broke trunkd's exported metrics.
Updates tailscale/corp#1297
Change-Id: I42e1552e25f734c6f521b6e993d57a82849464b2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
When node attributes were super rare, the O(n) slice scans looking for
node attributes were more acceptable. But now more code and more users
are using increasingly many node attributes. Time to make it a map.
Noticed while working on tailscale/corp#17879
Updates #cleanup
Change-Id: Ic17c80341f418421002fbceb47490729048756d2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In the recent 20e9f3369 we made HealthChangeRequest machine requests
include a NodeKey, as it was the oddball machine request that didn't
include one. Unfortunately, that code was sometimes being called (at
least in some of our integration tests) without a node key due to its
registration with health.RegisterWatcher(direct.ReportHealthChange).
Fortunately tests in corp caught this before we cut a release. It's
possible this only affects this particular integration test's
environment, but still worth fixing.
Updates tailscale/corp#1297
Change-Id: I84046779955105763dc1be5121c69fec3c138672
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.
This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
in the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyway,
meaning that zstd was functionally always a dependency.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Go zstd package is not friendly for stateless zstd compression.
Passing around multiple zstd.Encoders just for stateless compression
wastes memory, since that memory is never freed and is seldom
used if no compression operations are happening.
For performance, we pool the relevant Encoder/Decoder
with the specific options set.
Functionally, this package is a wrapper over the Go zstd package
with a more ergonomic API for stateless operations.
This package can be used to clean up various pre-existing zstd.Encoder
pools or one-off handlers spread throughout our codebases.
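The core idea, as a hedged sketch (the real zstdframe API may differ):

```go
package zstdframe

import (
	"sync"

	"github.com/klauspost/compress/zstd"
)

// encPool holds stateless encoders; sync.Pool (unlike a channel-based pool)
// lets the GC reclaim idle coders when no compression is happening.
var encPool = sync.Pool{
	New: func() any {
		enc, _ := zstd.NewWriter(nil, zstd.WithEncoderConcurrency(1))
		return enc
	},
}

// AppendEncode borrows a pooled encoder, performs one stateless EncodeAll
// appending the compressed frame to dst, and returns the encoder to the pool.
func AppendEncode(dst, src []byte) []byte {
	enc := encPool.Get().(*zstd.Encoder)
	defer encPool.Put(enc)
	return enc.EncodeAll(src, dst)
}
```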
Performance:
BenchmarkEncode/Best 1690 610926 ns/op 25.78 MB/s 1 B/op 0 allocs/op
zstd_test.go:137: memory: 50.336 MiB
zstd_test.go:138: ratio: 3.269x
BenchmarkEncode/Better 10000 100939 ns/op 156.04 MB/s 0 B/op 0 allocs/op
zstd_test.go:137: memory: 20.399 MiB
zstd_test.go:138: ratio: 3.131x
BenchmarkEncode/Default 15775 74976 ns/op 210.08 MB/s 105 B/op 0 allocs/op
zstd_test.go:137: memory: 1.586 MiB
zstd_test.go:138: ratio: 3.064x
BenchmarkEncode/Fastest 23222 53977 ns/op 291.81 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestLowMemory 23361 50789 ns/op 310.13 MB/s 15 B/op 0 allocs/op
zstd_test.go:137: memory: 334.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestNoChecksum 23086 50253 ns/op 313.44 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.900x
BenchmarkDecode/Checksum 70794 17082 ns/op 300.96 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/NoChecksum 74935 15990 ns/op 321.51 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/LowMemory 71043 16739 ns/op 307.13 MB/s 0 B/op 0 allocs/op
zstd_test.go:163: memory: 79.347 KiB
We can see that the options take effect: the compression ratio improves
with higher levels while compression speed diminishes.
We can also see that LowMemory takes effect, with the pooled coder object
referencing less memory than in the other cases.
We can see that the pooling works, as there are 0 amortized allocations.
Additional performance:
BenchmarkEncodeParallel/zstd-24 1857 619264 ns/op 1796 B/op 49 allocs/op
BenchmarkEncodeParallel/zstdframe-24 1954 532023 ns/op 4293 B/op 49 allocs/op
BenchmarkDecodeParallel/zstd-24 5288 197281 ns/op 2516 B/op 49 allocs/op
BenchmarkDecodeParallel/zstdframe-24 6441 196254 ns/op 2513 B/op 49 allocs/op
In concurrent usage, handling the pooling in this package
has a marginal benefit over the zstd package,
which relies on a Go channel as the pooling mechanism.
In particular, coders can be freed by the GC when not in use.
Coders can be shared throughout the program if they use this package
instead of multiple independent pools doing the same thing.
The allocations are unrelated to pooling as they're caused by the spawning of goroutines.
Updates #cleanup
Updates tailscale/corp#18514
Updates tailscale/corp#17653
Updates tailscale/corp#18005
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This can be used to reload a value periodically, whether from disk or
another source, while handling jitter and graceful shutdown.
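A hedged sketch of the shape (names and jitter policy are illustrative):

```go
package example

import (
	"context"
	"math/rand"
	"time"
)

// reloadLoop periodically reloads a value with jitter until ctx is canceled.
func reloadLoop[T any](ctx context.Context, every time.Duration, load func() (T, error), store func(T)) {
	for {
		jitter := time.Duration(rand.Int63n(int64(every)/10 + 1))
		select {
		case <-ctx.Done():
			return // graceful shutdown
		case <-time.After(every + jitter):
			if v, err := load(); err == nil {
				store(v)
			}
		}
	}
}
```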
Updates tailscale/corp#1297
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iee2b4385c9abae59805f642a7308837877cb5b3f