This can be used to implement a persistent pool (i.e. one that isn't
cleared like sync.Pool is) of items, e.g. database connections.
Some benchmarks vs. a naive implementation that uses a single map
iteration show a pretty meaningful improvement:
$ benchstat -col /impl ./bench.txt
goos: darwin
goarch: arm64
pkg: tailscale.com/util/pool
                     │    Pool     │                  map                   │
                     │   sec/op    │    sec/op       vs base                │
Pool_AddDelete-10      10.56n ± 2%    15.11n ±  1%     +42.97% (p=0.000 n=10)
Pool_TakeRandom-10     56.75n ± 4%   1899.50n ± 20%  +3246.84% (p=0.000 n=10)
geomean                24.49n         169.4n          +591.74%
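A minimal usage sketch; the method names (Add, TakeRandom) are inferred
from the benchmark names above and are assumptions, not the confirmed API:

    import (
        "database/sql"

        "tailscale.com/util/pool"
    )

    // Hypothetical sketch: method names are inferred from the benchmarks.
    func example(db *sql.DB) {
        var p pool.Pool[*sql.DB]
        p.Add(db) // items persist until explicitly removed or taken
        if conn, ok := p.TakeRandom(); ok {
            _ = conn // use the connection
        }
    }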
Updates tailscale/corp#19900
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ie509cb65573c4726cfc3da9a97093e61c216ca18
* util/linuxfw: fix IPv6 NAT availability check for nftables
When running the firewall in nftables mode,
there is no need for a separate NAT availability check
(unlike with iptables, there are no hosts that support nftables but not IPv6 NAT - see tailscale/tailscale#11353).
This change fixes a firewall NAT availability check that was using the no-longer-set ipv6NATAvailable field
by removing the field and using a method that, for nftables, just checks that IPv6 is available.
Updates tailscale/tailscale#12008
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
set.Of(1, 2, 3) is prettier than set.SetOf([]int{1, 2, 3}).
I was going to change the signature of SetOf, but then I noticed its
name has stutter anyway, so I kept it for compatibility. People can
prefer set.Of for new code or migrate slowly.
Also add a lazy Make method, which I often find myself wanting,
without having to resort to the uglier mak.Set(&set, k, struct{}{}).
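For example (set.Of and Make are the new API described above; Add is the
pre-existing method on set.Set):

    import "tailscale.com/util/set"

    func example() {
        s := set.Of(1, 2, 3) // equivalent to set.SetOf([]int{1, 2, 3})
        _ = s

        var lazy set.Set[string]
        lazy.Make()   // allocates the underlying map if nil
        lazy.Add("x") // previously: mak.Set(&lazy, "x", struct{}{})
    }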
Updates #cleanup
Change-Id: Ic6f3870115334efcbd65e79c437de2ad3edb7625
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We had this in a different repo, but we're moving it here, as this is a
more fitting package.
Updates #cleanup
Change-Id: I5fb9b10e465932aeef5841c67deba4d77d473d57
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Fixes tailscale/corp#19459
This PR adds the ability for users of the syspolicy handler to read string arrays from the MDM solution configured on the system.
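A hypothetical call site; the GetStringArray name, its signature, and the
policy key used here are assumptions based on this description, not the
confirmed API:

    import "tailscale.com/util/syspolicy"

    // Sketch only: function name, signature, and key are illustrative.
    func allowedNodes() []string {
        vals, err := syspolicy.GetStringArray("ExampleStringArrayPolicy", nil)
        if err != nil {
            return nil // policy not configured or no MDM handler available
        }
        return vals
    }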
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
This PR bumps the iptables package to a newer version that has a function
to detect 'NotExists' errors, and uses that function to determine whether
errors received during iptables rule and chain cleanup are because the
rule/chain does not exist; if so, the error is not logged.
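The cleanup path then looks roughly like this (a sketch assuming
go-iptables' (*Error).IsNotExist helper, not the exact diff):

    import (
        "errors"

        "github.com/coreos/go-iptables/iptables"
    )

    // deleteChainQuietly deletes a chain but stays silent if it was already gone.
    func deleteChainQuietly(ipt *iptables.IPTables, table, chain string, logf func(string, ...any)) {
        if err := ipt.DeleteChain(table, chain); err != nil {
            var e *iptables.Error
            if errors.As(err, &e) && e.IsNotExist() {
                return // chain does not exist; nothing to log
            }
            logf("deleting chain %s/%s: %v", table, chain, err)
        }
    }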
Updates corp#19336
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name
Adds support for optionally configuring containerboot to proxy
traffic to backends specified via the TS_EXPERIMENTAL_DEST_DNS_NAME env var
passed to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses.
Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.
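The periodic resolution described above is conceptually along these lines
(an illustrative sketch, not the containerboot code):

    import (
        "context"
        "net"
        "time"
    )

    // resolveLoop periodically re-resolves dnsName and hands the resulting
    // IPs to update (e.g. a function that rewrites the DNAT rules).
    func resolveLoop(ctx context.Context, dnsName string, update func([]net.IP)) {
        ticker := time.NewTicker(10 * time.Minute)
        defer ticker.Stop()
        for {
            if ips, err := net.DefaultResolver.LookupIP(ctx, "ip", dnsName); err == nil {
                update(ips)
            }
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
            }
        }
    }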
* cmd/k8s-operator: support ExternalName Services
Adds support for exposing endpoints that are accessible from within
a cluster to the tailnet via DNS names, using ExternalName Services.
This can be done by annotating the ExternalName Service with
the tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.
Updates tailscale/tailscale#10606
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Since the tailscaled binaries that we distribute are static and don't
link cgo, we previously wouldn't fetch group IDs that are returned via
NSS. Try shelling out to the 'id' command, similar to how we call
'getent', to detect such cases.
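Conceptually, the fallback looks something like this (a sketch; the real
code also handles the 'getent' path and more error cases):

    import (
        "os/exec"
        "strings"
    )

    // groupIDsViaIDCmd asks the 'id' command for the user's group IDs,
    // which works even when the binary is static and can't use NSS via cgo.
    func groupIDsViaIDCmd(username string) ([]string, error) {
        out, err := exec.Command("id", "-G", username).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }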
Updates #11682
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I9bdc938bd76c71bc130d44a97cc2233064d64799
There is an undocumented 16KiB limit for text log messages.
However, the limit for JSON messages is 256KiB.
Even worse, logging JSON as text results in significant overhead
since each double quote needs to be escaped.
Instead, use logger.Logf.JSON to explicitly log the info as JSON.
We also modify osdiag to return the information as structured data
rather than implicitly having the package log on our behalf.
This gives more control to the caller on how to log.
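In caller terms, this turns the implicit logging into something like the
following (a sketch: the exact Logf.JSON signature and the osdiag function
name/shape are assumptions based on this description):

    import (
        "tailscale.com/types/logger"
        "tailscale.com/util/osdiag"
    )

    // Sketch only: the Logf.JSON parameters and osdiag.SupportInfo are assumed.
    func logSupportInfo(logf logger.Logf) {
        info, err := osdiag.SupportInfo()
        if err != nil {
            logf("supportinfo: %v", err)
            return
        }
        logf.JSON(1, "supportinfo", info) // logged as JSON (256KiB limit), no quote escaping
    }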
Updates #7802
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This still generates github.com/google/uuid UUID objects, but does so
using a ChaCha8 CSPRNG from the stdlib rand/v2 package. The public API
is backed by a sync.Pool to provide good performance in highly
concurrent operation.
Under high load the read API produces a lot of extra garbage and
overhead by way of temporaries and syscalls. This implementation reduces
both to minimal levels, and avoids any long held global lock by
utilizing sync.Pool.
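The shape of the implementation is roughly the following (a sketch,
assuming Go 1.23+ where rand/v2's ChaCha8 exposes a Read method; not the
exact code):

    import (
        crand "crypto/rand"
        "math/rand/v2"
        "sync"

        "github.com/google/uuid"
    )

    var chachaPool = sync.Pool{
        New: func() any {
            var seed [32]byte
            if _, err := crand.Read(seed[:]); err != nil {
                panic(err)
            }
            return rand.NewChaCha8(seed)
        },
    }

    // NewUUID returns a random UUID generated from a pooled ChaCha8 CSPRNG,
    // avoiding a per-call syscall and any long-held global lock.
    func NewUUID() uuid.UUID {
        r := chachaPool.Get().(*rand.ChaCha8)
        defer chachaPool.Put(r)
        u, err := uuid.NewRandomFromReader(r)
        if err != nil {
            panic(err)
        }
        return u
    }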
Updates tailscale/corp#18266
Updates tailscale/corp#19054
Signed-off-by: James Tucker <james@tailscale.com>
This removes a potentially increased boot delay for certain boot
topologies that block on an ExecStartPre step which may have socket
activation dependencies on other system services (such as
systemd-resolved and NetworkManager).
Also rename "cleanup" to "clean up" in affected/immediately nearby places
per code review commentary.
Fixes #11599
Signed-off-by: James Tucker <james@tailscale.com>
Package winenv provides information about the current Windows environment.
This includes details such as whether the device is a server or workstation,
and if it is AD domain-joined, MDM-registered, or neither.
Updates tailscale/corp#18342
Signed-off-by: Nick Khyl <nickk@tailscale.com>
There are no mutations to the input,
so we can support both ~string and ~[]byte just fine.
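For instance, the relaxed constraint has this shape (an illustrative
signature, not the actual function):

    // process accepts any type whose underlying type is string or []byte;
    // since the input is never mutated, one generic implementation covers both.
    func process[T ~string | ~[]byte](v T) int {
        return len(v)
    }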
Updates #cleanup
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Specifying a smaller window size during compression
provides a knob to tweak the tradeoff between memory usage
and the compression ratio.
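With the underlying klauspost/compress encoder, the knob looks like this
(a sketch of the general mechanism, not necessarily this package's option
API):

    import "github.com/klauspost/compress/zstd"

    // A smaller window bounds how much history the encoder keeps in memory,
    // trading some compression ratio for a smaller footprint.
    func compressSmallWindow(src []byte) ([]byte, error) {
        enc, err := zstd.NewWriter(nil, zstd.WithWindowSize(32<<10)) // 32 KiB window
        if err != nil {
            return nil, err
        }
        defer enc.Close()
        return enc.EncodeAll(src, nil), nil
    }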
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
MSS clamping for nftables was mostly not run, due to an earlier rule in the FORWARD chain issuing an accept verdict.
This commit places the clamping rule into a chain of its own to ensure that it gets run.
Updates tailscale/tailscale#11002
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
We have hosts that support IPv6, but not IPv6 firewall configuration
in iptables mode.
We also have hosts that have some support for IPv6 firewall
configuration in iptables mode, but do not have the iptables filter table.
We should:
- configure ip rules for all hosts that support IPv6
- only configure firewall rules in iptables mode if the host
has the iptables filter table (see the check sketched below).
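One way to check for the filter table's presence on such hosts (an
illustrative sketch; the real detection logic is more involved):

    import (
        "os"
        "strings"
    )

    // hasIPv6FilterTable reports whether the kernel's ip6_tables module
    // exposes a "filter" table, as listed in /proc/net/ip6_tables_names.
    func hasIPv6FilterTable() bool {
        b, err := os.ReadFile("/proc/net/ip6_tables_names")
        if err != nil {
            return false
        }
        for _, t := range strings.Fields(string(b)) {
            if t == "filter" {
                return true
            }
        }
        return false
    }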
Updates tailscale/tailscale#11540
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.
Updates tailscale/corp#18202
Signed-off-by: Percy Wegmann <percy@tailscale.com>
The Go zstd package is not friendly for stateless zstd compression.
Passing around multiple zstd.Encoders just for stateless compression
is a waste of memory, since the memory is never freed and is seldom
used if no compression operations are happening.
For performance, we pool the relevant Encoder/Decoder
with the specific options set.
Functionally, this package is a wrapper over the Go zstd package
with a more ergonomic API for stateless operations.
This package can be used to clean up various pre-existing zstd.Encoder
pools or one-off handlers spread throughout our codebases.
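The core pooling idea, in miniature (a sketch of the approach, not the
zstdframe API itself):

    import (
        "sync"

        "github.com/klauspost/compress/zstd"
    )

    // encPool lazily holds encoders configured with one specific option set,
    // so unused coders can be reclaimed by the GC instead of living forever.
    var encPool = sync.Pool{
        New: func() any {
            enc, err := zstd.NewWriter(nil, zstd.WithEncoderLevel(zstd.SpeedFastest))
            if err != nil {
                panic(err)
            }
            return enc
        },
    }

    // AppendEncode is a stateless-style helper: it borrows a pooled encoder,
    // appends the compressed form of src to dst, and returns the encoder.
    func AppendEncode(dst, src []byte) []byte {
        enc := encPool.Get().(*zstd.Encoder)
        defer encPool.Put(enc)
        return enc.EncodeAll(src, dst)
    }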
Performance:
BenchmarkEncode/Best 1690 610926 ns/op 25.78 MB/s 1 B/op 0 allocs/op
zstd_test.go:137: memory: 50.336 MiB
zstd_test.go:138: ratio: 3.269x
BenchmarkEncode/Better 10000 100939 ns/op 156.04 MB/s 0 B/op 0 allocs/op
zstd_test.go:137: memory: 20.399 MiB
zstd_test.go:138: ratio: 3.131x
BenchmarkEncode/Default 15775 74976 ns/op 210.08 MB/s 105 B/op 0 allocs/op
zstd_test.go:137: memory: 1.586 MiB
zstd_test.go:138: ratio: 3.064x
BenchmarkEncode/Fastest 23222 53977 ns/op 291.81 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestLowMemory 23361 50789 ns/op 310.13 MB/s 15 B/op 0 allocs/op
zstd_test.go:137: memory: 334.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestNoChecksum 23086 50253 ns/op 313.44 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.900x
BenchmarkDecode/Checksum 70794 17082 ns/op 300.96 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/NoChecksum 74935 15990 ns/op 321.51 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/LowMemory 71043 16739 ns/op 307.13 MB/s 0 B/op 0 allocs/op
zstd_test.go:163: memory: 79.347 KiB
We can see that the options are taking effect: the compression ratio improves
at higher levels while compression speed diminishes.
We can also see that LowMemory takes effect: the pooled coder object
references less memory than in the other cases.
We can see that the pooling is taking effect as there are 0 amortized allocations.
Additional performance:
BenchmarkEncodeParallel/zstd-24 1857 619264 ns/op 1796 B/op 49 allocs/op
BenchmarkEncodeParallel/zstdframe-24 1954 532023 ns/op 4293 B/op 49 allocs/op
BenchmarkDecodeParallel/zstd-24 5288 197281 ns/op 2516 B/op 49 allocs/op
BenchmarkDecodeParallel/zstdframe-24 6441 196254 ns/op 2513 B/op 49 allocs/op
In concurrent usage, handling the pooling in this package
has a marginal benefit over the zstd package,
which relies on a Go channel as the pooling mechanism.
In particular, coders can be freed by the GC when not in use.
Coders can be shared throughout the program if callers use this package
instead of maintaining multiple independent pools doing the same thing.
The allocations are unrelated to pooling as they're caused by the spawning of goroutines.
Updates #cleanup
Updates tailscale/corp#18514
Updates tailscale/corp#17653
Updates tailscale/corp#18005
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This can be used to reload a value periodically, whether from disk or
another source, while handling jitter and graceful shutdown.
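The usage pattern is roughly this (an illustrative sketch of the idea,
not this package's exact API):

    import (
        "context"
        "math/rand/v2"
        "time"
    )

    // reloadLoop calls load on an approximately interval-long cadence with
    // jitter, stopping cleanly when ctx is canceled.
    func reloadLoop(ctx context.Context, interval time.Duration, load func(context.Context) error) {
        for {
            jitter := rand.N(interval/10 + 1)
            select {
            case <-ctx.Done():
                return
            case <-time.After(interval + jitter):
                if err := load(ctx); err != nil {
                    continue // keep the previous value; try again next tick
                }
            }
        }
    }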
Updates tailscale/corp#1297
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iee2b4385c9abae59805f642a7308837877cb5b3f
There are container environments such as GitHub Codespaces that have
partial IPv6 support - routing support is enabled at the kernel level,
but IPv6 filter support is lacking in the iptables module.
In the specific example of the Codespaces environment, there are also
pre-existing legacy iptables rules in the IPv4 tables, so the
nascent firewall mode detection will always pick iptables.
We would previously fault trying to install rules into the filter table;
this change catches that condition earlier and disables IPv6 support under
these conditions.
Updates #5621
Updates #11344
Updates #11354
Signed-off-by: James Tucker <james@tailscale.com>
Ensure that the latest DNATNonTailscaleTraffic rule
gets inserted on top of any pre-existing rules.
Updates tailscale/tailscale#11281
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Updates ENG-2133. Adds the ResetToDefaults visibility policy, currently only available on macOS, so that the Windows client can read its value.
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
This package uses a count-min sketch and a heap to track the top K items
in a stream of data. Tracking a new item and adding a count to an
existing item both require no memory allocations and are at worst
O(log k) in complexity.
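A compact sketch of the data structure (illustrative only; the real
package differs in hashing, sizing, allocation behavior, and API):

    import (
        "container/heap"
        "hash/maphash"
    )

    type entry struct {
        item  string
        count uint64
    }

    // topK tracks approximate counts in a count-min sketch and keeps the
    // current top-k items in a min-heap keyed by estimated count.
    type topK struct {
        k       int
        seeds   []maphash.Seed // one hash seed per sketch row
        rows    [][]uint64     // count-min counters: len(seeds) rows x width columns
        entries []entry        // min-heap of the current top-k items
        index   map[string]int // item -> index in entries, kept in sync by Swap
    }

    func newTopK(k, rows, width int) *topK {
        t := &topK{k: k, index: make(map[string]int, k)}
        for i := 0; i < rows; i++ {
            t.seeds = append(t.seeds, maphash.MakeSeed())
            t.rows = append(t.rows, make([]uint64, width))
        }
        return t
    }

    // heap.Interface over entries, ordered by estimated count (minimum at the root).
    func (t *topK) Len() int           { return len(t.entries) }
    func (t *topK) Less(i, j int) bool { return t.entries[i].count < t.entries[j].count }
    func (t *topK) Swap(i, j int) {
        t.entries[i], t.entries[j] = t.entries[j], t.entries[i]
        t.index[t.entries[i].item] = i
        t.index[t.entries[j].item] = j
    }
    func (t *topK) Push(x any) {
        e := x.(entry)
        t.index[e.item] = len(t.entries)
        t.entries = append(t.entries, e)
    }
    func (t *topK) Pop() any {
        e := t.entries[len(t.entries)-1]
        t.entries = t.entries[:len(t.entries)-1]
        delete(t.index, e.item)
        return e
    }

    // Add adds n to item's approximate count and updates the top-k heap,
    // doing O(log k) heap work per call.
    func (t *topK) Add(item string, n uint64) {
        // Bump one counter per sketch row; the minimum across rows is the
        // (over)estimate of the item's true count.
        est := ^uint64(0)
        for r, seed := range t.seeds {
            i := maphash.String(seed, item) % uint64(len(t.rows[r]))
            t.rows[r][i] += n
            if c := t.rows[r][i]; c < est {
                est = c
            }
        }
        // If the item is already tracked, fix its position; otherwise insert it,
        // evicting the current minimum once k items are held.
        if i, ok := t.index[item]; ok {
            t.entries[i].count = est
            heap.Fix(t, i)
            return
        }
        if len(t.entries) < t.k {
            heap.Push(t, entry{item, est})
        } else if est > t.entries[0].count {
            heap.Pop(t)
            heap.Push(t, entry{item, est})
        }
    }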
Change-Id: I0553381be3fef2470897e2bd806d43396f2dbb36
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
The new math/rand/v2 package includes an m-local global random number
generator that cannot be reseeded by the user, which is suitable for
most uses without the RNG pools we have in a number of areas of the code
base.
The new API still does not have an allocation-free way of performing
seeded operations, due to the long-term compiler bug around interface
parameter escapes, and the Source interface.
This change introduces the two APIs that math/rand/v2 cannot yet
replace efficiently: seeded Perm() and Shuffle() operations. This
implementation chooses to use the PCG random source from math/rand/v2,
as with sufficient compiler optimization, this source should boil down
to only two on-stack registers for random state under ideal conditions.
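For comparison, the standard-library equivalents of the seeded helpers
look like this with a PCG source (stdlib API; note that rand.New takes a
Source interface, which is where the allocation typically comes from):

    import "math/rand/v2"

    func seededPerm(seed1, seed2 uint64, n int) []int {
        r := rand.New(rand.NewPCG(seed1, seed2))
        return r.Perm(n)
    }

    func seededShuffle[T any](seed1, seed2 uint64, s []T) {
        r := rand.New(rand.NewPCG(seed1, seed2))
        r.Shuffle(len(s), func(i, j int) { s[i], s[j] = s[j], s[i] })
    }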
Updates #17243
Signed-off-by: James Tucker <james@tailscale.com>
Providing a hash.Block512 is an implementation detail of how deephash
works today, but providing an opaque type with a mostly equivalent API
(i.e., HashUint8, HashBytes, etc. methods) is still sensible.
Thus, define a public Hasher type that exposes exactly the API
that an implementation of SelfHasher would want to call.
This gives us freedom to change the hashing algorithm of deephash
at some point in the future.
Also, since this type is likely going to be called by types that are
going to memoize their own hash results, we additionally add
a HashSum method to simplify this use case.
Add documentation to SelfHasher on how a type might implement it.
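A hypothetical implementer might look like this (the SelfHasher method
shape shown here is an assumption; only the HashUint8/HashBytes/HashSum
names come from the description above):

    import "tailscale.com/util/deephash"

    type record struct {
        kind byte
        data []byte
    }

    // Hash implements (a hypothetical form of) deephash.SelfHasher by
    // feeding the type's fields into the opaque Hasher.
    func (r *record) Hash(h *deephash.Hasher) {
        h.HashUint8(r.kind)
        h.HashBytes(r.data)
    }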
Updates: corp#16409
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
expvarx.SafeFunc wraps an expvar.Func with a time limit. On reaching the
time limit, calls to Value return nil, and no new concurrent calls to
the underlying expvar.Func will be started until the call completes.
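The behavior is roughly this (an illustrative sketch of the semantics,
not the expvarx implementation):

    import (
        "expvar"
        "sync"
        "time"
    )

    // safeFunc illustrates the described semantics: Value returns nil if the
    // wrapped expvar.Func doesn't finish within limit, and while one call is
    // still running no additional calls are started.
    type safeFunc struct {
        f     expvar.Func
        limit time.Duration

        mu      sync.Mutex
        running bool
    }

    func (s *safeFunc) Value() any {
        s.mu.Lock()
        if s.running {
            s.mu.Unlock()
            return nil // previous slow call still in flight
        }
        s.running = true
        s.mu.Unlock()

        done := make(chan any, 1)
        go func() {
            v := s.f.Value()
            s.mu.Lock()
            s.running = false
            s.mu.Unlock()
            done <- v
        }()
        select {
        case v := <-done:
            return v
        case <-time.After(s.limit):
            return nil // gave up waiting; the call keeps running in the background
        }
    }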
Updates tailscale/corp#16999
Signed-off-by: James Tucker <james@tailscale.com>