In the recent commit 20e9f3369 we made HealthChangeRequest machine requests
include a NodeKey, as it was the oddball machine request that didn't
include one. Unfortunately, that code was sometimes being called (at
least in some of our integration tests) without a node key due to its
registration with health.RegisterWatcher(direct.ReportHealthChange).
Fortunately tests in corp caught this before we cut a release. It's
possible this only affects this particular integration test's
environment, but still worth fixing.
Updates tailscale/corp#1297
Change-Id: I84046779955105763dc1be5121c69fec3c138672
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.
This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
to the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyway,
meaning that zstd was functionally always a dependency.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Go zstd package is not friendly for stateless zstd compression.
Passing around multiple zstd.Encoder instances just for stateless compression
wastes memory, since that memory is never freed and is seldom
used when no compression operations are happening.
For performance, we pool the relevant Encoder/Decoder
with the specific options set.
Functionally, this package is a wrapper over the Go zstd package
with a more ergonomic API for stateless operations.
This package can be used to clean up various pre-existing zstd.Encoder
pools or one-off handlers spread throughout our codebases.
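As a rough illustration of the pooling idea (a minimal sketch, not the actual
zstdframe API; all names below are made up), a stateless wrapper over the
klauspost zstd package could look like this:
```go
// Minimal sketch of the approach, not the actual zstdframe API; the names
// below (AppendEncode, AppendDecode, the pools) are illustrative only.
package zstdsketch

import (
	"sync"

	"github.com/klauspost/compress/zstd"
)

var encoderPool = sync.Pool{
	New: func() any {
		// nil writer: the encoder is used only statelessly via EncodeAll.
		enc, err := zstd.NewWriter(nil, zstd.WithEncoderLevel(zstd.SpeedFastest))
		if err != nil {
			panic(err) // only possible with invalid options
		}
		return enc
	},
}

var decoderPool = sync.Pool{
	New: func() any {
		dec, err := zstd.NewReader(nil) // nil reader: stateless DecodeAll only
		if err != nil {
			panic(err)
		}
		return dec
	},
}

// AppendEncode appends the zstd-compressed form of src to dst.
func AppendEncode(dst, src []byte) []byte {
	enc := encoderPool.Get().(*zstd.Encoder)
	defer encoderPool.Put(enc)
	return enc.EncodeAll(src, dst)
}

// AppendDecode appends the decompressed form of src to dst.
func AppendDecode(dst, src []byte) ([]byte, error) {
	dec := decoderPool.Get().(*zstd.Decoder)
	defer decoderPool.Put(dec)
	return dec.DecodeAll(src, dst)
}
```
Because idle coders sit in a sync.Pool rather than a channel, the GC can
reclaim them when they are not in use.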
Performance:
BenchmarkEncode/Best 1690 610926 ns/op 25.78 MB/s 1 B/op 0 allocs/op
zstd_test.go:137: memory: 50.336 MiB
zstd_test.go:138: ratio: 3.269x
BenchmarkEncode/Better 10000 100939 ns/op 156.04 MB/s 0 B/op 0 allocs/op
zstd_test.go:137: memory: 20.399 MiB
zstd_test.go:138: ratio: 3.131x
BenchmarkEncode/Default 15775 74976 ns/op 210.08 MB/s 105 B/op 0 allocs/op
zstd_test.go:137: memory: 1.586 MiB
zstd_test.go:138: ratio: 3.064x
BenchmarkEncode/Fastest 23222 53977 ns/op 291.81 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestLowMemory 23361 50789 ns/op 310.13 MB/s 15 B/op 0 allocs/op
zstd_test.go:137: memory: 334.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestNoChecksum 23086 50253 ns/op 313.44 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.900x
BenchmarkDecode/Checksum 70794 17082 ns/op 300.96 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/NoChecksum 74935 15990 ns/op 321.51 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/LowMemory 71043 16739 ns/op 307.13 MB/s 0 B/op 0 allocs/op
zstd_test.go:163: memory: 79.347 KiB
We can see that the options take effect: the compression ratio improves
with higher levels while compression speed diminishes.
LowMemory also takes effect: the pooled coder object
references less memory than in the other cases.
Finally, the pooling takes effect, as there are 0 amortized allocations.
Additional performance:
BenchmarkEncodeParallel/zstd-24 1857 619264 ns/op 1796 B/op 49 allocs/op
BenchmarkEncodeParallel/zstdframe-24 1954 532023 ns/op 4293 B/op 49 allocs/op
BenchmarkDecodeParallel/zstd-24 5288 197281 ns/op 2516 B/op 49 allocs/op
BenchmarkDecodeParallel/zstdframe-24 6441 196254 ns/op 2513 B/op 49 allocs/op
In concurrent usage, handling the pooling in this package
has a marginal benefit over the zstd package,
which relies on a Go channel as the pooling mechanism.
In particular, coders can be freed by the GC when not in use.
Coders can be shared throughout the program when this package is used,
rather than each caller maintaining an independent pool that does the same thing.
The allocations are unrelated to pooling as they're caused by the spawning of goroutines.
Updates #cleanup
Updates tailscale/corp#18514
Updates tailscale/corp#17653
Updates tailscale/corp#18005
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This can be used to reload a value periodically, whether from disk or
another source, while handling jitter and graceful shutdown.
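A minimal sketch of the shape this takes (hypothetical names, not
necessarily the package's real API):
```go
package reloadsketch

import (
	"context"
	"log"
	"math/rand"
	"time"
)

// reloadLoop is an illustrative sketch, not the real package's API: it
// reloads a value on a jittered interval and stops cleanly when ctx is
// canceled.
func reloadLoop[T any](ctx context.Context, every time.Duration, load func() (T, error), store func(T)) {
	for {
		// Add up to 10% jitter so many nodes don't all reload in lockstep.
		jitter := time.Duration(rand.Int63n(int64(every/10) + 1))
		select {
		case <-ctx.Done():
			return // graceful shutdown
		case <-time.After(every + jitter):
		}
		v, err := load()
		if err != nil {
			log.Printf("reload failed; keeping previous value: %v", err)
			continue
		}
		store(v)
	}
}
```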
Updates tailscale/corp#1297
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iee2b4385c9abae59805f642a7308837877cb5b3f
This allows the UI to distinguish between 'no shares' and
'not being notified about shares'.
Updates ENG-2843
Signed-off-by: Percy Wegmann <percy@tailscale.com>
Instead of just checking if a peer capmap is nil, compare the previous
state peer capmap with the new peer capmap.
Updates tailscale/corp#17516
Signed-off-by: Claire Wang <claire@tailscale.com>
To mimic sync.Map.Swap, sync/atomic.Value.Swap, etc.
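For reference, Swap stores the new value and returns the one it replaced.
A sketch of those semantics on a hypothetical mutex-guarded map (not the
actual type being changed):
```go
package swapsketch

import "sync"

// Map is a hypothetical mutex-guarded map, here only to illustrate the
// Swap semantics being mimicked.
type Map[K comparable, V any] struct {
	mu sync.Mutex
	m  map[K]V
}

// Swap stores value under key and returns the previous value, if any,
// mirroring sync.Map.Swap's (previous, loaded) result.
func (s *Map[K, V]) Swap(key K, value V) (previous V, loaded bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.m == nil {
		s.m = make(map[K]V)
	}
	previous, loaded = s.m[key]
	s.m[key] = value
	return previous, loaded
}
```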
Updates tailscale/corp#1297
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: If7627da1bce8b552873b21d7e5ebb98904e9a650
Fixes tailscale/corp#18441
For a few days, IsMacAppStore() has been returning `false` on App Store builds (IPN-macOS target in Xcode).
I regressed this in #11369 by introducing logic to detect the sandbox by checking for the APP_SANDBOX_CONTAINER_ID environment variable. I thought that was a more robust approach than checking the name of the executable. However, it appears that on recent macOS versions this environment variable is no longer being set, so we should go back to the previous logic that checks the executable path, or HOME containing references to macsys.
This PR also strengthens the logic by checking XPC_SERVICE_NAME in addition to HOME where possible. That environment variable is set inside the network extension (on both macos and macsys) and is a good fallback if for any reason HOME is not set.
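Roughly, the detection ends up looking something like this (hypothetical
sketch, not the exact prop.go code):
```go
package propsketch

import (
	"os"
	"strings"
)

// isMacsys is an illustrative sketch, not the actual prop.go logic: look for
// "macsys" in HOME, and fall back to XPC_SERVICE_NAME (which is set inside
// the network extension) when HOME is missing or unhelpful.
func isMacsys() bool {
	if strings.Contains(os.Getenv("HOME"), "macsys") {
		return true
	}
	return strings.Contains(os.Getenv("XPC_SERVICE_NAME"), "macsys")
}
```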
Mostly inconsequential minor fixes for consistency. A couple of changes
to actual JSON examples, but all still very readable, so I think it's
fine.
Updates #cleanup
Signed-off-by: Will Norris <will@tailscale.com>
Fix a bug where all proxies got configured with --accept-routes set to true.
The bug was introduced in https://github.com/tailscale/tailscale/pull/11238.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
In control there are conditions where the leaf functions are not being
optimized away (i.e. At is not inlined), resulting in undesirable time
spent copying during SliceContains. This optimization is likely
irrelevant to simpler code or smaller structures.
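To illustrate where the copying comes from (hypothetical example, not the
control-plane code in question): with a large element type, each At call
returns a copy, so a linear containment scan pays for one full copy per
element unless At is inlined and the copy is optimized away.
```go
package viewsketch

import "tailscale.com/types/views"

// bigNode is a made-up large struct, purely for illustration.
type bigNode struct {
	Key  string
	Blob [1024]byte
}

// containsKey scans a view element by element, similar to what
// views.SliceContains does. When At isn't inlined, each v.At(i) copies an
// entire bigNode out of the view, which is where the undesirable time goes
// for large element types.
func containsKey(v views.Slice[bigNode], key string) bool {
	for i := 0; i < v.Len(); i++ {
		if v.At(i).Key == key { // copies bigNode when At isn't inlined
			return true
		}
	}
	return false
}
```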
Updates #optimization
Signed-off-by: James Tucker <james@tailscale.com>
Enable the web client over 100.100.100.100 by default. Accepting traffic
from [tailnet IP]:5252 still requires setting the `webclient` user pref.
Updates https://github.com/tailscale/tailscale/issues/10261
Signed-off-by: Mario Minardi <mario@tailscale.com>
This was originally built for testing node expiration flows, but is also
useful for customers to force device re-auth without actually deleting
the device from the tailnet.
Updates tailscale/corp#18408
Signed-off-by: Will Norris <will@tailscale.com>
Add a disable-web-client node attribute and add handling for disabling
the web client when this node attribute is set.
Updates https://github.com/tailscale/tailscale/issues/10261
Signed-off-by: Mario Minardi <mario@tailscale.com>
This PR fixes a panic that I saw in the mac app where
parsing the env file fails, but we don't get to see the
error due to a panic from using f.Name().
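One plausible shape of that failure (hypothetical code, not the actual
change): if opening the file fails, f is nil, so formatting the error with
f.Name() panics before the underlying error is ever logged; reporting the
path that is already at hand avoids the panic.
```go
package envsketch

import (
	"fmt"
	"io"
	"os"
)

// loadEnvFile is a hypothetical illustration, not the actual code: if
// os.Open fails, f is nil, and formatting the error with f.Name() would
// panic, hiding the real error. Reporting the path we already have keeps
// the error visible.
func loadEnvFile(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("parsing env file %s: %w", path, err) // not f.Name()
	}
	defer f.Close()
	return io.ReadAll(f)
}
```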
Fixes #11425
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
Updates ENG-2848
We can safely disable the App Sandbox for our macsys GUI, allowing us to use `tailscale ssh` and do a few other things that we've wanted to do for a while. This PR:
- allows Tailscale SSH to be used from the macsys GUI binary when called from a CLI
- tweaks the detection of client variants in prop.go, with new functions `IsMacSys()`, `IsMacSysApp()` and `IsMacAppSandboxEnabled()`
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
When a user deletes the last cluster/user/context from their
kubeconfig via the 'kubectl config delete-[cluster|user|context]' command,
kubectl sets the relevant field in the kubeconfig to 'null'.
This was breaking our conversion logic, which assumed that the field
either does not exist or is an array.
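The defensive handling amounts to treating an explicit null the same as an
absent field; roughly (hypothetical helper, not the actual conversion code):
```go
package kubesketch

import "fmt"

// sectionAsList is a hypothetical helper, not the actual conversion code:
// return a kubeconfig section (e.g. "clusters") as a list, treating both a
// missing field and an explicit null as an empty list.
func sectionAsList(cfg map[string]any, key string) ([]any, error) {
	v, ok := cfg[key]
	if !ok || v == nil { // absent, or set to null by kubectl
		return nil, nil
	}
	list, ok := v.([]any)
	if !ok {
		return nil, fmt.Errorf("kubeconfig field %q: expected a list, got %T", key, v)
	}
	return list, nil
}
```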
Updates tailscale/corp#18320
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
I was running all tests while preparing a recent stable release, and
this was failing because my computer is connected to a fairly large
tailnet.
```
--- FAIL: TestGetRouteTable (0.01s)
routetable_linux_test.go:32: expected at least one default route;
...
```
```
$ ip route show table 52 | wc -l
1051
```
Updates #cleanup
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
If the client uses the default Tailscale control URL, validate that all
PopBrowserURLs are under tailscale.com or *.tailscale.com. This reduces
the risk of a compromised control plane opening phishing pages, for
example.
The client trusts control for many other things, but this is one easy
way to reduce that trust a bit.
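The check is roughly of this shape (hypothetical function, not the exact
client code; requiring https here is an extra assumption):
```go
package popsketch

import (
	"net/url"
	"strings"
)

// popBrowserURLAllowed sketches the described validation: the URL must be
// under tailscale.com or a subdomain of it. The https requirement is an
// assumption made for illustration.
func popBrowserURLAllowed(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil || u.Scheme != "https" {
		return false
	}
	host := u.Hostname()
	return host == "tailscale.com" || strings.HasSuffix(host, ".tailscale.com")
}
```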
Fixes #11393
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
According to
https://learn.microsoft.com/en-us/windows/win32/msi/standard-installer-command-line-options#promptrestart,
`/promptrestart` is ignored when `/quiet` is set, so msiexec.exe can
sometimes silently trigger a reboot. The best we can do to reduce
unexpected disruption is to just prevent restarts, until the user
chooses to do it. Restarts aren't normally needed for Tailscale updates,
but there seem to be some situations where it's triggered.
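Presumably that means passing msiexec's documented /norestart option
alongside /quiet; a sketch (assumed, not verified against the actual
updater code):
```go
package updatersketch

import "os/exec"

// installMSI is an assumed sketch, not the actual updater code: install
// quietly and pass /norestart so msiexec never reboots on its own; the user
// decides when (or whether) to restart.
func installMSI(msiPath string) error {
	cmd := exec.Command("msiexec.exe", "/i", msiPath, "/quiet", "/norestart")
	return cmd.Run()
}
```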
Updates #18254
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
This fixes a bug that was introduced in #11258 where the handling of the
per-client limit didn't properly account for the fact that the gVisor
TCP forwarder will return 'true' to indicate that it's handled a
duplicate SYN packet, but not launch the handler goroutine.
In such a case, we neither decremented our per-client limit in the
wrapper function, nor did we do so in the handler function, leading to
our per-client limit table slowly filling up without bound.
Fix this by doing the same duplicate-tracking logic that the TCP
forwarder does so we can detect such cases and appropriately decrement
our in-flight counter.
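In simplified, hypothetical form (the real code wraps gVisor's TCP
forwarder), the accounting and the fix look like this:
```go
package synsketch

import (
	"net/netip"
	"sync"
)

// limiter is a simplified, hypothetical version of the per-client in-flight
// accounting; the real code wraps gVisor's TCP forwarder.
type limiter struct {
	mu       sync.Mutex
	max      int
	inFlight map[netip.Addr]int
}

func (l *limiter) tryInc(ip netip.Addr) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inFlight == nil {
		l.inFlight = make(map[netip.Addr]int)
	}
	if l.inFlight[ip] >= l.max {
		return false
	}
	l.inFlight[ip]++
	return true
}

func (l *limiter) dec(ip netip.Addr) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inFlight[ip] > 0 {
		l.inFlight[ip]--
	}
}

// handleSYN shows where the leak was: a duplicate SYN is reported as handled
// by the forwarder but no handler goroutine is spawned, so the wrapper must
// decrement the counter itself (the fix) or the per-client limit slowly
// fills up and never drains.
func (l *limiter) handleSYN(client netip.Addr, isDuplicateSYN bool, handle func()) {
	if !l.tryInc(client) {
		return // over the per-client limit; drop the SYN
	}
	if isDuplicateSYN {
		l.dec(client) // no handler will run for this SYN; undo the increment
		return
	}
	go func() {
		defer l.dec(client)
		handle()
	}()
}
```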
Updates tailscale/corp#12184
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib6011a71d382a10d68c0802593f34b8153d06892
To force the problem in its worst-case scenario before fixing it.
Updates tailscale/corp#17859
Change-Id: I2c8b8e5f15c7801e1ab093feeafac52ec175a763
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
There are container environments such as GitHub codespaces that have
partial IPv6 support: routing support is enabled at the kernel level,
but IPv6 filter support is missing from the iptables module.
The codespaces environment in particular also has pre-existing legacy
iptables rules in the IPv4 tables, so the nascent firewall mode
detection will always pick iptables.
We would previously fault trying to install rules into the filter
table; this change catches that condition earlier and disables IPv6
support under these conditions.
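A hypothetical probe for that condition (not the actual detection code)
could check whether the kernel exposes an IPv6 filter table at all:
```go
package fwsketch

import (
	"os"
	"strings"
)

// ipv6FilterTableAvailable is a hypothetical probe, not the actual detection
// code: report whether the kernel lists an IPv6 "filter" table that rules
// could be installed into. On hosts with only partial IPv6 support this file
// is missing or does not list "filter".
func ipv6FilterTableAvailable() bool {
	b, err := os.ReadFile("/proc/net/ip6_tables_names")
	if err != nil {
		return false
	}
	for _, table := range strings.Fields(string(b)) {
		if table == "filter" {
			return true
		}
	}
	return false
}
```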
Updates #5621
Updates #11344
Updates #11354
Signed-off-by: James Tucker <james@tailscale.com>
build_docker, update-flake: cleanup and apply shellcheck fixes
I was editing this file to match my needs while shellcheck warnings
kept bugging me.
REV isn't used anywhere, so remove it.
Updates #cleanup
Signed-off-by: Panchajanya1999 <kernel@panchajanya.dev>
Signed-off-by: James Tucker <james@tailscale.com>
- Updates API to support renaming TailFS shares.
- Adds a CLI rename subcommand for renaming a share.
- Renames the CLI subcommand 'add' to 'set' to make it clear that
this is an add or update.
- Adds a unit test for TailFS in ipnlocal
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
Previously, the configuration of which folders to share persisted across
profile changes. Now, it is tied to the user's profile.
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
This pretty much always results in an outage because peers won't
discover our new home region and thus won't be able to establish
connectivity.
Updates tailscale/corp#18095
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic0d09133f198b528dd40c6383b16d7663d9d37a7
Synology requires that version numbers be within the int32 range. This
change updates the version logic to keep things within that range, and
errors at build time when the range is exceeded.
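The build-time check amounts to something like this (illustrative only,
not the actual version-generation code):
```go
package versionsketch

import (
	"fmt"
	"math"
)

// checkSynoVersion is an illustrative check, not the actual code: fail the
// build if the computed Synology build number no longer fits in an int32.
func checkSynoVersion(v int64) error {
	if v > math.MaxInt32 {
		return fmt.Errorf("synology version %d does not fit in int32", v)
	}
	return nil
}
```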
Updates #cleanup
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
Bump version for adding NodeAttrSuggestExitNode.
Remove the extra s from NodeAttrSuggestExitNode.
Updates tailscale/corp#17516
Signed-off-by: Claire Wang <claire@tailscale.com>
Run yarn-deduplicate on yarn.lock to dedupe packages. This is being done
to reduce the number of redundant packages fetched by yarn when existing
versions in the lockfile satisfy the version dependency we need.
See https://github.com/scinos/yarn-deduplicate for details on the tool
used to perform this deduplication.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
So we can use it in trunkd to quiet down the logs there.
Updates #5563
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ie3177dc33f5ad103db832aab5a3e0e4f128f973f