This CONNECT client doesn't match what Go's net/http.Transport does,
which is to make the two values (the request target and the Host
header) match. This change makes them match.
This is all pretty unspecified but most clients & doc examples show
these matching. And some proxy implementations (such as Zscaler) care.
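For reference, a sketch of roughly how net/http.Transport constructs its CONNECT request (illustrative, not this client's actual code):

```go
package proxysketch

import (
	"net/http"
	"net/url"
)

// connectRequest mirrors roughly what Go's net/http.Transport builds
// for a proxied dial: the request target (URL.Opaque) and the Host
// header carry the same host:port value, so the proxy sees:
//
//	CONNECT example.com:443 HTTP/1.1
//	Host: example.com:443
func connectRequest(addr string) *http.Request {
	return &http.Request{
		Method: "CONNECT",
		URL:    &url.URL{Opaque: addr}, // request target
		Host:   addr,                   // Host header: the same value
		Header: make(http.Header),
	}
}
```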
Updates tailscale/corp#18716
Change-Id: I135c5facbbcec9276faa772facbde1bb0feb2d26
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Updates https://github.com/tailscale/corp/issues/8027
Safeweb is a wrapper around http.Server & tsnet that encodes some
application security defaults.
Safeweb asks developers to split their HTTP routes into two
http.ServeMuxes, serving browser-facing and API-facing endpoints
respectively. It then wraps these HTTP routes with the
context-appropriate security controls.
safeweb.Server#Serve will serve the HTTP muxes over the provided
listener. Callers are responsible for creating and tearing down their
application's listeners. Applications being served over HTTPS that wish
to implement HTTP redirects can use the Server#HTTPRedirect handler to
do so.
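A minimal usage sketch (assuming the Config field names BrowserMux and APIMux and a NewServer constructor; see the package docs for the exact API):

```go
package main

import (
	"log"
	"net"
	"net/http"

	"tailscale.com/safeweb"
)

func main() {
	// Browser-facing routes get browser-appropriate protections;
	// API routes get API-appropriate ones.
	browser := http.NewServeMux()
	browser.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("<h1>hello</h1>"))
	})
	api := http.NewServeMux()
	api.HandleFunc("/api/v1/status", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"ok":true}`))
	})

	srv, err := safeweb.NewServer(safeweb.Config{
		BrowserMux: browser,
		APIMux:     api,
	})
	if err != nil {
		log.Fatal(err)
	}

	// The caller owns the listener's lifecycle.
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	log.Fatal(srv.Serve(ln))
}
```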
Signed-off-by: Patrick O'Doherty <patrick@tailscale.com>
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.
Updates tailscale/corp#18202
Signed-off-by: Percy Wegmann <percy@tailscale.com>
For example, if we get a 404 when downloading a file, we'll report access.
Also, to reduce the verbosity of logs, this elides zero-length files.
Updates tailscale/corp#17818
Signed-off-by: Percy Wegmann <percy@tailscale.com>
This change introduces some basic logging into the access and share
pathways for tailfs.
Updates tailscale/corp#17818
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
Rather than pass around a scratch buffer, put it on the Logger.
This is a baby step towards removing the background uploading
goroutine and starting it as needed.
Updates tailscale/corp#18514 (insofar as it led me to look at this code)
Change-Id: I6fd94581c28bde40fdb9fca788eb9590bcedae1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Including the double quotes (`"`) around the value made it appear that the Helm chart expects a string value for `installCRDs`.
Signed-off-by: Chris Milson-Tokunaga <chris.w.milson@gmail.com>
This implementation uses less memory than tempfork/device,
which helps avoid OOM conditions in the iOS VPN extension when
switching to a Tailnet with ExitNode routing enabled.
Updates tailscale/corp#18514
Signed-off-by: Percy Wegmann <percy@tailscale.com>
There's a vulnerability https://pkg.go.dev/vuln/GO-2024-2659 that
govulncheck flags, even though it's only reachable from tests and
cmd/sync-containers and cannot be exploited there.
Updates #cleanup
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
First we had Capabilities []string. Then
https://tailscale.com/blog/acl-grants (#4217) brought CapMap, a
superset of Capabilities. Except we never really finished the
transition inside the codebase to go all-in on CapMap. This does so.
Notably, this converts Capabilities on the wire early to CapMap
internally so the code can only deal in CapMap, even against an old
control server.
In the process, this removes PeerChange.Capabilities support, which no
known control plane sent anyway. They can and should use
PeerChange.CapMap instead.
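A hedged sketch of that early normalization (simplified types; the real tailcfg CapMap carries raw JSON values, and the names may differ):

```go
package tailcfgsketch

// NodeCapability and NodeCapMap stand in for the real tailcfg types.
type NodeCapability string
type NodeCapMap map[NodeCapability][]string

// upgradeCaps folds the legacy wire-level Capabilities slice into
// CapMap so internal code only ever consults CapMap, even when
// talking to an old control server.
func upgradeCaps(legacy []NodeCapability, capMap NodeCapMap) NodeCapMap {
	if len(legacy) == 0 {
		return capMap
	}
	if capMap == nil {
		capMap = make(NodeCapMap, len(legacy))
	}
	for _, c := range legacy {
		if _, ok := capMap[c]; !ok {
			capMap[c] = nil // capability present, with no associated values
		}
	}
	return capMap
}
```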
Updates #11508
Updates #4217
Change-Id: I872074e226b873f9a578d9603897b831d50b25d9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This stems from a problem we hit with how badger registers expvars,
which broke trunkd's exported metrics.
Updates tailscale/corp#1297
Change-Id: I42e1552e25f734c6f521b6e993d57a82849464b2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
When node attributes were super rare, the O(n) slice scans looking for
node attributes were more acceptable. But now more code and more users
are using increasingly more node attributes. Time to make it a map.
Noticed while working on tailscale/corp#17879
Updates #cleanup
Change-Id: Ic17c80341f418421002fbceb47490729048756d2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In the recent 20e9f3369 we made HealthChangeRequest machine requests
include a NodeKey, as it was the oddball machine request that didn't
include one. Unfortunately, that code was sometimes being called (at
least in some of our integration tests) without a node key due to its
registration with health.RegisterWatcher(direct.ReportHealthChange).
Fortunately tests in corp caught this before we cut a release. It's
possible this only affects this particular integration test's
environment, but still worth fixing.
Updates tailscale/corp#1297
Change-Id: I84046779955105763dc1be5121c69fec3c138672
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.
This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
to the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyway,
meaning that zstd was functionally always a dependency.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Go zstd package is not friendly for stateless zstd compression.
Passing around multiple zstd.Encoder instances just for stateless
compression is a waste of memory, since the memory is never freed and
seldom used if no compression operations are happening.
For performance, we pool the relevant Encoder/Decoder
with the specific options set.
Functionally, this package is a wrapper over the Go zstd package
with a more ergonomic API for stateless operations.
This package can be used to clean up various pre-existing zstd.Encoder
pools or one-off handlers spread throughout our codebases.
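A usage sketch (function and option names are assumed from the benchmarks below; check the package for the exact spellings):

```go
package main

import (
	"fmt"

	"tailscale.com/util/zstdframe"
)

func main() {
	src := []byte("a log line that compresses reasonably well, repeated...")

	// Each call checks out a pooled encoder configured with these
	// options and returns it when done; there is nothing to Close.
	frame := zstdframe.AppendEncode(nil, src,
		zstdframe.FastestCompression, zstdframe.LowMemory(true))

	out, err := zstdframe.AppendDecode(nil, frame)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d -> %d bytes; roundtrip ok: %t\n",
		len(src), len(frame), string(out) == string(src))
}
```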
Performance:
BenchmarkEncode/Best 1690 610926 ns/op 25.78 MB/s 1 B/op 0 allocs/op
zstd_test.go:137: memory: 50.336 MiB
zstd_test.go:138: ratio: 3.269x
BenchmarkEncode/Better 10000 100939 ns/op 156.04 MB/s 0 B/op 0 allocs/op
zstd_test.go:137: memory: 20.399 MiB
zstd_test.go:138: ratio: 3.131x
BenchmarkEncode/Default 15775 74976 ns/op 210.08 MB/s 105 B/op 0 allocs/op
zstd_test.go:137: memory: 1.586 MiB
zstd_test.go:138: ratio: 3.064x
BenchmarkEncode/Fastest 23222 53977 ns/op 291.81 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestLowMemory 23361 50789 ns/op 310.13 MB/s 15 B/op 0 allocs/op
zstd_test.go:137: memory: 334.458 KiB
zstd_test.go:138: ratio: 2.898x
BenchmarkEncode/FastestNoChecksum 23086 50253 ns/op 313.44 MB/s 26 B/op 0 allocs/op
zstd_test.go:137: memory: 599.458 KiB
zstd_test.go:138: ratio: 2.900x
BenchmarkDecode/Checksum 70794 17082 ns/op 300.96 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/NoChecksum 74935 15990 ns/op 321.51 MB/s 4 B/op 0 allocs/op
zstd_test.go:163: memory: 316.438 KiB
BenchmarkDecode/LowMemory 71043 16739 ns/op 307.13 MB/s 0 B/op 0 allocs/op
zstd_test.go:163: memory: 79.347 KiB
We can see that the options are taking effect: the compression ratio
improves with higher levels while compression speed diminishes.
We can also see that LowMemory takes effect: the pooled coder object
references less memory than in the other cases.
Finally, the pooling is taking effect, as shown by the 0 amortized allocations.
Additional performance:
BenchmarkEncodeParallel/zstd-24 1857 619264 ns/op 1796 B/op 49 allocs/op
BenchmarkEncodeParallel/zstdframe-24 1954 532023 ns/op 4293 B/op 49 allocs/op
BenchmarkDecodeParallel/zstd-24 5288 197281 ns/op 2516 B/op 49 allocs/op
BenchmarkDecodeParallel/zstdframe-24 6441 196254 ns/op 2513 B/op 49 allocs/op
In concurrent usage, handling the pooling in this package
has a marginal benefit over the zstd package,
which relies on a Go channel as the pooling mechanism.
In particular, coders can be freed by the GC when not in use.
Coders can be shared throughout the program if they use this package
instead of multiple independent pools doing the same thing.
The allocations are unrelated to pooling as they're caused by the spawning of goroutines.
Updates #cleanup
Updates tailscale/corp#18514
Updates tailscale/corp#17653
Updates tailscale/corp#18005
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This can be used to reload a value periodically, whether from disk or
another source, while handling jitter and graceful shutdown.
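The shape of the pattern, as a generic sketch (not the package's actual API):

```go
package reload

import (
	"context"
	"math/rand/v2"
	"time"
)

// reloadLoop reloads a value on an interval with jitter and stops
// cleanly when ctx is canceled. load fetches the value (from disk or
// another source); store publishes it.
func reloadLoop[T any](ctx context.Context, every time.Duration,
	load func(context.Context) (T, error), store func(T)) {
	for {
		// Up to 10% jitter so many instances don't reload in lockstep.
		jitter := rand.N(every / 10)
		select {
		case <-ctx.Done():
			return // graceful shutdown
		case <-time.After(every + jitter):
		}
		if v, err := load(ctx); err == nil {
			store(v)
		}
	}
}
```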
Updates tailscale/corp#1297
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Iee2b4385c9abae59805f642a7308837877cb5b3f
This allows the UI to distinguish between 'no shares' and
'not being notified about shares'.
Updates ENG-2843
Signed-off-by: Percy Wegmann <percy@tailscale.com>
Instead of just checking whether a peer capmap is nil, compare the
previous state's peer capmap with the new peer capmap.
Updates tailscale/corp#17516
Signed-off-by: Claire Wang <claire@tailscale.com>
To mimic sync.Map.Swap, sync/atomic.Value.Swap, etc.
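A sketch of the semantics on a mutex-guarded generic map (a simplified stand-in, not the actual implementation):

```go
package syncs

import "sync"

// Map is a simplified stand-in for the real locked map type.
type Map[K comparable, V any] struct {
	mu sync.Mutex
	m  map[K]V
}

// Swap stores value for key and returns the previous value, if any,
// mirroring the sync.Map.Swap contract.
func (m *Map[K, V]) Swap(key K, value V) (previous V, loaded bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	previous, loaded = m.m[key]
	if m.m == nil {
		m.m = make(map[K]V)
	}
	m.m[key] = value
	return previous, loaded
}
```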
Updates tailscale/corp#1297
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: If7627da1bce8b552873b21d7e5ebb98904e9a650
Fixes tailscale/corp#18441
For a few days, IsMacAppStore() has been returning `false` on App Store builds (IPN-macOS target in Xcode).
I regressed this in #11369 by introducing logic to detect the sandbox by checking for the APP_SANDBOX_CONTAINER_ID environment variable. I thought that was a more robust approach instead of checking the name of the executable. However, it appears that on recent macOS versions this environment variable is no longer getting set, so we should go back to the previous logic that checks for the executable path, or HOME containing references to macsys.
This PR also adds checks on XPC_SERVICE_NAME in addition to HOME where possible. That environment variable is set inside the network extension (both macos and macsys) and is a good fallback to consult when HOME is not set for any reason.
Mostly inconsequential minor fixes for consistency. A couple of changes
to actual JSON examples, but all still very readable, so I think it's
fine.
Updates #cleanup
Signed-off-by: Will Norris <will@tailscale.com>
Fix a bug where all proxies got configured with --accept-routes set to true.
The bug was introduced in https://github.com/tailscale/tailscale/pull/11238.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
In control there are conditions where the leaf functions are not being
optimized away (i.e. At is not inlined), resulting in undesirable time
spent copying during SliceContains. This optimization is likely
irrelevant to simpler code or smaller structures.
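An illustrative sketch of the cost (simplified types, not the real views package):

```go
package views

// Slice is a simplified stand-in for a read-only slice view.
type Slice[T comparable] struct{ items []T }

func (v Slice[T]) Len() int   { return len(v.items) }
func (v Slice[T]) At(i int) T { return v.items[i] }

// containsViaAt copies one element per iteration whenever At is not
// inlined, which is the overhead described above.
func containsViaAt[T comparable](v Slice[T], e T) bool {
	for i := 0; i < v.Len(); i++ {
		if v.At(i) == e {
			return true
		}
	}
	return false
}

// containsDirect scans the backing slice without the accessor,
// avoiding the per-iteration copy.
func containsDirect[T comparable](v Slice[T], e T) bool {
	for i := range v.items {
		if v.items[i] == e {
			return true
		}
	}
	return false
}
```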
Updates #optimization
Signed-off-by: James Tucker <james@tailscale.com>
Enable the web client over 100.100.100.100 by default. Accepting traffic
from [tailnet IP]:5252 still requires setting the `webclient` user pref.
Updates https://github.com/tailscale/tailscale/issues/10261
Signed-off-by: Mario Minardi <mario@tailscale.com>
This was originally built for testing node expiration flows, but is also
useful for customers to force device re-auth without actually deleting
the device from the tailnet.
Updates tailscale/corp#18408
Signed-off-by: Will Norris <will@tailscale.com>
Add a disable-web-client node attribute and add handling for disabling
the web client when this node attribute is set.
Updates https://github.com/tailscale/tailscale/issues/10261
Signed-off-by: Mario Minardi <mario@tailscale.com>
This PR fixes a panic that I saw in the Mac app where
parsing the env file fails but we don't get to see the
error due to a panic from using f.Name().
Fixes #11425
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
Updates ENG-2848
We can safely disable the App Sandbox for our macsys GUI, allowing us to use `tailscale ssh` and do a few other things that we've wanted to do for a while. This PR:
- allows Tailscale SSH to be used from the macsys GUI binary when called from a CLI
- tweaks the detection of client variants in prop.go, with new functions `IsMacSys()`, `IsMacSysApp()` and `IsMacAppSandboxEnabled()`
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
When a user deletes the last cluster/user/context from their
kubeconfig via a 'kubectl config delete-[cluster|user|context]'
command, kubectl sets the relevant field in the kubeconfig to 'null'.
This was breaking our conversion logic, which assumed that the field
is either non-existent or an array.
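The fix, in spirit (an illustrative JSON-based sketch; real kubeconfigs are YAML, but the null-versus-array handling is the same idea):

```go
package kubecfg

import "encoding/json"

// decodeList tolerates a section that is absent, explicitly null,
// or a list.
func decodeList(raw json.RawMessage) ([]any, error) {
	if len(raw) == 0 || string(raw) == "null" {
		return nil, nil // treat null the same as absent
	}
	var items []any
	if err := json.Unmarshal(raw, &items); err != nil {
		return nil, err
	}
	return items, nil
}
```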
Updates tailscale/corp#18320
Signed-off-by: Irbe Krumina <irbe@tailscale.com>