This change introduces a new netfilterRunner interface and moves iptables manipulation into a lower-level iptables runner.
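A rough sketch of the shape of this split; the interface and method names here are illustrative rather than the actual API:

```go
package linuxfw

import "net/netip"

// NetfilterRunner abstracts the netfilter operations the router needs.
// (Illustrative method set; the real interface may differ.)
type NetfilterRunner interface {
	// AddLoopbackRule adds a rule accepting traffic to addr on loopback.
	AddLoopbackRule(addr netip.Addr) error
	// DelLoopbackRule removes the rule added by AddLoopbackRule.
	DelLoopbackRule(addr netip.Addr) error
}

// iptablesRunner is the lower-level implementation that issues the
// actual iptables commands.
type iptablesRunner struct{}
```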
For #391
Signed-off-by: KevinLiang10 <kevinliang@tailscale.com>
Redo the testwrapper to track and retry only flaky tests instead
of retrying the entire package. It also fails early if a non-flaky test fails.
This also allows the go test caches to be used.
Fixes #7975
Signed-off-by: Maisem Ali <maisem@tailscale.com>
In order to improve our ability to understand the state of policies and
registry settings when troubleshooting, we enumerate all values in all subkeys.
x/sys/windows does not already offer this, so we need to call RegEnumValue
directly.
For now we're just logging this during startup; in a future PR I plan to
also trigger this code during a bugreport. I also want to log more than just
the registry.
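A simplified sketch of recursive enumeration using the x/sys/windows/registry helpers; the real change calls RegEnumValue directly so it can capture value names, types, and data in a single pass:

```go
// Windows-only sketch; the key path is just an example.
package main

import (
	"log"

	"golang.org/x/sys/windows/registry"
)

// logKey logs every value name under k and recurses into its subkeys.
func logKey(k registry.Key, path string) {
	names, err := k.ReadValueNames(-1)
	if err != nil {
		log.Printf("%s: %v", path, err)
		return
	}
	for _, name := range names {
		log.Printf(`%s\%s`, path, name)
	}
	subs, err := k.ReadSubKeyNames(-1)
	if err != nil {
		return
	}
	for _, sub := range subs {
		sk, err := registry.OpenKey(k, sub, registry.READ)
		if err != nil {
			continue
		}
		logKey(sk, path+`\`+sub)
		sk.Close()
	}
}

func main() {
	k, err := registry.OpenKey(registry.LOCAL_MACHINE,
		`SOFTWARE\Policies\Tailscale`, registry.READ)
	if err != nil {
		log.Fatal(err)
	}
	defer k.Close()
	logKey(k, `HKLM\SOFTWARE\Policies\Tailscale`)
}
```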
Fixes #8141
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
The client/tailscale package is a stable-ish API we try not to break. Revert
the Client.CreateKey method to what it was and add a new
CreateKeyWithExpiry method to do the new thing. Also document the
expiry field and enforce that the time.Duration can't be greater
than 0 but less than a second.
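A minimal sketch of that constraint (helper name is hypothetical; the real check lives in the client method):

```go
package example

import (
	"errors"
	"time"
)

// validateKeyExpiry sketches the documented rule: an expiry of zero means
// "use the default", and any non-zero value must be at least one second.
func validateKeyExpiry(expiry time.Duration) error {
	if expiry > 0 && expiry < time.Second {
		return errors.New("expiry must be zero or at least one second")
	}
	return nil
}
```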
Updates #7143
Updates #8124 (reverts it, effectively)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Adds a parameter to key creation that allows a number of seconds
(less than 90) to be specified for new keys.
Fixes https://github.com/tailscale/tailscale/issues/7965
Signed-off-by: Matthew Brown <matthew@bargrove.com>
getSingleObject can return `nil, nil`; getDeviceInfo was not handling
that case, which resulted in panics.
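Roughly the shape of the fix, with illustrative stub types standing in for the real helpers:

```go
package example

import "fmt"

type device struct{ Name string }

// getSingleObject stands in for the real helper: it can legitimately
// return nil, nil when nothing matches.
func getSingleObject(id string) (*device, error) {
	return nil, nil
}

// getDeviceInfo now checks for the nil, nil case instead of
// dereferencing a nil pointer.
func getDeviceInfo(id string) (*device, error) {
	obj, err := getSingleObject(id)
	if err != nil {
		return nil, err
	}
	if obj == nil {
		return nil, fmt.Errorf("device %q not found", id)
	}
	return obj, nil
}
```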
Fixes #7303
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The retry logic was pathological in the following ways:
* If we restarted the logging service, any pending uploads
were placed in a retry loop driven by backoff.Backoff,
which was too aggressive: it would retry failures within milliseconds,
taking at least 10 retries to hit a delay of 1 second.
* When a logstream was rate limited,
the aggressive retry logic severely exacerbated the problem,
since each retry would also log an error message.
It is only by chance that the rate of log error spam
does not exceed the rate limit itself.
We modify the retry logic in the following ways:
* We now respect the "Retry-After" header sent by the logging service.
* Lacking a "Retry-After" header, we retry after a hard-coded period of
30 to 60 seconds. This avoids the thundering-herd effect when all nodes
try reconnecting to the logging service at the same time after a restart.
* We do not treat a status 400 as having been uploaded.
This is simply not the behavior of the logging service.
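A sketch of that retry policy, not the actual logtail code:

```go
package example

import (
	"math/rand"
	"net/http"
	"strconv"
	"time"
)

// retryDelay returns how long to wait before retrying an upload that got
// resp: honor Retry-After when present, otherwise wait a jittered 30-60s.
func retryDelay(resp *http.Response) time.Duration {
	if resp != nil {
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil && secs > 0 {
				return time.Duration(secs) * time.Second
			}
		}
	}
	// No Retry-After: spread retries over 30-60s to avoid a
	// thundering herd after a logging service restart.
	return 30*time.Second + time.Duration(rand.Int63n(int64(30*time.Second)))
}
```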
Updates tailscale/corp#11213
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I noticed cmd/{cloner,viewer} didn't support structs with embedded
fields while working on a change in another repo. This adds support.
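An example of the kind of type the generators can now handle (type names are illustrative):

```go
package example

import "net/netip"

type Base struct {
	Name string
}

// Node embeds Base; cmd/cloner and cmd/viewer previously could not
// generate code for types with embedded fields like this.
type Node struct {
	Base
	Addr netip.Addr
}
```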
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In commit 6e96744, the tsd system type was added. This causes the daemon
to crash on some OSes (Windows, Darwin, and so on). The root cause is that
on those OSes, handleSubnetsInNetstack() returns true, so conf.Router gets
set to a wrapper. Later, NewUserspaceEngine() does the subsystem set,
finds that the earlier-set router does not match the current value, and panics.
Signed-off-by: Chenyang Gao <gps949@outlook.com>
This is part of an effort to clean up tailscaled initialization between
tailscaled, tailscaled Windows service, tsnet, and the mac GUI.
Updates #8036
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This holds back gvisor, kubernetes, goreleaser, and esbuild, which all
had breaking API changes.
Updates #8043
Updates #7381
Updates #8042 (updates u-root which adds deps)
Change-Id: I889759bea057cd3963037d41f608c99eb7466a5b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This change introduces address selection for WireGuard-only endpoints.
If an endpoint has not been used before, an address is randomly selected
based on what we know about it, such as whether it can use IPv4 or IPv6.
When an address is initially selected, we also initiate an ICMP ping to
the endpoint's addresses to determine which one offers the best latency.
This information is then used to update which address we should be using
for the best possible route. If the latency is the same for an IPv4 and
an IPv6 address, IPv6 will be used.
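A sketch of the tie-break rule described above, with illustrative names:

```go
package example

import (
	"net/netip"
	"time"
)

type candidate struct {
	addr    netip.AddrPort
	latency time.Duration
}

// betterAddr picks the lower-latency candidate, preferring IPv6 when
// the latencies are equal.
func betterAddr(a, b candidate) candidate {
	if a.latency < b.latency {
		return a
	}
	if b.latency < a.latency {
		return b
	}
	if a.addr.Addr().Is6() {
		return a
	}
	return b
}
```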
Updates #7826
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
DERP doesn't support HTTP/2. If an HTTP/2 proxy was placed in front of
a DERP server, requests would fail because the connection would
be initialized with HTTP/2, which the DERP client doesn't support.
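One general net/http way to keep a client on HTTP/1.1, shown here as a sketch rather than the exact change:

```go
package example

import (
	"crypto/tls"
	"net/http"
)

// newHTTP1Transport returns a transport that never negotiates HTTP/2:
// a non-nil, empty TLSNextProto map disables the h2 upgrade path.
func newHTTP1Transport() *http.Transport {
	return &http.Transport{
		TLSNextProto: map[string]func(string, *tls.Conn) http.RoundTripper{},
	}
}
```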
Signed-off-by: Kyle Carberry <kyle@carberry.com>
This change adds a v6conn to the pinger to enable sending pings to v6
addrs.
Updates #7826
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
We need to always specify tags when creating an AuthKey from an OAuth key.
Check for that, and reuse the `--advertise-tags` param.
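A sketch of the check, assuming OAuth-derived keys are recognized by their `tskey-client-` prefix; the real CLI logic may differ:

```go
package example

import (
	"errors"
	"strings"
)

// checkOAuthKeyTags sketches the rule: auth keys minted from an OAuth
// client secret (assumed here to start with "tskey-client-") require tags.
func checkOAuthKeyTags(authKey string, tags []string) error {
	if strings.HasPrefix(authKey, "tskey-client-") && len(tags) == 0 {
		return errors.New("an auth key derived from an OAuth key requires --advertise-tags")
	}
	return nil
}
```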
Updates #7982
Signed-off-by: Maisem Ali <maisem@tailscale.com>
On some platforms (notably macOS and iOS) we look up the default
interface to bind outgoing connections to. This both duplicates
work and results in logspam when the default interface is not available
(e.g. when a phone has no connectivity, we log an error and thus generate
more things that we will then try to upload and fail).
Fixed by passing around a netmon.Monitor to more places, so that we can
use its cached interface state.
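A sketch of the pattern, assuming the monitor exposes its cached state roughly like this:

```go
package example

import "tailscale.com/net/netmon"

// defaultInterfacePresent consults the monitor's cached interface state
// instead of querying the OS again (method and field names assumed).
func defaultInterfacePresent(mon *netmon.Monitor) bool {
	st := mon.InterfaceState()
	return st != nil && st.DefaultRouteInterface != ""
}
```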
Fixes #7850
Updates #7621
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
We're using it in more and more places, and it's not really specific to
our use of WireGuard (and it does more than just link/interface monitoring).
Also removes the separate interface we had for it in sockstats; it's
a small enough package (we already pull in all of its dependencies
via other paths) that it's not worth the extra complexity.
Updates #7621
Updates #7850
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
This is a follow-up to #7905 that adds two more linters and fixes the corresponding findings. As per the previous PR, this only flags things that are "obviously" wrong, and fixes the issues found.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8739bdb7bc4f75666a7385a7a26d56ec13741b7c
This adds an initial and intentionally minimal configuration for
golangci-lint, fixes the issues reported, and adds a GitHub Action to check
new pull requests against this linter configuration.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8f38fbc315836a19a094d0d3e986758b9313f163
Redoes the approach from #5550 and #7539 to explicitly pass in the logf
function, instead of having global state that can be overridden.
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
This also adds a bunch of tests for this function to ensure that we're
returning the proper IP(s) in all cases.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I0d9d57170dbab5f2bf07abdf78ecd17e0e635399
This splits Prometheus metric handlers exposed by tsweb into two
modules:
- `varz.Handler` exposes Prometheus metrics generated by our expvar
converter;
- `promvarz.Handler` combines our expvar-converted metrics and native
Prometheus metrics.
By default, tsweb will use the promvarz handler, however users can keep
using only the expvar converter. Specifically, `tailscaled` now uses
`varz.Handler` explicitly, which avoids a dependency on the
(heavyweight) Prometheus client.
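A sketch of wiring up either handler, assuming both packages export plain `http.HandlerFunc`-shaped Handler functions as described above:

```go
package main

import (
	"log"
	"net/http"

	"tailscale.com/tsweb/promvarz"
	"tailscale.com/tsweb/varz"
)

func main() {
	// Expvar-converted metrics only (what tailscaled now registers):
	http.HandleFunc("/debug/varz", varz.Handler)
	// Expvar-converted plus native Prometheus metrics (the new tsweb default):
	http.HandleFunc("/debug/promvarz", promvarz.Handler)
	log.Fatal(http.ListenAndServe("localhost:8080", nil))
}
```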
Updates https://github.com/tailscale/corp/issues/10205
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
This provides an example of using native Prometheus metrics with tsweb.
The prober library seems to be the only user of PrometheusVar, so I am
removing support for it in tsweb.
Updates https://github.com/tailscale/corp/issues/10205
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
The handler will expose built-in process and Go metrics by default,
which currently duplicate some of the expvar-proxied metrics
(`goroutines` vs `go_goroutines`, `memstats` vs `go_memstats`), but as
long as their names are different, the Prometheus server will just scrape
both.
This will change /debug/varz behaviour for most tsweb binaries, but
notably not for control, which configures a `tsweb.VarzHandler`
[explicitly](a5b5d5167f/cmd/tailcontrol/tailcontrol.go (L779))
Updates https://github.com/tailscale/corp/issues/10205
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
Make developing derp easier by:
1. Creating an envknob telling clients to use HTTP to connect to derp
servers, so devs don't have to acquire a valid TLS cert.
2. Creating an envknob telling clients which derp server to connect
to, so devs don't have to edit the ACLs in the admin console to add a
custom DERP map.
3. Explaining how the -dev and -a command line args to derper
interact.
To use this:
1. Run derper with -dev.
2. Run tailscaled with TS_DEBUG_USE_DERP_HTTP=1 and
TS_DEBUG_USE_DERP_ADDR=localhost
This will result in the client connecting to derp via HTTP on port
3340.
Fixes #7700
Signed-off-by: Val <valerie@tailscale.com>
I realized that a lot of the problems that we're seeing around migration and
LocalBackend state can be avoided if we drive Windows pref migration entirely
from within tailscaled. By doing it this way, tailscaled can automatically
perform the migration as soon as the connection with the client frontend is
established.
Since tailscaled is already running as LocalSystem, it already has access to
the user's local AppData directory. The profile manager already knows which
user is connected, so we simply need to resolve the user's prefs file and read
it from there.
Of course, to properly migrate this information we need to also check system
policies. I moved a bunch of policy resolution code out of the GUI and into
a new package in util/winutil/policy.
Updates #7626
Signed-off-by: Aaron Klotz <aaron@tailscale.com>