This is a follow-up to #7905 that adds two more linters and fixes the corresponding findings. As per the previous PR, this only flags things that are "obviously" wrong, and fixes the issues found.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8739bdb7bc4f75666a7385a7a26d56ec13741b7c
Without this, the peer fails to do anything over the PeerAPI if it
has a masquerade address.
```
Apr 19 13:58:15 hydrogen tailscaled[6696]: peerapi: invalid request from <ip>:58334: 100.64.0.1/32 not found in self addresses
```
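A sketch of the shape of the fix, with hypothetical names (the real check lives in tailscaled's PeerAPI handler): the per-peer masquerade address has to count as a self address when validating a request's destination.
```
package peerapi

import "net/netip"

// isSelfAddr reports whether dst is one of our own addresses.
// selfAddrs and masqAddr are hypothetical stand-ins for tailscaled's
// real netmap state. Without the masquerade check, a peer that
// reaches us via our masquerade address gets "not found in self
// addresses" and every PeerAPI request fails.
func isSelfAddr(dst netip.Addr, selfAddrs []netip.Prefix, masqAddr netip.Addr) bool {
	for _, p := range selfAddrs { // self addresses are /32 or /128
		if p.Addr() == dst {
			return true
		}
	}
	return masqAddr.IsValid() && masqAddr == dst
}
```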
Updates #8020
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Found this when adding a test that does a ping over PeerAPI.
Our integration tests set up a trafficTrap to ensure that tailscaled
does not call out to the internet, and it does so via an HTTP_PROXY.
When adding a test for pings over PeerAPI, it triggered the trap, and
investigation led to the realization that we were not removing the
Proxy when trying to dial out to the PeerAPI.
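The shape of the fix, sketched with illustrative names (the real dial code is in tailscaled): PeerAPI requests go straight to the peer's Tailscale address, so the environment's proxy settings must not apply.
```
package peerapi

import "net/http"

// newPeerAPITransport returns a transport for PeerAPI requests.
// Unlike http.DefaultTransport, whose Proxy is
// http.ProxyFromEnvironment, it leaves Proxy nil so that HTTP_PROXY
// et al. are ignored and requests dial the peer directly.
func newPeerAPITransport() *http.Transport {
	return &http.Transport{
		Proxy: nil, // never route PeerAPI traffic through a proxy
	}
}
```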
Updates tailscale/corp#8020
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Exposes some internal state of the sockstats package via the C2N and
PeerAPI endpoints, so that it can be used for debugging. For now this
includes the estimated radio on percentage and a second-by-second view
of the times the radio was active.
Also fixes another off-by-one error in the radio on percentage that
was leading to >100% values (if n seconds have passed since we started
to monitor, there may be n + 1 possible seconds where the radio could
have been on).
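The off-by-one, concretely (a toy sketch, not the real sockstats code): monitoring that started n seconds ago spans n+1 distinct seconds, so n+1 is the denominator.
```
package sockstats

// radioOnPercent is a toy version of the percentage calculation.
// If monitoring started n seconds ago, the radio could have been on
// in any of the n+1 seconds from start through now (both inclusive);
// dividing by n alone is what produced readings above 100%.
func radioOnPercent(activeSecs, secsSinceStart int64) int64 {
	return activeSecs * 100 / (secsSinceStart + 1)
}
```
With 5 active seconds and monitoring begun 4 seconds ago, this reports 100% rather than 125%.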
Updates tailscale/corp#9230
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
When splitting the radio monitor usage array, we were splitting at now %
3600 to get values into chronological order. This caused the value for
the final second to be included at the beginning of the ordered slice
rather than the end. If there was activity during that final second, an
extra five seconds of high power usage would get recorded in some cases.
This could result in a final calculation of greater than 100% usage.
This corrects that by splitting values at ((now+1) % 3600).
This also simplifies the percentage calculation by always rounding
values down, which is sufficient for our usage.
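A toy sketch of the split (a 4-slot ring stands in for the real 3600-slot array): the current second lives at slot now % len, so the chronological view must start just after it, at (now+1) % len.
```
package sockstats

// chronological reorders a ring buffer of per-second samples so the
// oldest second comes first. usage[i] holds the sample for each
// second where sec%len(usage) == i; the current second is the newest
// entry, so the slice is split just after it, at (now+1)%len(usage).
// Splitting at now%len(usage) put the current second at the front of
// the result instead of the end.
func chronological(usage []int, now int) []int {
	split := (now + 1) % len(usage)
	out := make([]int, 0, len(usage))
	out = append(out, usage[split:]...)
	return append(out, usage[:split]...)
}
```
For usage = []int{4, 5, 2, 3} (slot i holding the sample written at second sec, where sec%4 == i) and now = 5, this returns [2 3 4 5], with the current second last.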
Signed-off-by: Will Norris <will@tailscale.com>
It's somewhat common (e.g. when a phone has no reception), and leads to
lots of logspam.
Updates #7850
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
This adds an initial and intentionally minimal configuration for
golang-ci, fixes the issues reported, and adds a GitHub Action to check
new pull requests against this linter configuration.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8f38fbc315836a19a094d0d3e986758b9313f163
Exclude traffic with 100.100.100.100 (for IPv4) and
with fd7a:115c:a1e0::53 (for IPv6), since this is traffic with the
Tailscale service running locally on the node.
This traffic never leaves the node.
It also happens to be a high volume amount of traffic since
DNS requests occur over UDP with each request coming from a
unique port, thus resulting in many discrete traffic flows.
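A sketch of the exclusion, with illustrative names (the real change is in the network-logging path):
```
package netlog

import "net/netip"

// tsDNS holds the local Tailscale DNS service addresses; flows
// to or from them never leave the node, so they are not logged.
var tsDNS = []netip.Addr{
	netip.MustParseAddr("100.100.100.100"),    // IPv4
	netip.MustParseAddr("fd7a:115c:a1e0::53"), // IPv6
}

// isLocalDNSFlow reports whether a flow is node-local DNS traffic.
func isLocalDNSFlow(src, dst netip.Addr) bool {
	for _, a := range tsDNS {
		if src == a || dst == a {
			return true
		}
	}
	return false
}
```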
Fixes tailscale/corp#10554
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Redoes the approach from #5550 and #7539 to explicitly pass in the logf
function, instead of having global state that can be overridden.
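The pattern, sketched (types/logger.Logf in the real tree has this shape; the Monitor type here is illustrative):
```
package sockstats

// Logf matches the shape of tailscale.com/types/logger.Logf.
type Logf func(format string, args ...any)

// Monitor receives its logger explicitly at construction time,
// instead of the earlier pattern of a package-level logf variable
// that callers overrode (global, racy, awkward to test).
type Monitor struct {
	logf Logf
}

func NewMonitor(logf Logf) *Monitor {
	return &Monitor{logf: logf}
}
```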
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
TestMonitorMode skips by default, without the --monitor flag, and then
it previously ran forever. This adds an optional --monitor-duration
flag that defaults to zero (run forever) but, if non-zero, bounds how
long the test runs. This means you can then also use e.g. `go test
--cpuprofile` and capture a CPU/mem profile for a minute or two.
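A sketch of the flag wiring (the real test body is elided):
```
package monitor_test

import (
	"flag"
	"testing"
	"time"
)

var (
	monitor         = flag.Bool("monitor", false, "run the monitor test")
	monitorDuration = flag.Duration("monitor-duration", 0, "how long to run; 0 means forever")
)

func TestMonitorMode(t *testing.T) {
	if !*monitor {
		t.Skip("skipping without --monitor")
	}
	var stop <-chan time.Time // nil channel: never fires when duration is 0
	if *monitorDuration > 0 {
		timer := time.NewTimer(*monitorDuration)
		defer timer.Stop()
		stop = timer.C
	}
	for {
		select {
		case <-stop:
			return // bounded run finished; profiles get written
		default:
			time.Sleep(100 * time.Millisecond) // ... handle monitor events ...
		}
	}
}
```
Receiving on the nil stop channel blocks forever, so the zero-duration default keeps the old run-forever behavior without a special case.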
Updates #7621
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This is a continuation of the earlier 2a67beaacf but more aggressive;
this now remembers that we failed to find the "home" router IP so we
don't try again later on the next call.
Updates #7621
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Previously, when updating endpoints we would immediately stop
advertising any endpoint that wasn't discovered during
determineEndpoints. This could result in, for example, a case where we
performed an incremental netcheck, didn't get any of our three STUN
packets back, and then dropped our STUN endpoint from the set of
advertised endpoints... which would result in clients falling back to a
DERP connection until the next call to determineEndpoints.
Instead, let's cache endpoints that we've discovered and continue
reporting them to clients until a timeout expires. In the above case
where we temporarily don't have a discovered STUN endpoint, we would
continue reporting the old value, then re-discover the STUN endpoint
again and continue reporting it as normal, so clients never see a
withdrawal.
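The caching idea, sketched with hypothetical names (the real implementation is in magicsock):
```
package magicsock

import (
	"net/netip"
	"time"
)

// endpointCache remembers recently discovered endpoints so that one
// lossy netcheck (e.g. three lost STUN packets) doesn't withdraw a
// working endpoint from clients.
type endpointCache struct {
	ttl      time.Duration
	lastSeen map[netip.AddrPort]time.Time
}

func newEndpointCache(ttl time.Duration) *endpointCache {
	return &endpointCache{ttl: ttl, lastSeen: make(map[netip.AddrPort]time.Time)}
}

// update records the endpoints from the latest determineEndpoints
// round and returns the full advertised set: everything seen within
// the last ttl, not just this round's discoveries.
func (c *endpointCache) update(now time.Time, discovered []netip.AddrPort) []netip.AddrPort {
	for _, ep := range discovered {
		c.lastSeen[ep] = now
	}
	var out []netip.AddrPort
	for ep, seen := range c.lastSeen {
		if now.Sub(seen) > c.ttl {
			delete(c.lastSeen, ep) // expired: finally withdraw it
			continue
		}
		out = append(out, ep)
	}
	return out
}
```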
Updates tailscale/coral#108
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I42de72e7418ab328a6c732bdefc74549708cf8b9
The comment still said *magicsock.Conn implemented wireguard-go conn.Bind.
That wasn't accurate anymore.
A doc #cleanup.
Change-Id: I7fd003b939497889cc81147bfb937b93e4f6865c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
So we're staying within netip.Addr/AddrPort consistently and
avoiding allocs/conversions to the legacy net addr types.
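For illustration, the difference in a standalone sketch:
```
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	ap := netip.MustParseAddrPort("100.64.0.1:41641")

	// Legacy path: each conversion to *net.UDPAddr heap-allocates.
	legacy := net.UDPAddrFromAddrPort(ap)
	fmt.Println(legacy)

	// Staying in netip: AddrPort is a small comparable value type,
	// so it can be passed around, compared, and used as a map key
	// without allocating.
	fmt.Println(ap.Addr(), ap.Port())
}
```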
Updates #5162
Change-Id: I59feba60d3de39f773e68292d759766bac98c917
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
These tests are passing locally and on CI. They had failed earlier in
the day when first fixing up CI, and it is not immediately clear why. I
have cycled IPv6 support locally, but this should not have a substantial
effect.
Updates #7876
Signed-off-by: James Tucker <jftucker@gmail.com>
This test is not regularly passing on CI, but seems to pass reliably
locally. Needs deeper debugging.
Updates #7876
Signed-off-by: James Tucker <jftucker@gmail.com>
Go artifact caching will help provided that the cache remains small
enough - we can reuse the strategy from the Windows build where we only
cache and pull the zips, but let go(1) do the many-file unpacking,
since it does so faster.
The race matrix was building once without race, then running all the
tests with race, so change the matrix to include a `buildflags`
parameter and use that both in the build and test steps.
Updates #cleanup
Signed-off-by: James Tucker <james@tailscale.com>
This is an exact copy of the files misc/set/set{,_test}.go from
tailscale/corp@a5415daa9c, plus the
license headers.
For use in #7877
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I712d09c6d1a180c6633abe3acf8feb59b27e2866
We accidentally switched to ./tool/go in
4022796484, which resulted in no longer
running Windows builds, as that attempts to run a bash script.
I was unable to quickly fix the various tests that have regressed, so
instead I've added skips referencing #7876, which we need to go back
and fix.
Updates #7262
Updates #7876
Signed-off-by: James Tucker <james@tailscale.com>
The example was missing the "-auth" type in the key prefix, which all new
keys now contain. Also update key ID to match the full key, and fix
indenting of closing braces.
Signed-off-by: Will Norris <will@tailscale.com>
To get the tree green again for other people.
Updates #7866
Change-Id: Ibdad2e1408e5f0c97e49a148bfd77aad17c2c5e5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This also adds a bunch of tests for this function to ensure that we're
returning the proper IP(s) in all cases.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I0d9d57170dbab5f2bf07abdf78ecd17e0e635399
This makes `omitempty` actually work, and saves bytes in each map response.
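For background (a standalone illustration, not the actual tailcfg change): encoding/json only honors omitempty for types that have an empty value. A struct value never does, so such fields must become pointers (or another nil-able type) for omitempty to take effect.
```
package main

import (
	"encoding/json"
	"fmt"
)

type resp struct {
	Broken struct{ X int }  `json:",omitempty"` // struct value: always emitted
	Fixed  *struct{ X int } `json:",omitempty"` // pointer: omitted when nil
}

func main() {
	b, _ := json.Marshal(resp{})
	fmt.Println(string(b)) // {"Broken":{"X":0}}
}
```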
Updates tailscale/corp#8020
Signed-off-by: Maisem Ali <maisem@tailscale.com>
At the current unoptimized memory utilization of the various data structures,
100k IPv6 routes consumes in the ballpark of 3-4GiB, which risks OOMing our
386 test machine.
Until we have the optimizations to (drastically) reduce that consumption,
skip the test that bloats too much for 32-bit machines.
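The guard, sketched (test name hypothetical):
```
package router_test

import (
	"math/bits"
	"testing"
)

func TestManyRoutes(t *testing.T) {
	if bits.UintSize == 32 {
		// 100k IPv6 routes need ~3-4GiB at current memory use,
		// more than a 32-bit address space can reliably provide.
		t.Skip("skipping memory-hungry test on 32-bit platforms")
	}
	// ... insert 100k IPv6 routes ...
}
```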
Signed-off-by: David Anderson <danderson@tailscale.com>
Using log.Printf may end up being printed out to the console, which
is not desirable. I noticed this when I was investigating some client
logs with `sockstats: trace "NetcheckClient" was overwritten by another`.
That turns out to be harmless/expected (the netcheck client will fall back
to the DERP client in some cases, which does its own sockstats trace).
However, the log output could be visible to users if running the
`tailscale netcheck` CLI command, which would be needlessly confusing.
Updates tailscale/corp#9230
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
We use it to gate code that depends on the custom Go toolchain, but
it's currently only passed in the corp runners. Set it on the OSS
runners too so that we can catch regressions earlier.
To specifically test sockstats, this required adding a build tag to
explicitly enable them -- they're normally on for iOS, macOS, and
Android only, and we don't normally run tests on those platforms.
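The gating pattern looks roughly like the following; `ts_force_sockstats` is a placeholder, not the real tag spelling, and `tailscale_go` is assumed to be the tag set by the custom toolchain:
```
// sockstats_enabled.go
//go:build tailscale_go && (darwin || ios || android || ts_force_sockstats)

package sockstats

const enabled = true

// A sockstats_disabled.go twin carries the negated constraint and
// sets enabled = false, so the package builds everywhere.
```
Tests can then opt in with something like `go test -tags=tailscale_go,ts_force_sockstats`.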
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
It's used to control various opt-in functionality for the macOS and iOS
apps, and was lost in the migration to gocross.
Updates tailscale/tailscale#7769
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
This splits Prometheus metric handlers exposed by tsweb into two
modules:
- `varz.Handler` exposes Prometheus metrics generated by our expvar
converter;
- `promvarz.Handler` combines our expvar-converted metrics and native
Prometheus metrics.
By default, tsweb will use the promvarz handler, however users can keep
using only the expvar converter. Specifically, `tailscaled` now uses
`varz.Handler` explicitly, which avoids a dependency on the
(heavyweight) Prometheus client.
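A sketch of choosing between the two (assuming both handlers are http.HandlerFunc-shaped, as the usage above implies):
```
package main

import (
	"log"
	"net/http"

	"tailscale.com/tsweb/varz"
)

func main() {
	mux := http.NewServeMux()
	// Expvar-converted metrics only, as tailscaled now does,
	// avoiding a dependency on the heavyweight Prometheus client.
	// Swapping in promvarz.Handler would additionally serve native
	// Prometheus metrics from the same endpoint.
	mux.HandleFunc("/debug/varz", varz.Handler)
	log.Fatal(http.ListenAndServe("localhost:8080", mux))
}
```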
Updates https://github.com/tailscale/corp/issues/10205
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
This provides an example of using native Prometheus metrics with tsweb.
The prober library seems to be the only user of PrometheusVar, so I am
removing support for it in tsweb.
Updates https://github.com/tailscale/corp/issues/10205
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
The handler will expose built-in process and Go metrics by default,
which currently duplicate some of the expvar-proxied metrics
(`goroutines` vs `go_goroutines`, `memstats` vs `go_memstats`), but as
long as their names are different, Prometheus server will just scrape
both.
This will change /debug/varz behaviour for most tsweb binaries, but
notably not for control, which configures a `tsweb.VarzHandler`
explicitly (see cmd/tailcontrol/tailcontrol.go at a5b5d5167f).
Updates https://github.com/tailscale/corp/issues/10205
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
Otherwise there may be a panic if it's nil (and the control side of
the c2n call will just time out).
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Makes it more apparent in the PeerAPI endpoint that the client was
not built with the appropriate toolchain or build tags.
Updates tailscale/corp#9230
Signed-off-by: Mihai Parparita <mihai@tailscale.com>