This appears to be one of the causes of this CI target regularly
entering a bad state with a partially written toolchain.
Updates #self
Signed-off-by: James Tucker <james@tailscale.com>
Plan9 CI is disabled. Third-party dependencies do not build for the target.
Contributor enthusiasm appears to have ceased again, and the port has seen
no usage.
Skipped gvisor, nfpm, and k8s.
Updates #5794
Updates #8043
Signed-off-by: James Tucker <james@tailscale.com>
* cmd/k8s-operator: generate static manifests from Helm charts
This is done to ensure that there is a single source of truth
for the operator kube manifests.
Also adds a linux node selector to the static manifests, as
this was added as a default to the Helm chart.
Static manifests can now be generated by running
`go generate tailscale.com/cmd/k8s-operator`.
Updates tailscale/tailscale#9222
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This records test coverage for the amd64 no race tests and uploads the
results to coveralls.io.
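A rough sketch of the kind of steps involved (the specific uploader tool and
flags here are assumptions for illustration, not necessarily what this change
uses):
```
- name: test with coverage
  run: go test -coverprofile=coverage.out ./...
- name: upload coverage to coveralls.io
  # mattn/goveralls is one common uploader; treat this choice as an assumption.
  run: go run github.com/mattn/goveralls@latest -coverprofile=coverage.out -service=github
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```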
Updates #cleanup
Signed-off-by: Ox Cart <ox.to.a.cart@gmail.com>
The branch name selector "*" doesn't match branches with a "/" in their
name. The vast majority of our PRs are against the main (or previously,
master) branch anyway, so this will have minimal impact. But in the rare
cases that we want to open a PR against a branch with a "/" in the name,
tests should still run.
```
gh pr list --limit 9999 --state all --json baseRefName | \
jq -cs '.[] | group_by(.baseRefName) |
map({ base: .[0].baseRefName, count: map(.baseRefName) | length}) |
sort_by(-.count) | .[]'
{"base":"main","count":4593}
{"base":"master","count":226}
{"base":"release-branch/1.48","count":4}
{"base":"josh-and-adrian-io_uring","count":3}
{"base":"release-branch/1.30","count":3}
{"base":"release-branch/1.32","count":3}
{"base":"release-branch/1.20","count":2}
{"base":"release-branch/1.26","count":2}
{"base":"release-branch/1.34","count":2}
{"base":"release-branch/1.38","count":2}
{"base":"Aadi/speedtest-tailscaled","count":1}
{"base":"josh/io_uring","count":1}
{"base":"maisem/hi","count":1}
{"base":"rel-144","count":1}
{"base":"release-branch/1.18","count":1}
{"base":"release-branch/1.2","count":1}
{"base":"release-branch/1.22","count":1}
{"base":"release-branch/1.24","count":1}
{"base":"release-branch/1.4","count":1}
{"base":"release-branch/1.46","count":1}
{"base":"release-branch/1.8","count":1}
{"base":"web-client-main","count":1}
```
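For reference, a minimal sketch of the kind of branch filter this affects (the
pattern below is illustrative; per GitHub's glob rules, "*" does not match "/"
while "**" does):
```
on:
  pull_request:
    branches:
      - "**"   # matches main, master, and release-branch/1.48 alike
```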
Updates #cleanup
Signed-off-by: Will Norris <will@tailscale.com>
Previously we were just smushing together args and not trying
to parse the values at all. This resulted in the args to testwrapper
being limited and confusing.
This makes it so that testwrapper parses flags in the same format as the `go test`
command and passes them down in the provided order. It uses testing.Init to
register the flags that `go test` understands; however, those are not the only
flags `go test` accepts (such as `-exec`), so we register those separately.
Updates tailscale/corp#14975
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Move the compilation of everything to its own job too, separate
from test execution.
Updates #7894
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
They're slow. Make them their own job that can run in parallel.
Also, only run them in race mode. No need to run them on 386
or non-race amd64.
Updates #7894
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Drops time by several minutes.
Also, on top of that: skip building variant CLIs on the race builder
(29s) and skip fetching qemu (15s).
Updates #9182
Change-Id: I979e02ab8c0daeebf5200459c9e4458a1f62f728
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I'm not saying it works, but it compiles.
Updates #5794
Change-Id: I2f3c99732e67fe57a05edb25b758d083417f083e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Redo the testwrapper to track and retry only flaky tests instead
of retrying the entire package. It also fails early if a non-flaky test fails.
This also makes it so that the go test caches are used.
Fixes #7975
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The action cache restore process either matches the restore key pattern
exactly, or uses a matching prefix with the most recent date.
If the restore key is an exact match, then no updates are uploaded, but
if we've just computed test executions for more recent code then we
will likely want to use those results in future runs.
Appending run_id to the cache key gives us an always-new key, and
then we will restore a recently uploaded cache that more likely has
high overlap with the code being tested.
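A minimal sketch of that pattern (the cache path and key prefix are
illustrative placeholders, not the exact ones used here):
```
- uses: actions/cache@v3
  with:
    path: ~/.cache/test-results        # hypothetical path
    # run_id makes the key unique per run, so an updated cache is always saved,
    # and restore-keys prefix-matches the most recently created prior entry.
    key: test-results-${{ github.run_id }}
    restore-keys: test-results-
```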
Updates #7975
Signed-off-by: James Tucker <james@tailscale.com>
Benchmark flags prevent test caching, so benchmarks are now executed
independently of tests.
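A sketch of what the split can look like (flags are illustrative):
```
- name: test
  run: go test ./...                                   # no -bench flags, so results stay cacheable
- name: bench
  run: go test -run=^$ -bench=. -benchtime=1x ./...    # benchmarks only, run separately
```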
Fixes #7975
Signed-off-by: James Tucker <james@tailscale.com>
Go artifact caching will help provided that the cache remains small
enough; we can reuse the strategy from the Windows build where we only
cache and pull the zips, but let go(1) do the many-file unpacking, as it
does so faster.
The race matrix was building once without race, then running all the
tests with race, so change the matrix to include a `buildflags`
parameter and use that in both the build and test steps.
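Roughly the shape of that matrix, sketched with illustrative values:
```
strategy:
  matrix:
    include:
      - goarch: amd64
        buildflags: ""
      - goarch: amd64
        buildflags: "-race"
steps:
  - uses: actions/cache@v3
    with:
      # cache only the module zips and let go(1) do the unpacking
      path: ~/go/pkg/mod/cache/download
      key: gomod-${{ hashFiles('go.sum') }}
  - name: build
    run: go build ${{ matrix.buildflags }} ./...
  - name: test
    run: go test ${{ matrix.buildflags }} ./...
```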
Updates #cleanup
Signed-off-by: James Tucker <james@tailscale.com>
We accidentally switched to ./tool/go in
4022796484 which resulted in no longer
running Windows builds, as this is attempting to run a bash script.
I was unable to quickly fix the various tests that have regressed, so
instead I've added skips referencing #7876, which we need to go back
and fix.
Updates #7262
Updates #7876
Signed-off-by: James Tucker <james@tailscale.com>
We use it to gate code that depends on the custom Go toolchain, but it's
currently only passed in the corp runners. Set it on the OSS runners too so
that we can catch regressions earlier.
To specifically test sockstats, this required adding a build tag to
explicitly enable them -- they're normally on only for iOS, macOS and
Android, and we don't normally run tests on those platforms.
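As a hedged sketch of what passing the tags in CI can look like (the tag names
below are assumptions for illustration, not necessarily the real ones):
```
- name: test
  # one tag gates custom-toolchain code, the other force-enables sockstats
  run: go test -tags=tailscale_go,ts_enable_sockstats ./...
```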
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
GitHub requires explicitly listing every single job within a workflow
that is required for status checks, instead of letting you list entire
workflows. This is ludicrous, and apparently this nonsense is the
workaround.
Signed-off-by: David Anderson <danderson@tailscale.com>
armv5 because that's what we ship to most downstreams right now,
armv7 because that's what we want to ship more of.
Fixes https://github.com/tailscale/tailscale/issues/7269
Signed-off-by: David Anderson <danderson@tailscale.com>
CI status doesn't collapse into "everything OK" if a job gets
skipped. Instead, always run the job, but skip its only step in PRs.
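The pattern looks roughly like this (job and step names are illustrative):
```
jobs:
  some-check:                  # hypothetical job name
    runs-on: ubuntu-latest
    steps:
      - name: do the work
        # the job always runs and reports a status, but the step is skipped on PRs
        if: github.event_name != 'pull_request'
        run: ./run-check.sh    # placeholder command
```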
Signed-off-by: David Anderson <danderson@tailscale.com>
Replaces the former shell goop, which was a shell reimplementation
of a subset of version/mkversion.
Signed-off-by: David Anderson <danderson@tailscale.com>
OSS-Fuzz doesn't update their version of Go as quickly as we do, so
we sometimes end up with OSS-Fuzz being unable to build our code for
a few weeks. We don't want CI to be red for that entire time, but
we also don't want to forget to reenable fuzzing when OSS-Fuzz does
start working again.
This change makes two configurations worthy of a CI pass:
- Fuzzing works, and we expected it to work. This is a normal
happy state.
- Fuzzing didn't compile, and we expected it to not compile. This
is the "OSS-Fuzz temporarily broken" state.
If fuzzing is unexpectedly broken, or unexpectedly not broken, that's
a CI failure because we need to either address a fuzz finding, or
update TS_FUZZ_CURRENTLY_BROKEN to reflect the state of OSS-Fuzz.
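Sketched as workflow steps (names and commands are illustrative; the outcome
checks mirror the expected/actual logic above):
```
- name: build fuzzers
  id: build
  continue-on-error: true
  run: ./build-oss-fuzz.sh     # placeholder for the OSS-Fuzz build
- name: fail if fuzzing is unexpectedly broken
  if: steps.build.outcome == 'failure' && env.TS_FUZZ_CURRENTLY_BROKEN != 'true'
  run: exit 1
- name: fail if fuzzing unexpectedly works again
  if: steps.build.outcome == 'success' && env.TS_FUZZ_CURRENTLY_BROKEN == 'true'
  run: |
    echo "OSS-Fuzz is building again; set TS_FUZZ_CURRENTLY_BROKEN back to false"
    exit 1
```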
Signed-off-by: David Anderson <danderson@tailscale.com>
GitHub's matrix runner formats the race variant as '(amd64, true)' if we
use race=true. So, change the way the variable is defined so that it says
'(amd64, race)', even if that makes the if statements a bit more complex.
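In other words, something along these lines (a sketch with illustrative values,
not the exact matrix):
```
strategy:
  matrix:
    include:
      - goarch: amd64              # shows up as "(amd64)"
      - goarch: amd64
        race: "race"               # shows up as "(amd64, race)" instead of "(amd64, true)"
steps:
  - name: test (race)
    if: matrix.race == 'race'
    run: go test -race ./...
  - name: test
    if: matrix.race != 'race'
    run: go test ./...
```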
Signed-off-by: David Anderson <danderson@tailscale.com>
Instead of having a dozen files that contribute CI steps with
inconsistent configs, this one file lists out everything that,
for us, constitutes "a CI run". It also enables the Slack
notification webhook to notify us exactly once on a mass breakage,
rather than once for every sub-job that fails.
Signed-off-by: David Anderson <danderson@tailscale.com>