This pulls the clock and forceNoise443 code out into methods on the
Dialer, as a cleanup in its own commit, to make a future change less
distracting.
Updates #13597
Change-Id: I7001e57fe7b508605930c5b141a061b6fb908733
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In prep for a future port 80 MITM fix, make the 'debug ts2021' command
retry once after a failure to give it a chance to pick a new strategy.
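In shape, the retry is just a second attempt; a sketch with a
hypothetical runTS2021 helper, not the actual code:
```
// runWithRetry runs the command once and, on failure, tries exactly one
// more time so the dialer can pick a new strategy. runTS2021 is a
// hypothetical stand-in for the command's existing run function.
func runWithRetry(ctx context.Context, runTS2021 func(context.Context) error) error {
	err := runTS2021(ctx)
	if err != nil {
		// A second attempt lets the dialer choose a different
		// strategy after observing the first failure.
		err = runTS2021(ctx)
	}
	return err
}
```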
Updates #13597
Change-Id: Icb7bad60cbf0dbec78097df4a00e9795757bc8e4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Add logic to set environment variables that match the SSH rule's
`acceptEnv` settings in the SSH session's environment.
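A rough sketch of the filtering step (function and helper names are
hypothetical; matchAcceptEnv is assumed to implement the acceptEnv
wildcard semantics):
```
// filterEnv keeps only the client-requested "NAME=value" variables
// whose names match one of the rule's acceptEnv patterns.
func filterEnv(acceptEnv, requested []string) []string {
	var out []string
	for _, kv := range requested {
		name, _, ok := strings.Cut(kv, "=")
		if !ok {
			continue
		}
		for _, pattern := range acceptEnv {
			if matchAcceptEnv(pattern, name) {
				out = append(out, kv)
				break
			}
		}
	}
	return out
}
```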
Updates https://github.com/tailscale/corp/issues/22775
Signed-off-by: Mario Minardi <mario@tailscale.com>
* cmd/containerboot,kube,util/linuxfw: configure kube egress proxies to route to 1+ tailnet targets
This commit is the first part of the work to allow running multiple
replicas of the Kubernetes operator egress proxies per tailnet service,
and to allow exposing multiple tailnet services via each proxy replica.
This expands the existing iptables/nftables-based proxy configuration
mechanism.
A proxy can now be configured to route to one or more tailnet targets
via a (mounted) config file that, for each tailnet target, specifies:
- the target's tailnet IP or FQDN
- mappings from the container ports on which cluster workloads will send
traffic to the tailnet target ports where that traffic should be forwarded.
Example configfile contents:
{
  "some-svc": {
    "tailnetTarget": {
      "fqdn": "foo.tailnetxyz.ts.net",
      "ports": {
        "tcp:4006:80": {"protocol": "tcp", "matchPort": 4006, "targetPort": 80},
        "tcp:4007:443": {"protocol": "tcp", "matchPort": 4007, "targetPort": 443}
      }
    }
  }
}
A proxy that is configured with this config file will configure firewall rules
to route cluster traffic to the tailnet targets. It will then watch the config file
for updates as well as monitor relevant netmap updates and reconfigure firewall
as needed.
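As an illustration, the config could be parsed into Go types shaped like
this (struct names are assumptions; field names mirror the JSON above):
```
// Config maps a service name to its egress configuration.
type Config map[string]Service

type Service struct {
	TailnetTarget TailnetTarget `json:"tailnetTarget"`
}

type TailnetTarget struct {
	// Exactly one of IP or FQDN identifies the tailnet target.
	IP   string `json:"ip,omitempty"`
	FQDN string `json:"fqdn,omitempty"`
	// Ports is keyed by "<protocol>:<matchPort>:<targetPort>".
	Ports map[string]PortMap `json:"ports"`
}

type PortMap struct {
	Protocol   string `json:"protocol"`
	MatchPort  uint16 `json:"matchPort"`
	TargetPort uint16 `json:"targetPort"`
}
```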
This adds a bunch of new iptables/nftables functionality to make it
easier to dynamically update the firewall rules without restarting the
proxy Pod, and to make the rules easier to debug and understand:
- for iptables, each portmapping is a DNAT rule with a comment pointing
at the 'service', i.e.:
-A PREROUTING ! -i tailscale0 -p tcp -m tcp --dport 4006 -m comment --comment "some-svc:tcp:4006 -> tcp:80" -j DNAT --to-destination 100.64.1.18:80
Additionally, there is a SNAT rule for each tailnet target to mask the source address.
- for nftables, a separate prerouting chain is created for each tailnet target
and all the portmapping rules are placed in that chain. This makes it easier
to look up rules and delete services when no longer needed.
(nftables allows hooking a custom chain to a prerouting hook, so no extra work
is needed to ensure that the rules in the service chains are evaluated).
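As an illustration only (chain name and addresses invented), the
resulting nftables layout for one target might render like:
```
table ip nat {
  chain ts-egress-foo-tailnetxyz {
    type nat hook prerouting priority dstnat; policy accept;
    tcp dport 4006 dnat to 100.64.1.18:80 comment "some-svc:tcp:4006 -> tcp:80"
    tcp dport 4007 dnat to 100.64.1.18:443 comment "some-svc:tcp:4007 -> tcp:443"
  }
}
```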
The next steps will be to get the Kubernetes Operator to generate
the configfile and ensure it is mounted to the relevant proxy nodes.
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The operator creates a non-reusable auth key for each of the cluster
proxies that it creates and puts the key in the tailscaled configfile
mounted to the proxies.
The proxies are always tagged and their state is persisted in a
Kubernetes Secret, so their node keys are never expected to be
regenerated and they should not need to re-authenticate.
However, some tailnet configurations have seen issues where auth keys
left in the tailscaled configfile cause the proxies to end up in an
unauthorized state after a later restart.
We have not yet found a way to reproduce this issue; nevertheless, this
commit removes the auth key from the config once the proxy can be
assumed to have logged in.
If an existing, logged-in proxy is upgraded to this version,
its redundant auth key will be removed from the conffile.
If an existing, logged-in proxy is downgraded from this version to a
previous one, it will keep working without a key being re-issued, as the
previous code did not enforce that a key must be present.
Updates tailscale/tailscale#13451
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The ProxyGroup CRD specifies a set of N pods which will each be a
tailnet device, and will have M different ingress or egress services
mapped onto them. It is the mechanism for specifying how highly
available proxies need to be. This commit only adds the definition, no
controller loop, and so it is not currently functional.
This commit also splits out TailnetDevice and RecorderTailnetDevice
into separate structs because the URL field is specific to recorders,
but we want a more generic struct for use in the ProxyGroup status field.
Updates #13406
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Updates tailscale/tailscale#1634
Logs from some iOS users indicate that we're pointlessly performing captive portal detection on certain interfaces named ipsec*. These are tunnels with the cellular carrier that do not offer Internet access, and are only used to provide internet calling functionality (VoLTE / VoWiFi).
```
attempting to do captive portal detection on interface ipsec1
attempting to do captive portal detection on interface ipsec6
```
This PR excludes interfaces with the `ipsec` prefix from captive portal detection.
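The filter itself is tiny; a sketch of the check (function name
hypothetical):
```
// shouldCheckInterface reports whether captive portal detection should
// run on the named interface.
func shouldCheckInterface(name string) bool {
	// ipsec* interfaces are carrier tunnels (VoLTE/VoWiFi) with no
	// Internet access, so probing them is pointless.
	return !strings.HasPrefix(name, "ipsec")
}
```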
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
Add logic for parsing and matching against our planned format for
AcceptEnv values. Namely, this supports direct matches against string
values and matching where * and ? are treated as wildcard characters
that match an arbitrary number of characters and a single character,
respectively.
Actually using this logic in non-test code will come in subsequent
changes.
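A minimal sketch of those semantics (not the shipped implementation):
```
// matchAcceptEnv reports whether name matches pattern, where '*'
// matches any run of characters (including none) and '?' matches
// exactly one character.
func matchAcceptEnv(pattern, name string) bool {
	if pattern == "" {
		return name == ""
	}
	switch pattern[0] {
	case '*':
		// '*' may consume zero or more characters; try every split.
		for i := 0; i <= len(name); i++ {
			if matchAcceptEnv(pattern[1:], name[i:]) {
				return true
			}
		}
		return false
	case '?':
		// '?' consumes exactly one character.
		return name != "" && matchAcceptEnv(pattern[1:], name[1:])
	default:
		// Literal characters must match exactly.
		return name != "" && name[0] == pattern[0] &&
			matchAcceptEnv(pattern[1:], name[1:])
	}
}
```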
Updates https://github.com/tailscale/corp/issues/22775
Signed-off-by: Mario Minardi <mario@tailscale.com>
Like Linux, macOS will reply to sendto(2) with EPERM if the firewall is
currently blocking writes, though as on Linux this behavior is
undocumented. This is often caused by a faulting network extension or
content filter from EDR software.
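A sketch of the tolerant write path (the actual patch may differ):
```
// writeUDP treats a transient EPERM from sendto(2) as a dropped packet
// rather than a fatal error, mirroring the existing Linux handling.
func writeUDP(conn *net.UDPConn, pkt []byte, addr *net.UDPAddr) error {
	_, err := conn.WriteToUDP(pkt, addr)
	if errors.Is(err, syscall.EPERM) {
		// The firewall (often an EDR network extension or content
		// filter) is momentarily blocking writes; drop and move on.
		return nil
	}
	return err
}
```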
Updates #11710
Updates #12891
Updates #13511
Signed-off-by: James Tucker <james@tailscale.com>
This breaks its ability to be used as an expvar and is blocking a trunkd
deploy. Revert for now, and add a test to ensure that we don't break it
in a future change.
Updates #13550
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I1f1221c257c1de47b4bff0597c12f8530736116d
When querying for an exit node suggestion, we occasionally trigger a
new report concurrently with an existing report already in progress.
Generally, there should always be a recent report or one in progress, so
starting another there is redundant, and it causes concurrency issues.
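Conceptually, the fix reuses what is already available instead of
starting a fresh report; a sketch with hypothetical names:
```
// getReport returns a recent report if one exists, otherwise waits for
// the report already in flight rather than starting a concurrent one.
// reportCache, Report, and maxAge are illustrative names only.
func getReport(ctx context.Context, c *reportCache) (*Report, error) {
	if rep, when := c.last(); rep != nil && time.Since(when) < maxAge {
		return rep, nil // recent enough: no new probe needed
	}
	return c.waitInFlight(ctx)
}
```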
Fixes #12643
Change-Id: I66ab9003972f673e5d4416f40eccd7c6676272a5
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
This commit changes usermetrics to be non-global; this is a building
block for correct metrics when a Go process runs multiple tsnets or
runs in tests.
Updates #13420
Updates tailscale/corp#22075
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
So it doesn't delete and re-pull when switching between branches.
Updates tailscale/corp#17686
Change-Id: Iffb989781db42fcd673c5f03dbd0ce95972ede0f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Add separate builds for DSM7.2 for Synology so that we can encode
separate versioning information in the INFO file to distinguish between
the two.
Fixes https://github.com/tailscale/corp/issues/22908
Signed-off-by: Mario Minardi <mario@tailscale.com>
Updates tailscale/tailscale#13326
Adds a CLI subcommand to perform DNS queries using the internal DNS forwarder and observe its internals (namely, which upstream resolvers are being used).
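Example invocation (exact subcommand name and flags may differ from the
final CLI):
```
$ tailscale dns query example.com A
```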
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
Pin re-actors/alls-green usage to latest 1.x. This was previously
pointing to `@release/v2` which pulls in the latest changes from this
branch as they are released, with the potential to break our workflows
if a breaking change or malicious version on this stream is ever pushed.
Changing this to a pinned version also means that dependabot will keep
this in the pinned version format (e.g., referencing a SHA) when it
opens a PR to bump the dependency.
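The pinned form looks like this (the SHA below is a placeholder, not the
real pin):
```
steps:
  - uses: re-actors/alls-green@0000000000000000000000000000000000000000 # vX.Y.Z
    with:
      jobs: ${{ toJSON(needs) }}
```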
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Update and pin actions/upload-artifact usage to latest 4.x. These were
previously pointing to @3 which pulls in the latest v3 as they are
released, with the potential to break our workflows if a breaking change
or malicious version on the @3 stream is ever pushed.
Changing this to a pinned version also means that dependabot will keep
this in the pinned version format (e.g., referencing a SHA) when it
opens a PR to bump the dependency.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Update and pin actions/cache usage to latest 4.x. These were previously
pointing to `@3` which pulls in the latest v3 as they are released, with
the potential to break our workflows if a breaking change or malicious
version on the `@3` stream is ever pushed.
Changing this to a pinned version also means that dependabot will keep
this in the pinned version format (e.g., referencing a SHA) when it
opens a PR to bump the dependency.
The breaking change between v3 and v4 is that v4 requires Node 20 which
should be a non-issue where this is run.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Use slackapi/slack-github-action across the board and pin to latest 1.x.
Previously we were referencing the 1.27.0 tag directly, which is
vulnerable to someone replacing that version tag with malicious code.
Replace usage of ruby/action-slack with slackapi/slack-github-action as
the latter is the officially supported action from slack.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Pin codeql actions usage to latest 3.x. These were previously pointing
to `@2` which pulls in the latest v2 as they are released, with the
potential to break our workflows if a breaking change or malicious
version on the `@2` stream is ever pushed.
Changing this to a pinned version also means that dependabot will keep
this in the pinned version format (e.g., referencing a SHA) when it
opens a PR to bump the dependency.
The breaking change between v2 and v3 is that v3 requires Node 20 which
is a non-issue as we are running this on ubuntu latest.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Pin actions/checkout usage to latest 5.x. These were previously pointing
to `@4` which pulls in the latest v4 as they are released, with the
potential to break our workflows if a breaking change or malicious
version on the `@4` stream is ever pushed.
Changing this to a pinned version also means that dependabot will keep
this in the pinned version format (e.g., referencing a SHA) when it
opens a PR to bump the dependency.
The breaking change between v4 and v5 is that v5 requires Node 20 which
should be a non-issue where it is used.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Pin actions/checkout usage to latest 3.x or 4.x as appropriate. These
were previously pointing to `@4` or `@3` which pull in the latest
versions at these tags as they are released, with the potential to break
our workflows if a breaking change or malicious version for either of
these streams is ever released.
Changing this to a pinned version also means that dependabot will keep
this in the pinned version format (e.g., referencing a SHA) when it
opens a PR to bump the dependency.
Updates #cleanup
Signed-off-by: Mario Minardi <mario@tailscale.com>
Add an `AcceptEnv` field to `SSHRule`. This will contain the collection
of environment variable names / patterns that are specified in the
`acceptEnv` block for the SSH rule within the policy file. This will be
used in the tailscale client to filter out unacceptable environment
variables.
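Sketched as a struct field (surrounding fields elided; JSON tag
assumed):
```
type SSHRule struct {
	// ... existing fields elided ...

	// AcceptEnv is the list of environment variable names and
	// patterns from the rule's acceptEnv block; '*' and '?' act as
	// wildcards.
	AcceptEnv []string `json:"acceptEnv,omitempty"`
}
```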
Updates https://github.com/tailscale/corp/issues/22775
Signed-off-by: Mario Minardi <mario@tailscale.com>
Update go.toolchain.rev for https://github.com/tailscale/go/pull/104 and
add a test that, when using the tailscale_go build tag, we use the
right Go toolchain.
We'll crank up the strictness in later commits.
Updates #13527
Change-Id: Ifb09a844858be2beb144a420e4e9dbdc5c03ae3a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
containerboot's main.go had grown to well over 1000 lines with
lots of disparate bits of functionality. This commit is pure copy-
paste to group related functionality outside of the main function
into its own set of files. Everything is still in the main package
to keep the diff incremental and reviewable.
Updates #cleanup
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
mdnsResponder at least as of macOS Sequoia does not find NXDOMAIN
responses to these dns-sd PTR queries acceptable unless they include the
question section in the response. This was found debugging #13511, once
we turned on additional diagnostic reporting from mdnsResponder we
witnessed:
```
Received unacceptable 12-byte response from 100.100.100.100 over UDP via utun6/27 -- id: 0x7F41 (32577), flags: 0x8183 (R/Query, RD, RA, NXDomain), counts: 0/0/0/0,
```
If the response includes a question section, the responses are
acceptable, e.g.:
```
Received acceptable 59-byte response from 8.8.8.8 over UDP via en0/17 -- id: 0x2E55 (11861), flags: 0x8183 (R/Query, RD, RA, NXDomain), counts: 1/0/0/0,
```
This may be contributing to an issue under diagnosis in #13511 wherein
some combination of conditions results in mdnsResponder no longer
answering DNS queries correctly to applications on the system for
extended periods of time (multiple minutes), while dig against quad-100
provides correct responses for those same domains. If additional debug
logging is enabled in mdnsResponder we see it reporting:
```
Penalizing server 100.100.100.100 for 60 seconds
```
It is also possible that the reason that macOS & iOS never "stopped
spamming" these queries is that they have never been replied to with
acceptable responses. It is not clear if this special case handling of
dns-sd PTR queries was ever beneficial, and given this evidence may have
always been harmful. If we subsequently observe that the queries settle
down now that they have acceptable responses, we should remove these
special cases - making upstream queries very occasionally costs little
battery, so we would be better off maintaining fewer special cases and
avoiding bugs of this class.
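For reference, an NXDOMAIN response that echoes the question can be
built with golang.org/x/net/dns/dnsmessage roughly like this (a sketch,
not the actual patch):
```
// nxdomainWithQuestion builds an NXDOMAIN response that echoes the
// original question, producing counts of 1/0/0/0 as mdnsResponder wants.
func nxdomainWithQuestion(q dnsmessage.Question, id uint16) ([]byte, error) {
	b := dnsmessage.NewBuilder(nil, dnsmessage.Header{
		ID:                 id,
		Response:           true,
		RecursionDesired:   true,
		RecursionAvailable: true,
		RCode:              dnsmessage.RCodeNameError, // NXDOMAIN
	})
	b.EnableCompression()
	if err := b.StartQuestions(); err != nil {
		return nil, err
	}
	if err := b.Question(q); err != nil {
		return nil, err
	}
	return b.Finish()
}
```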
Updates #2442
Updates #3025
Updates #3363
Updates #3594
Updates #13511
Signed-off-by: James Tucker <james@tailscale.com>
Updates tailscale/tailscale#13452
Bump the Go toolchain to the latest to pick up changes required to not crash on Android 9/10.
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
In prep for upcoming flow tracking & mutex contention optimization
changes, this change refactors (subjectively simplifying) how the DERP
Server accounts for which peers have written to which other peers, to
be able to send PeerGoneReasonDisconnected messages to writers to
uncache their DRPO (DERP Return Path Optimization) routes.
Notably, this removes the Server.sentTo field which was guarded by
Server.mu and checked on all packet sends. Instead, the accounting is
moved to each sclient's sendLoop goroutine and now only needs to
acquire Server.mu for newly seen senders, the first time a peer sends
a packet to that sclient.
This change reduces the number of reasons to acquire Server.mu
per-packet from two to one. Removing the last one is the subject of an
upcoming change.
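In shape, the per-sclient accounting looks something like this (names
hypothetical):
```
// sclient-side tracking of senders, owned by the sendLoop goroutine,
// so reads and writes here need no lock.
type sclient struct {
	s        *Server
	seenFrom map[key.NodePublic]bool // senders this client has seen
}

// noteSender is called from the sendLoop for each received packet.
// Server.mu is only taken the first time a given sender appears.
func (c *sclient) noteSender(src key.NodePublic) {
	if c.seenFrom[src] {
		return // common case: no lock needed
	}
	c.seenFrom[src] = true
	c.s.registerSender(src, c) // acquires Server.mu internally
}
```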
Updates #3560
Updates #150
Change-Id: Id226216d6629d61254b6bfd532887534ac38586c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>