* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name
Adds support for optionally configuring containerboot to proxy
traffic to backends specified by DNS name, via the
TS_EXPERIMENTAL_DEST_DNS_NAME env var.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses
(see the sketch below).
Currently:
- if the firewall mode is iptables, traffic will be load balanced
  across the backend IP addresses using round robin. There are
  no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
  to the first IP address in the list. This is to be improved.
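A minimal sketch of the resolution loop, with the rule-updating callback left abstract (the helper name and signature here are illustrative, not the actual containerboot code):
```go
package main

import (
	"context"
	"log"
	"net"
	"time"
)

// watchDestDNSName periodically re-resolves fqdn and hands the resolved
// IPs to update, which is expected to (re)install the forwarding rules
// for traffic arriving on the node's tailnet IP.
func watchDestDNSName(ctx context.Context, fqdn string, update func([]net.IP) error) {
	ticker := time.NewTicker(10 * time.Minute)
	defer ticker.Stop()
	for {
		addrs, err := net.DefaultResolver.LookupIP(ctx, "ip", fqdn)
		if err != nil {
			log.Printf("resolving %q: %v", fqdn, err)
		} else if err := update(addrs); err != nil {
			log.Printf("updating forwarding rules: %v", err)
		}
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}
	}
}
```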
* cmd/k8s-operator: support ExternalName Services
Adds support for exposing endpoints that are accessible from within
a cluster to the tailnet via DNS names, using ExternalName Services.
This can be done by annotating the ExternalName Service with
the tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.
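A rough sketch of the check the operator might perform when deciding whether an ExternalName Service should get a proxy (illustrative only; the real reconciliation logic lives in cmd/k8s-operator):
```go
package main

import corev1 "k8s.io/api/core/v1"

// shouldProxyExternalName reports whether svc is an ExternalName
// Service that has been opted in via the tailscale.com/expose
// annotation and has a DNS name to resolve.
func shouldProxyExternalName(svc *corev1.Service) bool {
	return svc != nil &&
		svc.Spec.Type == corev1.ServiceTypeExternalName &&
		svc.Spec.ExternalName != "" &&
		svc.Annotations["tailscale.com/expose"] == "true"
}
```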
Updates tailscale/tailscale#10606
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,cmd/k8s-operator/deploy/manifests: optionally forward cluster traffic via ingress proxy.
If a tailscale Ingress has the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation, configure the associated ingress proxy so that its tailscale serve proxy also listens on the Pod's IP address. This ensures that cluster traffic too can be forwarded via this proxy to the ingress backend(s).
In containerboot, if EXPERIMENTAL_PROXY_CLUSTER_TRAFFIC_VIA_INGRESS is set to true
and the node is a Kubernetes operator ingress proxy configured via an Ingress,
make sure that traffic from within the cluster can be proxied to the ingress target.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: optionally configure tailscaled with a configfile.
If the EXPERIMENTAL_TS_CONFIGFILE_PATH env var is set,
only run tailscaled with the provided config file;
do not run 'tailscale up' or 'tailscale set'.
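Roughly, the decision looks like this (a sketch, assuming tailscaled is pointed at the file via its --config flag; the actual containerboot flag handling may differ):
```go
package main

import "os"

// tailscaledExtraArgs returns extra flags for the tailscaled
// invocation. If the experimental config file path is set, tailscaled
// is pointed at that file and the caller skips `tailscale up` and
// `tailscale set` entirely.
func tailscaledExtraArgs() []string {
	var args []string
	if path := os.Getenv("EXPERIMENTAL_TS_CONFIGFILE_PATH"); path != "" {
		args = append(args, "--config="+path) // assumes tailscaled's --config flag
	}
	return args
}
```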
* cmd/containerboot: store containerboot accept_dns val in bool pointer
So that we can distinguish between the value being explicitly set to
false vs being unset.
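A sketch of the distinction, using a *bool so that nil means "not set" (the helper name is illustrative; the env var for this setting is assumed to be TS_ACCEPT_DNS):
```go
package main

import (
	"log"
	"os"
	"strconv"
)

// boolPtrEnv returns nil if the env var is unset, and otherwise a
// pointer to its parsed boolean value, so "unset" and "false" can be
// told apart downstream.
func boolPtrEnv(name string) *bool {
	v, ok := os.LookupEnv(name)
	if !ok || v == "" {
		return nil
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		log.Fatalf("invalid %s value %q: %v", name, v, err)
	}
	return &b
}

// Usage: acceptDNS := boolPtrEnv("TS_ACCEPT_DNS")
```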
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
A tailnet node can be told to stop advertising subnets by passing
an empty string to the --advertise-routes flag.
Respect an explicitly passed empty value for the TS_ROUTES env var
so that users have a way to stop containerboot from acting as a subnet
router without recreating the container.
Distinguish between TS_ROUTES being unset and being empty.
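The unset-vs-empty distinction comes down to os.LookupEnv; a sketch:
```go
package main

import "os"

// advertiseRoutesFlag returns the --advertise-routes value to pass and
// whether it should be passed at all. An unset TS_ROUTES leaves the
// node's current routes alone; an explicitly empty TS_ROUTES clears
// them, stopping the node from acting as a subnet router.
func advertiseRoutesFlag() (value string, set bool) {
	routes, ok := os.LookupEnv("TS_ROUTES")
	if !ok {
		return "", false // unset: don't touch routes
	}
	return routes, true // may be "", which clears advertised routes
}
```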
Updates tailscale/tailscale#10708
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
`tailscaled` and `tailscale` expect the socket to be at
`/var/run/tailscale/tailscaled.sock`; however, containerboot
would set up the socket at `/tmp/tailscaled.sock`. This leads to
poor UX when users try to use any `tailscale` command, as they
have to pass `--socket /tmp/tailscaled.sock` every time.
To improve the UX, this adds a symlink at
`/var/run/tailscale/tailscaled.sock` pointing to `/tmp/tailscaled.sock`.
This approach has two benefits: 1. users are able to continue using
existing scripts without this being a breaking change; 2. users are
able to use the `tailscale` CLI without having to add the `--socket` flag.
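A minimal sketch of the symlink setup (error handling for a pre-existing file is omitted; the actual containerboot code may do more checks):
```go
package main

import (
	"fmt"
	"os"
)

// linkSocket makes the CLI's default socket path resolve to the
// socket containerboot actually creates under /tmp.
func linkSocket() error {
	if err := os.MkdirAll("/var/run/tailscale", 0o755); err != nil {
		return fmt.Errorf("creating socket dir: %w", err)
	}
	// Symlink /var/run/tailscale/tailscaled.sock -> /tmp/tailscaled.sock.
	return os.Symlink("/tmp/tailscaled.sock", "/var/run/tailscale/tailscaled.sock")
}
```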
Fixes tailscale/corp#15902
Fixes #6849
Fixes #10027
Signed-off-by: Maisem Ali <maisem@tailscale.com>
* cmd/containerboot: proxy traffic to tailnet target defined by FQDN
Add a new Service annotation tailscale.com/tailnet-fqdn that
users can use to specify a tailnet target for which
an egress proxy should be deployed in the cluster.
Updates tailscale/tailscale#10280
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: shut down cleanly on SIGTERM
Make sure that tailscaled watcher returns when
SIGTERM is received and also that it shuts down
before tailscaled exits.
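A sketch of the shutdown ordering, assuming the watcher takes a context and tailscaled is a child process we signal and wait on (names are illustrative):
```go
package main

import (
	"context"
	"os/exec"
	"os/signal"
	"syscall"
)

// run wires up SIGTERM handling so the watcher goroutine returns first,
// and only then is tailscaled asked to exit and waited on.
func run(tailscaled *exec.Cmd, watch func(context.Context)) error {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	done := make(chan struct{})
	go func() {
		watch(ctx) // returns when ctx is canceled by SIGTERM
		close(done)
	}()

	<-ctx.Done() // SIGTERM/SIGINT received
	<-done       // watcher has shut down

	// Now shut down tailscaled and wait for it to exit.
	if err := tailscaled.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	return tailscaled.Wait()
}
```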
Updates tailscale/tailscale#10090
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This partially reverts commits a61a9ab087
and 7538f38671 and fully reverts
4823a7e591.
The goal of that commit was to reapply known config whenever the
container restarts. However, that already happened when TS_AUTH_ONCE was
false (the default back then), so we only had to selectively reapply the
config when TS_AUTH_ONCE is true; this does exactly that.
It is a little sad that we have to revert to `tailscale up`, but it
fixes the backwards incompatibility problem.
Updates tailscale/tailscale#9539
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This migrates containerboot to reuse the NetfilterRunner used
by tailscaled instead of manipulating iptables rules itself.
This has the added advantage of also working with nftables, and
we can potentially drop the `iptables` command from the container
image in the future.
Updates #9310
Co-authored-by: Irbe Krumina <irbe@tailscale.com>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
1.50.0 switched containerboot from using `tailscale up`
to `tailscale login`. A side-effect is that a re-usable
authkey is now re-applied on every boot by `tailscale login`,
whereas `tailscale up` would ignore an authkey if already
authenticated.
Though this looks like it is changing the default, in reality
it is setting the default to match what 1.48 and all
prior releases actually implemented.
Fixes https://github.com/tailscale/tailscale/issues/9539
Fixes https://github.com/tailscale/corp/issues/14953
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
In typical k8s setups, the MTU configured on the eth0 interface is 1500, which
results in packets being dropped when they reach proxy pods, as the tailscale0 interface
has an MTU of 1280.
As the primary use of this functionality is TCP, add iptables-based MSS clamping to allow
connectivity.
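The effect is essentially the classic clamp-MSS-to-PMTU mangle rule. A sketch of the rule, expressed here as a raw iptables invocation for clarity (the actual change may install it through containerboot's firewall plumbing instead):
```go
package main

import (
	"fmt"
	"os/exec"
)

// clampMSSToPMTU appends a mangle-table rule that clamps TCP MSS to the
// path MTU for traffic leaving via the tailscale interface, so TCP
// connections survive the 1500 -> 1280 MTU drop.
func clampMSSToPMTU(tun string) error {
	cmd := exec.Command("iptables",
		"-t", "mangle", "-A", "FORWARD",
		"-o", tun, // e.g. "tailscale0"
		"-p", "tcp", "--tcp-flags", "SYN,RST", "SYN",
		"-j", "TCPMSS", "--clamp-mss-to-pmtu")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("iptables: %v: %s", err, out)
	}
	return nil
}
```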
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
First part of the work on functionality that allows users to create an egress
proxy to access tailnet services from within Kubernetes cluster workloads.
This PR allows creating an egress proxy that can access Tailscale services over HTTP only.
Updates tailscale/tailscale#8184
Signed-off-by: irbekrm <irbekrm@gmail.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Co-authored-by: Rhea Ghosh <rhea@tailscale.com>
On k8s the serve-config secret mount is symlinked, so checking events against
the Name makes us miss them.
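A sketch of the resulting behavior, assuming an fsnotify-based watcher: reload on any event in the mount directory rather than comparing event.Name against the expected file name, which misses the symlink swaps k8s performs on secret mounts.
```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchServeConfig reloads the serve config on any event in its
// directory instead of filtering by event name.
func watchServeConfig(dir string, reload func() error) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(dir); err != nil {
		return err
	}
	for {
		select {
		case _, ok := <-w.Events:
			if !ok {
				return nil
			}
			if err := reload(); err != nil {
				log.Printf("reloading serve config: %v", err)
			}
		case err, ok := <-w.Errors:
			if !ok {
				return nil
			}
			log.Printf("watch error: %v", err)
		}
	}
}
```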
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This watches the provided path for a JSON-encoded ipn.ServeConfig.
Every time the file changes, or the node's FQDN changes, it reapplies
the ServeConfig.
At boot time, it nils out any previous ServeConfig, just like tsnet does.
As the ServeConfig requires pre-existing knowledge of the node's FQDN to do
SNI matching, this introduces a special `${TS_CERT_DOMAIN}` value in the JSON
file which is replaced with the known CertDomain before it is applied.
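The substitution is plain string replacement on the raw JSON before unmarshalling. A sketch, assuming the LocalAPI client's SetServeConfig is used to apply the result (the actual containerboot plumbing may differ):
```go
package main

import (
	"context"
	"encoding/json"
	"os"
	"strings"

	"tailscale.com/client/tailscale"
	"tailscale.com/ipn"
)

// applyServeConfig reads a JSON ipn.ServeConfig from path, substitutes
// the node's cert domain for ${TS_CERT_DOMAIN}, and applies it.
func applyServeConfig(ctx context.Context, lc *tailscale.LocalClient, path, certDomain string) error {
	raw, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	raw = []byte(strings.ReplaceAll(string(raw), "${TS_CERT_DOMAIN}", certDomain))
	var sc ipn.ServeConfig
	if err := json.Unmarshal(raw, &sc); err != nil {
		return err
	}
	return lc.SetServeConfig(ctx, &sc)
}
```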
Updates #502
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Previously we would not reapply changes to TS_HOSTNAME etc. when
the container restarted and TS_AUTH_ONCE was enabled.
This splits those into two steps, login and set, allowing us to
rerun only the set step on restarts.
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We had two implementations of the kube client; merge them.
containerboot was also using a raw http.Transport; this change also has
the side effect of making it use an http.Client.
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
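The resulting header, for a Go file, looks like this:
```go
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
```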
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
We still accept the previous TS_AUTH_KEY for backwards compatibility, but the documented option name is the spelling we use everywhere else.
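A sketch of the lookup, preferring the TS_AUTHKEY spelling used everywhere else and falling back to the legacy name (the helper name is illustrative):
```go
package main

import "os"

// authKey prefers the documented TS_AUTHKEY spelling and falls back to
// the older TS_AUTH_KEY for backwards compatibility.
func authKey() string {
	if v := os.Getenv("TS_AUTHKEY"); v != "" {
		return v
	}
	return os.Getenv("TS_AUTH_KEY")
}
```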
Updates #6321
Signed-off-by: David Anderson <danderson@tailscale.com>
In some configurations, users explicitly do not want to store
tailscale state in k8s secrets, because doing that leads to
some annoying permission issues with sidecar containers.
With this change, TS_KUBE_SECRET="" and TS_STATE_DIR=/foo
will force storage to a file when running in Kubernetes.
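A sketch of the resulting decision (illustrative only; the actual containerboot state-store selection is more involved):
```go
package main

// useKubeSecretForState reports whether tailscaled state should be
// stored in a kube Secret. Setting TS_KUBE_SECRET="" together with a
// TS_STATE_DIR forces plain file storage even when running in k8s.
func useKubeSecretForState(inKubernetes bool, kubeSecret, stateDir string) bool {
	if !inKubernetes {
		return false
	}
	if kubeSecret == "" && stateDir != "" {
		// TS_KUBE_SECRET="" + TS_STATE_DIR=/foo: force file storage.
		return false
	}
	return kubeSecret != ""
}
```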
Fixes #6704.
Signed-off-by: David Anderson <danderson@tailscale.com>
We still have to shell out to `tailscale up` because the container image's
API includes "arbitrary flags to tailscale up", unfortunately. But this
should still speed up startup a little, and it also enables k8s-bound containers
to update their device information as new netmap updates come in.
Fixes #6657
Signed-off-by: David Anderson <danderson@tailscale.com>
Only enable forwarding for an IP family if any forwarding is required
for that family.
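A sketch of per-family enabling via the standard procfs knobs:
```go
package main

import "os"

// enableForwarding turns on kernel forwarding only for the IP families
// that actually have routes to forward.
func enableForwarding(needIPv4, needIPv6 bool) error {
	if needIPv4 {
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			return err
		}
	}
	if needIPv6 {
		if err := os.WriteFile("/proc/sys/net/ipv6/conf/all/forwarding", []byte("1"), 0o644); err != nil {
			return err
		}
	}
	return nil
}
```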
Fixes #6221.
Signed-off-by: David Anderson <danderson@tailscale.com>
The //go:build syntax was introduced in Go 1.17:
https://go.dev/doc/go1.17#build-lines
gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.
Done with:
perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This implements the same functionality as the former run.sh, but in Go
and with a little better awareness of tailscaled's lifecycle.
Also adds TS_AUTH_ONCE, which fixes the unfortunate behavior run.sh had
where it would unconditionally try to reauth every time if you gave it
an authkey, rather than trying to use it only when auth is actually needed.
This makes it a bit nicer to deploy these containers in automation, since
you don't have to run the container once, then go and edit its definition
to remove authkeys.
Signed-off-by: David Anderson <danderson@tailscale.com>