containerboot:
Adds 3 new environment variables for containerboot, `TS_LOCAL_ADDR_PORT` (default
`"${POD_IP}:9002"`), `TS_METRICS_ENABLED` (default `false`), and `TS_DEBUG_ADDR_PORT`
(default `""`), to configure metrics and debug endpoints. In a follow-up PR, the
health check endpoint will be updated to use the `TS_LOCAL_ADDR_PORT` if
`TS_HEALTHCHECK_ADDR_PORT` hasn't been set.
Users previously only had access to internal debug metrics (which are unstable
and not recommended) by passing the `--debug` flag to tailscaled, but can now
set `TS_METRICS_ENABLED=true` to expose the stable metrics documented at
https://tailscale.com/kb/1482/client-metrics at `/metrics` on the addr/port
specified by `TS_LOCAL_ADDR_PORT`.
Users can also now configure a debug endpoint more directly via the
`TS_DEBUG_ADDR_PORT` environment variable. This is not recommended for production
use, but exposes an internal set of debug metrics and pprof endpoints.
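As an illustration, a minimal sketch of enabling this on a containerboot-based Pod (the env var names and the TS_LOCAL_ADDR_PORT default come from this change; everything else, including the assumption that POD_IP is supplied via the downward API, is illustrative):

  spec:
    containers:
      - name: tailscale
        image: tailscale/tailscale:latest
        env:
          - name: POD_IP                  # assumed to be wired up via the downward API so that the
            valueFrom:                    # TS_LOCAL_ADDR_PORT default of "${POD_IP}:9002" resolves
              fieldRef:
                fieldPath: status.podIP
          - name: TS_METRICS_ENABLED      # serve stable client metrics at /metrics on TS_LOCAL_ADDR_PORT
            value: "true"
          - name: TS_DEBUG_ADDR_PORT      # optional; serves debug metrics/pprof, not for production use
            value: ""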
operator:
The `ProxyClass` CRD's `.spec.metrics.enable` field now enables serving the
stable user metrics documented at https://tailscale.com/kb/1482/client-metrics
at `/metrics` on the same "metrics" container port that debug metrics were
previously served on. To smooth the transition for anyone relying on the way the
operator previously consumed this field, we also _temporarily_ serve tailscaled's
internal debug metrics on the same `/debug/metrics` path as before, until 1.82.0
when debug metrics will be turned off by default even if `.spec.metrics.enable`
is set. At that point, anyone who wishes to continue using the internal debug
metrics (not recommended) will need to set the new `ProxyClass` field
`.spec.statefulSet.pod.tailscaleContainer.debug.enable`.
Users who wish to opt out of the transitional behaviour, where enabling
`.spec.metrics.enable` also enables debug metrics, can set
`.spec.statefulSet.pod.tailscaleContainer.debug.enable` to false (recommended).
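For illustration, a sketch of a ProxyClass using these fields (only `.spec.metrics.enable` and the new debug field come from this change; the rest is boilerplate, and the group/version is assumed to be the operator's usual `tailscale.com/v1alpha1`):

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: metrics
  spec:
    metrics:
      enable: true              # serve stable client metrics at /metrics on the "metrics" container port
    statefulSet:
      pod:
        tailscaleContainer:
          debug:
            enable: false       # opt out of the transitional debug-metrics behaviour (recommended)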
Separately but relatedly, the operator will no longer specify a host port for the
"metrics" container port definition. The host port caused scheduling conflicts when k8s
needed to schedule more than one proxy per node, and was not necessary for exposing
the pod's port to Prometheus scrapers.
Updates #11292
---------
Co-authored-by: Kristoffer Dalby <kristoffer@tailscale.com>
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Adds functionality to kube client to emit Events.
Updates kube store to emit Events when tailscaled state has been loaded or updated, or if any errors were
encountered during those operations.
This should help in cases where an error related to state loading/updating caused the Pod to crash in a loop:
unlike the logs of the originally failed container instance, Events associated with the Pod will still be
accessible even after N restarts.
Updates tailscale/tailscale#14080
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Currently egress Services for ProxyGroup only work for Pods and Services
with IPv4 addresses. Ensure that it works on dual-stack clusters by reading the
proxy Pod's IP from the .status.podIPs list, which always contains both the
IPv4 and IPv6 address (if the Pod has them), rather than .status.podIP, which
could contain only the IPv6 address on a dual-stack cluster.
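For reference, a sketch of the relevant Pod status fields on a dual-stack cluster (addresses are illustrative):

  status:
    podIP: fd00:10:24::5        # single address; the family listed first depends on cluster configuration
    podIPs:
      - ip: fd00:10:24::5       # IPv6
      - ip: 10.24.0.5           # IPv4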
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Rearrange conditionals to reduce indentation and make it a bit easier to read
the logic. Also update some error messages for better consistency
with the recent decision around capitalising resource names and the
upcoming addition of config secrets.
Updates #cleanup
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Implements the controller for the new ProxyGroup CRD, designed for
running proxies in a high availability configuration. Each proxy gets
its own config and state Secret, and its own tailscale node ID.
We are currently mounting all of the config secrets into the container,
but will stop mounting them and instead read them directly from the kube
API once #13578 is implemented.
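For illustration, a hedged sketch of a ProxyGroup resource (only the CRD name comes from this change; the group/version and the spec fields shown are assumptions):

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyGroup
  metadata:
    name: egress-proxies
  spec:
    type: egress      # assumed field: the HA egress use case this controller targets
    replicas: 3       # assumed field: each replica gets its own config/state Secret and tailscale node ID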
Updates #13406
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Adds a new reconciler that reconciles ExternalName Services that define a
tailnet target that should be exposed to cluster workloads on a ProxyGroup's
proxies.
The reconciler ensures that for each such Service, the config mounted to
the proxies is updated with the tailnet target definition, and that
an EndpointSlice and a ClusterIP Service are created for the Service.
Adds a new reconciler that ensures that as proxy Pods become ready to route
traffic to a tailnet target, the EndpointSlice for the target is updated
with the Pods' endpoints.
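A hedged example of the kind of ExternalName Service these reconcilers consume (the `tailscale.com/tailnet-fqdn` annotation appears elsewhere in this log; the `tailscale.com/proxy-group` annotation key and the placeholder externalName are assumptions for illustration):

  apiVersion: v1
  kind: Service
  metadata:
    name: my-tailnet-svc
    annotations:
      tailscale.com/tailnet-fqdn: svc.tailnet-xyz.ts.net   # tailnet target to expose to cluster workloads
      tailscale.com/proxy-group: egress-proxies            # assumed annotation selecting the ProxyGroup
  spec:
    type: ExternalName
    externalName: placeholder   # the operator-created ClusterIP Service/EndpointSlice provide the actual routing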
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
AddSNATRuleForDst was adding a new rule each time it was called, including when:
- an identical rule already existed
- a rule matching the destination, but with a different desired source, already existed
This was causing issues especially for the in-progress egress HA proxies work,
where the rules are now refreshed more frequently, so more redundant rules
were being created.
This change:
- only creates the rule if it doesn't already exist
- deletes any existing rule for the same destination but a different source
- also ensures that egress proxies refresh firewall rules
if the node's tailnet IP changes
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,kube,util/linuxfw: configure kube egress proxies to route to 1+ tailnet targets
This commit is first part of the work to allow running multiple
replicas of the Kubernetes operator egress proxies per tailnet service +
to allow exposing multiple tailnet services via each proxy replica.
This expands the existing iptables/nftables-based proxy configuration
mechanism.
A proxy can now be configured to route to one or more tailnet targets
via a (mounted) config file that, for each tailnet target, specifies:
- the target's tailnet IP or FQDN
- mappings from the container ports that cluster workloads send traffic to,
to the tailnet target ports where that traffic should be forwarded.
Example configfile contents:
{
  "some-svc": {
    "tailnetTarget": {
      "fqdn": "foo.tailnetxyz.ts.net",
      "ports": {
        "tcp:4006:80": {"protocol": "tcp", "matchPort": 4006, "targetPort": 80},
        "tcp:4007:443": {"protocol": "tcp", "matchPort": 4007, "targetPort": 443}
      }
    }
  }
}
A proxy that is configured with this config file will configure firewall rules
to route cluster traffic to the tailnet targets. It will then watch the config file
for updates as well as monitor relevant netmap updates and reconfigure firewall
as needed.
This adds a bunch of new iptables/nftables functionality to make it easier to dynamically update
the firewall rules without needing to restart the proxy Pod as well as to make
it easier to debug/understand the rules:
- for iptables, each portmapping is a DNAT rule with a comment pointing
at the 'service', e.g.:
-A PREROUTING ! -i tailscale0 -p tcp -m tcp --dport 4006 -m comment --comment "some-svc:tcp:4006 -> tcp:80" -j DNAT --to-destination 100.64.1.18:80
Additionally there is a SNAT rule for each tailnet target, to mask the source address.
- for nftables, a separate prerouting chain is created for each tailnet target
and all the portmapping rules are placed in that chain. This makes it easier
to look up rules and delete services when no longer needed.
(nftables allows hooking a custom chain to a prerouting hook, so no extra work
is needed to ensure that the rules in the service chains are evaluated).
The next steps will be to get the Kubernetes Operator to generate
the configfile and ensure it is mounted to the relevant proxy nodes.
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
containerboot's main.go had grown to well over 1000 lines with
lots of disparate bits of functionality. This commit is pure copy-
paste to group related functionality outside of the main function
into its own set of files. Everything is still in the main package
to keep the diff incremental and reviewable.
Updates #cleanup
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Rename kube/{types,client,api} -> kube/{kubetypes,kubeclient,kubeapi}
so that we don't need to rename the package on each import to
convey that it's kubernetes specific.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Further split kube package into kube/{client,api,types}. This is so that
consumers who only need constants/static types don't have to import
the client and api bits.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add functionality to optionally serve a health check endpoint
(off by default).
Users can enable the health check endpoint by setting
TS_HEALTHCHECK_ADDR_PORT to [<addr>]:<port>.
Containerboot will then serve an unauthenticated HTTP health check at
/healthz at that address. The health check returns 200 OK if the
node has at least one tailnet IP address, and 503 otherwise.
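A hedged sketch of wiring this up, pairing the env var with a standard Kubernetes probe (the port value, probe type and thresholds are illustrative):

  spec:
    containers:
      - name: tailscale
        image: tailscale/tailscale:latest
        env:
          - name: TS_HEALTHCHECK_ADDR_PORT   # [<addr>]:<port>; serves unauthenticated /healthz here
            value: ":9001"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 9001
          initialDelaySeconds: 15            # give the node time to acquire a tailnet IP first
          periodSeconds: 10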
Updates tailscale/tailscale#12898
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/containerboot,cmd/k8s-operator: enable IPv6 for fqdn egress proxies
Don't skip installing egress forwarding rules for IPv6 (as long as the host
supports IPv6), and set headless services' `ipFamilyPolicy` to
`PreferDualStack` to optionally enable both IP families when possible. Note
that even with `PreferDualStack` set, testing a dual-stack GKE cluster with
the default DNS setup of kube-dns did not correctly set both A and
AAAA records for the headless service, and instead only did so when
switching the cluster DNS to Cloud DNS. For both IPv4 and IPv6 to work
simultaneously in a dual-stack cluster, we require headless services to
return both A and AAAA records.
If the host doesn't support IPv6 but the FQDN specified only has IPv6
addresses available, containerboot will exit with error code 1 and an
error message because there is no viable egress route.
Fixes #12215
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
* cmd/containerboot: store device ID before setting up proxy routes.
For containerboot instances whose state needs to be stored
in a Kubernetes Secret, we additionally store the device's
ID, FQDN and IPs.
This is used, among other things, by the Kubernetes operator,
which uses the ID to delete the device when resources need
cleaning up, and writes the FQDN and IPs to various kube
resource statuses for visibility.
This change moves storing the device ID earlier in the proxy setup flow,
to ensure that the device can still be deleted
even if proxy routing setup fails.
Updates tailscale/tailscale#12146
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* code review feedback
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add a new TS_EXPERIMENTAL_ENABLE_FORWARDING_OPTIMIZATIONS env var
that can be set for a tailscale/tailscale container running as
a subnet router or exit node to enable UDP GRO forwarding
for improved performance.
See https://tailscale.com/kb/1320/performance-best-practices#linux-optimizations-for-subnet-routers-and-exit-nodes
This is currently considered an experimental approach;
the configuration support is partially to allow further experimentation
with containerized environments to evaluate the performance
improvements.
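A hedged sketch of opting in on a tailscale/tailscale container acting as a subnet router or exit node (only the env var name comes from this change):

  env:
    - name: TS_EXPERIMENTAL_ENABLE_FORWARDING_OPTIMIZATIONS   # enables UDP GRO forwarding tweaks
      value: "true"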
Updates tailscale/tailscale#12295
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Turn off stateful filtering for egress proxies to allow cluster
traffic to be forwarded to the tailnet.
Allow configuring the stateful filter via the tailscaled config file.
Deprecate the EXPERIMENTAL_TS_CONFIGFILE_PATH env var and introduce a new
TS_EXPERIMENTAL_VERSIONED_CONFIG env var that can be used to provide
containerboot with a directory that should contain one or more
tailscaled config files named cap-<tailscaled-cap-version>.hujson.
Containerboot will pick the one with the newest capability version
that is not newer than its current capability version.
Proxies with this change will not work with older Tailscale
Kubernetes operator versions - users must ensure that
the deployed operator is at the same version as the proxies or newer
(up to a 4-version skew).
Updates tailscale/tailscale#12061
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms
* Allow users to pre-create empty state Secrets
* Add a fake internal kube client, test functionality that has dependencies on kube client operations.
* Fix an issue where interactive login was not allowed in an edge case where the state Secret does not exist
* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed
Updates tailscale/tailscale#11170
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Modifies containerboot to wait on the tailscaled process
only, not on any child process of containerboot.
Waiting on any subprocess raced with Go's
exec.Cmd.Run, which is used to run iptables commands
and starts and waits on its own subprocesses.
Containerboot itself does not run anything else
except for tailscaled, so there shouldn't be a need
to wait on anything else.
Updates tailscale/tailscale#11593
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name
Adds support for optionally configuring containerboot to proxy
traffic to backends specified by passing the TS_EXPERIMENTAL_DEST_DNS_NAME env var
to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses.
Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.
* cmd/k8s-operator: support ExternalName Services
Adds support for exposing to the tailnet, via DNS names, endpoints that
are accessible from within a cluster, using ExternalName Services.
This can be done by annotating the ExternalName Service with the
tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.
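A hedged example of such a Service (the annotation key comes from this change; the names and DNS target are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: internal-api
    annotations:
      tailscale.com/expose: "true"            # ask the operator to expose this Service to the tailnet
  spec:
    type: ExternalName
    externalName: api.internal.example.com    # DNS name the deployed proxy resolves and forwards to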
Updates tailscale/tailscale#10606
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,cmd/k8s-operator/deploy/manifests: optionally forward cluster traffic via ingress proxy.
If a tailscale Ingress has the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation, configure the associated ingress proxy to have its tailscale serve proxy listen on the Pod's IP address. This ensures that cluster traffic too can be forwarded via this proxy to the ingress backend(s).
In containerboot, if EXPERIMENTAL_PROXY_CLUSTER_TRAFFIC_VIA_INGRESS is set to true
and the node is a Kubernetes operator ingress proxy configured via Ingress,
make sure that traffic from within the cluster can be proxied to the ingress target.
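A hedged example of an Ingress using the annotation (the backend and host names are illustrative; the ingressClassName/tls layout follows the usual tailscale Ingress pattern):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app
    annotations:
      tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"   # also serve in-cluster traffic
  spec:
    ingressClassName: tailscale
    defaultBackend:
      service:
        name: my-app
        port:
          number: 80
    tls:
      - hosts:
          - my-app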
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: optionally configure tailscaled with a configfile.
If EXPERIMENTAL_TS_CONFIGFILE_PATH env var is set,
only run tailscaled with the provided config file.
Do not run 'tailscale up' or 'tailscale set'.
* cmd/containerboot: store containerboot accept_dns val in bool pointer
So that we can distinguish between the value being set to
false explicitly vs being unset.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
A tailnet node can be told to stop advertising subnets by passing
an empty string to the --advertise-routes flag.
Respect an explicitly passed empty value for the TS_ROUTES env var
so that users have a way to stop containerboot acting as a subnet
router without recreating the container.
Distinguish between TS_ROUTES being unset and being empty.
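Sketch of the distinction in a container spec (only the TS_ROUTES name comes from this change):

  env:
    # Omitting TS_ROUTES leaves any previously advertised routes untouched;
    # an explicitly empty value asks containerboot to stop advertising subnets.
    - name: TS_ROUTES
      value: ""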
Updates tailscale/tailscale#10708
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
`tailscaled` and `tailscale` expect the socket to be at
`/var/run/tailscale/tailscaled.sock`, however containerboot
would set up the socket at `/tmp/tailscaled.sock`. This leads to a
poor UX when users try to use any `tailscale` command as they
have to prefix everything with `--socket /tmp/tailscaled.sock`.
To improve the UX, this adds a symlink at
`/var/run/tailscale/tailscaled.sock` pointing to `/tmp/tailscaled.sock`.
This approach has two benefits: 1. users can continue to use
existing scripts without this being a breaking change; 2. users can
use the `tailscale` CLI without having to add the `--socket` flag.
Fixes tailscale/corp#15902
Fixes #6849
Fixes #10027
Signed-off-by: Maisem Ali <maisem@tailscale.com>
* cmd/containerboot: proxy traffic to tailnet target defined by FQDN
Add a new Service annotation tailscale.com/tailnet-fqdn that
users can use to specify a tailnet target for which
an egress proxy should be deployed in the cluster.
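A hedged example of such an annotated Service (the annotation key comes from this change; the ExternalName/placeholder layout and the FQDN are illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: tailnet-db
    annotations:
      tailscale.com/tailnet-fqdn: db.tailnet-xyz.ts.net   # tailnet target the egress proxy forwards to
  spec:
    type: ExternalName
    externalName: placeholder   # the operator points this at the egress proxy it deploys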
Updates tailscale/tailscale#10280
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: shut down cleanly on SIGTERM
Make sure that tailscaled watcher returns when
SIGTERM is received and also that it shuts down
before tailscaled exits.
Updates tailscale/tailscale#10090
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This partially reverts commits a61a9ab087
and 7538f38671 and fully reverts
4823a7e591.
The goal of that commit was to reapply known config whenever the
container restarts. However, that already happens when TS_AUTH_ONCE is
false (the default back then). So we only had to selectively reapply the
config if TS_AUTH_ONCE is true; this change does exactly that.
It is a little sad that we have to revert to `tailscale up`, but it
fixes the backwards-incompatibility problem.
Updates tailscale/tailscale#9539
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This migrates containerboot to reuse the NetfilterRunner used
by tailscaled instead of manipulating iptables rules itself.
This has the added advantage of now working with nftables, and
we can potentially drop the `iptables` command from the container
image in the future.
Updates #9310
Co-authored-by: Irbe Krumina <irbe@tailscale.com>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
1.50.0 switched containerboot from using `tailscale up`
to `tailscale login`. A side-effect is that a re-usable
authkey is now re-applied on every boot by `tailscale login`,
whereas `tailscale up` would ignore an authkey if already
authenticated.
Though this looks like it is changing the default, in reality
it is setting the default to match what 1.48 and all
prior releases actually implemented.
Fixes https://github.com/tailscale/tailscale/issues/9539
Fixes https://github.com/tailscale/corp/issues/14953
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
The test was sending SIGKILL to containerboot, which results in no
signal propagation to the bash script that is running as a child
process, so it leaks.
Minor changes to the test daemon script, so that it cleans up the socket
that it creates on exit, and spawns fewer processes.
Fixes tailscale/corp#14833
Signed-off-by: James Tucker <james@tailscale.com>
In typical k8s setups, the MTU configured on the eth0 interface is 1500, which
results in packets being dropped when they reach proxy pods, as the tailscale0 interface
has a 1280 MTU.
As the primary use of this functionality is TCP, add iptables-based MSS clamping to allow
connectivity.
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Ensures that the StatefulSet reconciler config has only one of cluster target IP or tailnet target IP.
Adds a test case for containerboot egress proxy mode.
Updates tailscale/tailscale#8184
Signed-off-by: irbekrm <irbekrm@gmail.com>
First part of the work on functionality that allows users to create an egress
proxy to access tailnet services from Kubernetes cluster workloads.
This PR allows creating an egress proxy that can access Tailscale services over HTTP only.
Updates tailscale/tailscale#8184
Signed-off-by: irbekrm <irbekrm@gmail.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Co-authored-by: Rhea Ghosh <rhea@tailscale.com>
On k8s, the serve-config secret mount is symlinked, so checking against
the Name makes us miss the events.
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This watches the provided path for a JSON-encoded ipn.ServeConfig.
Every time the file changes, or the node's FQDN changes, it reapplies
the ServeConfig.
At boot time, it nils out any previous ServeConfig, just like tsnet does.
As the ServeConfig requires pre-existing knowledge of the node's FQDN to do
SNI matching, this introduces a special `${TS_CERT_DOMAIN}` value in the JSON
file which is replaced with the known CertDomain before the config is applied.
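For illustration, a minimal sketch of such a file using the ipn.ServeConfig JSON layout (the proxied backend address is an example):

  {
    "TCP": {"443": {"HTTPS": true}},
    "Web": {
      "${TS_CERT_DOMAIN}:443": {
        "Handlers": {"/": {"Proxy": "http://127.0.0.1:8080"}}
      }
    }
  }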
Updates #502
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Previously we would not reapply changes to TS_HOSTNAME etc. when
the container restarted and TS_AUTH_ONCE was enabled.
This splits the logic into two steps, login and set, allowing us to
rerun only the set step on restarts.
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We had two implementations of the kube client; merge them.
containerboot was also using a raw http.Transport; this change also has
the side effect of making it use an http.Client.
Signed-off-by: Maisem Ali <maisem@tailscale.com>