containerboot's main.go had grown to well over 1000 lines with
lots of disparate bits of functionality. This commit is pure copy-
paste to group related functionality outside of the main function
into its own set of files. Everything is still in the main package
to keep the diff incremental and reviewable.
Updates #cleanup
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Rename kube/{types,client,api} -> kube/{kubetypes,kubeclient,kubeapi}
so that we don't need to rename the package on each import to
convey that it's Kubernetes-specific.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Further split the kube package into kube/{client,api,types} so that
consumers who only need constants/static types don't have to import
the client and api bits.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add functionality to optionally serve a health check endpoint
(off by default).
Users can enable the health check endpoint by setting
TS_HEALTHCHECK_ADDR_PORT to [<addr>]:<port>.
Containerboot will then serve an unauthenticated HTTP health check at
/healthz at that address. The health check returns 200 OK if the
node has at least one tailnet IP address, and 503 otherwise.
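A minimal sketch of what such a handler could look like (illustrative only,
not the actual containerboot code; the healthz type is hypothetical):

    package main

    import (
        "net/http"
        "sync"
    )

    type healthz struct {
        mu       sync.Mutex
        hasAddrs bool // updated by whatever watches tailscaled's state
    }

    func (h *healthz) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        h.mu.Lock()
        defer h.mu.Unlock()
        if h.hasAddrs {
            w.Write([]byte("ok")) // implicit 200 OK
        } else {
            http.Error(w, "node currently has no tailnet IPs", http.StatusServiceUnavailable)
        }
    }

    func main() {
        mux := http.NewServeMux()
        mux.Handle("/healthz", new(healthz))
        // The listen address comes from TS_HEALTHCHECK_ADDR_PORT, e.g. ":9002".
        http.ListenAndServe(":9002", mux)
    }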
Updates tailscale/tailscale#12898
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/containerboot,cmd/k8s-operator: enable IPv6 for fqdn egress proxies
Don't skip installing egress forwarding rules for IPv6 (as long as the host
supports IPv6), and set headless Services' `ipFamilyPolicy` to
`PreferDualStack` to optionally enable both IP families when possible. Note
that even with `PreferDualStack` set, in testing, a dual-stack GKE cluster
with the default kube-dns DNS setup did not correctly create both A and
AAAA records for the headless Service; it only did so after switching the
cluster DNS to Cloud DNS. For both IPv4 and IPv6 to work simultaneously in
a dual-stack cluster, we require headless Services to return both A and
AAAA records.
If the host doesn't support IPv6 but the FQDN specified only has IPv6
addresses available, containerboot will exit with error code 1 and an
error message because there is no viable egress route.
Fixes #12215
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
* cmd/containerboot: store device ID before setting up proxy routes.
For containerboot instances whose state needs to be stored
in a Kubernetes Secret, we additionally store the device's
ID, FQDN and IPs.
This is used, among other things, by the Kubernetes operator,
which uses the ID to delete the device when resources need
cleaning up and writes the FQDN and IPs to various kube
resource statuses for visibility.
This change shifts storing the device ID earlier in the proxy setup flow
so that, if proxy routing setup fails,
the device can still be deleted.
Updates tailscale/tailscale#12146
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* code review feedback
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add a new TS_EXPERIMENTAL_ENABLE_FORWARDING_OPTIMIZATIONS env var
that can be set for a tailscale/tailscale container running as
a subnet router or exit node to enable UDP GRO forwarding
for improved performance.
See https://tailscale.com/kb/1320/performance-best-practices#linux-optimizations-for-subnet-routers-and-exit-nodes
This is currently considered an experimental approach;
the configuration support exists partly to allow further experimentation
in containerized environments to evaluate the performance
improvements.
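For context, the linked KB article's Linux optimization boils down to an
ethtool tweak on the primary interface; a minimal sketch of applying it from
Go (illustrative only; the helper name and the hard-coded eth0 are assumptions):

    package main

    import (
        "log"
        "os/exec"
    )

    // enableUDPGROForwarding runs the equivalent of:
    //   ethtool -K <iface> rx-udp-gro-forwarding on rx-gro-list off
    func enableUDPGROForwarding(iface string) error {
        out, err := exec.Command("ethtool", "-K", iface,
            "rx-udp-gro-forwarding", "on", "rx-gro-list", "off").CombinedOutput()
        if err != nil {
            log.Printf("ethtool: %s", out)
        }
        return err
    }

    func main() {
        if err := enableUDPGROForwarding("eth0"); err != nil {
            log.Fatal(err)
        }
    }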
Updates tailscale/tailscale#12295
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Turn off stateful filtering for egress proxies to allow cluster
traffic to be forwarded to the tailnet.
Allow configuring the stateful filter via the tailscaled config file.
Deprecate the EXPERIMENTAL_TS_CONFIGFILE_PATH env var and introduce a new
TS_EXPERIMENTAL_VERSIONED_CONFIG env var that can be used to provide
containerboot with a directory that should contain one or more
tailscaled config files named cap-<tailscaled-cap-version>.hujson.
Containerboot will pick the one with the newest capability version
that is not newer than its current capability version.
Proxies with this change will not work with older Tailscale
Kubernetes operator versions - users must ensure that
the deployed operator is at the same version as, or newer than
(up to 4 versions of skew), the proxies.
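A minimal sketch of that selection logic (not the actual containerboot
implementation; selectConfig and the paths used in main are illustrative):

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "strconv"
        "strings"
    )

    // selectConfig returns the cap-<N>.hujson file with the highest N that
    // does not exceed ourCapVer.
    func selectConfig(dir string, ourCapVer int) (string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return "", err
        }
        best, bestVer := "", -1
        for _, e := range entries {
            name := e.Name()
            if !strings.HasPrefix(name, "cap-") || !strings.HasSuffix(name, ".hujson") {
                continue
            }
            v, err := strconv.Atoi(strings.TrimSuffix(strings.TrimPrefix(name, "cap-"), ".hujson"))
            if err != nil || v > ourCapVer || v <= bestVer {
                continue // malformed, too new, or not better than what we have
            }
            best, bestVer = filepath.Join(dir, name), v
        }
        if best == "" {
            return "", fmt.Errorf("no config file with capability version <= %d in %s", ourCapVer, dir)
        }
        return best, nil
    }

    func main() {
        const ourCapVer = 100 // stand-in for the binary's own capability version
        path, err := selectConfig("/etc/tsconfig", ourCapVer)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("using tailscaled config %s", path)
    }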
Updates tailscale/tailscale#12061
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms
* Allow users to pre-create empty state Secrets
* Add a fake internal kube client, test functionality that has dependencies on kube client operations.
* Fix an issue where interactive login was not allowed in an edge case where the state Secret does not exist
* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed
Updates tailscale/tailscale#11170
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Modifies containerboot to wait on the tailscaled process
only, not on any child process of containerboot.
Waiting on any subprocess raced with Go's
exec.Cmd.Run, which is used to run iptables commands and
which starts its own subprocesses and waits on them.
Containerboot itself does not run anything else
except for tailscaled, so there shouldn't be a need
to wait on anything else.
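A minimal sketch of the distinction (not the containerboot code): cmd.Wait
reaps only the tailscaled child, whereas a wait on any child (e.g.
wait4(-1, ...)) can steal the exit of subprocesses that exec.Cmd.Run spawns
for iptables and race with it:

    package main

    import (
        "log"
        "os/exec"
    )

    func runTailscaled(args ...string) error {
        cmd := exec.Command("tailscaled", args...)
        if err := cmd.Start(); err != nil {
            return err
        }
        // Waits on this specific child only; other children (e.g. iptables
        // invocations run via exec.Cmd.Run elsewhere) are left to their owners.
        return cmd.Wait()
    }

    func main() {
        if err := runTailscaled("--socket=/tmp/tailscaled.sock"); err != nil {
            log.Fatal(err)
        }
    }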
Updates tailscale/tailscale#11593
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name
Adds optional support for configuring containerboot to proxy
traffic to backends specified by passing the TS_EXPERIMENTAL_DEST_DNS_NAME
env var to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses
(see the sketch after the list below).
Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.
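A minimal sketch of that re-resolution loop, with a hypothetical
updateBackends callback standing in for the real firewall-rule updates:

    package main

    import (
        "context"
        "log"
        "net"
        "time"
    )

    func watchBackendDNS(ctx context.Context, fqdn string, updateBackends func([]net.IP)) {
        ticker := time.NewTicker(10 * time.Minute)
        defer ticker.Stop()
        for {
            if ips, err := net.LookupIP(fqdn); err == nil {
                updateBackends(ips) // reinstall forwarding rules for the resolved IPs
            }
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
            }
        }
    }

    func main() {
        watchBackendDNS(context.Background(), "db.example.com", func(ips []net.IP) {
            log.Printf("resolved backend IPs: %v", ips)
        })
    }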
* cmd/k8s-operator: support ExternalName Services
Adds support for exposing endpoints that are accessible from within
a cluster to the tailnet via DNS names, using ExternalName Services.
This can be done by annotating the ExternalName Service with
the tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.
Updates tailscale/tailscale#10606
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,cmd/k8s-operator/deploy/manifests: optionally forward cluster traffic via ingress proxy.
If a tailscale Ingress has the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation, configure the associated ingress proxy so that its tailscale serve proxy listens on the Pod's IP address. This ensures that cluster traffic can also be forwarded via this proxy to the ingress backend(s).
In containerboot, if EXPERIMENTAL_PROXY_CLUSTER_TRAFFIC_VIA_INGRESS is set to true
and the node is a Kubernetes operator ingress proxy configured via an Ingress,
make sure that traffic from within the cluster can be proxied to the ingress target.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: optionally configure tailscaled with a configfile.
If the EXPERIMENTAL_TS_CONFIGFILE_PATH env var is set,
only run tailscaled with the provided config file.
Do not run 'tailscale up' or 'tailscale set'.
* cmd/containerboot: store containerboot accept_dns val in bool pointer
So that we can distinguish between the value being set to
false explicitly vs. being unset.
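A minimal sketch of the *bool pattern (the helper name is illustrative;
TS_ACCEPT_DNS is used as the example variable):

    package main

    import (
        "fmt"
        "os"
        "strconv"
    )

    // boolPointerFromEnv returns nil if the env var is unset, otherwise a
    // pointer to its parsed boolean value.
    func boolPointerFromEnv(name string) *bool {
        v, ok := os.LookupEnv(name)
        if !ok {
            return nil
        }
        b, err := strconv.ParseBool(v)
        if err != nil {
            return nil // simplified; real code would surface the error
        }
        return &b
    }

    func main() {
        acceptDNS := boolPointerFromEnv("TS_ACCEPT_DNS")
        switch {
        case acceptDNS == nil:
            fmt.Println("TS_ACCEPT_DNS unset; default applies")
        case *acceptDNS:
            fmt.Println("accept DNS explicitly enabled")
        default:
            fmt.Println("accept DNS explicitly disabled")
        }
    }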
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
A tailnet node can be told to stop advertising subnets by passing
an empty string to the --advertise-routes flag.
Respect an explicitly passed empty value for the TS_ROUTES env var
so that users have a way to stop containerboot from acting as a subnet
router without recreating the container.
Distinguish between TS_ROUTES being unset and empty.
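A minimal sketch of how the unset vs. empty distinction can be made
(illustrative helper, not the containerboot code):

    package main

    import (
        "fmt"
        "os"
    )

    // advertiseRoutesArg returns no argument when TS_ROUTES is unset, and an
    // explicit --advertise-routes flag (possibly empty, to stop advertising
    // subnets) when it is set.
    func advertiseRoutesArg() []string {
        routes, ok := os.LookupEnv("TS_ROUTES")
        if !ok {
            return nil // unset: leave advertised routes alone
        }
        return []string{"--advertise-routes=" + routes}
    }

    func main() {
        fmt.Println(advertiseRoutesArg())
    }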
Updates tailscale/tailscale#10708
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
`tailscaled` and `tailscale` expect the socket to be at
`/var/run/tailscale/tailscaled.sock`, however containerboot
would set up the socket at `/tmp/tailscaled.sock`. This leads to a
poor UX when users try to use any `tailscale` command as they
have to prefix everything with `--socket /tmp/tailscaled.sock`.
To improve the UX, this adds a symlink at
`/var/run/tailscale/tailscaled.sock` pointing to `/tmp/tailscaled.sock`.
This approach has two benefits: 1. users are able to continue to use
existing scripts without this being a breaking change; 2. users are
able to use the `tailscale` CLI without having to add the `--socket` flag.
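A minimal sketch of the symlink setup (simplified error handling; not the
exact containerboot code):

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    func ensureSocketSymlink() error {
        const (
            realSocket = "/tmp/tailscaled.sock"
            linkPath   = "/var/run/tailscale/tailscaled.sock"
        )
        if err := os.MkdirAll(filepath.Dir(linkPath), 0o755); err != nil {
            return err
        }
        // Remove any stale link from a previous run, then point the standard
        // path at the socket containerboot actually creates.
        os.Remove(linkPath)
        return os.Symlink(realSocket, linkPath)
    }

    func main() {
        if err := ensureSocketSymlink(); err != nil {
            log.Fatalf("creating socket symlink: %v", err)
        }
    }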
Fixes tailscale/corp#15902
Fixes #6849
Fixes #10027
Signed-off-by: Maisem Ali <maisem@tailscale.com>
* cmd/containerboot: proxy traffic to tailnet target defined by FQDN
Add a new Service annotation tailscale.com/tailnet-fqdn that
users can use to specify a tailnet target for which
an egress proxy should be deployed in the cluster.
Updates tailscale/tailscale#10280
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: shut down cleanly on SIGTERM
Make sure that the tailscaled watcher returns when
SIGTERM is received and also that it shuts down
before tailscaled exits.
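A minimal sketch of the ordering (illustrative; the watcher function stands
in for the IPN bus watcher):

    package main

    import (
        "context"
        "log"
        "os/signal"
        "syscall"
    )

    // watcher stands in for the tailscaled IPN bus watcher; it returns once
    // the context is cancelled.
    func watcher(ctx context.Context) {
        <-ctx.Done()
        log.Print("watcher: shutting down")
    }

    func main() {
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        done := make(chan struct{})
        go func() {
            watcher(ctx)
            close(done)
        }()

        <-ctx.Done() // SIGTERM received
        <-done       // ensure the watcher has returned before stopping tailscaled
        log.Print("watcher stopped; tailscaled can now be shut down")
    }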
Updates tailscale/tailscale#10090
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This partially reverts commits a61a9ab087
and 7538f38671 and fully reverts
4823a7e591.
The goal of that commit was to reapply known config whenever the
container restarts. However, that already happens when TS_AUTH_ONCE was
false (the default back then). So we only had to selectively reapply the
config if TS_AUTH_ONCE is true, this does exactly that.
This is a little sad that we have to revert to `tailscale up`, but it
fixes the backwards incompatibility problem.
Updates tailscale/tailscale#9539
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This migrates containerboot to reuse the NetfilterRunner used
by tailscaled instead of manipulating iptables rules itself.
This has the added advantage of working with nftables, and
we can potentially drop the `iptables` command from the container
image in the future.
Updates #9310
Co-authored-by: Irbe Krumina <irbe@tailscale.com>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
1.50.0 switched containerboot from using `tailscale up`
to `tailscale login`. A side effect is that a reusable
authkey is now re-applied on every boot by `tailscale login`,
whereas `tailscale up` would ignore an authkey if already
authenticated.
Though this looks like it is changing the default, in reality
it is setting the default to match what 1.48 and all
prior releases actually implemented.
Fixes https://github.com/tailscale/tailscale/issues/9539
Fixes https://github.com/tailscale/corp/issues/14953
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
The test was sending SIGKILL to containerboot, which results in no
signal propagation down to the bash script running as a child
process, so it leaks.
Minor changes to the test daemon script so that it cleans up the socket
it creates on exit and spawns fewer processes.
Fixes tailscale/corp#14833
Signed-off-by: James Tucker <james@tailscale.com>
In typical k8s setups, the MTU configured on the eth0 interface is 1500, which
results in packets being dropped when they make it to proxy pods, as the tailscale0 interface
has a 1280 MTU.
As the primary use of this functionality is TCP, add iptables-based MSS clamping to allow
connectivity.
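For context, a sketch of the kind of clamping rule involved, installed here
by shelling out to iptables (not necessarily the exact rule containerboot adds):

    package main

    import (
        "log"
        "os/exec"
    )

    // clampMSSToPMTU installs the equivalent of:
    //   iptables -t mangle -A FORWARD -o <tun> -p tcp \
    //     --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
    func clampMSSToPMTU(tun string) error {
        out, err := exec.Command("iptables",
            "-t", "mangle", "-A", "FORWARD", "-o", tun,
            "-p", "tcp", "--tcp-flags", "SYN,RST", "SYN",
            "-j", "TCPMSS", "--clamp-mss-to-pmtu").CombinedOutput()
        if err != nil {
            log.Printf("iptables: %s", out)
        }
        return err
    }

    func main() {
        if err := clampMSSToPMTU("tailscale0"); err != nil {
            log.Fatal(err)
        }
    }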
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Ensures that the StatefulSet reconciler config has only one of cluster target IP or tailnet target IP.
Adds a test case for containerboot egress proxy mode.
Updates tailscale/tailscale#8184
Signed-off-by: irbekrm <irbekrm@gmail.com>
First part of work for the functionality that allows users to create an egress
proxy to access Tailnet services from within Kubernetes cluster workloads.
This PR allows creating an egress proxy that can access Tailscale services over HTTP only.
Updates tailscale/tailscale#8184
Signed-off-by: irbekrm <irbekrm@gmail.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Co-authored-by: Rhea Ghosh <rhea@tailscale.com>
On k8s, the serve-config Secret mount is symlinked, so checking against
the Name makes us miss the events.
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This watches the provided path for a JSON-encoded ipn.ServeConfig.
Every time the file or the node's FQDN changes, it reapplies
the ServeConfig.
At boot time, it nils out any previous ServeConfig just like tsnet does.
As the ServeConfig requires pre-existing knowledge of the node's FQDN to do
SNI matching, it introduces a special `${TS_CERT_DOMAIN}` value in the JSON
file, which is replaced with the known CertDomain before it is applied.
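A minimal sketch of the substitution step (assumes the tailscale.com/ipn
package for the ServeConfig type; the path and domain in main are illustrative):

    package main

    import (
        "encoding/json"
        "log"
        "os"
        "strings"

        "tailscale.com/ipn"
    )

    func readServeConfig(path, certDomain string) (*ipn.ServeConfig, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return nil, err
        }
        // Substitute the node's real cert domain before parsing so that SNI
        // entries in the config match the node's FQDN.
        j := strings.ReplaceAll(string(b), "${TS_CERT_DOMAIN}", certDomain)
        var sc ipn.ServeConfig
        if err := json.Unmarshal([]byte(j), &sc); err != nil {
            return nil, err
        }
        return &sc, nil
    }

    func main() {
        sc, err := readServeConfig("/etc/tailscaled/serve-config.json", "node.example.ts.net")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("loaded serve config: %+v", sc)
    }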
Updates #502
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Previously we would not reapply changes to TS_HOSTNAME etc. when
the container restarted and TS_AUTH_ONCE was enabled.
This splits those into two steps, login and set, allowing us to
rerun only the set step on restarts.
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We had two implementations of the kube client; merge them.
containerboot was also using a raw http.Transport; this change has
the side effect of making it use an http.Client.
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
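For illustration, a Go source file header in the new style looks roughly like
this (the BSD-3-Clause identifier assumes the repository's license):

    // Copyright (c) Tailscale Inc & AUTHORS
    // SPDX-License-Identifier: BSD-3-Clause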
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
We still accept the previous TS_AUTH_KEY for backwards compatibility, but the documented option name is the spelling we use everywhere else.
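A minimal sketch of the fallback (assuming TS_AUTHKEY is the documented
spelling; the helper is illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    func authKey() string {
        if v := os.Getenv("TS_AUTHKEY"); v != "" {
            return v
        }
        // Fall back to the old spelling for backwards compatibility.
        return os.Getenv("TS_AUTH_KEY")
    }

    func main() {
        fmt.Println("auth key set:", authKey() != "")
    }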
Updates #6321
Signed-off-by: David Anderson <danderson@tailscale.com>
In some configurations, users explicitly do not want to store
tailscale state in k8s Secrets, because doing so leads to
some annoying permission issues with sidecar containers.
With this change, TS_KUBE_SECRET="" and TS_STATE_DIR=/foo
will force storage to a file when running in Kubernetes.
Fixes #6704.
Signed-off-by: David Anderson <danderson@tailscale.com>
We still have to shell out to `tailscale up` because the container image's
API includes "arbitrary flags to tailscale up", unfortunately. But this
should still speed up startup a little, and also enables k8s-bound containers
to update their device information as new netmap updates come in.
Fixes#6657
Signed-off-by: David Anderson <danderson@tailscale.com>