Adds a new TailscaleProxyReady condition type for use in corev1.Service
conditions.
Also switch our CRDs to use metav1.Condition instead of
ConnectorCondition. The Go structs are serialized identically, but the
switch updates some descriptions and validation rules. Update k8s
controller-tools and controller-runtime deps to fix the documentation
generation for metav1.Condition so that it excludes comments and
TODOs.
Stop expecting the fake client to populate TypeMeta in tests. See
kubernetes-sigs/controller-runtime#2633 for details of the change.
Finally, make some minor improvements to validation for service hostnames.
Fixes #12216
Co-authored-by: Irbe Krumina <irbe@tailscale.com>
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
* cmd/containerboot: store device ID before setting up proxy routes.
For containerboot instances whose state needs to be stored
in a Kubernetes Secret, we additionally store the device's
ID, FQDN and IPs.
This is used, among other things, by the Kubernetes operator,
which uses the ID to delete the device when resources need
cleaning up, and writes the FQDN and IPs to various kube
resource statuses for visibility.
This change shifts storing the device ID earlier in the proxy setup flow,
to ensure that if proxy routing setup fails,
the device can still be deleted.
Updates tailscale/tailscale#12146
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* code review feedback
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This PR is in preparation for adding logic to control to parse
tailscale.com/cap/kubernetes grants:
- moves the type definition of the PeerCapabilityKubernetes cap to a
location shared with control.
- updates the Kubernetes cap rule definition with fields for granting
kubectl exec session recording capabilities.
- adds a convenience function to produce tailcfg.RawMessage from an
arbitrary cap rule, and a test for it.
An example grant defined via ACLs:
"grants": [{
"src": ["tag:eng"],
"dst": ["tag:k8s-operator"],
"app": {
"tailscale.com/cap/kubernetes": [{
"recorder": ["tag:my-recorder"]
“enforceRecorder”: true
}],
},
}
]
This grant enforces that `kubectl exec` sessions from tailnet clients
matching `tag:eng`, via an API server proxy matching `tag:k8s-operator`,
are recorded, and that the recording is sent to a tsrecorder instance
matching `tag:my-recorder`.
The type needs to be shared with control because we want
control to parse this cap and resolve tags to peer IPs.
Updates tailscale/corp#19821
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add a new .spec.tailscale.acceptRoutes field to ProxyClass
that can optionally be set to true for the proxies to
accept routes advertised by other nodes on the tailnet (the
equivalent of setting --accept-routes to true).
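For example, a minimal ProxyClass that opts proxies into accepting routes
(a sketch based on the field path above):

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: accept-routes
  spec:
    tailscale:
      # Equivalent of running the proxy with --accept-routes=true.
      acceptRoutes: true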
Updates tailscale/tailscale#12322, tailscale/tailscale#10684
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Also removes hardcoded image repo/tag from example DNSConfig resource
as the operator now knows how to default those.
Updates tailscale/tailscale#11019
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add new fields TailnetIPs and Hostname to Connector Status. These
contain the addresses of the Tailscale node that the operator created
for the Connector to aid debugging.
Fixes #12214
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
cmd/k8s-operator,k8s-operator,go.{mod,sum}: make individual proxy images/image pull policies configurable
Allow configuring images and image pull policies for individual proxies
via the ProxyClass.Spec.StatefulSet.Pod.{TailscaleContainer,TailscaleInitContainer}.Image
and ProxyClass.Spec.StatefulSet.Pod.{TailscaleContainer,TailscaleInitContainer}.ImagePullPolicy
fields.
Document that we have images in ghcr.io on the relevant Helm chart fields.
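For illustration, a ProxyClass overriding both images might look like this
(the image references and tag are placeholders, not recommendations):

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: custom-images
  spec:
    statefulSet:
      pod:
        tailscaleContainer:
          image: ghcr.io/tailscale/tailscale:v1.68.0  # placeholder tag
          imagePullPolicy: IfNotPresent
        tailscaleInitContainer:
          image: ghcr.io/tailscale/tailscale:v1.68.0  # placeholder tag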
Updates tailscale/tailscale#11675
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This is done in preparation for adding kubectl session recording rules
to this capability grant. Those rules will need to be unmarshalled by
control, so they also need to live in a shared location.
Updates tailscale/corp#19821
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator/deploy/chart: Support image 'repo' or 'repository' keys in helm values
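A sketch of the two accepted spellings in Helm values (assuming the chart's
operatorConfig.image block; the image name and tag are illustrative):

  operatorConfig:
    image:
      repo: tailscale/k8s-operator
      # or, equivalently:
      # repository: tailscale/k8s-operator
      tag: unstable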
Fixes #12100
Signed-off-by: Michael Long <michaelongdev@gmail.com>
Turn off stateful filtering for egress proxies to allow cluster
traffic to be forwarded to tailnet.
Allow configuring stateful filter via tailscaled config file.
Deprecate the EXPERIMENTAL_TS_CONFIGFILE_PATH env var and introduce a new
TS_EXPERIMENTAL_VERSIONED_CONFIG env var that can be used to provide
containerboot with a directory containing one or more
tailscaled config files named cap-<tailscaled-cap-version>.hujson.
Containerboot will pick the one with the newest capability version
that is not newer than its current capability version.
Proxies with this change will not work with older Tailscale
Kubernetes operator versions: users must ensure that
the deployed operator is at the same version as, or newer than,
the proxies (up to 4 versions of skew).
Updates tailscale/tailscale#12061
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
We are now publishing nameserver images to tailscale/k8s-nameserver,
so we can start defaulting the images if users haven't set
them explicitly, the same as we already do with proxy images.
The nameserver images are currently only published for the unstable
track, so we have to use the static 'unstable' tag.
Once we start publishing to stable, we can make the operator
default to its own tag (because then we'll know that for each
operator tag X there is also a nameserver tag X, as we always
cut all images for a given tag).
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator: optionally update the dnsrecords ConfigMap with DNS records for proxies.
This commit adds functionality to automatically populate
DNS records for the in-cluster ts.net nameserver
to allow cluster workloads to resolve MagicDNS names
associated with operator's proxies.
The records are created as follows:
* For tailscale Ingress proxies there will be
a record mapping the MagicDNS name of the Ingress
device to each proxy Pod's IP address.
* For cluster egress proxies, configured via
tailscale.com/tailnet-fqdn annotation, there will be
a record for each proxy Pod, mapping
the MagicDNS name of the exposed
tailnet workload to the proxy Pod's IP.
No records will be created for any other proxy types.
Records will only be created if users have configured
the operator to deploy an in-cluster ts.net nameserver
by applying tailscale.com/v1alpha1.DNSConfig.
It is the user's responsibility to add the ts.net nameserver
as a stub nameserver for ts.net DNS names.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns#upstream_nameservers
See also https://github.com/tailscale/tailscale/pull/11017
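As a sketch only (the ConfigMap key and JSON schema shown here are
assumptions for illustration, not authoritative), a populated dnsrecords
ConfigMap might look like:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: dnsrecords
    namespace: tailscale
  data:
    records.json: |
      {
        "version": "v1alpha1",
        "ip4": {
          "myapp.tailnet-xyz.ts.net": ["10.100.1.23"]
        }
      }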
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator/deploy/chart: allow users to configure additional labels for the operator's Pod via Helm chart values.
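For example (the values key shown is an assumption for illustration):

  operatorConfig:
    podLabels:
      team: platform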
Fixes #11947
Signed-off-by: Gabe Gorelick <gabe@hightouch.io>
Before attempting to enable IPv6 forwarding in the proxy init container,
check whether the relevant module is present; otherwise the container
crashes on hosts that don't have it.
Updates #11860
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Adds a new .spec.metrics field to ProxyClass to allow users to optionally serve
client metrics (tailscaled --debug) on <Pod-IP>:9001.
Metrics cannot currently be enabled for proxies that egress traffic to tailnet,
or for Ingress proxies with the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation
(because they currently forward all cluster traffic to their respective backends).
The assumption is that users will want to have these metrics enabled
continuously to be able to monitor proxy behaviour (as opposed to enabling
them temporarily for debugging). Hence we expose them on the Pod IP to make
them easier to consume, e.g. via a Prometheus PodMonitor.
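A minimal ProxyClass enabling metrics might look like this (a sketch based
on the .spec.metrics field above):

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: metrics
  spec:
    metrics:
      # Serve tailscaled client metrics on <Pod-IP>:9001.
      enable: true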
Updates tailscale/tailscale#11292
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name
Adds support for optionally configuring containerboot to proxy
traffic to backends specified via the TS_EXPERIMENTAL_DEST_DNS_NAME env var
passed to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses.
Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.
* cmd/k8s-operator: support ExternalName Services
Adds support for exposing endpoints accessible from within
a cluster to the tailnet via DNS names, using ExternalName Services.
This can be done by annotating the ExternalName Service with
the tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.
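For example, a sketch of an annotated ExternalName Service (names are
illustrative):

  apiVersion: v1
  kind: Service
  metadata:
    name: external-backend
    annotations:
      tailscale.com/expose: "true"
  spec:
    type: ExternalName
    externalName: backend.example.com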
Updates tailscale/tailscale#10606
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The Kubernetes cluster domain defaults to 'cluster.local', but can also be customized.
We need to determine the cluster domain to set up in-cluster forwarding to our egress proxies.
This was previously hardcoded to 'cluster.local', so the egress proxies were not usable in clusters with custom domains.
This PR ensures that we attempt to determine the cluster domain by parsing /etc/resolv.conf.
If the cluster domain cannot be determined from /etc/resolv.conf, we fall back to 'cluster.local'.
Updates tailscale/tailscale#10399, tailscale/tailscale#11445
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Adds new ProxyClass.spec.statefulSet.pod.{tailscaleContainer,tailscaleInitContainer}.Env fields
that allow users to provide key/value pairs that will be set as env vars for the respective containers.
Allow overriding all containerboot env vars,
but warn that this is not supported and might break (in docs + a warning when validating the ProxyClass).
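For illustration, a sketch of the new fields (the env var name here is
hypothetical):

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: extra-env
  spec:
    statefulSet:
      pod:
        tailscaleContainer:
          env:
            - name: TS_DEBUG_EXAMPLE  # hypothetical env var
              value: "true"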
Updates tailscale/tailscale#10709
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.
Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition
DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.
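A sketch of a DNSConfig (the image field names are inferred from the
repo/tag mention elsewhere in this changelog; the operator can default them):

  apiVersion: tailscale.com/v1alpha1
  kind: DNSConfig
  metadata:
    name: ts-dns
  spec:
    nameserver:
      image:
        repo: tailscale/k8s-nameserver
        tag: unstable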
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources
Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating nameserver's records.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Including the double quotes (`"`) around the value made it appear as if the helm chart expected a string value for `installCRDs`.
Signed-off-by: Chris Milson-Tokunaga <chris.w.milson@gmail.com>
Fix a bug where all proxies got configured with --accept-routes set to true.
The bug was introduced in https://github.com/tailscale/tailscale/pull/11238.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This is so that if a backend Service gets created after the Ingress, it gets picked up by the operator.
Updates tailscale/tailscale#11251
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <1687799+knyar@users.noreply.github.com>
Containerboot containers created for the operator's ingress and egress proxies
are now always configured by passing a config file to tailscaled
(tailscaled --config <configfile-path>).
They do not run 'tailscale set' or 'tailscale up'.
Both upgrading existing setups to this version and
downgrading existing setups from this version work.
Updates tailscale/tailscale#10869
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add logic to autogenerate CRD docs.
The .github/workflows/kubemanifests.yaml CI workflow will fail if the doc is out of date with regard to the current CRDs.
Docs can be refreshed by running make kube-generate-all.
Updates tailscale/tailscale#11023
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: introduce proxy configuration mechanism via ProxyClass custom resource.
ProxyClass custom resource can be used to specify customizations
for the proxy resources created by the operator.
Add a reconciler that validates ProxyClass resources
and sets a Ready condition to True or False with a corresponding reason and message.
This is required because some fields (labels and annotations)
require complex validations that cannot be performed at custom resource apply time.
Reconcilers that use the ProxyClass to configure proxy resources are expected to
verify that the ProxyClass is Ready and not proceed with resource creation
if configuration from a ProxyClass that is not yet Ready is required.
If a tailscale ingress/egress Service is annotated with a tailscale.com/proxy-class annotation, look up the corresponding ProxyClass and, if it is Ready, apply the configuration from the ProxyClass to the proxy's StatefulSet.
If a tailscale Ingress has a tailscale.com/proxy-class annotation
and the referenced ProxyClass custom resource is available and Ready,
apply configuration from the ProxyClass to the proxy resources
that will be created for the Ingress.
Add a new .proxyClass field to the Connector spec.
If connector.spec.proxyClass is set to a ProxyClass that is available and Ready,
apply configuration from the ProxyClass to the proxy resources created for the Connector.
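For example, a Connector referencing a ProxyClass might look like this (a
sketch; the subnetRouter block is illustrative):

  apiVersion: tailscale.com/v1alpha1
  kind: Connector
  metadata:
    name: prod-connector
  spec:
    proxyClass: prod
    subnetRouter:
      advertiseRoutes:
        - "10.40.0.0/14"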
Ensure that when the Helm chart is packaged, the ProxyClass yaml is added to the chart templates. Ensure that the static manifest generator adds the ProxyClass yaml to operator.yaml. Regenerate operator.yaml.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot,cmd/k8s-operator/deploy/manifests: optionally forward cluster traffic via ingress proxy.
If a tailscale Ingress has the tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation, configure the associated ingress proxy so that its tailscale serve proxy listens on the Pod's IP address. This ensures that cluster traffic too can be forwarded via this proxy to the ingress backend(s).
In containerboot, if EXPERIMENTAL_PROXY_CLUSTER_TRAFFIC_VIA_INGRESS is set to true
and the node is a Kubernetes operator ingress proxy configured via an Ingress,
make sure that traffic from within the cluster can be proxied to the ingress target.
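A sketch of an Ingress opting into this behaviour (backend names are
illustrative):

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app
    annotations:
      tailscale.com/experimental-forward-cluster-traffic-via-ingress: "true"
  spec:
    ingressClassName: tailscale
    defaultBackend:
      service:
        name: my-app
        port:
          number: 80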
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Do not provision resources for a tailscale Ingress that has no valid backends.
Updates tailscale/tailscale#10910
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Also perform minor cleanups on the ctxkey package itself.
Provide guidance on when to use ctxkey.Key[T] over ctxkey.New.
Also, allow for interface kinds because the value wrapping trick
also happens to fix edge cases with interfaces in Go.
Updates #cleanup
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This reduces the likelihood of breaking users
if we implement stricter Exact path type matching in the future.
Updates tailscale/tailscale#10730
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
So that users have predictable label values to use when configuring network policies.
Updates tailscale/tailscale#10854
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy: deploy a Tailscale IngressClass resource.
Some Ingress validating webhooks reject Ingresses with a
.spec.ingressClassName for which there is no matching IngressClass.
Additionally, validate that the expected IngressClass is present
when parsing a tailscale `Ingress`.
We do not currently utilize the IngressClass;
however, we might in the future, at which point
we might start requiring that the right class
for this controller instance actually exists.
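For reference, the deployed IngressClass is shaped roughly like this (the
controller name shown is an assumption for illustration):

  apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    name: tailscale
  spec:
    controller: tailscale.com/ts-ingress-controller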
Updates tailscale/tailscale#10820
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <anton@tailscale.com>
The configuration knob (which defaulted to the Connector being disabled)
was added largely because the Connector CRD had to be installed in a separate step.
Now that the CRD has been added to both the chart and the static manifest, we can have it on by default.
Updates tailscale/tailscale#10878
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator: fix base truncating for extra long Service names
StatefulSet names for ingress/egress proxies are calculated
using the Kubernetes name generator with the parent resource name
as a base.
The name generator also cuts the base, but has a higher max cap.
This commit fixes a bug where, if we got a shortened base back
from the generator, we cut off too little: the base that we
had cut was passed into the generator again, which would
then itself cut less because the base was shorter, so we ended
up with a name that was still too long.
Updates tailscale/tailscale#10807
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Signed-off-by: Irbe Krumina <irbekrm@gmail.com>
cmd/k8s-operator: add CRD to chart and static manifest
Add functionality to insert the CRD into the chart at package time.
Insert the CRD into the static manifests, as this is where it is currently consumed from.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>