* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.
Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record type
(AAAA support is to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
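For illustration, the records ConfigMap could look roughly like this (a sketch; the key name and JSON schema here are assumptions, not the definitive format):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: dnsrecords
data:
  records.json: |
    {
      "version": "v1alpha1",
      "ip4": {
        "my-svc.tailnet-xyz.ts.net": ["100.64.0.1"]
      }
    }
```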
The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress;
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to the tailnet over their
MagicDNS names, but on their cluster IPs.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition
The DNSConfig CRD can be used to configure
the operator to deploy the kube nameserver (./cmd/k8s-nameserver) to the cluster.
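For illustration, a minimal DNSConfig could look something like this (a sketch; the exact spec fields are assumptions based on the description above):
```
apiVersion: tailscale.com/v1alpha1
kind: DNSConfig
metadata:
  name: ts-dns
spec:
  nameserver:
    image:
      repo: tailscale/k8s-nameserver
      tag: unstable
```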
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources
Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to the cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating nameserver's records.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This is so that if a backend Service gets created after the Ingress, it gets picked up by the operator.
Updates tailscale/tailscale#11251
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <1687799+knyar@users.noreply.github.com>
Add logic to autogenerate CRD docs.
The .github/workflows/kubemanifests.yaml CI workflow will fail if the docs are out of date with regard to the current CRDs.
Docs can be refreshed by running `make kube-generate-all`.
Updates tailscale/tailscale#11023
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: introduce proxy configuration mechanism via ProxyClass custom resource.
ProxyClass custom resource can be used to specify customizations
for the proxy resources created by the operator.
Add a reconciler that validates ProxyClass resources
and sets a Ready condition to True or False with a corresponding reason and message.
This is required because some fields (labels and annotations)
require complex validations that cannot be performed at custom resource apply time.
Reconcilers that use the ProxyClass to configure proxy resources are expected to
verify that the ProxyClass is Ready and not proceed with resource creation
if configuration from a ProxyClass that is not yet Ready is required.
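For example, a ProxyClass that adds a label to the proxy Pods might look roughly like this (a sketch; the spec layout is an assumption):
```
apiVersion: tailscale.com/v1alpha1
kind: ProxyClass
metadata:
  name: prod
spec:
  statefulSet:
    pod:
      labels:
        team: eng
```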
If a tailscale ingress/egress Service is annotated with a tailscale.com/proxy-class annotation, look up the corresponding ProxyClass and, if it is Ready, apply the configuration from the ProxyClass to the proxy's StatefulSet.
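Illustrative usage on an exposed Service (a sketch; the Service and ProxyClass names are hypothetical):
```
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/proxy-class: prod
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```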
If a tailscale Ingress has a tailscale.com/proxy-class annotation
and the referenced ProxyClass custom resource is available and Ready,
apply configuration from the ProxyClass to the proxy resources
that will be created for the Ingress.
Add a new .proxyClass field to the Connector spec.
If connector.spec.proxyClass is set to a ProxyClass that is available and Ready,
apply configuration from the ProxyClass to the proxy resources created for the Connector.
Ensure that when the Helm chart is packaged, the ProxyClass yaml is added to the chart templates. Ensure that the static manifest generator adds ProxyClass yaml to operator.yaml. Regenerate operator.yaml.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy: deploy a Tailscale IngressClass resource.
Some Ingress validating webhooks reject Ingresses with
.spec.ingressClassName for which there is no matching IngressClass.
Additionally, validate that the expected IngressClass is present
when parsing a tailscale `Ingress`.
We currently do not utilize the IngressClass;
however, we might in the future, at which point
we might start requiring that the right class
for this controller instance actually exists.
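The deployed resource is along these lines (a sketch; the controller name is an assumption):
```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: tailscale
spec:
  controller: tailscale.com/ts-ingress
```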
Updates tailscale/tailscale#10820
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <anton@tailscale.com>
The configuration knob (which defaulted to the Connector being disabled)
was added largely because the Connector CRD had to be installed in a separate step.
Now that the CRD has been added to both the chart and the static manifest, we can enable it by default.
Updates tailscale/tailscale#10878
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator: add CRD to chart and static manifest
Add functionality to insert the CRD into the chart at package time.
Insert the CRD into the static manifests, as this is where it is currently consumed from.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* k8s-operator,cmd/k8s-operator,Makefile,scripts,.github/workflows: add Connector kube CRD.
Connector CRD allows users to configure the Tailscale Kubernetes operator
to deploy a subnet router to expose cluster CIDRs or
other CIDRs available from within the cluster
to their tailnet.
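An illustrative Connector (a sketch; the spec fields are assumptions based on the description above):
```
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: ts-subnet-router
spec:
  subnetRouter:
    advertiseRoutes:
      - "10.40.0.0/14"
```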
Also adds various CRD related machinery to
generate CRD YAML, deep copy implementations etc.
Engineers will now have to run
`make kube-generate-all` after changing kube files
to ensure that all generated files are up to date.
* cmd/k8s-operator,k8s-operator: reconcile Connector resources
Reconcile Connector resources; create/delete subnet router resources in response to changes to Connector(s).
Connector reconciler will not be started unless
ENABLE_CONNECTOR env var is set to true.
This means that users who don't want to use the alpha
Connector custom resource don't have to install the Connector
CRD to their cluster.
For users who do want to use it, the flow is:
- install the CRD
- install the operator (via Helm chart or using static manifests).
Helm users should set .values.enableConnector to true; static
manifest users should set ENABLE_CONNECTOR to true in the static manifest.
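For Helm installs that corresponds to something like (a minimal values sketch):
```
# values.yaml
enableConnector: true
```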
Updates tailscale/tailscale#502
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator: generate static manifests from Helm charts
This is done to ensure that there is a single source of truth
for the operator kube manifests.
Also adds a Linux node selector to the static manifests, as
this was added as a default to the Helm chart.
Static manifests can now be generated by running
`go generate tailscale.com/cmd/k8s-operator`.
Updates tailscale/tailscale#9222
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/containerboot: proxy traffic to tailnet target defined by FQDN
Add a new Service annotation tailscale.com/tailnet-fqdn that
users can use to specify a tailnet target for which
an egress proxy should be deployed in the cluster.
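Illustrative usage (a sketch; the operator's exact requirements for the Service, such as its type, may differ):
```
apiVersion: v1
kind: Service
metadata:
  name: egress-proxy
  annotations:
    tailscale.com/tailnet-fqdn: my-target.tailnet-xyz.ts.net
spec:
  type: ExternalName
  externalName: placeholder
```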
Updates tailscale/tailscale#10280
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Updates tailscale/tailscale#9222
The plain k8s-operator should have hostinfo.App set to 'k8s-operator', and the operator with the proxy enabled should have it set to 'k8s-operator-proxy'. In proxy mode, we were setting the type after it had already been set to 'k8s-operator'.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator: users can configure operator to set firewall mode for proxies
Users can now pass PROXY_FIREWALL_MODE={nftables,auto,iptables} to the operator to make it create ingress/egress proxies with that firewall mode.
Also makes sure that if an invalid firewall mode gets configured, the operator will not start provisioning proxy resources, but will instead log an error and write an error event to the related Service.
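In the operator's manifest that corresponds to something like (a minimal sketch):
```
env:
  - name: PROXY_FIREWALL_MODE
    value: nftables
```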
Updates tailscale/tailscale#9310
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Ensure that when there is an event on a Tailscale-managed Ingress or Service child resource, the right parent type gets reconciled.
Updates tailscale/tailscale#502
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The kube-apiserver proxy in the operator would only run in
auth proxy mode, but that's not always desirable. There are
situations where the proxy should just be a transparent
proxy and not inject auth headers, so do that using a new
env var APISERVER_PROXY and deprecate the AUTH_PROXY env var.
The new env var has three options: `false`, `true` and `noauth`.
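For example (a minimal sketch of the operator's env configuration):
```
env:
  - name: APISERVER_PROXY
    # "noauth" runs a transparent proxy that does not inject auth headers
    value: "noauth"
```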
Updates #8317
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Instead of confusing users, emit an event that explicitly tells the
user that HTTPS is disabled on the tailnet and that ingress may not
work until they enable it.
Updates #9141
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Previously, the operator would only monitor Services and create
a Tailscale StatefulSet which acted as an L3 proxy, proxying
traffic inbound to the Tailscale IP onto the Service's ClusterIP.
This extends that functionality to also monitor Ingress resources
where `ingressClassName=tailscale` and similarly creates a
Tailscale StatefulSet, acting as an L7 proxy instead.
Users can override the desired hostname by setting:
```
tls:
  - hosts:
      - "foo"
```
Hostnames specified under `rules` are ignored as we only create a single
host. This is emitted as an event for users to see.
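Put together, an illustrative tailscale Ingress (a sketch; names are hypothetical):
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: my-app
      port:
        number: 80
  tls:
    - hosts:
        - "foo"
```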
Fixes #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
I'm not saying it works, but it compiles.
Updates #5794
Change-Id: I2f3c99732e67fe57a05edb25b758d083417f083e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I forgot to move the defer out of the func, so the tsnet.Server
immediately closed after starting.
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
It was jumbled, doing a lot of things; this breaks it up into
the svc reconciliation and the tailscale sts reconciliation.
Prep for future commit.
Updates #502
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The client/tailscale is a stable-ish API we try not to break. Revert
the Client.CreateKey method to what it was and add a new
CreateKeyWithExpiry method to do the new thing. Also document the
expiry field and enforce that the time.Duration can't be in the
range greater than 0 and less than one second.
Updates #7143
Updates #8124 (reverts it, effectively)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Adds a parameter for create key that allows a number of seconds
(less than 90) to be specified for new keys.
Fixes https://github.com/tailscale/tailscale/issues/7965
Signed-off-by: Matthew Brown <matthew@bargrove.com>
getSingleObject can return `nil, nil`; getDeviceInfo was not handling
that case, which resulted in panics.
Fixes #7303
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Kubernetes uses SPDY/3.1, which is incompatible with HTTP/2, so disable
HTTP/2 in the transport and server.
Fixes #7645
Fixes #7646
Signed-off-by: Maisem Ali <maisem@tailscale.com>
"Device Authorization" was recently renamed to "Device Approval"
on the control side. This change updates the k8s operator to match.
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
This allows us to differentiate between the various tsnet apps that
we have like `golinks` and `k8s-operator`.
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
The dependency injection functionality has been deprecated a while back
and it'll be removed in the 0.15 release of Controller Runtime. This
changeset sets the Client after creating the Manager, instead of using
InjectClient.
Signed-off-by: Vince Prignano <vince@prigna.com>
The operator creates a fair bit of internal cluster state to manage proxying,
dumping it all in the default namespace is handy for development but rude
for production.
Updates #502
Signed-off-by: David Anderson <danderson@tailscale.com>
We used to need to do timed requeues in a few places in the reconcile logic,
and the easiest way to do that was to plumb reconcile.Result return values
around. But now we're purely event-driven, so the only thing we care about
is whether or not an error occurred.
Incidentally also fix a very minor bug where headless services would get
completely ignored, rather than reconciled into the correct state. This
shouldn't matter in practice because you can't transition from a headful
to a headless service without a deletion, but for consistency let's avoid
having a path that takes no definite action if a service of interest does
exist.
Updates #502.
Signed-off-by: David Anderson <danderson@tailscale.com>
Previously, we had to do blind timed requeues while waiting for
the tailscale hostname, because we looked up the hostname through
the API. But now the proxy container image writes back its hostname
to the k8s secret, so we get an event-triggered reconcile automatically
when the time is right.
Updates #502
Signed-off-by: David Anderson <danderson@tailscale.com>
As is convention in the k8s world, use zap for structured logging. For
development, OPERATOR_LOGGING=dev switches to a more human-readable output
than JSON.
Updates #502
Signed-off-by: David Anderson <danderson@tailscale.com>
Our reconcile loop gets triggered again when the StatefulSet object
finally disappears (in addition to when its deletion starts, as indicated
by DeletionTimestamp != 0). So, we don't need to queue additional
reconciliations to proceed with the remainder of the cleanup; that
happens organically.
Signed-off-by: David Anderson <danderson@tailscale.com>