* cmd/containerboot,kube,util/linuxfw: configure kube egress proxies to route to 1+ tailnet targets
This commit is the first part of the work to allow running multiple
replicas of the Kubernetes operator egress proxies per tailnet service, and
to allow exposing multiple tailnet services via each proxy replica.
This expands the existing iptables/nftables-based proxy configuration
mechanism.
A proxy can now be configured to route to one or more tailnet targets
via a (mounted) config file that, for each tailnet target, specifies:
- the target's tailnet IP or FQDN
- mappings from container ports (to which cluster workloads will send traffic)
to the tailnet target ports where the traffic should be forwarded.
Example config file contents:
{
  "some-svc": {
    "tailnetTarget": {
      "fqdn": "foo.tailnetxyz.ts.net",
      "ports": {
        "tcp:4006:80": {"protocol": "tcp", "matchPort": 4006, "targetPort": 80},
        "tcp:4007:443": {"protocol": "tcp", "matchPort": 4007, "targetPort": 443}
      }
    }
  }
}
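A minimal sketch of Go types that could unmarshal a config of this shape; the names are derived from the JSON keys above and may not match the repo's actual types:
```go
// Sketch only: types mirroring the example config's JSON schema.
package egressservices

// Configs maps a service name (e.g. "some-svc") to its config.
type Configs map[string]Config

// Config describes a single egress service.
type Config struct {
	TailnetTarget TailnetTarget `json:"tailnetTarget"`
}

// TailnetTarget is the tailnet target to which cluster traffic is routed.
type TailnetTarget struct {
	IP    string             `json:"ip,omitempty"`   // tailnet IP, if set
	FQDN  string             `json:"fqdn,omitempty"` // tailnet FQDN, if set
	Ports map[string]PortMap `json:"ports"`          // keyed like "tcp:4006:80"
}

// PortMap maps a container port to a tailnet target port.
type PortMap struct {
	Protocol   string `json:"protocol"`
	MatchPort  uint16 `json:"matchPort"`
	TargetPort uint16 `json:"targetPort"`
}
```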
A proxy that is configured with this config file will set up firewall rules
to route cluster traffic to the tailnet targets. It will then watch the config file
for updates, as well as monitor relevant netmap updates, and reconfigure the
firewall as needed.
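The commit doesn't say how the file is watched; for a file mounted into a Pod, one common approach is fsnotify. A sketch, assuming a hypothetical watchConfig helper:
```go
// Package watch sketches watching a mounted config file with
// github.com/fsnotify/fsnotify; the actual mechanism may differ.
package watch

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// watchConfig calls reconfigure whenever the config file's directory sees
// an event. Kubelet updates mounted files via symlink swaps, so watching
// the parent directory is more reliable than watching the file itself.
func watchConfig(path string, reconfigure func() error) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(filepath.Dir(path)); err != nil {
		return err
	}
	for {
		select {
		case <-w.Events:
			if err := reconfigure(); err != nil {
				log.Printf("reconfigure: %v", err)
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```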
This adds a bunch of new iptables/nftables functionality to make it easier to
dynamically update the firewall rules without needing to restart the proxy Pod,
and to make the rules easier to debug and understand:
- for iptables, each portmapping is a DNAT rule with a comment pointing
at the 'service', i.e. (see the Go sketch after this list):
-A PREROUTING ! -i tailscale0 -p tcp -m tcp --dport 4006 -m comment --comment "some-svc:tcp:4006 -> tcp:80" -j DNAT --to-destination 100.64.1.18:80
Additionally, there is a SNAT rule for each tailnet target to mask the source address.
- for nftables, a separate prerouting chain is created for each tailnet target
and all the portmapping rules are placed in that chain. This makes it easier
to look up rules and delete services when no longer needed.
(nftables allows hooking a custom chain to a prerouting hook, so no extra work
is needed to ensure that the rules in the service chains are evaluated).
The next steps will be to get the Kubernetes Operator to generate
the configfile and ensure it is mounted to the relevant proxy nodes.
Updates tailscale/tailscale#13406
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/k8s-operator,k8s-operator,kube: Add TSRecorder CRD + controller
Deploys tsrecorder images to the operator's cluster. S3 storage is
configured via environment variables from a k8s Secret. Currently
only supports a single tsrecorder replica, but I've tried to take early
steps towards supporting multiple replicas by e.g. having a separate
secret for auth and state storage.
Example CR:
```yaml
apiVersion: tailscale.com/v1alpha1
kind: Recorder
metadata:
  name: rec
spec:
  enableUI: true
```
Updates #13298
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
Rename kube/{types,client,api} -> kube/{kubetypes,kubeclient,kubeapi}
so that we don't need to rename the package on each import to
convey that it's Kubernetes specific.
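For illustration, at import sites the change looks like this (before/after shown as comments):
```go
// Before: the generic package name had to be renamed at each import site:
//
//	import kubeclient "tailscale.com/kube/client"
//
// After: the package name conveys the context itself:
//
//	import "tailscale.com/kube/kubeclient"
```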
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Further split the kube package into kube/{client,api,types}. This is so that
consumers who only need constants/static types don't have to import
the client and api bits.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add an additional RecorderAddrs field to the tailscale.com/cap/kubernetes
capability. RecorderAddrs will only be populated by control,
with the addresses of any tsrecorder instances matching the tags set via Recorder.
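Illustratively, the cap rule gains a control-populated field along these lines; the address type ([]netip.Addr) and surrounding field set are assumptions, not confirmed by the commit:
```go
// Sketch only: the shape of the cap rule with the new field.
package kubecap

import "net/netip"

type KubernetesCapRule struct {
	// Recorder is set by the ACL author to tags matching tsrecorder nodes.
	Recorder []string `json:"recorder,omitempty"`
	// RecorderAddrs is populated by control with the resolved addresses of
	// tsrecorder nodes matching the tags in Recorder; clients never set it.
	RecorderAddrs []netip.Addr `json:"recorderAddrs,omitempty"`
	// EnforceRecorder requires sessions to be recorded.
	EnforceRecorder bool `json:"enforceRecorder,omitempty"`
}
```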
Updates tailscale/corp#19821
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This PR is prep for adding logic to control to parse
tailscale.com/cap/kubernetes grants:
- moves the type definition of the PeerCapabilityKubernetes cap to a location
shared with control.
- updates the Kubernetes cap rule definition with fields for granting
kubectl exec session recording capabilities.
- adds a convenience function to produce a tailcfg.RawMessage from an
arbitrary cap rule, and a test for it (see the sketch further below).
An example grant defined via ACLs:
"grants": [{
"src": ["tag:eng"],
"dst": ["tag:k8s-operator"],
"app": {
"tailscale.com/cap/kubernetes": [{
"recorder": ["tag:my-recorder"]
“enforceRecorder”: true
}],
},
}
]
This grant enforces `kubectl exec` sessions from tailnet clients
matching `tag:eng`, via an API server proxy matching `tag:k8s-operator`,
to be recorded, and the recording to be sent to a tsrecorder instance
matching `tag:my-recorder`.
The type needs to be shared with control because we want
control to parse this cap and resolve tags to peer IPs.
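A minimal sketch of what the convenience function described above could look like, with RawMessage standing in for tailcfg.RawMessage (the generic signature is an assumption):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// RawMessage stands in for tailcfg.RawMessage: a string holding raw JSON.
type RawMessage string

// MarshalCapJSON JSON-encodes an arbitrary cap rule into a RawMessage.
func MarshalCapJSON[T any](capRule T) (RawMessage, error) {
	b, err := json.Marshal(capRule)
	if err != nil {
		return "", fmt.Errorf("marshalling cap rule: %w", err)
	}
	return RawMessage(b), nil
}

func main() {
	rule := struct {
		Recorder        []string `json:"recorder"`
		EnforceRecorder bool     `json:"enforceRecorder"`
	}{[]string{"tag:my-recorder"}, true}
	msg, _ := MarshalCapJSON(rule)
	fmt.Println(msg) // {"recorder":["tag:my-recorder"],"enforceRecorder":true}
}
```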
Updates tailscale/corp#19821
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms
* Allow users to pre-create empty state Secrets
* Add a fake internal kube client, test functionality that has dependencies on kube client operations.
* Fix an issue where interactive login was not allowed in an edge case where the state Secret does not exist
* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret, if it's determined that these operations will be needed
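Conceptually, such a permission check matches Kubernetes' SelfSubjectAccessReview. A sketch using client-go (the repo uses its own minimal kube client, so this is illustrative, not the actual implementation):
```go
// Package kubeperm is a conceptual sketch, not the repo's kube client.
package kubeperm

import (
	"context"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// canSecret reports whether the current service account may perform verb
// ("create", "patch", ...) on Secrets in namespace ns.
func canSecret(ctx context.Context, cs *kubernetes.Clientset, ns, verb string) (bool, error) {
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: ns,
				Verb:      verb,
				Resource:  "secrets",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, sar, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}
```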
Updates tailscale/tailscale#11170
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
We would call Update on the Secret, but that was racy and would occasionally
fail. Instead, use Patch whenever we can.
Fixes errors like
```
boot: 2023/08/29 01:03:53 failed to set serve config: sending serve config: updating config: writing ServeConfig to StateStore: Operation cannot be fulfilled on secrets "ts-webdav-kfrzv-0": the object has been modified; please apply your changes to the latest version and try again
{"level":"error","ts":"2023-08-29T01:03:48Z","msg":"Reconciler error","controller":"ingress","controllerGroup":"networking.k8s.io","controllerKind":"Ingress","Ingress":{"name":"webdav","namespace":"default"},"namespace":"default","name":"webdav","reconcileID":"96f5cfed-7782-4834-9b75-b0950fd563ed","error":"failed to provision: failed to create or get API key secret: Operation cannot be fulfilled on secrets \"ts-webdav-kfrzv-0\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tsigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:324\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tsigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:265\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\tsigs.k8s.io/controller-runtime@v0.15.0/pkg/internal/controller/controller.go:226"}
```
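The difference, sketched with client-go types (the repo uses its own kube client): Update submits a whole object whose resourceVersion must still be current, while a JSON merge patch mutates one key server-side and so can't hit that optimistic-concurrency conflict:
```go
// Package kubepatch is illustrative; names here are not the repo's API.
package kubepatch

import (
	"context"
	"encoding/base64"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setSecretKey sets a single key in a Secret without read-modify-write.
func setSecretKey(ctx context.Context, cs *kubernetes.Clientset, ns, name, key string, val []byte) error {
	// Secret data values are base64-encoded in the API's JSON representation.
	patch := fmt.Sprintf(`{"data":{%q:%q}}`, key, base64.StdEncoding.EncodeToString(val))
	_, err := cs.CoreV1().Secrets(ns).Patch(ctx, name, types.MergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}
```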
Updates #502
Updates #7895
Signed-off-by: Maisem Ali <maisem@tailscale.com>
For stores like k8s secrets we need to dial out to the k8s API as though Tailscale
wasn't running. The issue currently only manifests when you try to use an exit node
while running inside a k8s cluster and are trying to use Kubernetes secrets as the
backing store.
This doesn't address cmd/containerboot, which I'll do in a follow up.
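A sketch of the general shape, assuming a hypothetical SetDialer hook on the kube client: the store injects a dialer bound to the underlying network stack, so API server traffic is never routed into the tailnet or out an exit node:
```go
// Package kubeclient here is a trimmed-down illustration, not the real API.
package kubeclient

import (
	"context"
	"net"
	"net/http"
)

// Client is a stand-in for the kube API client.
type Client struct {
	hc *http.Client
}

// SetDialer lets callers (e.g. ipn/store/kubestore) supply a dialer that
// bypasses tailscale routing, so requests to the k8s API behave as though
// Tailscale weren't running.
func (c *Client) SetDialer(dial func(ctx context.Context, network, addr string) (net.Conn, error)) {
	c.hc.Transport = &http.Transport{DialContext: dial}
}
```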
Updates #7695
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We had two implementations of the kube client; merge them.
containerboot was also using a raw http.Transport; this change has the
side effect of making it use an http.Client.
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
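For Go files, the resulting header looks like this (two comment lines, no date):
```go
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause
```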
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>