# Overview
There are quite a few ways of running Tailscale inside a Kubernetes cluster.
This doc covers creating and managing your own Tailscale node deployments in a cluster.
If you want a higher level of automation, easier configuration, automated cleanup of stopped Tailscale devices, or a mechanism for exposing the [Kubernetes API](https://kubernetes.io/docs/concepts/overview/kubernetes-api/) server to the tailnet, take a look at [Tailscale Kubernetes operator](https://tailscale.com/kb/1236/kubernetes-operator).
:warning: Note that the manifests generated by the following commands are not intended for production use, and you will need to tweak them based on your environment and use case. For example, the commands that generate a standalone proxy manifest create a standalone `Pod`, which will not persist across cluster upgrades. :warning:
## Instructions
### Setup
1. (Optional) Create the following Secret, which will automate login.<br>
You will need to get an [auth key](https://tailscale.com/kb/1085/auth-keys/) from the [Tailscale Admin Console](https://login.tailscale.com/admin/authkeys).<br>
If you don't provide the key, you can still authenticate using the URL in the logs.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth
stringData:
  TS_AUTHKEY: tskey-...
```
1. Tailscale (v1.16+) supports storing state inside a Kubernetes Secret.
Configure RBAC to allow the Tailscale pod to read/write the `tailscale-auth` Secret.
```bash
export SA_NAME=tailscale
export TS_KUBE_SECRET=tailscale-auth
make rbac | kubectl apply -f-
```
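If you want to sanity-check the RBAC setup before deploying anything, `kubectl auth can-i` can impersonate the service account. A quick sketch, assuming the `default` namespace and the names exported above:
```bash
# Each command should print "yes" for the verbs granted by the generated Role.
kubectl auth can-i get secret/tailscale-auth \
  --as="system:serviceaccount:default:tailscale"
kubectl auth can-i update secret/tailscale-auth \
  --as="system:serviceaccount:default:tailscale"
```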
### Sample Sidecar
Running as a sidecar allows you to directly expose a Kubernetes pod over Tailscale. This is particularly useful if you do not wish to expose a service on the public internet. This method allows bi-directional connectivity between the pod and other devices on the Tailnet. You can use [ACLs](https://tailscale.com/kb/1018/acls/) to control traffic flow.
1. Create and login to the sample nginx pod with a Tailscale sidecar
```bash
make sidecar | kubectl apply -f-
# If not using an auth key, authenticate by grabbing the Login URL here:
kubectl logs nginx ts-sidecar
```
1. Check if you can connect to nginx over Tailscale:
```bash
curl http://nginx
```
Or, if you have [MagicDNS](https://tailscale.com/kb/1081/magicdns/) disabled:
```bash
curl "http://$(tailscale ip -4 nginx)"
```
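You can also inspect the node from inside the sidecar using the `tailscale` CLI bundled in the container image. A sketch, assuming the tailscaled socket lives at `/tmp/tailscaled.sock` (the path may differ between image versions):
```bash
# Show the sidecar's view of the tailnet; adjust --socket if your image
# places the tailscaled socket elsewhere.
kubectl exec nginx -c ts-sidecar -- tailscale --socket=/tmp/tailscaled.sock status
```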
#### Userspace Sidecar
You can also run the sidecar in userspace mode. The obvious benefit is that Tailscale needs fewer permissions to run; the downside is that outbound connections from the pod to the tailnet must go through either the [SOCKS proxy](https://tailscale.com/kb/1112/userspace-networking) or the HTTP proxy (see the sketch after these steps).
1. Create and login to the sample nginx pod with a Tailscale sidecar
```bash
make userspace-sidecar | kubectl apply -f-
# If not using an auth key, authenticate by grabbing the Login URL here:
kubectl logs nginx ts-sidecar
```
1. Check if you can connect to nginx over Tailscale:
```bash
curl http://nginx
```
Or, if you have [MagicDNS](https://tailscale.com/kb/1081/magicdns/) disabled:
```bash
curl "http://$(tailscale ip -4 nginx)"
```
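As mentioned above, outbound connections from the pod to the tailnet must go through the proxy in userspace mode. A sketch of the SOCKS5 path, assuming you add `TS_SOCKS5_SERVER: "localhost:1055"` to the sidecar's environment and that `curl` is available in the app container (`some-tailnet-host` is a placeholder for a device on your tailnet):
```bash
# Containers in a pod share a network namespace, so the app container can
# reach the sidecar's SOCKS5 listener on localhost.
kubectl exec nginx -c nginx -- \
  curl --socks5-hostname localhost:1055 http://some-tailnet-host
```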
### Sample Proxy
Running a Tailscale proxy allows you to provide inbound connectivity to a Kubernetes Service.
1. Provide the `ClusterIP` of the service you want to reach by either:
**Creating a new deployment**
```bash
kubectl create deployment nginx --image nginx
kubectl expose deployment nginx --port 80
export TS_DEST_IP="$(kubectl get svc nginx -o=jsonpath='{.spec.clusterIP}')"
```
**Using an existing service**
```bash
export TS_DEST_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
```
1. Deploy the proxy pod
```bash
make proxy | kubectl apply -f-
# If not using an auth key, authenticate by grabbing the Login URL here:
kubectl logs proxy
```
1. Check if you can connect to nginx over Tailscale:
```bash
curl http://proxy
```
Or, if you have [MagicDNS](https://tailscale.com/kb/1081/magicdns/) disabled:
```bash
curl "http://$(tailscale ip -4 proxy)"
```
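If the `curl` check fails, it can help to verify plain tailnet connectivity to the proxy first, from another device on your tailnet (assuming MagicDNS; otherwise substitute the proxy's Tailscale IP):
```bash
# Confirms the proxy node is reachable over Tailscale before testing the
# forwarded service itself.
tailscale ping proxy
```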
### Subnet Router
Running a Tailscale [subnet router](https://tailscale.com/kb/1019/subnets/) allows you to access
the entire Kubernetes cluster network (assuming NetworkPolicies allow) over Tailscale.
1. Identify the Pod/Service CIDRs that cover your Kubernetes cluster. These will vary depending on [which CNI](https://kubernetes.io/docs/concepts/cluster-administration/networking/) and which cloud provider you are using. Add these to the `TS_ROUTES` variable as comma-separated values (one way to discover them is sketched after these steps).
```bash
SERVICE_CIDR=10.20.0.0/16
POD_CIDR=10.42.0.0/15
export TS_ROUTES=$SERVICE_CIDR,$POD_CIDR
```
1. Deploy the subnet-router pod.
```bash
make subnet-router | kubectl apply -f-
# If not using an auth key, authenticate by grabbing the Login URL here:
kubectl logs subnet-router
```
1. In the [Tailscale admin console](https://login.tailscale.com/admin/machines), ensure that the
routes for the subnet-router are enabled.
1. Make sure that any client you want to connect from has `--accept-routes` enabled.
1. Check if you can connect to a `ClusterIP` or a `PodIP` over Tailscale:
```bash
# Get the Service IP
INTERNAL_IP="$(kubectl get svc <SVC_NAME> -o=jsonpath='{.spec.clusterIP}')"
# or, the Pod IP
# INTERNAL_IP="$(kubectl get po <POD_NAME> -o=jsonpath='{.status.podIP}')"
INTERNAL_PORT=8080
curl http://$INTERNAL_IP:$INTERNAL_PORT
```
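How you find the CIDRs for step 1 varies by distribution and CNI. On clusters where the control plane flags are readable, a sketch like this may surface them (not all managed clusters expose this information):
```bash
# Look for --cluster-cidr (Pod IPs) and --service-cluster-ip-range
# (Service IPs) in the control plane configuration.
kubectl cluster-info dump | grep -m 4 -E "cluster-cidr|service-cluster-ip-range"
```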
## Multiple replicas
Note that if you want to use the `Pod` manifests generated by the commands above in a multi-replica setup (i.e. a multi-replica `StatefulSet`), you will need to change the mechanism for storing Tailscale state, to ensure that multiple replicas are not attempting to use a single Kubernetes `Secret` to store their individual states.
To avoid proxy state clashes you can either store the state in memory or in an `emptyDir` volume, or change the provided state `Secret` name so that a unique name is generated for each replica.
### Option 1: storing in an `emptyDir`
You can mount an [`emptyDir` volume](https://kubernetes.io/docs/concepts/storage/volumes/#emptydir) and point Tailscale at it as the state store via the `TS_STATE_DIR` environment variable.
You must also set `TS_KUBE_SECRET` to an empty string.
An example:
```yaml
kind: StatefulSet
metadata:
  name: subnetrouter
spec:
  replicas: 2
  ...
  template:
    ...
    spec:
      ...
      volumes:
        - name: tsstate
          emptyDir: {}
      containers:
        - name: tailscale
          env:
            - name: TS_STATE_DIR
              value: /tsstate
            - name: TS_KUBE_SECRET
              value: ""
          volumeMounts:
            - name: tsstate
              mountPath: /tsstate
```
The downside of this approach is that the state will be lost when a `Pod` is
deleted. In practice this means that when you, for example, upgrade proxy
versions you will get a new set of Tailscale devices with different hostnames.
### Option 2: dynamically generating unique `Secret` names
If you run the proxy as a `StatefulSet`, the `Pod`s get [stable identifiers](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-network-id).
You can use that to pass an individual, static state `Secret` name to each proxy:
```yaml
kind: StatefulSet
metadata:
  name: subnetrouter
spec:
  replicas: 2
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: tailscale
          env:
            - name: TS_KUBE_SECRET
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
```
In this case, each replica stores its state in a `Secret` with the same name as its `Pod`. Because `Pod` names in a `StatefulSet` do not change when `Pod`s are recreated, proxy state persists across cluster and proxy version updates.
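To confirm the result with the `subnetrouter` StatefulSet above, each replica should end up with its own state Secret named after its Pod. Note that the Role generated during setup may be scoped to a single Secret name via `resourceNames`, so you will likely need to widen it for this pattern:
```bash
# With replicas: 2, the Pods (and thus the state Secrets) are named
# subnetrouter-0 and subnetrouter-1.
kubectl get secret subnetrouter-0 subnetrouter-1
```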