Palo Alto firewalls typically have a hard NAT, but they also have a mode
called Persistent DIPP that is supposed to provide consistent port
mapping suitable for STUN resolution of public ports. Persistent DIPP
works initially on most Palo Alto firewalls, but some models/software
versions have a bug, which this change works around.
The bug symptom presents as follows:
- STUN sessions initially resolve a consistent public IP:port.
- Much later, netchecks report the same IP:port for a subset of
sessions, most often the user's active DERP and/or the port related
to sustained traffic.
- The broader set of DERPs in a full netcheck will now consistently
observe a new IP:port.
- After this point, new inbound connections will only
succeed to the newly observed IP:port, and existing/old sessions will
only work to the old binding.
In this patch we now advertise the lowest-latency global endpoint
discovered, as we always have, and in addition any global endpoints that
are observed more than once in a single netcheck report. This should
provide viable endpoints for potential connection establishment across
a NAT with this behavior.
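For illustration, a minimal sketch (not the actual magicsock code; names assumed) of selecting the additional endpoints described above, i.e. any global IP:port observed more than once within one netcheck report:
```
package sketch

import "net/netip"

// repeatedEndpoints returns any global IP:port observed more than once across
// the STUN observations in a single netcheck report.
func repeatedEndpoints(observed []netip.AddrPort) []netip.AddrPort {
	count := make(map[netip.AddrPort]int)
	for _, ap := range observed {
		count[ap]++
	}
	var extra []netip.AddrPort
	for ap, n := range count {
		if n > 1 {
			extra = append(extra, ap)
		}
	}
	return extra
}
```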
Updates tailscale/corp#19106
Signed-off-by: James Tucker <james@tailscale.com>
Fixes tailscale/tailscale#10393
Fixes tailscale/corp#15412
Fixes tailscale/corp#19808
On Apple platforms, exit nodes and subnet routers have been unable to relay pings from Tailscale devices to non-Tailscale devices due to sandbox restrictions imposed on our network extensions by Apple. The sandbox prevented the code in netstack.go from spawning the `ping` process which we were using.
Replace that exec call with logic to send an ICMP echo request directly, which appears to work in userspace, and not trigger a sandbox violation in the syslog.
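For reference, a minimal sketch of the general approach (not the actual netstack.go change), assuming golang.org/x/net/icmp with an unprivileged "udp4" datagram socket, so no exec and no raw socket is needed:
```
package sketch

import (
	"net"
	"os"
	"time"

	"golang.org/x/net/icmp"
	"golang.org/x/net/ipv4"
)

// ping sends a single ICMP echo request from userspace and waits for the reply.
func ping(dst net.IP) error {
	c, err := icmp.ListenPacket("udp4", "0.0.0.0") // unprivileged ICMP
	if err != nil {
		return err
	}
	defer c.Close()

	msg := icmp.Message{
		Type: ipv4.ICMPTypeEcho,
		Body: &icmp.Echo{ID: os.Getpid() & 0xffff, Seq: 1, Data: []byte("tailscale ping")},
	}
	b, err := msg.Marshal(nil)
	if err != nil {
		return err
	}
	if _, err := c.WriteTo(b, &net.UDPAddr{IP: dst}); err != nil {
		return err
	}

	_ = c.SetReadDeadline(time.Now().Add(5 * time.Second))
	buf := make([]byte, 1500)
	n, _, err := c.ReadFrom(buf)
	if err != nil {
		return err
	}
	_, err = icmp.ParseMessage(ipv4.ICMPTypeEchoReply.Protocol(), buf[:n])
	return err
}
```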
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
This adds a new `UserLogf` field to the `Server` struct.
When set, any logs generated by Server are logged using
`UserLogf`, and all spammy backend logs are logged to `Logf`.
If `UserLogf` is unset, we default to `log.Printf`, and
if `Logf` is unset we discard all the spammy logs.
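A minimal sketch of that fallback behavior (field semantics from this change; method names assumed, not the exact tsnet code):
```
package sketch

import "log"

// Server is a stand-in for the Server struct with the two logging hooks.
type Server struct {
	Logf     func(format string, args ...any) // spammy backend logs
	UserLogf func(format string, args ...any) // logs the user should see
}

func (s *Server) logf(format string, args ...any) {
	if s.Logf != nil {
		s.Logf(format, args...)
	}
	// Logf unset: discard the spammy logs.
}

func (s *Server) userLogf(format string, args ...any) {
	if s.UserLogf != nil {
		s.UserLogf(format, args...)
		return
	}
	log.Printf(format, args...) // UserLogf unset: default to log.Printf
}
```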
Fixes #12094
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Turn off stateful filtering for egress proxies so that cluster
traffic can be forwarded to the tailnet.
Allow configuring the stateful filter via the tailscaled config file.
Deprecate EXPERIMENTAL_TS_CONFIGFILE_PATH env var and introduce a new
TS_EXPERIMENTAL_VERSIONED_CONFIG env var that can be used to provide
containerboot a directory that should contain one or more
tailscaled config files named cap-<tailscaled-cap-version>.hujson.
Containerboot will pick the one with the newest capability version
that is not newer than its current capability version.
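A minimal sketch (not containerboot's actual code) of that selection rule, picking the cap-<N>.hujson file with the highest capability version that does not exceed the current one:
```
package sketch

import (
	"fmt"
	"os"
	"path/filepath"
)

// pickConfig returns the config file in dir named cap-<N>.hujson with the
// highest N that is not newer than curCap.
func pickConfig(dir string, curCap int) (string, error) {
	ents, err := os.ReadDir(dir)
	if err != nil {
		return "", err
	}
	best := -1
	for _, e := range ents {
		var v int
		if _, err := fmt.Sscanf(e.Name(), "cap-%d.hujson", &v); err != nil {
			continue
		}
		if v <= curCap && v > best {
			best = v
		}
	}
	if best == -1 {
		return "", fmt.Errorf("no compatible tailscaled config in %s", dir)
	}
	return filepath.Join(dir, fmt.Sprintf("cap-%d.hujson", best)), nil
}
```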
Proxies with this change will not work with older Tailscale
Kubernetes operator versions - users must ensure that
the deployed operator is at the same version as the proxies
or newer (with up to 4 versions of skew).
Updates tailscale/tailscale#12061
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Maisem Ali <maisem@tailscale.com>
We were missing `snat-subnet-routes`, `stateful-filtering`
and `netfilter-mode`. Add those to set too.
Fixes #12061
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We are now publishing nameserver images to tailscale/k8s-nameserver,
so we can start defaulting the images if users haven't set
them explicitly, same as we already do with proxy images.
The nameserver images are currently only published for unstable
track, so we have to use the static 'unstable' tag.
Once we start publishing to stable, we can make the operator
default to its own tag (because then we'll know that for each
operator tag X there is also a nameserver tag X, as we always
cut all images for a given tag).
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The CLI's "up" is kinda chaotic and LocalBackend.Start is kinda
chaotic and they both need to be redone/deleted (respectively), but
this fixes some buggy behavior meanwhile. We were previously calling
StartLoginInteractive (to start the controlclient's RegisterRequest)
redundantly in some cases, causing test flakes depending on timing and
up's weird state machine.
We only need to call StartLoginInteractive in the client if Start itself
doesn't. But Start doesn't tell us that. So cheat a bit and put the
information about whether there's a current NodeKey in the ipn.Status.
It used to be accessible over LocalAPI via GetPrefs as a private key but
we removed that for security. But a bool is fine.
So then only call StartLoginInteractive if that bool is false and don't
do it in the WatchIPNBus loop.
Fixes #12028
Updates #12042
Change-Id: I0923c3f704a9d6afd825a858eb9a63ca7c1df294
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
There was a small window in ipnserver after we assigned a LocalBackend
to the ipnserver's atomic but before we Start'ed it where our
initialization Start could conflict with API calls from the LocalAPI.
Simplify that a bit and lay out the rules in the docs.
Updates #12028
Change-Id: Ic5f5e4861e26340599184e20e308e709edec68b1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This way the default gets populated on first start, when no existing
state exists to migrate. Also fix `ipn.PrefsFromBytes` to preserve empty
fields, rather than layering `NewPrefs` values on top.
Updates https://github.com/tailscale/corp/issues/19623
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
The CLI "up" command is a historical mess, both on the CLI side and
the LocalBackend side. We're getting closer to cleaning it up, but in
the meantime it was again implicated in flaky tests.
In this case, the background goroutine running WatchIPNBus was very
occasionally running enough to get to its StartLoginInteractive call
before the original goroutine did its Start call. That meant
integration tests were very rarely but sometimes logging in with the
default control plane URL out on the internet
(controlplane.tailscale.com) instead of the localhost control server
for tests.
This also might've affected new Headscale etc users on initial "up".
Fixes #11960
Fixes #11962
Change-Id: I36f8817b69267a99271b5ee78cb7dbf0fcc0bd34
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We'd like to use tsdial.Dialer.UserDial instead of SystemDial for DNS over TCP.
This is primarily necessary to properly dial internal DNS servers accessible
over Tailscale and subnet routes. However, to avoid issues when switching
between Wi-Fi and cellular, we need to ensure that we don't retain connections
to any external addresses on the old interface. Therefore, we need to determine
which dialer to use internally based on the configured routes.
This plumbs routes and localRoutes from router.Config to tsdial.Dialer,
and updates UserDial to use either the peer dialer or the system dialer,
depending on the network address and the configured routes.
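A minimal sketch of the routing decision described above (not the actual tsdial code; names and the CGNAT check are illustrative): dial via the peer dialer when the destination is a Tailscale address or falls inside a configured route, otherwise use the system dialer.
```
package sketch

import "net/netip"

var tsCGNAT = netip.MustParsePrefix("100.64.0.0/10")

// dialViaPeer reports whether an address should be dialed with the peer
// (tailnet) dialer rather than the system dialer, given the configured routes.
func dialViaPeer(ip netip.Addr, routes []netip.Prefix) bool {
	if tsCGNAT.Contains(ip) {
		return true
	}
	for _, r := range routes {
		if r.Contains(ip) {
			return true
		}
	}
	return false
}
```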
Updates tailscale/corp#18725
Fixes#4529
Signed-off-by: Nick Khyl <nickk@tailscale.com>
The netcheck client, when no UDP is available, probes distance using
HTTPS.
Several problems:
* It probes using /derp/latency-check.
* But cmd/derper serves the handler at /derp/probe
* Despite the difference, it worked by accident until c8f4dfc8c0,
which made netcheck's probe require a 2xx status code.
* In tests, we only use derphttp.Handler, so the cmd/derper-installed
mux routes aren't present, so there's no probe. That breaks
tests in airplane mode. netcheck.Client then reports "unexpected
HTTP status 426" (Upgrade Required).
This makes derp handle both /derp/probe and /derp/latency-check
equivalently, and in both cmd/derper and derphttp.Handler standalone
modes.
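A minimal sketch of the idea (not the actual derp server code; handler body assumed): register the same probe handler under both paths so either URL returns a 2xx.
```
package sketch

import "net/http"

// registerProbe serves one probe handler at both paths so netcheck's HTTPS
// latency probe succeeds against either.
func registerProbe(mux *http.ServeMux) {
	probe := func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case "HEAD", "GET":
			w.WriteHeader(http.StatusOK)
		default:
			http.Error(w, "bogus probe method", http.StatusMethodNotAllowed)
		}
	}
	mux.HandleFunc("/derp/probe", probe)
	mux.HandleFunc("/derp/latency-check", probe)
}
```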
I noticed this when wgengine/magicsock TestActiveDiscovery was failing
in airplane mode (no wifi). It still doesn't pass, but it gets
further.
Fixes #11989
Change-Id: I45213d4bd137e0f29aac8bd4a9ac92091065113f
Not buying wifi on a short flight is a good way to find tests
that require network. Whoops.
Updates #cleanup
Change-Id: Ibe678e9c755d27269ad7206413ffe9971f07d298
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
cmd/k8s-operator: optionally update dnsrecords Configmap with DNS records for proxies.
This commit adds functionality to automatically populate
DNS records for the in-cluster ts.net nameserver
to allow cluster workloads to resolve MagicDNS names
associated with operator's proxies.
The records are created as follows:
* For tailscale Ingress proxies there will be
a record mapping the MagicDNS name of the Ingress
device and each proxy Pod's IP address.
* For cluster egress proxies, configured via
tailscale.com/tailnet-fqdn annotation, there will be
a record for each proxy Pod, mapping
the MagicDNS name of the exposed
tailnet workload to the proxy Pod's IP.
No records will be created for any other proxy types.
Records will only be created if users have configured
the operator to deploy an in-cluster ts.net nameserver
by applying tailscale.com/v1alpha1.DNSConfig.
It is the user's responsibility to add the ts.net nameserver
as a stub nameserver for ts.net DNS names.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configuration-of-stub-domain-and-upstream-nameserver-using-coredns
https://cloud.google.com/kubernetes-engine/docs/how-to/kube-dns#upstream_nameservers
See also https://github.com/tailscale/tailscale/pull/11017
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This configures localClient correctly during flag parsing, so that the --socket
option is effective when generating tab-completion results. For example, the
following would not connect to the system Tailscale for tab-completion results:
tailscale --socket=/tmp/tailscaled.socket switch <TAB>
Updates #3793
Signed-off-by: Paul Scott <paul@tailscale.com>
cmd/k8s-operator/deploy/chart: allow users to configure additional labels for the operator's Pod via Helm chart values.
Fixes #11947
Signed-off-by: Gabe Gorelick <gabe@hightouch.io>
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.
Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
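A minimal sketch, assuming miekg/dns, of the kind of handler described above; the record map, names, and port are illustrative only, not the actual cmd/k8s-nameserver code:
```
package main

import (
	"log"
	"net"

	"github.com/miekg/dns"
)

// records would be populated from the ConfigMap mounted at /config.
var records = map[string]net.IP{
	"proxy.tailnet-xyz.ts.net.": net.ParseIP("10.0.0.12"),
}

func handle(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	q := r.Question[0]
	ip, ok := records[q.Name]
	if q.Qtype != dns.TypeA || !ok {
		m.SetRcode(r, dns.RcodeNameError) // NXDOMAIN
	} else {
		m.Answer = append(m.Answer, &dns.A{
			Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
			A:   ip,
		})
	}
	w.WriteMsg(m)
}

func main() {
	dns.HandleFunc("ts.net.", handle)
	log.Fatal((&dns.Server{Addr: ":1053", Net: "udp"}).ListenAndServe())
}
```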
The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.
DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Before attempting to enable IPv6 forwarding in the proxy init container
check if the relevant module is found, else the container crashes
on hosts that don't have it.
Updates #11860
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
cmd/containerboot,kube,ipn/store/kubestore: allow interactive login and empty state Secrets, check perms
* Allow users to pre-create empty state Secrets
* Add a fake internal kube client, test functionality that has dependencies on kube client operations.
* Fix an issue where interactive login was not allowed in an edge case where state Secret does not exist
* Make the CheckSecretPermissions method report whether we have permissions to create/patch a Secret if it's determined that these operations will be needed
Updates tailscale/tailscale#11170
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
In prep for most of the package funcs in net/interfaces to become
methods in a long-lived netmon.Monitor that can cache things. (Many
of the funcs are very heavy to call regularly, whereas the long-lived
netmon.Monitor can subscribe to things from the OS and remember
answers to questions it's asked regularly later)
Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299
Change-Id: Ie4e8dedb70136af2d611b990b865a822cd1797e5
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
... in prep for merging the net/interfaces package into net/netmon.
This is a no-op change that updates a bunch of the API signatures ahead of
a future change to actually move things (and remove the type alias)
Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299
Change-Id: I477613388f09389214db0d77ccf24a65bff2199c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Modifies containerboot to wait on tailscaled process
only, not on any child process of containerboot.
Waiting on any subprocess was racing with Go's
exec.Cmd.Run, which is used to run iptables commands and
which starts its own subprocesses and waits on them.
Containerboot itself does not run anything else
except for tailscaled, so there shouldn't be a need
to wait on anything else.
Updates tailscale/tailscale#11593
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached. But first (this change and others)
we need to make sure the one netmon.Monitor is plumbed everywhere.
Some notable bits:
* tsdial.NewDialer is added, taking a now-required netmon
* because a tsdial.Dialer always has a netmon, anything taking both
a Dialer and a NetMon is now redundant; take only the Dialer and
get the NetMon from that if/when needed.
* netmon.NewStatic is added, primarily for tests
Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299
Change-Id: I877f9cb87618c4eb037cee098241d18da9c01691
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This has been a TODO for ages. Time to do it.
The goal is to move more network state accessors to netmon.Monitor
where they can be cheaper/cached.
Updates tailscale/corp#10910
Updates tailscale/corp#18960
Updates #7967
Updates #3299
Change-Id: I60fc6508cd2d8d079260bda371fc08b6318bcaf1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Adds a new .spec.metrics field to ProxyClass to allow users to optionally serve
client metrics (tailscaled --debug) on <Pod-IP>:9001.
Metrics cannot currently be enabled for proxies that egress traffic to tailnet
and for Ingress proxies with tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation
(because they currently forward all cluster traffic to their respective backends).
The assumption is that users will want to have these metrics enabled
continuously to be able to monitor proxy behaviour (as opposed to enabling
them temporarily for debugging). Hence we expose them on the Pod IP to make it
easier to consume them, e.g. via a Prometheus PodMonitor.
Updates tailscale/tailscale#11292
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This adds a health.Tracker to tsd.System, accessible via
a new tsd.System.HealthTracker method.
In the future, that new method will return a tsd.System-specific
HealthTracker, so multiple tsnet.Servers in the same process are
isolated. For now, though, it just always returns the temporary
health.Global value. That permits incremental plumbing over a number
of changes. When the second to last health.Global reference is gone,
then the tsd.System.HealthTracker implementation can return a private
Tracker.
The primary plumbing this does is adding it to LocalBackend and its
dozen and change health calls. A few misc other callers are also
plumbed. Subsequent changes will flesh out other parts of the tree
(magicsock, controlclient, etc).
Updates #11874
Updates #4136
Change-Id: Id51e73cfc8a39110425b6dc19d18b3975eac75ce
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
* cmd/containerboot,util/linuxfw: support proxy backends specified by DNS name
Adds support for optionally configuring containerboot to proxy
traffic to backends configured by passing TS_EXPERIMENTAL_DEST_DNS_NAME env var
to containerboot.
Containerboot will periodically (every 10 minutes) attempt to resolve
the DNS name and ensure that all traffic sent to the node's
tailnet IP gets forwarded to the resolved backend IP addresses.
Currently:
- if the firewall mode is iptables, traffic will be load balanced
across the backend IP addresses using round robin. There are
no health checks for whether the IPs are reachable.
- if the firewall mode is nftables, traffic will only be forwarded
to the first IP address in the list. This is to be improved.
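A minimal sketch (not containerboot's actual code) of the periodic re-resolution described above; the function shape and the firewall-update callback are assumptions:
```
package sketch

import (
	"context"
	"log"
	"net"
	"net/netip"
	"time"
)

// watchBackend re-resolves the backend FQDN every 10 minutes and hands the
// addresses to a firewall-rule updater.
func watchBackend(ctx context.Context, fqdn string, update func([]netip.Addr)) {
	t := time.NewTicker(10 * time.Minute)
	defer t.Stop()
	for {
		addrs, err := net.DefaultResolver.LookupNetIP(ctx, "ip4", fqdn)
		if err != nil {
			log.Printf("resolving %q: %v", fqdn, err)
		} else {
			update(addrs)
		}
		select {
		case <-ctx.Done():
			return
		case <-t.C:
		}
	}
}
```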
* cmd/k8s-operator: support ExternalName Services
Adds support for exposing endpoints, accessible from within
a cluster to the tailnet via DNS names using ExternalName Services.
This can be done by annotating the ExternalName Service with
tailscale.com/expose: "true" annotation.
The operator will deploy a proxy configured to route tailnet
traffic to the backend IPs that service.spec.externalName
resolves to. The backend IPs must be reachable from the operator's
namespace.
Updates tailscale/tailscale#10606
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
There is an undocumented 16KiB limit for text log messages.
However, the limit for JSON messages is 256KiB.
Even worse, logging JSON as text results in significant overhead
since each double quote needs to be escaped.
Instead, use logger.Logf.JSON to explicitly log the info as JSON.
We also modify osdiag to return the information as structured data
rather than implicitly have the package log on our behalf.
This gives more control to the caller on how to log.
Updates #7802
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Creates new QNAP builder target, which builds go binaries then uses
docker to build into QNAP packages. Much of the docker/script code
here is pulled over from https://github.com/tailscale/tailscale-qpkg,
with adaptation into our builder structures.
The qnap/Tailscale folder contains static resources needed to build
Tailscale qpkg packages, and is an exact copy of the existing folder
in the tailscale-qpkg repo.
Builds can be run with:
```
sudo ./tool/go run ./cmd/dist build qnap
```
Updates tailscale/tailscale-qpkg#135
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
Kubernetes cluster domain defaults to 'cluster.local', but can also be customized.
We need to determine cluster domain to set up in-cluster forwarding to our egress proxies.
This was previously hardcoded to 'cluster.local', so the egress proxies were not usable in clusters with custom domains.
This PR ensures that we attempt to determine the cluster domain by parsing /etc/resolv.conf.
In case the cluster domain cannot be determined from /etc/resolv.conf, we fall back to 'cluster.local'.
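A minimal sketch (not the operator's actual parser) of deriving the cluster domain from the "search" entries in /etc/resolv.conf, with 'cluster.local' as the fallback:
```
package sketch

import (
	"os"
	"strings"
)

// clusterDomain returns the suffix after ".svc." in a resolv.conf search
// entry, e.g. "cluster.local" from "default.svc.cluster.local".
func clusterDomain() string {
	const fallback = "cluster.local"
	b, err := os.ReadFile("/etc/resolv.conf")
	if err != nil {
		return fallback
	}
	for _, line := range strings.Split(string(b), "\n") {
		fields := strings.Fields(line)
		if len(fields) < 2 || fields[0] != "search" {
			continue
		}
		for _, d := range fields[1:] {
			if i := strings.Index(d, ".svc."); i >= 0 {
				return strings.TrimSuffix(d[i+len(".svc."):], ".")
			}
		}
	}
	return fallback
}
```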
Updates tailscale/tailscale#10399,tailscale/tailscale#11445
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Some editions of Windows server share the same build number as their
client counterpart; we must use an additional field found in the OS
version information to distinguish between them.
Even though "Distro" has Linux connotations, it is the most appropriate
hostinfo field. What is Windows Server if not an alternate distribution
of Windows? This PR populates Distro with "Server" when applicable.
Fixes #11785
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
The approach is lifted from cobra: `tailscale completion bash` emits a bash
script for configuring the shell's autocomplete:
. <( tailscale completion bash )
so that typing:
tailscale st<TAB>
invokes:
tailscale completion __complete -- st
RELNOTE=tailscale CLI now supports shell tab-completion
Fixes #3793
Signed-off-by: Paul Scott <paul@tailscale.com>
Seems to deflake tstest/integration tests. I can't reproduce it
anymore on one of my VMs that was consistently flaking after a dozen
runs before. Now I can run hundreds of times.
Updates #11649
Fixes #7036
Change-Id: I2f7d4ae97500d507bdd78af9e92cd1242e8e44b8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Adds new ProxyClass.spec.statefulSet.pod.{tailscaleContainer,tailscaleInitContainer}.Env fields
that allow users to provide key-value pairs that will be set as env vars for the respective containers.
Allow overriding all containerboot env vars,
but warn that this is not supported and might break (in docs + a warning when validating ProxyClass).
Updates tailscale/tailscale#10709
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
At least in userspace-networking mode.
Fixes #11361
Change-Id: I78d33f0f7e05fe9e9ee95b97c99b593f8fe498f2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Changes made:
* Avoid "encoding/json" for JSON processing, and instead use
"github.com/go-json-experiment/json/jsontext".
Use jsontext.Value.IsValid for validation, which is much faster.
Use jsontext.AppendQuote instead of our own JSON escaping.
* In drainPending, use a different maxLen depending on lowMem.
In lowMem mode, it is better to perform multiple uploads
than it is to construct a large body that OOMs the process.
* In drainPending, if an error is encountered draining,
construct an error message in the logtail JSON format
rather than something that is invalid JSON.
* In appendTextOrJSONLocked, use jsontext.Decoder to check
whether the input is a valid JSON object. This is faster than
the previous approach of unmarshaling into map[string]any and
then re-marshaling that data structure.
This is especially beneficial for network flow logging,
which produces relatively large JSON objects.
* In appendTextOrJSONLocked, enforce maxSize on the input.
If too large, then we may end up in a situation where the logs
can never be uploaded because it exceeds the maximum body size
that the Tailscale logs service accepts.
* Use "tailscale.com/util/truncate" to properly truncate a string
on valid UTF-8 boundaries.
* In general, remove unnecessary spaces in JSON output.
Performance:
name old time/op new time/op delta
WriteText 776ns ± 2% 596ns ± 1% -23.24% (p=0.000 n=10+10)
WriteJSON 110µs ± 0% 9µs ± 0% -91.77% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
WriteText 448B ± 0% 0B -100.00% (p=0.000 n=10+10)
WriteJSON 37.9kB ± 0% 0.0kB ± 0% -99.87% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
WriteText 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
WriteJSON 1.08k ± 0% 0.00k ± 0% -99.91% (p=0.000 n=10+10)
For text payloads, this is 1.30x faster.
For JSON payloads, this is 12.2x faster.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Request ID generation appears prominently in some services' cumulative
allocation rate, and while this does not eradicate the issue (the API
still makes UUID objects), it does improve the overhead of this API and
reduce the amount of garbage that it produces.
Updates tailscale/corp#18266
Updates tailscale/corp#19054
Signed-off-by: James Tucker <james@tailscale.com>
This removes a potential boot delay for certain boot topologies that
block on ExecStartPre steps with socket-activation dependencies on
other system services (such as systemd-resolved and NetworkManager).
Also rename cleanup to clean up in affected/immediately nearby places
per code review commentary.
Fixes #11599
Signed-off-by: James Tucker <james@tailscale.com>
Also capitalises the start of all ShortHelp, allows subcommands to be hidden
with a "HIDDEN: " prefix in their ShortHelp, and adds a TS_DUMP_HELP envknob
to look at all --help messages together.
Fixes #11664
Signed-off-by: Paul Scott <paul@tailscale.com>
And add a test.
Regression from a5e1f7d703
Fixes tailscale/corp#19036
Change-Id: If90984049af0a4820c96e1f77ddf2fce8cb3043f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
cmd/tailscale/cli: respect $KUBECONFIG
* `$KUBECONFIG` is `$PATH`-like: it defines a *list*.
`tailscale config kubeconfig` works like the rest of the
ecosystem: if $KUBECONFIG is set, it writes to the first existing
file in the list; if none exist, then to the final entry in the list.
* if `$KUBECONFIG` is an empty string, the old logic takes over.
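A minimal sketch (not the CLI's actual code) of the selection rule above:
```
package sketch

import (
	"os"
	"path/filepath"
)

// kubeconfigPath picks the kubeconfig to write: the first existing file in the
// $KUBECONFIG list, else the final entry; an empty input means the old logic applies.
func kubeconfigPath(kubeconfigEnv string) string {
	paths := filepath.SplitList(kubeconfigEnv)
	if len(paths) == 0 {
		return ""
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			return p
		}
	}
	return paths[len(paths)-1]
}
```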
Notes:
* The logic for file detection is inlined based on what `kind` does.
Technically it's a race condition, since the file could be removed/added
in between the processing steps, but the fallout shouldn't be too bad.
https://github.com/kubernetes-sigs/kind/blob/v0.23.0-alpha/pkg/cluster/internal/kubeconfig/internal/kubeconfig/paths.go
* The sandboxed (App Store) variant relies on a specific temporary
entitlement to access the ~/.kube/config file.
The entitlement is only granted to specific files, and so is not
applicable to paths supplied by the user at runtime.
While there may be other ways to achieve this access to arbitrary
kubeconfig files, it's out of scope for now.
Updates #11645
Signed-off-by: Chloé Vulquin <code@toast.bunkerlabs.net>
After:
bradfitz@book1pro tailscale.com % ./tool/go test -c ./cmd/tailscale/cli
bradfitz@book1pro tailscale.com % ./cli.test
bradfitz@book1pro tailscale.com %
Before:
bradfitz@book1pro tailscale.com % ./tool/go test -c ./cmd/tailscale/cli
bradfitz@book1pro tailscale.com % ./cli.test
Warning: funnel=on for foo.test.ts.net:443, but no serve config
run: `tailscale serve --help` to see how to configure handlers
Warning: funnel=on for foo.test.ts.net:443, but no serve config
run: `tailscale serve --help` to see how to configure handlers
USAGE
funnel <serve-port> {on|off}
funnel status [--json]
Funnel allows you to publish a 'tailscale serve'
server publicly, open to the entire internet.
Turning off Funnel only turns off serving to the internet.
It does not affect serving to your tailnet.
SUBCOMMANDS
status show current serve/funnel status
error: path must be absolute
error: invalid TCP source "localhost:5432": missing port in address
error: invalid TCP source "tcp://somehost:5432"
must be one of: localhost or 127.0.0.1
tcp://somehost:5432
error: invalid TCP source "tcp://somehost:0"
must be one of: localhost or 127.0.0.1
tcp://somehost:0
error: invalid TCP source "tcp://somehost:65536"
must be one of: localhost or 127.0.0.1
tcp://somehost:65536
error: path must be absolute
error: cannot serve web; already serving TCP
You don't have permission to enable this feature.
This also moves the color handling up to a generic spot so it's
not just one subcommand doing it itself. See
https://github.com/tailscale/tailscale/issues/11626#issuecomment-2041795129
Fixes #11643
Updates #11626
Change-Id: I3a49e659dcbce491f4a2cb784be20bab53f72303
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
At least in the case of dialing a Tailscale IP.
Updates #4529
Change-Id: I9fd667d088a14aec4a56e23aabc2b1ffddafa3fe
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This is primarily for GUIs, so they don't need to remember the most
recently used exit node themselves.
This adds some CLI commands, but they're disabled and behind the WIP
envknob, as we need to consider naming (on/off is ambiguous with
running an exit node, etc) as well as automatic exit node selection in
the future. For now the CLI commands are effectively developer debug
things to test the LocalAPI.
Updates tailscale/corp#18724
Change-Id: I9a32b00e3ffbf5b29bfdcad996a4296b5e37be7e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It was used when we only supported subnet routers on linux
and would nil out the SubnetRoutes slice as no other router
worked with it, but now we support subnet routers on ~all platforms.
The field it was setting to nil is now only used for network logging
and nowhere else, so keep the field but drop the SubnetRouterWrapper
as it's not useful.
Updates #cleanup
Change-Id: Id03f9b6ec33e47ad643e7b66e07911945f25db79
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The output was changing randomly per run, due to range over a map.
Then some misc style tweaks I noticed while debugging.
Fixes #11629
Change-Id: I67aef0e68566994e5744d4828002f6eb70810ee1
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This change switches the API to /drive, rather than the previous /tailfs,
and updates the log lines to reflect the new value. It also
cleans up some existing tailfs references.
Updates tailscale/corp#16827
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
This change updates all tailfs functions and the majority of the tailfs
variables to use the new drive naming.
Updates tailscale/corp#16827
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
This change updates the tailfs file and package names to their new
naming convention.
Updates tailscale/corp#16827
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
* cmd/k8s-nameserver,k8s-operator: add a nameserver that can resolve ts.net DNS names in cluster.
Adds a simple nameserver that can respond to A record queries for ts.net DNS names.
It can respond to queries from in-memory records, populated from a ConfigMap
mounted at /config. It dynamically updates its records as the ConfigMap
contents change.
It will respond with NXDOMAIN to queries for any other record types
(AAAA to be implemented in the future).
It can respond to queries over UDP or TCP. It runs a miekg/dns
DNS server with a single registered handler for ts.net domain names.
Queries for other domain names will be refused.
The intended use of this is:
1) to allow non-tailnet cluster workloads to talk to HTTPS tailnet
services exposed via Tailscale operator egress over HTTPS
2) to allow non-tailnet cluster workloads to talk to workloads in
the same cluster that have been exposed to tailnet over their
MagicDNS names but on their cluster IPs.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy/crds,k8s-operator: add DNSConfig CustomResource Definition
DNSConfig CRD can be used to configure
the operator to deploy kube nameserver (./cmd/k8s-nameserver) to cluster.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator,k8s-operator: optionally reconcile nameserver resources
Adds a new reconciler that reconciles DNSConfig resources.
If a DNSConfig is deployed to cluster,
the reconciler creates kube nameserver resources.
This reconciler is only responsible for creating
nameserver resources and not for populating nameserver's records.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/{k8s-operator,k8s-nameserver}: generate DNSConfig CRD for charts, append to static manifests
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
---------
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
This allows sending multiple files via Taildrop in one request.
Progress is tracked via ipn.Notify.
Updates tailscale/corp#18202
Signed-off-by: Percy Wegmann <percy@tailscale.com>
Including the double quotes (`"`) around the value made it appear as though the Helm chart expects a string value for `installCRDs`.
Signed-off-by: Chris Milson-Tokunaga <chris.w.milson@gmail.com>
This implementation uses less memory than tempfork/device,
which helps avoid OOM conditions in the iOS VPN extension when
switching to a Tailnet with ExitNode routing enabled.
Updates tailscale/corp#18514
Signed-off-by: Percy Wegmann <percy@tailscale.com>
First we had Capabilities []string. Then
https://tailscale.com/blog/acl-grants (#4217) brought CapMap, a
superset of Capabilities. Except we never really finished the
transition inside the codebase to go all-in on CapMap. This does so.
Notably, this converts Capabilities on the wire early to CapMap
internally so the code can only deal in CapMap, even against an old
control server.
In the process, this removes PeerChange.Capabilities support, which no
known control plane sent anyway. They can and should use
PeerChange.CapMap instead.
Updates #11508
Updates #4217
Change-Id: I872074e226b873f9a578d9603897b831d50b25d9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.
This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
to the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyways,
meaning that zstd was functionally always a dependency.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Fix a bug where all proxies got configured with --accept-routes set to true.
The bug was introduced in https://github.com/tailscale/tailscale/pull/11238.
Updates #cleanup
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Enable the web client over 100.100.100.100 by default. Accepting traffic
from [tailnet IP]:5252 still requires setting the `webclient` user pref.
Updates https://github.com/tailscale/tailscale/issues/10261
Signed-off-by: Mario Minardi <mario@tailscale.com>
Updates ENG-2848
We can safely disable the App Sandbox for our macsys GUI, allowing us to use `tailscale ssh` and do a few other things that we've wanted to do for a while. This PR:
- allows Tailscale SSH to be used from the macsys GUI binary when called from a CLI
- tweaks the detection of client variants in prop.go, with new functions `IsMacSys()`, `IsMacSysApp()` and `IsMacAppSandboxEnabled()`
Signed-off-by: Andrea Gottardo <andrea@gottardo.me>
When a user deletes the last cluster/user/context from their
kubeconfig via the 'kubectl delete-[cluster|user|context]' command,
kubectx sets the relevant field in kubeconfig to 'null'.
This was breaking our conversion logic that was assuming that the field
is either non-existent or an array.
Updates tailscale/corp#18320
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
- Updates API to support renaming TailFS shares.
- Adds a CLI rename subcommand for renaming a share.
- Renames the CLI subcommand 'add' to 'set' to make it clear that
this is an add or update.
- Adds a unit test for TailFS in ipnlocal
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
Previously, the configuration of which folders to share persisted across
profile changes. Now, it is tied to the user's profile.
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
So we can use it in trunkd to quiet down the logs there.
Updates #5563
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ie3177dc33f5ad103db832aab5a3e0e4f128f973f
Moving logic that manipulates a ServeConfig into receivers on the
ServeConfig in the ipn package. This is setup work to allow the
web client and cli to both utilize these shared functions to edit
the serve config.
Any logic specific to flag parsing or validation is left untouched
in the cli command. The web client will similarly manage its
validation of user's requested changes. If validation logic becomes
similar-enough, we can make a serve util for shared functionality,
which likely does not make sense in ipn.
Updates #10261
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
In preparation for changes to allow configuration of serve/funnel
from the web client, this commit moves some functionality that will
be shared between the CLI and web client to the ipn package's
serve.go file, where some other util funcs are already defined.
Updates #10261
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
This allows the Mac application to regain access to restricted
folders after restarts.
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
This adds a method to wgengine.Engine, plumbed down into magicsock,
that provides a type-safe, Tailscale-safe wrapper around a
wireguard-go device.Peer, exposing only methods that are safe for
Tailscale to use internally.
It also removes HandshakeAttempts from PeerStatusLite, which was just
added; it wasn't needed yet and is now accessible à la carte as needed
from the Peer type accessor.
None of this is used yet.
Updates #7617
Change-Id: I07be0c4e6679883e6eeddf8dbed7394c9e79c5f4
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This is so that if a backend Service gets created after the Ingress, it gets picked up by the operator.
Updates tailscale/tailscale#11251
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <1687799+knyar@users.noreply.github.com>
Containerboot containers created for the operator's ingress and egress proxies
are now always configured by passing a config file to tailscaled
(tailscaled --config <configfile-path>).
Containerboot no longer runs 'tailscale set' or 'tailscale up'.
Upgrading existing setups to this version as well as
downgrading existing setups at this version works.
Updates tailscale/tailscale#10869
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Add logic to autogenerate CRD docs.
.github/workflows/kubemanifests.yaml CI workflow will fail if the doc is out of date with regard to the current CRDs.
Docs can be refreshed by running make kube-generate-all.
Updates tailscale/tailscale#11023
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
When reverse path filtering is in strict mode on Linux, using an exit
node blocks all network connectivity. This change adds a warning about
this to `tailscale status` and the logs.
Example in `tailscale status`:
```
- not connected to home DERP region 22
- The following issues on your machine will likely make usage of exit nodes impossible: [interface "eth0" has strict reverse-path filtering enabled], please set rp_filter=2 instead of rp_filter=1; see https://github.com/tailscale/tailscale/issues/3310
```
Example in the logs:
```
2024/02/21 21:17:07 health("overall"): error: multiple errors:
not in map poll
The following issues on your machine will likely make usage of exit nodes impossible: [interface "eth0" has strict reverse-path filtering enabled], please set rp_filter=2 instead of rp_filter=1; see https://github.com/tailscale/tailscale/issues/3310
```
Updates #3310
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
Tailscaled becomes inoperative if the Tailscale Tunnel wintun adapter is abruptly removed.
wireguard-go closes the device in case of a read error, but tailscaled keeps running.
This adds detection of a closed WireGuard device, triggering a graceful shutdown of tailscaled.
It is then restarted by the tailscaled watchdog service process.
Fixes #11222
Signed-off-by: Nick Khyl <nickk@tailscale.com>
Instead of modeling remote WebDAV servers as actual
webdav.FS instances, we now just proxy traffic to them.
This not only simplifies the code, but it also allows
WebDAV locking to work correctly by making sure locks are
handled by the servers that need to (i.e. the ones actually
serving the files).
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
I missed a case in the earlier patch, so we were still sending 15s TCP
keepalives for TLS connections; that is now adjusted too.
Updates tailscale/corp#17587
Updates #3363
Signed-off-by: James Tucker <james@tailscale.com>
This adds details on how to configure node attributes to allow
sharing and accessing shares.
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
The derper sends an in-protocol keepalive every 60-65s, so frequent TCP
keepalives are unnecessary. In this tuning TCP keepalives should never
occur for a DERP client connection, as they will send an L7 keepalive
often enough to always reset the TCP keepalive timer. If, however, a
connection does not receive an ACK promptly, it will now be shut down,
which happens sooner than it would with a normal TCP keepalive tuning.
This re-tuning reduces the frequency of network traffic from derp to
client, reducing battery cost.
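A minimal sketch of the idea (values illustrative, not the actual derphttp change): make TCP keepalive probes rare enough that DERP's 60-65s in-protocol keepalive always resets the timer first.
```
package sketch

import (
	"net"
	"time"
)

// tuneKeepAlive relaxes the TCP keepalive period on a DERP client connection.
func tuneKeepAlive(c net.Conn) {
	if tc, ok := c.(*net.TCPConn); ok {
		tc.SetKeepAlive(true)
		tc.SetKeepAlivePeriod(10 * time.Minute) // assumed value; far longer than DERP's keepalive
	}
}
```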
Updates tailscale/corp#17587
Updates #3363
Signed-off-by: James Tucker <james@tailscale.com>
So derpers can check an external URL for whether to permit access
to a certain public key.
Updates tailscale/corp#17693
Change-Id: I8594de58f54a08be3e2dbef8bcd1ff9b728ab297
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This allows coverage from tests that hit multiple packages at once
to be reflected in all those packages' coverage.
Updates #cleanup
Signed-off-by: Percy Wegmann <percy@tailscale.com>
Instead of constructing the `ip:port` string ourselves, use
netip.AddrPortFrom which handles IPv6 correctly.
Updates #11164
Signed-off-by: Will Norris <will@tailscale.com>
* cmd/k8s-operator,k8s-operator: introduce proxy configuration mechanism via ProxyClass custom resource.
ProxyClass custom resource can be used to specify customizations
for the proxy resources created by the operator.
Add a reconciler that validates ProxyClass resources
and sets a Ready condition to True or False with a corresponding reason and message.
This is required because some fields (labels and annotations)
require complex validations that cannot be performed at custom resource apply time.
Reconcilers that use the ProxyClass to configure proxy resources are expected to
verify that the ProxyClass is Ready and not proceed with resource creation
if configuration from a ProxyClass that is not yet Ready is required.
If a tailscale ingress/egress Service is annotated with a tailscale.com/proxy-class annotation, look up the corresponding ProxyClass and, if it is Ready, apply the configuration from the ProxyClass to the proxy's StatefulSet.
If a tailscale Ingress has a tailscale.com/proxy-class annotation
and the referenced ProxyClass custom resource is available and Ready,
apply configuration from the ProxyClass to the proxy resources
that will be created for the Ingress.
Add a new .proxyClass field to the Connector spec.
If connector.spec.proxyClass is set to a ProxyClass that is available and Ready,
apply configuration from the ProxyClass to the proxy resources created for the Connector.
Ensure that when the Helm chart is packaged, the ProxyClass yaml is added to chart templates. Ensure that the static manifest generator adds ProxyClass yaml to operator.yaml. Regenerate operator.yaml.
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
As part of #10631, we stopped using function pointers for subcommands,
preventing us from registering platform-specific installSystemDaemon
and uninstallSystemDaemon subcommands.
Fixes #11099
Signed-off-by: Percy Wegmann <percy@tailscale.com>
The new math/rand/v2 package includes an m-local global random number
generator that can not be reseeded by the user, which is suitable for
most uses without the RNG pools we have in a number of areas of the code
base.
The new API still does not have an allocation-free way of performing
seeded operations, due to the long-term compiler bug around interface
parameter escapes and the Source interface.
This change introduces the two APIs that math/rand/v2 can not yet
replace efficiently: seeded Perm() and Shuffle() operations. This
implementation chooses to use the PCG random source from math/rand/v2,
as with sufficient compiler optimization, this source should boil down
to only two on-stack registers for random state under ideal conditions.
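A minimal sketch (function names assumed, not the actual util package code) of seeded Perm and Shuffle built on math/rand/v2's PCG source:
```
package rands

import "math/rand/v2"

// Perm returns a seeded pseudo-random permutation of [0, n).
func Perm(seed uint64, n int) []int {
	return rand.New(rand.NewPCG(seed, seed)).Perm(n)
}

// Shuffle runs a seeded shuffle over n elements via swap.
func Shuffle(seed uint64, n int, swap func(i, j int)) {
	rand.New(rand.NewPCG(seed, seed)).Shuffle(n, swap)
}
```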
Updates #17243
Signed-off-by: James Tucker <james@tailscale.com>
`os.LookupEnv` may return true if the variable is present in
the environment but is an empty string. We should only attempt
to set the OAuth config if those values are non-empty.
Updates gitops-acl-action#33
Signed-off-by: Jenny Zhang <jz@tailscale.com>
Add a WebDAV-based folder sharing mechanism that is exposed to local clients at
100.100.100.100:8080 and to remote peers via a new peerapi endpoint at
/v0/tailfs.
Add the ability to manage folder sharing via the new 'share' CLI sub-command.
Updates tailscale/corp#16827
Signed-off-by: Percy Wegmann <percy@tailscale.com>
This change fixes the format of tailscale status output when
location-based exit nodes are present.
Fixes #11065
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
Update logs for synology builds to more clearly callout which variant
is being built. The two existing variants are:
1. Sideloaded (can be manually installed on a device by anyone)
2. Package center distribution (by the tailscale team)
Updates #cleanup
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
The new read-only mode is only accessible when running `tailscale web`
by passing a new `-readonly` flag. This new mode is identical to the
existing login mode with two exceptions:
- the management client in tailscaled is not started (though if it is
already running, it is left alone)
- the client does not prompt the user to login or switch to the
management client. Instead, a message is shown instructing the user
to use other means to manage the device.
Updates #10979
Signed-off-by: Will Norris <will@tailscale.com>
* cmd/containerboot,cmd/k8s-operator/deploy/manifests: optionally forward cluster traffic via ingress proxy.
If a tailscale Ingress has tailscale.com/experimental-forward-cluster-traffic-via-ingress annotation, configure the associated ingress proxy to have its tailscale serve proxy to listen on Pod's IP address. This ensures that cluster traffic too can be forwarded via this proxy to the ingress backend(s).
In containerboot, if EXPERIMENTAL_PROXY_CLUSTER_TRAFFIC_VIA_INGRESS is set to true
and the node is a Kubernetes operator ingress proxy configured via Ingress,
make sure that traffic from within the cluster can be proxied to the ingress target.
Updates tailscale/tailscale#10499
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
gitops-pusher supports authenticating with an API key or OAuth
credentials (added in #7393). You shouldn't ever use both of those
together, so we error if both are set.
In tailscale/gitops-acl-action#24, OAuth support is being added to the
GitHub action. In that environment, both the TS_API_KEY and OAuth
variables will be set, even if they are empty values. This causes an
error in gitops-pusher which expects only one to be set.
Update gitops-pusher to check that only one set of environment variables
are non-empty, rather than just checking if they are set.
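A minimal sketch of that check (the OAuth variable names here are assumptions, not necessarily gitops-pusher's exact names; TS_API_KEY is from the text above):
```
package sketch

import (
	"log"
	"os"
)

// checkCreds requires exactly one of: an API key, or OAuth credentials.
func checkCreds() {
	apiKey := os.Getenv("TS_API_KEY")
	oauthID, oauthSecret := os.Getenv("TS_OAUTH_ID"), os.Getenv("TS_OAUTH_SECRET")
	hasOAuth := oauthID != "" && oauthSecret != ""
	switch {
	case apiKey != "" && hasOAuth:
		log.Fatal("set either TS_API_KEY or the TS_OAUTH_* variables, not both")
	case apiKey == "" && !hasOAuth:
		log.Fatal("set TS_API_KEY or the TS_OAUTH_* variables")
	}
}
```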
Updates #7393
Signed-off-by: Will Norris <will@tailscale.com>
When running as non-root non-operator user, you get this error:
```
$ tailscale serve 8080
Access denied: watch IPN bus access denied, must set ipn.NotifyNoPrivateKeys when not running as admin/root or operator
Use 'sudo tailscale serve 8080' or 'tailscale up --operator=$USER' to not require root.
```
It should fail, but the error message is confusing.
With this fix:
```
$ tailscale serve 8080
sending serve config: Access denied: serve config denied
Use 'sudo tailscale serve 8080' or 'tailscale up --operator=$USER' to not require root.
```
Updates #cleanup
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
Do not provision resources for a tailscale Ingress that has no valid backends.
Updates tailscale/tailscale#10910
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Plan9 CI is disabled. 3p dependencies do not build for the target.
Contributor enthusiasm appears to have ceased again, and no usage has
been made.
Skipped gvisor, nfpm, and k8s.
Updates #5794
Updates #8043
Signed-off-by: James Tucker <james@tailscale.com>
When tailscaled is run with "-debug 127.0.0.1:12345", these metrics are
available at:
http://localhost:12345/debug/metrics
Updates #8210
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I19db6c445ac1f8344df2bc1066a3d9c9030606f8
If there are route changes as a side effect of an app connector
configuration update, the connector configuration may want to reenter a
lock, so it must be started asynchronously.
Updates tailscale/corp#16833
Signed-off-by: James Tucker <james@tailscale.com>
This is a useful primitive for asynchronous execution of ordered work I
want to use in another change.
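A minimal sketch (not the actual package code) of such a primitive: queued funcs run asynchronously, one at a time, in submission order.
```
package sketch

import "sync"

// ExecQueue executes added funcs in order on a single background goroutine.
type ExecQueue struct {
	mu      sync.Mutex
	queue   []func()
	running bool
}

func (q *ExecQueue) Add(f func()) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.queue = append(q.queue, f)
	if !q.running {
		q.running = true
		go q.run()
	}
}

func (q *ExecQueue) run() {
	for {
		q.mu.Lock()
		if len(q.queue) == 0 {
			q.running = false
			q.mu.Unlock()
			return
		}
		f := q.queue[0]
		q.queue = q.queue[1:]
		q.mu.Unlock()
		f()
	}
}
```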
Updates tailscale/corp#16833
Signed-off-by: James Tucker <james@tailscale.com>
Also perform minor cleanups on the ctxkey package itself.
Provide guidance on when to use ctxkey.Key[T] over ctxkey.New.
Also, allow for interface kinds because the value wrapping trick
also happens to fix edge cases with interfaces in Go.
Updates #cleanup
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
To reduce the likelihood of breaking users,
if we implement stricter Exact path type matching in the future.
Updates tailscale/tailscale#10730
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
So that users have predictable label values to use when configuring network policies.
Updates tailscale/tailscale#10854
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
* cmd/k8s-operator/deploy: deploy a Tailscale IngressClass resource.
Some Ingress validating webhooks reject Ingresses with
.spec.ingressClassName for which there is no matching IngressClass.
Additionally, validate that the expected IngressClass is present,
when parsing a tailscale `Ingress`.
We currently do not utilize the IngressClass;
however, we might in the future, at which point
we might start requiring that the right class
for this controller instance actually exists.
Updates tailscale/tailscale#10820
Signed-off-by: Irbe Krumina <irbe@tailscale.com>
Co-authored-by: Anton Tolchanov <anton@tailscale.com>