We would end up with duplicate profiles for the node, as the UserID
would have changed. In order to correctly deduplicate profiles, we
need to look at both the UserID and the NodeID. A single machine can
only ever have 1 profile per NodeID and 1 profile per UserID.
Note: the UserID of a node can change when the node is tagged/untagged,
and the NodeID of a device can change when the node is deleted, so we
need to check for both.
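To illustrate the rule, a minimal sketch (types and names here are illustrative, not the actual ipn profile code):

    package main

    import "fmt"

    // Illustrative types; the real ones live in the ipn profile code.
    type UserID int64
    type NodeID string

    type profile struct {
        Name   string
        UserID UserID
        NodeID NodeID
    }

    // sameMachine reports whether two profiles belong to the same machine.
    // A machine can have at most one profile per NodeID and one per UserID,
    // so a match on either key means the profiles should be deduplicated.
    func sameMachine(a, b profile) bool {
        return a.NodeID == b.NodeID || a.UserID == b.UserID
    }

    func main() {
        old := profile{Name: "p1", UserID: 100, NodeID: "n1"}
        // After tagging, the UserID changes but the NodeID stays the same.
        tagged := profile{Name: "p2", UserID: 200, NodeID: "n1"}
        fmt.Println(sameMachine(old, tagged)) // true → reuse the existing profile
    }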
Updates #713
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The fix in 4fc8538e2 was insufficient for IPv6. Browsers (can?) send the
IPv6 literal, even without a port number, in brackets.
Updates tailscale/corp#7948
Change-Id: I0e429d3de4df8429152c12f251ab140b0c8f6b77
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I was too late with review feedback to 513780f4f8.
Updates tailscale/corp#7948
Change-Id: I8fa3b4eba4efaff591a2d0bfe6ab4795638b7c3a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We were not updating the LoginProfile.UserProfile when a netmap
updated the UserProfile (e.g. when a node was tagged via the admin panel).
Updates #713
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This moves the NetworkLock key from a dedicated StateKey to be part of the persist.Persist struct.
This struct is stored as part of ipn.Prefs and is also the place where we store the NodeKey.
It also moves the ChonkDir from "/tka" to "/tka-profile/<profile-id>". The rename was intentional
to be able to delete the "/tka" dir if it exists.
This means that we will have a unique key per profile, and a unique directory per profile.
Note: `tailscale logout` will delete the entire profile, including any keys. It currently does not
delete the ChonkDir.
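Roughly, the shape (a sketch; field and helper names are illustrative, not the exact persist/ipn definitions):

    package example

    import "path/filepath"

    // Persist is the long-lived per-profile state; sketch only.
    type Persist struct {
        PrivateNodeKey string // the NodeKey, already stored here
        NetworkLockKey string // moved here from its own dedicated StateKey
        // ... other fields elided ...
    }

    // chonkDir returns the per-profile TKA state directory. The path is
    // "tka-profile/<profile-id>" rather than the old shared "tka", so the
    // legacy directory can be recognized and deleted.
    func chonkDir(root, profileID string) string {
        return filepath.Join(root, "tka-profile", profileID)
    }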
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We do not need to wait for it to complete. And we might have to
call Shutdown from a callback from the controlclient, which might
already be holding a lock that Shutdown requires.
Updates #713
Signed-off-by: Maisem Ali <maisem@tailscale.com>
The health package was turning into a rando dumping ground. Make a new
Warnable type instead that callers can request an instance of, and
then Set it locally in their code without the health package being
aware of all the things that are warnable. (For plenty of things the
health package will want to know details of how Tailscale works so it
can better prioritize/suppress errors, but lots of the warnings are
pretty leaf-y and unrelated.)
This just moves two of the health warnings. Can probably move more
later.
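The pattern, sketched (the real health.Warnable API may differ in details):

    package example

    import "sync"

    // Warnable is a handle a subsystem requests once and then sets or clears
    // on its own, without the health package knowing what it represents.
    type Warnable struct {
        mu  sync.Mutex
        err error
    }

    // NewWarnable registers and returns a new warnable item.
    func NewWarnable() *Warnable {
        w := &Warnable{}
        // registration with the package-level warning set elided
        return w
    }

    // Set sets the current warning, or clears it if err is nil.
    func (w *Warnable) Set(err error) {
        w.mu.Lock()
        defer w.mu.Unlock()
        w.err = err
    }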
Change-Id: I51e50e46eb633f4e96ced503d3b18a1891de1452
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This makes it so that the backend also restarts when users change;
otherwise an extra call to Start was required.
Updates #713
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Noticed this while debugging something else: we would reset all routes if
either `--advertise-exit-node` or `--advertise-routes` was set. This handles
updating them correctly.
Also added tests.
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Example output:
# Health check:
# - Some peers are advertising routes but --accept-routes is false
Also, move "tailscale status" health checks to the bottom, where they
won't be lost in large netmaps.
Updates #2053
Updates #6266
Change-Id: I5ae76a0cd69a452ce70063875cd7d974bfeb8f1a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Numerous issues have been filed concerning an inability to install and run
Tailscale headlessly in unattended mode, particularly after rebooting. The
server mode `Prefs` stored in `server-state.conf` were not being updated with
`Persist` state once the node had been successfully logged in.
Users have been working around this by finagling with the GUI to make it force
a state rewrite. This patch makes that unnecessary by ensuring the required
server mode state is updated when prefs are updated by the control client.
Fixes https://github.com/tailscale/tailscale/issues/3186
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
cmd/viewer couldn't deal with that map-of-map. Add a wrapper type
instead, which also gives us a place to add future stuff.
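The wrapper pattern looks roughly like this (type and field names are illustrative):

    package example

    // Before: a map-of-map that cmd/viewer can't generate a view for.
    //   Stats map[string]map[string]int
    //
    // After: wrap the inner map in a named struct, which the generator can
    // handle and which leaves room for future fields.
    type innerStats struct {
        Counts map[string]int
        // future fields go here
    }

    type stats struct {
        ByNode map[string]innerStats
    }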
Updates tailscale/corp#7515
Change-Id: I44a4ca1915300ea8678e5b0385056f0642ccb155
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Not for end users (unless directed by support). Mostly for ease of
development for some upcoming webserver work.
Change-Id: I43acfed217514567acb3312367b24d620e739f88
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It is currently an `ipn.PrefsView`, which means when we do a JSON roundtrip,
we go from an invalid Prefs to a valid one.
This makes it a pointer, which fixes the JSON roundtrip.
This was introduced in 0957bc5af2.
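A sketch of why the pointer matters for the round trip (simplified stand-in types, not the real ipn.Prefs/ipn.PrefsView):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type prefs struct{ WantRunning bool }

    // A view value has no notion of "absent": an invalid (zero) view marshals
    // as an empty object and unmarshals back into a *valid* zero Prefs.
    // A pointer keeps the distinction: nil stays nil across the round trip.
    type withPtr struct {
        Prefs *prefs `json:",omitempty"`
    }

    func main() {
        b, _ := json.Marshal(withPtr{}) // Prefs not set
        var out withPtr
        json.Unmarshal(b, &out)
        fmt.Println(out.Prefs == nil) // true: still "not set" after the round trip
    }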
Signed-off-by: Maisem Ali <maisem@tailscale.com>
* Plumb disablement values through some of the internals of TKA enablement.
* Transmit the node's TKA hash at the end of sync so the control plane understands each node's head.
* Implement /machine/tka/disable RPC to actuate disablement on the control plane.
There is a partner PR for the control server I'll send shortly.
Signed-off-by: Tom DNetto <tom@tailscale.com>
This exposes tags, creation time, the exit node option, and primary routes
for the current node via `tailscale status --json`.
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
Poller.C and Poller.c were duplicated for one caller. Add an accessor
returning the receive-only version instead. It'll inline.
Poller.Err was unused. Remove.
Then Poller is opaque.
The channel usage and shutdown was a bit sketchy. Clean it up.
And document some things.
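The accessor pattern, roughly (a sketch; the real Poller's field and method names may differ):

    package example

    type List []string // placeholder for the poll result type

    type Poller struct {
        c chan List // owned, written, and closed only by Poller
    }

    // Updates returns the channel of poll results as receive-only, so callers
    // can't send on or close it. Small enough for the compiler to inline.
    func (p *Poller) Updates() <-chan List { return p.c }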
Change-Id: I5669e54f51a6a13492cf5485c83133bda7ea3ce9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Running corp/ipn#TestNetworkLockE2E has a 1/300 chance of failing, and
desk-checking suggests that what's happening is that two netmaps are racing
each other to be processed through tkaSyncIfNeededLocked. This happens in the
first place because we release b.mu during network RPCs.
To fix this, we make the tka sync logic an exclusive section, so two
netmaps will need to wait for tka sync to complete serially (which is what
we would want anyway, as the second run through probably won't need to
sync).
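One way to make the sync step exclusive while still releasing b.mu around the network RPCs (a sketch, not the actual ipnlocal code):

    package example

    import "sync"

    type netmap struct{}

    type backend struct {
        mu        sync.Mutex // guards regular backend state
        tkaSyncMu sync.Mutex // held for the full duration of a TKA sync
    }

    func (b *backend) tkaSyncIfNeeded(nm *netmap) error {
        // Serialize entire syncs: a second netmap arriving mid-sync waits
        // here instead of racing the first one through the RPCs below.
        b.tkaSyncMu.Lock()
        defer b.tkaSyncMu.Unlock()

        b.mu.Lock()
        needsSync := b.needsSyncLocked(nm)
        b.mu.Unlock()
        if !needsSync {
            return nil
        }
        // ... network RPCs happen here without b.mu held ...
        return nil
    }

    func (b *backend) needsSyncLocked(nm *netmap) bool { return false }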
Signed-off-by: Tom DNetto <tom@tailscale.com>
The macOS and iOS apps that used the /localapi/v0/file-targets handler
were getting too many candidate targets; the extra targets wouldn't actually
accept the file. This is effectively just a UI glitch: the wrong hosts were
listed as valid targets on the source side.
Change-Id: I6907a5a1c3c66920e5ec71601c044e722e7cb888
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
And add a CLI/localapi and c2n mechanism to enable it for a fixed
amount of time.
Updates #1548
Change-Id: I71674aaf959a9c6761ff33bbf4a417ffd42195a7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Most visible when using tsnet.Server, but could have resulted in dropped
messages in a few other places too.
Fixes #5743
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
* tailcfg, control/controlhttp, control/controlclient: add ControlDialPlan field
This field allows the control server to provide explicit information
about how to connect to it; useful if the client's link status can
change after the initial connection, or if the DNS settings pushed by
the control server break future connections.
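The idea, sketched (field names and types here are illustrative, not the exact tailcfg definitions):

    package example

    import "net/netip"

    // ControlDialPlan is sent by the control server to tell the client which
    // addresses to try for future control connections, so a client whose link
    // or DNS state has changed since the initial connection can still reach it.
    type ControlDialPlan struct {
        Candidates []ControlIPCandidate
    }

    type ControlIPCandidate struct {
        IP       netip.Addr
        Port     uint16
        Priority int // higher is preferred; illustrative
    }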
Change-Id: I720afe6289ec27d40a41b3dcb310ec45bd7e5f3e
Signed-off-by: Andrew Dunham <andrew@tailscale.com>
It was checking whether the sshServer was initialized as a proxy, but it
could either not have been initialized yet, or Tailscale SSH could have
been disabled after it was initialized.
Also bump tailcfg.CurrentCapabilityVersion.
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This is especially helpful as we launch newer DERPs over time, and older
clients have progressively out-of-date static DERP maps baked in. After
this, as long as the client has successfully connected once, it'll cache
the most recent DERP map it knows about.
Resolves an in-code comment from @bradfitz
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
This lets the control plane make HTTP requests to nodes.
Then we can use this for future things rather than slapping more stuff
into MapResponse, etc.
Change-Id: Ic802078c50d33653ae1f79d1e5257e7ade4408fd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Updates #5435
Based on the discussion in #5435, we can better support transactional data models
by making the underlying storage layer a parameter (which can be specialized for
the request) rather than a long-lived member of Authority.
Now that Authority is just an instantaneous snapshot of state, we can do things
like provide idempotent methods and make it cloneable, too.
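The shape of the change, roughly (a sketch; the types below stand in for the tka package's storage interface and update messages):

    package example

    // Chonk stands in for the underlying storage interface; AUM for an
    // authority update message.
    type Chonk interface{}
    type AUM struct{}

    // Authority is now an instantaneous snapshot of state and holds no
    // long-lived storage member.
    type Authority struct{}

    // Inform takes the storage as a parameter, so callers can pass a
    // request-scoped (e.g. transactional) view instead of a shared handle.
    func (a *Authority) Inform(storage Chonk, updates []AUM) error {
        // ... validate and apply updates against the provided storage ...
        return nil
    }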
Signed-off-by: Tom DNetto <tom@tailscale.com>
The CapabilityFileSharingTarget capability added by eb32847d85
is meant to control the ability to share with nodes not owned by the
current user, not to restrict all sharing (the coordination server is
not currently populating the capability at all).
Fixes tailscale/corp#6669
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Hashing []any is slow since hashing of interfaces is slow.
Hashing of interfaces is slow since we pessimistically assume
that cycles can occur through them and start cycle tracking.
Drop the variadic signature of Update and fix callers to pass in
an anonymous struct so that we are hashing concrete types
near the root of the value tree.
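An illustration of the calling-convention change (stand-in types; the real hasher lives in util/deephash):

    package main

    import "fmt"

    // Stand-ins for the hasher and the config types being hashed.
    type hasher struct{}

    func update(h *hasher, v any) { fmt.Printf("hashing %T\n", v) }

    type peer struct{ Name string }
    type dnsConfig struct{ Nameservers []string }

    func main() {
        h := new(hasher)
        peers := []peer{{"a"}, {"b"}}
        dns := dnsConfig{Nameservers: []string{"100.100.100.100"}}

        // Before (slow): update(h, peers, dns) with a ...any signature hashes
        // each argument through an interface, forcing cycle tracking at the root.
        //
        // After (fast): one anonymous struct of concrete types, so the root of
        // the value tree is concrete and interface hashing is avoided.
        update(h, struct {
            Peers []peer
            DNS   dnsConfig
        }{peers, dns})
    }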
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
- A network-lock key is generated if it doesn't already exist, and stored in the StateStore. The public component is communicated to control during registration.
- If TKA state exists on the filesystem, a tailnet key authority is initialized (but nothing is done with it for now).
Signed-off-by: Tom DNetto <tom@tailscale.com>
This adds the inverse to CapabilityFileSharingSend so that senders can
identify who they can Taildrop to.
Updates #2101
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Together with 06aa141632 this minimizes
the number of NEPacketTunnelNetworkSettings updates that we have to do,
and thus avoids Chrome interrupting outstanding requests due to
(perceived) network changes.
Updates #3102
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
And remove the GCP special-casing from ipn/ipnlocal; do it only in the
forwarder for *.internal.
Fixes #4980
Fixes #4981
Change-Id: I5c481e96d91f3d51d274a80fbd37c38f16dfa5cb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This does three things:
* If you're on GCP, it adds a *.internal DNS split route to the
metadata server, so we never break GCP DNS names. This lets people
have some Tailscale nodes on GCP and some not (e.g. laptops at home)
without having to add a Tailnet-wide *.internal DNS route.
If you already have such a route, though, it won't overwrite it.
* If the 100.100.100.100 DNS forwarder has nowhere to forward to,
it forwards it to the GCP metadata IP, which forwards to 8.8.8.8.
This means there are never errNoUpstreams ("upstream nameservers not set")
errors on GCP due to e.g. mangled /etc/resolv.conf (GCP default VMs
don't have systemd-resolved, so it's likely a DNS supremacy fight).
* Makes the DNS fallback mechanism use the GCP metadata IP as a
fallback before our hosted HTTP-based fallbacks.
I created a default GCP VM from their web wizard. It has no
systemd-resolved.
I then made its /etc/resolv.conf be empty and deleted its GCP
hostnames in /etc/hosts.
I then logged in to a tailnet with no global DNS settings.
With this, tailscaled writes /etc/resolv.conf (direct mode, as no
systemd-resolved) and sets it to 100.100.100.100, which then has
regular DNS via the metadata IP and *.internal DNS via the metadata IP
as well. If the tailnet configures explicit DNS servers, those are used
instead, except for *.internal.
This also adds a new util/cloudenv package based on version/distro
where the cloud type is only detected once. We'll likely expand it in
the future for other clouds, doing variants of this change for other
popular cloud environments.
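A detect-once sketch in the spirit of version/distro (names are illustrative; the real util/cloudenv API may differ):

    package cloudenv

    import "sync"

    type Cloud string

    const (
        CloudNone Cloud = ""
        CloudGCP  Cloud = "gcp" // illustrative constant name
    )

    var (
        once   sync.Once
        cached Cloud
    )

    // Get reports which cloud we appear to be running on, probing only once.
    func Get() Cloud {
        once.Do(func() { cached = detect() })
        return cached
    }

    func detect() Cloud {
        // e.g. check DMI vendor strings or the metadata server; details elided.
        return CloudNone
    }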
Fixes #4911
RELNOTES=Google Cloud DNS improvements
Change-Id: I19f3c2075983669b2b2c0f29a548da8de373c7cf
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Client.SetExpirySooner isn't part of the state machine. Remove it from
the Client interface.
And fix a use of LocalBackend.cc without acquiring the lock that
guards that field.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
On DSM7 as a non-root user it'll run into problems.
And we haven't tested on DSM6; it might work, but I doubt it.
Updates #3802
Updates tailscale/corp#5468
Change-Id: I75729042e4788f03f9eb82057482a44b319f04f3
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Also lazify SSHServer initialization to allow restarting the server on a
subsequent `tailscale up`
Updates #3802
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Ideally we would re-establish these sessions when tailscaled comes back
up; however, we do not do that yet, so this is better than leaking the
sessions.
Updates #3802
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This fixes the "tailscale up --authkey=... --ssh" path (or any "up"
path that used Start instead of EditPrefs) which wasn't setting the
bit.
Updates #3802
Change-Id: Ifca532ec58296fedcedb5582312dfee884367ed7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>