I missed connecting some controlknobs.Knobs pieces in 4e91cf20a8,
which broke control knobs entirely.
Whoops.
The fix is in ipn/ipnlocal (where it makes a new controlclient), but to
atone, I also added integration tests. Those integration tests use
a new "tailscale debug control-knobs" command, which by itself might be
useful for future debugging.
Updates #9351
Change-Id: Id9c89c8637746d879d5da67b9ac4e0d2367a3f0d
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Previously, two tsnet nodes in the same process couldn't have disjoint
sets of controlknob settings from control, as both would overwrite each
other's global variables.
This plumbs a new controlknobs.Knobs type around everywhere and hangs
the knobs sent by control on that instead.
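A hedged sketch of the resulting shape; the field set here is illustrative, not the exact one in the tree:
```go
package controlknobs

import "sync/atomic"

// Knobs holds the control-plane behavior toggles for one node.
// Each tsnet/LocalBackend instance owns its own *Knobs, so two nodes
// in one process can hold different values.
type Knobs struct {
	DisableUPnP         atomic.Bool
	KeepFullWGConfig    atomic.Bool
	RandomizeClientPort atomic.Bool
}
```
Everything that previously read a package global now consults the *Knobs it was constructed with.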
Updates #9351
Change-Id: I75338646d36813ed971b4ffad6f9a8b41ec91560
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
All platforms use it at this point, including iOS, which was the
original holdout for memory reasons. There's no more reason to make it
optional.
Updates #9332
Change-Id: I743fbc2f370921a852fbcebf4eb9821e2bdd3086
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Cache the last `ClientVersion` value received from the coordination
server and pass it in the localapi `/status` response.
When running `tailscale status`, print a message if `RunningAsLatest` is
`false`.
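A minimal sketch of the CLI-side check this enables, using a stand-in type (the real fields live in ipnstate/tailcfg and the message wording is an assumption):
```go
package main

import "fmt"

// clientVersion is a stand-in for the cached value carried in the
// /status response.
type clientVersion struct {
	RunningAsLatest bool
}

func maybeWarn(cv *clientVersion) {
	if cv != nil && !cv.RunningAsLatest {
		fmt.Println("# A newer Tailscale release is available.")
	}
}

func main() {
	maybeWarn(&clientVersion{RunningAsLatest: false})
}
```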
Updates #6907
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
Log some progress info to make updates more debuggable. Also, track
whether an update is already in progress and return an error if
a concurrent update is attempted.
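A minimal sketch of that guard, with illustrative names (the real updater's structure differs):
```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var updateInProgress atomic.Bool

// startUpdate fails fast if another update is already running.
func startUpdate() error {
	if !updateInProgress.CompareAndSwap(false, true) {
		return errors.New("an update is already in progress")
	}
	defer updateInProgress.Store(false)
	// ... download, verify, install, logging progress along the way ...
	return nil
}

func main() {
	fmt.Println(startUpdate()) // <nil>
}
```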
Some planned future PRs:
* add JSON output to `tailscale update`
* use JSON output from `tailscale update` to provide a more detailed
status of an in-progress update (stage, download progress, etc.)
Updates #6907
Signed-off-by: Andrew Lytvynov <awly@tailscale.com>
This PR adds a new field to the serve config that identifies which serves are in "foreground mode". The field is also used to ensure they are not persisted to disk, so that if tailscaled is shut down ungracefully, the reloaded ServeConfig will not have those ports opened.
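A hedged sketch of the persistence rule, using a trimmed stand-in for the config type (the Foreground field name is an assumption here):
```go
package main

// ServeConfig is a trimmed stand-in for ipn.ServeConfig.
type ServeConfig struct {
	TCP        map[uint16]string       // persistent serves (simplified)
	Foreground map[string]*ServeConfig // per-session; never written to disk
}

// persistable returns a copy safe to write to disk: foreground
// serves are stripped so an unclean shutdown can't resurrect their
// listeners on restart.
func persistable(sc *ServeConfig) *ServeConfig {
	out := *sc
	out.Foreground = nil
	return &out
}

func main() {
	sc := &ServeConfig{Foreground: map[string]*ServeConfig{"session-1": {}}}
	_ = persistable(sc)
}
```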
Updates #8489
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
This PR removes the per-request logging to the CLI, as the CLI
will not be displaying those logs initially.
Updates #8489
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
This PR adds a new field to the ServeConfig which maps
WatchIPNBus session IDs to foreground serve configs.
The PR also adds a DeleteForegroundSession method to ensure the config
gets cleaned up when a session ends.
Note this field is not currently used but will be in follow-up work.
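Reusing the trimmed ServeConfig stand-in from the earlier sketch, the cleanup hook can look roughly like this:
```go
// DeleteForegroundSession drops the serve config tied to one
// WatchIPNBus session, releasing its ports when the session ends.
// (delete on a nil map is a no-op, so this is safe on any config.)
func (sc *ServeConfig) DeleteForegroundSession(sessionID string) {
	delete(sc.Foreground, sessionID)
}
```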
Updates #8489
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
It would acquire the lock, calculate `nextState`, release
the lock, then call `enterState`, which would acquire the lock
again. There were obvious races there which could lead to
nil panics, as seen in a test in a different repo.
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x70 pc=0x1050f2c7c]
goroutine 42240 [running]:
tailscale.com/ipn/ipnlocal.(*LocalBackend).enterStateLockedOnEntry(0x14002154e00, 0x6)
tailscale.com/ipn/ipnlocal/local.go:3715 +0x30c
tailscale.com/ipn/ipnlocal.(*LocalBackend).enterState(0x14002154e00?, 0x14002e3a140?)
tailscale.com/ipn/ipnlocal/local.go:3663 +0x8c
tailscale.com/ipn/ipnlocal.(*LocalBackend).stateMachine(0x14001f5e280?)
tailscale.com/ipn/ipnlocal/local.go:3836 +0x2c
tailscale.com/ipn/ipnlocal.(*LocalBackend).setWgengineStatus(0x14002154e00, 0x14002e3a190, {0x0?, 0x0?})
tailscale.com/ipn/ipnlocal/local.go:1193 +0x4d0
tailscale.com/wgengine.(*userspaceEngine).RequestStatus(0x14005d90300)
tailscale.com/wgengine/userspace.go:1051 +0x80
tailscale.com/wgengine.NewUserspaceEngine.func2({0x14002e3a0a0, 0x2, 0x140025cce40?})
tailscale.com/wgengine/userspace.go:318 +0x1a0
tailscale.com/wgengine/magicsock.(*Conn).updateEndpoints(0x14002154700, {0x105c13eaf, 0xf})
tailscale.com/wgengine/magicsock/magicsock.go:531 +0x424
created by tailscale.com/wgengine/magicsock.(*Conn).ReSTUN in goroutine 42077
tailscale.com/wgengine/magicsock/magicsock.go:2142 +0x3a4
```
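One way to close that gap, sketched with stand-in names: compute the next state and enter it under a single acquisition, with the callee taking ownership of the held mutex.
```go
package main

import "sync"

type backend struct {
	mu    sync.Mutex
	state int
}

func (b *backend) nextStateLocked() int { return b.state + 1 }

// enterStateLockedOnEntry is called with b.mu held and releases it,
// so the state can't change between the decision and the transition.
func (b *backend) enterStateLockedOnEntry(n int) {
	defer b.mu.Unlock()
	b.state = n
	// ... side effects that rely on b.state being stable ...
}

func (b *backend) stateMachine() {
	b.mu.Lock()
	b.enterStateLockedOnEntry(b.nextStateLocked())
}

func main() {
	b := new(backend)
	b.stateMachine()
}
```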
Updates tailscale/corp#14480
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This PR adds a SessionID field to the ipn.Notify struct so that
IPN bus watchers can identify a session and register deferred cleanup
code in the future. The first use case is tying foreground serve
configs to a specific watch session and ensuring their cleanup when a
connection is closed.
Updates #8489
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
If Start was called multiple times concurrently, it would
create a new client and shut down the previous one. However,
there was a possible race between shutting down the old one
and assigning a new one, where a concurrent goroutine may
have already assigned another one, which would then leak.
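A sketch of the swap-then-shutdown shape that avoids the leak (stand-in types, not the actual code):
```go
package main

import "sync"

type client struct{}

func (c *client) Shutdown() {}

type backend struct {
	mu sync.Mutex
	cc *client
}

// setClient installs newCC and shuts down exactly the client it
// displaced, so two racing calls each clean up one client and
// neither leaks.
func (b *backend) setClient(newCC *client) {
	b.mu.Lock()
	old := b.cc
	b.cc = newCC
	b.mu.Unlock()
	if old != nil {
		old.Shutdown()
	}
}

func main() {
	b := new(backend)
	b.setClient(new(client))
}
```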
Updates tailscale/corp#14471
Signed-off-by: Maisem Ali <maisem@tailscale.com>
resetControlClientLocked is called while b.mu is held and
would call cc.Shutdown, which would wait for the observer queue
to drain.
However, there may be active callbacks from cc already waiting for
b.mu, resulting in a deadlock.
This makes it so that resetControlClientLocked does not call
Shutdown, and instead just returns the old client.
It also makes it so that any status received from a previous cc
is ignored.
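Under the same stand-in types as the sketch above, the shape of this fix is roughly: the locked helper only detaches the client, and the caller shuts it down after releasing b.mu so pending callbacks can drain.
```go
// resetControlClientLocked is called with b.mu held. It does not
// call Shutdown itself; it just detaches and returns the old client.
func (b *backend) resetControlClientLocked() *client {
	prev := b.cc
	b.cc = nil
	return prev
}

// Caller:
//	b.mu.Lock()
//	prev := b.resetControlClientLocked()
//	b.mu.Unlock()
//	if prev != nil {
//		prev.Shutdown() // safe: no locks held
//	}
```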
Updates tailscale/corp#12827
Signed-off-by: Maisem Ali <maisem@tailscale.com>
During Shutdown of an ephemeral node, we called Logout (to best-effort
delete the node earlier), which then called back into
resetForProfileChangeLockedOnEntry, which then tried to Start
again. That's all wasted work during shutdown and complicates
other cleanups coming later.
Updates #cleanup
Change-Id: I0b8648cac492fc70fa97c4ebef919bbe352c5d7b
Co-authored-by: Maisem Ali <maisem@tailscale.com>
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We already removed the async API; this makes it more synchronous and
removes the FinishLogout state too.
This also makes the callback synchronous again, as the previous
attempt was trying to work around the logout callback resulting
in a client shutdown getting blocked forever.
Updates #3833
Signed-off-by: Maisem Ali <maisem@tailscale.com>
We have cases where SetControlClientStatus would result in
a Shutdown call back into the auto client that would block
forever. The right thing to do here is to fix the LocalBackend
state machine, but that's a different dumpster fire that we
are slowly making progress towards.
This makes it so that SetControlClientStatus happens on a
different goroutine, so that calls back into the auto client
do not block.
Also add a few missing mu.Unlocks in LocalBackend.Start.
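A sketch of the delivery change with stand-in types: the status is handed off on its own goroutine, so an observer that reacts by calling back into the client can't block the goroutine that produced the status.
```go
package controlclient

// Status is a stand-in for the real controlclient status type.
type Status struct{}

// Observer is a stand-in for whatever receives status callbacks.
type Observer interface {
	SetControlClientStatus(*Auto, *Status)
}

// Auto is a stand-in for the auto client.
type Auto struct {
	observer Observer
}

// sendStatus delivers st asynchronously; a callback that re-enters
// the client (e.g. to shut it down) no longer deadlocks it.
func (c *Auto) sendStatus(st *Status) {
	go c.observer.SetControlClientStatus(c, st)
}
```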
Updates #9181
Signed-off-by: Maisem Ali <maisem@tailscale.com>
* don't try to re-Start (and thus create a new client) during Shutdown
* in tests, wait for controlclient to fully shut down when replacing it
* log a bit more
Updates tailscale/corp#14139
Updates tailscale/corp#13175 etc
Updates #9178 and its flakes.
Change-Id: I3ed2440644dc157aa6e616fe36fbd29a6056846c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
They were entirely redundant and 1:1 with the status field,
so this turns them into methods instead.
Updates #cleanup
Updates #1909
Change-Id: I7d939750749edf7dae4c97566bbeb99f2f75adbc
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
For now the interface has only one method (the same as the func it's
replacing) but it will grow, eventually with the goal of removing the
controlclient.Status type for most purposes.
Updates #1909
Change-Id: I715c8bf95e3f5943055a94e76af98d988558a2f2
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Upcoming work on incremental netmap change handling will require some
replumbing of which subsystems get notified about what. Done naively,
it could break "tailscale status --json" visibility later. To make sure
I understood the flow of all the updates, I was rereading the status code
and realized parts of ipnstate.Status were being populated by the wrong
subsystems.
The engine (WireGuard) and magicsock (data plane, NAT traversal) should
only populate the stuff that they uniquely know. The WireGuard bits
were fine, but magicsock was populating stuff that LocalBackend
could've better handled, so move it there.
Updates #1909
Change-Id: I6d1b95d19a2d1b70fbb3c875fac8ea1e169e8cb0
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It was in SelfNode.Hostinfo anyway. The redundant copy was just
costing us an allocation per netmap (a Hostinfo.Clone).
Updates #1909
Change-Id: Ifac568aa5f8054d9419828489442a0f4559bc099
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Adds the ability to start Funnel in the foreground and stream incoming
connections. When the foreground process is stopped, Funnel is turned
back off for the port.
Example usage:
```
TAILSCALE_FUNNEL_V2=on tailscale funnel 8080
```
Updates #8489
Signed-off-by: Marwan Sulaiman <marwan@tailscale.com>
And optimize the Persist setting a bit, allocating later and only mutating
fields when there's been a Node change.
Updates #1909
Change-Id: Iaddfd9e88ef76e1d18e8d0a41926eb44d0955312
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In b987b2ab18 (2021-01-12), when we introduced sharing, we mapped
the sharer to the UserID at a low layer, mostly to fix the display of
"tailscale status" and the client UIs, but also some tests.
The commit earlier today, 7dec09d169, removed the 2.5-year-old option
to let clients disable that automatic mapping, as clearly we were never
getting around to it.
This plumbs the Sharer UserID all the way to ipnstatus so the CLI
itself can choose to print out the Sharer's identity over the node's
original owner.
Then we stop mangling Node.User and let clients decide how they want
to render things.
To ease the migration for the Windows GUI (which currently operates on
tailcfg.Node via the NetMap from WatchIPNBus, instead of PeerStatus),
a new method Node.SharerOrUser is added to do the mapping of
Sharer-else-User.
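A sketch of what that helper amounts to, using trimmed stand-ins for the tailcfg types (the actual implementation may differ):
```go
package main

type UserID int64

// Node is a trimmed stand-in for tailcfg.Node.
type Node struct {
	User   UserID // the node's original owner
	Sharer UserID // non-zero if the node was shared into this tailnet
}

// SharerOrUser prefers the sharer's identity when present.
func (n *Node) SharerOrUser() UserID {
	if n.Sharer != 0 {
		return n.Sharer
	}
	return n.User
}

func main() {}
```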
Updates #1909
Updates tailscale/corp#1183
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Make it just a views.Slice[netip.Prefix] instead of its own named type.
Having the special case led to circular dependencies in another WIP PR
of mine.
Updates #8948
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Now a nodeAttr: ForceBackgroundSTUN, DERPRoute, TrimWGConfig,
DisableSubnetsIfPAC, DisableUPnP.
Kept support for RandomizeClientPort, which is now also a NodeAttr.
Removed: SetForceBackgroundSTUN and SetRandomizeClientPort (both never
used, sadly... we never got around to them, but nodeAttrs are better
anyway), and EnableSilentDisco (which will be a nodeAttr later when that
effort resumes).
Updates #8923
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It was being modified in two places: in Direct for the auth routine,
and then in LocalBackend when a new NetMap was received. This was
confusing, so make Direct also own changes to Persist when a new
NetMap is received.
Updates #7726
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This adds the capability to pad disco ping message payloads to reach a
specified size. It also plumbs this through to the `tailscale ping -size`
flag.
Disco pings used for actual endpoint discovery do not use this yet.
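The padding mechanics amount to roughly this sketch (the real disco message layout is more involved):
```go
package main

import "fmt"

// padTo appends zeros until the payload reaches size bytes; payloads
// already at or above size are returned unchanged.
func padTo(payload []byte, size int) []byte {
	if pad := size - len(payload); pad > 0 {
		payload = append(payload, make([]byte, pad)...)
	}
	return payload
}

func main() {
	fmt.Println(len(padTo(make([]byte, 40), 1200))) // 1200
}
```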
Updates #311
Signed-off-by: salman <salman@tailscale.com>
Co-authored-by: Val <valerie@tailscale.com>
Rather than make each ipn.StateStore implementation guard against
useless writes (a write of the same value that's already in the
store), do writes via a new wrapper that has a fast path for the
unchanged case.
This then fixes profileManager's flood of useless writes to AWS SSM,
etc.
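A hedged sketch of such a wrapper, against a minimal stand-in for the store interface:
```go
package statestore

import (
	"bytes"
	"sync"
)

// Store is a minimal stand-in for ipn.StateStore.
type Store interface {
	WriteState(key string, v []byte) error
}

// dedupStore skips writes whose value matches the last value written
// for that key, sparing slow backends (AWS SSM, etc.) no-op writes.
type dedupStore struct {
	Store
	mu   sync.Mutex
	last map[string][]byte
}

func (s *dedupStore) WriteState(key string, v []byte) error {
	s.mu.Lock()
	same := bytes.Equal(s.last[key], v)
	s.mu.Unlock()
	if same {
		return nil // fast path: unchanged
	}
	if err := s.Store.WriteState(key, v); err != nil {
		return err
	}
	s.mu.Lock()
	if s.last == nil {
		s.last = make(map[string][]byte)
	}
	s.last[key] = append([]byte(nil), v...)
	s.mu.Unlock()
	return nil
}
```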
Updates #8785
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This change introduces a new subcommand, `exit-node`, along with a
`list` sub-subcommand and a `--filter` flag.
Exit nodes without location data will continue to be displayed when
`status` is used. Exit nodes with location data will only be displayed
behind `exit-node list`, and in `status` if they are the active exit node.
The `--filter` flag can be used to filter exit nodes with location data by
country.
Exit nodes with Location.Priority data will have only the highest
priority option for each country and city listed. For countries with
multiple cities, a <Country> <Any> option will be displayed, indicating
the highest priority node within that country.
Updates tailscale/corp#13025
Signed-off-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>