It doesn't make a ton of sense for disablement to be communicated as an AUM, because
any failure in the AUM or chain mechanism will mean disablement won't function.
Instead, tracking of the disablement secrets remains inside the state machine, but
actual disablement and communication of the disablement secret is done by the caller.
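As a rough sketch of that split (the hash choice and names are illustrative, not the actual tka code): the state machine stores only hashes of disablement secrets, and a caller that learns a raw secret out-of-band verifies it and then performs the disablement itself.

	package main

	import (
		"crypto/sha256"
		"crypto/subtle"
		"fmt"
	)

	// checkDisablement reports whether the presented raw secret matches one of
	// the stored disablement-secret hashes. The caller, not an AUM, is then
	// responsible for actually disabling and for communicating the secret.
	func checkDisablement(secret []byte, storedHashes [][sha256.Size]byte) bool {
		h := sha256.Sum256(secret)
		for _, want := range storedHashes {
			if subtle.ConstantTimeCompare(h[:], want[:]) == 1 {
				return true
			}
		}
		return false
	}

	func main() {
		secret := []byte("example-disablement-secret")
		stored := [][sha256.Size]byte{sha256.Sum256(secret)}
		fmt.Println(checkDisablement(secret, stored)) // true
	}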
Signed-off-by: Tom DNetto <tom@tailscale.com>
The CapabilityFileSharingTarget capability added by eb32847d85
is meant to control the ability to share with nodes not owned by the
current user, not to restrict all sharing (the coordination server is
not currently populating the capability at all).
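For illustration, the intended rule looks roughly like the sketch below (pared-down local types; the capability value shown is only an example):

	package main

	import "fmt"

	// Illustrative stand-ins for the real capability constant and node type.
	const capFileSharingTarget = "https://tailscale.com/cap/file-sharing-target"

	type node struct {
		User int
		Caps []string
	}

	// canTaildropTo: sharing with your own devices is always allowed; the
	// capability only gates sharing with nodes owned by a different user.
	func canTaildropTo(self, target node) bool {
		if target.User == self.User {
			return true
		}
		for _, c := range target.Caps {
			if c == capFileSharingTarget {
				return true
			}
		}
		return false
	}

	func main() {
		me := node{User: 1}
		fmt.Println(canTaildropTo(me, node{User: 1})) // true: same owner
		fmt.Println(canTaildropTo(me, node{User: 2})) // false: capability required
		fmt.Println(canTaildropTo(me, node{User: 2, Caps: []string{capFileSharingTarget}})) // true
	}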
Fixes tailscale/corp#6669
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
This will update a licenses/tailscale.md file with all of our go
dependencies and their respective licenses. Notices for other clients
will be triggered by similar actions in other repos.
Co-authored-by: Andrew Dunham <andrew@tailscale.com>
Signed-off-by: Will Norris <will@tailscale.com>
Signed-off-by: Andrew Dunham <andrew@tailscale.com>
`src/` is broken up into several subdirectories:
- `lib/` and `types/` for shared code and type definitions (more code
will be moved here)
- `app/` for the existing Preact app
- `pkg/` for the new NPM package
A new `build-pkg` esbuild-based command is added to generate the files
for the NPM package. To generate type definitions (something that esbuild
does not do), we set up `dts-bundle-generator`.
Includes additional cleanups to the Wasm type definitions (we switch to
string literals for enums, since exported const enums are hard to use
via packages).
Also allows the control URL to be set at runtime (in addition to the
current build option), so that we don't have to rebuild the package
for dev vs. prod use.
Updates #5415
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Needed to identify the node. A server-side check that the machine key (used
to authenticate the Noise session) is that of the specified NodeID
ensures the authenticity of the request.
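A self-contained sketch of that check, with pared-down stand-in types (the real machine-key and node types are elided):

	package main

	import (
		"errors"
		"fmt"
	)

	type nodeID int64
	type machineKey [32]byte

	// registeredKeys is an illustrative stand-in for the server-side store
	// mapping each node to the machine key it registered with.
	var registeredKeys = map[nodeID]machineKey{}

	// authorize checks that the machine key which authenticated the Noise
	// session is the key registered for the NodeID named in the request.
	func authorize(id nodeID, sessionKey machineKey) error {
		want, ok := registeredKeys[id]
		if !ok {
			return errors.New("unknown node")
		}
		if want != sessionKey {
			return errors.New("machine key does not match node")
		}
		return nil
	}

	func main() {
		var k machineKey
		k[0] = 1
		registeredKeys[42] = k
		fmt.Println(authorize(42, k), authorize(42, machineKey{}))
	}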
Signed-off-by: Tom DNetto <tom@tailscale.com>
When sharing nodes, the name of the sharee node is not exposed (instead
it is hardcoded to "device-of-shared-to-user"), which means that we
can't determine the tailnet of that node. Don't immediately fail when
that happens, since it only matters if "Expected-Tailnet" is used.
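A rough sketch of the relaxed check (names and the MagicDNS parsing below are illustrative): failing to determine the tailnet is only fatal when the request actually carries Expected-Tailnet.

	package tailnetcheck

	import (
		"errors"
		"net/http"
		"strings"
	)

	// checkTailnet only rejects the request if Expected-Tailnet was supplied
	// and the peer's tailnet is unknown or different.
	func checkTailnet(w http.ResponseWriter, r *http.Request, peerDNSName string) bool {
		tailnet, err := tailnetFromName(peerDNSName)
		if expected := r.Header.Get("Expected-Tailnet"); expected != "" {
			if err != nil || tailnet != expected {
				http.Error(w, "request from unexpected tailnet", http.StatusForbidden)
				return false
			}
		}
		return true
	}

	// tailnetFromName derives the tailnet from a MagicDNS name such as
	// "host.example.ts.net."; shared-in peers report a placeholder name,
	// so this can fail.
	func tailnetFromName(name string) (string, error) {
		name = strings.TrimSuffix(name, ".")
		if i := strings.Index(name, "."); i >= 0 {
			return name[i+1:], nil
		}
		return "", errors.New("tailnet of peer could not be determined")
	}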
Signed-off-by: Will Norris <will@tailscale.com>
Add support for maps and interfaces to the fast path.
Add cycle-detection to the pointer handling logic.
This logic is mostly copied from the slow path.
A future commit will delete the slow path once
the fast path never falls back to the slow path.
Performance:
name                 old time/op  new time/op  delta
Hash-24              18.5µs ± 1%  14.9µs ± 2%  -19.52%  (p=0.000 n=10+10)
HashPacketFilter-24  2.54µs ± 1%  2.60µs ± 1%   +2.19%  (p=0.000 n=10+10)
HashMapAcyclic-24    31.6µs ± 1%  30.5µs ± 1%   -3.42%  (p=0.000 n=9+8)
TailcfgNode-24       1.44µs ± 2%  1.43µs ± 1%      ~    (p=0.171 n=10+10)
HashArray-24          324ns ± 1%   324ns ± 2%      ~    (p=0.425 n=9+9)
The additional cycle-detection logic doesn't incur much slowdown
since it only activates if a type is recursive, which does not apply
to any of the types that we care about.
There is a notable performance boost since we switch from the fast path
to the slow path less often. Most notably, a struct with a field that
could not be handled by the fast path would previously cause
the entire struct to go through the slow path.
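The cycle detection itself amounts to remembering the pointers on the current path and refusing to descend into one we are already visiting. A stripped-down illustration of the idea (not the actual deephash code):

	package main

	import (
		"fmt"
		"reflect"
	)

	// walk records the pointers on the current path; revisiting one means a
	// cycle, so we stop descending instead of recursing forever.
	func walk(v reflect.Value, visiting map[uintptr]bool) {
		switch v.Kind() {
		case reflect.Pointer:
			if v.IsNil() {
				return
			}
			p := v.Pointer()
			if visiting[p] {
				fmt.Println("cycle detected; not descending")
				return
			}
			visiting[p] = true
			walk(v.Elem(), visiting)
			delete(visiting, p) // only pointers on the current path matter
		case reflect.Struct:
			for i := 0; i < v.NumField(); i++ {
				walk(v.Field(i), visiting)
			}
		}
	}

	type node struct {
		Name string
		Next *node
	}

	func main() {
		a := &node{Name: "a"}
		a.Next = a // self-cycle
		walk(reflect.ValueOf(a), map[uintptr]bool{})
	}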
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We can't write to src/ when tsconnect is used as a dependency in another
repo (see also b763a12331). We therefore
need to switch from writing to src/ to using esbuild plugins to handle
the requests for wasm_exec.js (the Go JS runtime for Wasm) and the
Wasm build of the Go module.
This has the benefit of allowing Go/Wasm changes to be picked up without
restarting the server when in dev mode (Go compilation is fast enough
that we can do this on every request, CSS compilation continues to be
the long pole).
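For reference, the plugin shape with esbuild's Go API looks roughly like this; the wasm_exec.js path and the entry point below are illustrative, the real build locates them itself:

	package main

	import (
		"os"

		"github.com/evanw/esbuild/pkg/api"
	)

	// wasmExecPlugin intercepts imports of a virtual "wasm_exec" module and
	// serves the file's contents, instead of copying wasm_exec.js into src/.
	var wasmExecPlugin = api.Plugin{
		Name: "wasm-exec",
		Setup: func(build api.PluginBuild) {
			build.OnResolve(api.OnResolveOptions{Filter: `^wasm_exec$`},
				func(args api.OnResolveArgs) (api.OnResolveResult, error) {
					return api.OnResolveResult{Path: args.Path, Namespace: "wasm-exec"}, nil
				})
			build.OnLoad(api.OnLoadOptions{Filter: `.*`, Namespace: "wasm-exec"},
				func(args api.OnLoadArgs) (api.OnLoadResult, error) {
					b, err := os.ReadFile("/usr/local/go/misc/wasm/wasm_exec.js") // illustrative path
					if err != nil {
						return api.OnLoadResult{}, err
					}
					contents := string(b)
					return api.OnLoadResult{Contents: &contents, Loader: api.LoaderJS}, nil
				})
		},
	}

	func main() {
		api.Build(api.BuildOptions{
			EntryPoints: []string{"src/app/index.ts"}, // illustrative entry point
			Bundle:      true,
			Outdir:      "dist",
			Plugins:     []api.Plugin{wasmExecPlugin},
			Write:       true,
		})
	}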
Fixes #5382
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
The Start method was removed in 4c27e2fa22, but the comment on NewConn
still said that the Conn doesn't do anything until Start is called.
Signed-off-by: Kris Brandow <kris.brandow@gmail.com>
We're going to want to enable audit logging on a per-Tailnet basis. When this
happens, we want control to inform the Tailnet's clients of this capability.
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
This works around the 2.3s delay in short name lookups when SNR is
enabled. We do this by adding known hosts to the
C:\Windows\System32\drivers\etc\hosts file. We only add known hosts that
match the Search Domains, and we populate the list in order of
Search Domains so that our matching algorithm mimics what Windows would
otherwise do itself if SNR was off.
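The selection and ordering logic is roughly the following (types and names here are illustrative):

	package main

	import (
		"fmt"
		"net/netip"
		"sort"
		"strings"
	)

	// hostsFileLines keeps only peers whose names fall under one of the
	// Search Domains, emitted in Search Domain order so the result mimics
	// what Windows would resolve on its own with SNR off.
	func hostsFileLines(peers map[string]netip.Addr, searchDomains []string) []string {
		var lines []string
		for _, domain := range searchDomains {
			suffix := "." + strings.TrimSuffix(domain, ".")
			var names []string
			for name := range peers {
				if strings.HasSuffix(name, suffix) {
					names = append(names, name)
				}
			}
			sort.Strings(names) // deterministic order within a domain
			for _, name := range names {
				lines = append(lines, peers[name].String()+" "+name)
			}
		}
		return lines
	}

	func main() {
		peers := map[string]netip.Addr{
			"host1.example.ts.net": netip.MustParseAddr("100.64.0.1"),
			"host2.other.ts.net":   netip.MustParseAddr("100.64.0.2"),
		}
		fmt.Println(hostsFileLines(peers, []string{"example.ts.net"}))
	}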
Updates #1659
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Several customers have had issues due to the permissions
on /dev/net. Set permissions to 0755.
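One way to ensure this at startup, as a minimal sketch:

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Ensure /dev/net exists and is traversable by unprivileged users.
		// MkdirAll leaves an existing directory's mode alone, and the umask
		// can narrow the mode at creation time, so Chmod explicitly afterwards.
		if err := os.MkdirAll("/dev/net", 0755); err != nil {
			log.Fatal(err)
		}
		if err := os.Chmod("/dev/net", 0755); err != nil {
			log.Fatal(err)
		}
	}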
Fixes https://github.com/tailscale/tailscale/issues/5048
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
There are 5 types that we care about that implement AppendTo:
key.DiscoPublic
key.NodePublic
netip.Prefix
netipx.IPRange
netip.Addr
The key types are thin wrappers around [32]byte and are memory hashable.
The netip.Prefix and netipx.IPRange types are thin wrappers over netip.Addr
and are hashable by default if netip.Addr is hashable.
The netip.Addr type is the only one with a complex structure where
the default behavior of deephash does not hash it correctly due to the presence
of the intern.Value type.
Drop support for AppendTo and instead add specialized hashing for netip.Addr
that is semantically equivalent to == on netip.Addr values.
The AppendTo support was already broken prior to this change.
It was fully removed (intentionally or not) in #4870.
It was partially restored in #4858 for the fast path,
but still broken in the slow path.
Just drop support for it altogether.
This does mean we lack any ability for types to hash themselves.
In the future we can add support for types that implement:
interface { DeepHash() Sum }
Test and fuzz cases were added for the relevant types that
used to rely on the AppendTo method.
FuzzAddr has been executed on 1 billion samples without issues.
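Concretely, "semantically equivalent to ==" means the hash must distinguish an IPv4 address from its IPv6-mapped form and must include the zone. A standalone sketch (SHA-256 used only for demonstration):

	package main

	import (
		"crypto/sha256"
		"fmt"
		"net/netip"
	)

	// hashAddr hashes a netip.Addr so that two addresses hash equal iff the
	// 128-bit value, the 4-vs-6 form, and the zone all match.
	func hashAddr(a netip.Addr) [sha256.Size]byte {
		h := sha256.New()
		b := a.As16()
		h.Write(b[:])
		if a.Is4() {
			h.Write([]byte{4})
		} else {
			h.Write([]byte{6})
		}
		h.Write([]byte(a.Zone()))
		var sum [sha256.Size]byte
		copy(sum[:], h.Sum(nil))
		return sum
	}

	func main() {
		v4 := netip.MustParseAddr("192.0.2.1")
		mapped := netip.AddrFrom16(v4.As16()) // same bits, but the IPv6-mapped form
		// false false: distinct under == and therefore under the hash too.
		fmt.Println(v4 == mapped, hashAddr(v4) == hashAddr(mapped))
	}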
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rename Hash as Block512 to indicate that this is a general-purpose
hash.Hash for any algorithm that operates on 512-bit block sizes.
While we rename the package as hashx in this commit,
a subsequent commit will move the sha256x package to hashx.
This is done separately to avoid confusing git.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Also, rename canMemHash to typeIsMemHashable to be consistent.
There are zero changes to the semantics.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Any type that is memory hashable must not be recursive since
there are definitely no pointers involved to make a cycle.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Put the t.Size() == 0 check first since this is applicable in all cases.
Drop the last struct field conditional since this is covered by the
sumFieldSize check at the end.
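Roughly, the resulting predicate has this shape (a simplified sketch, not the exact deephash code):

	package main

	import "reflect"

	// typeIsMemHashable reports whether a value of type t can be hashed by
	// hashing its raw memory representation.
	func typeIsMemHashable(t reflect.Type) bool {
		// Zero-size types are trivially hashable, regardless of kind.
		if t.Size() == 0 {
			return true
		}
		switch t.Kind() {
		case reflect.Bool,
			reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
			reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64,
			reflect.Uintptr,
			reflect.Float32, reflect.Float64,
			reflect.Complex64, reflect.Complex128:
			return true
		case reflect.Array:
			return typeIsMemHashable(t.Elem())
		case reflect.Struct:
			var sumFieldSize uintptr
			for i := 0; i < t.NumField(); i++ {
				f := t.Field(i)
				if !typeIsMemHashable(f.Type) {
					return false
				}
				sumFieldSize += f.Type.Size()
			}
			// If the field sizes sum to the struct size, there is no padding,
			// so the raw memory fully determines the value.
			return sumFieldSize == t.Size()
		}
		return false
	}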
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Apparently OpenBSD can forward packets with manual configuration (see
https://github.com/tailscale/tailscale/issues/2498#issuecomment-1114216999),
but this change makes it work by default. People doing things by hand can
set TS_DEBUG_WRAP_NETSTACK=0 in the environment.
Change-Id: Iee5f32252f83af2baa0ebbe3f20ce9fec5f29e96
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Hashing []any is slow since hashing of interfaces is slow.
Hashing of interfaces is slow since we pessimistically assume
that cycles can occur through them and start cycle tracking.
Drop the variadic signature of Update and fix callers to pass in
an anonymous struct so that we are hashing concrete types
near the root of the value tree.
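Illustration of the call-site change (hashUpdate below is a local stand-in for the non-variadic Update; the real deephash types are elided):

	package main

	import (
		"crypto/sha256"
		"fmt"
	)

	// hashUpdate is a stand-in for a non-variadic Update: it takes a single
	// value rather than a variadic ...any.
	func hashUpdate(v any) [sha256.Size]byte {
		return sha256.Sum256([]byte(fmt.Sprintf("%#v", v)))
	}

	type node struct{ Name string }
	type filterRules struct{ Allow []string }

	func main() {
		n := node{Name: "host1"}
		f := filterRules{Allow: []string{"10.0.0.0/8"}}

		// Rather than hashUpdate(n, f) with a variadic signature (which boxes
		// the arguments into a []any and forces hashing through interfaces),
		// callers bundle the values into one anonymous struct of concrete
		// types near the root of the value tree.
		sum := hashUpdate(struct {
			Node   node
			Filter filterRules
		}{n, f})
		fmt.Printf("%x\n", sum)
	}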
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Convert ParseResponse and Response to use netip.AddrPort instead of
net.IP and separate port.
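For callers still holding a net.IP and a separate port, the conversion looks like this (example values):

	package main

	import (
		"fmt"
		"net"
		"net/netip"
	)

	func main() {
		// Convert a legacy net.IP plus separate port into a netip.AddrPort.
		ip := net.ParseIP("192.0.2.10")
		addr, ok := netip.AddrFromSlice(ip)
		if !ok {
			panic("not an IP address")
		}
		// Unmap so an IPv4 address isn't carried around in its 4-in-6 form.
		ap := netip.AddrPortFrom(addr.Unmap(), 3478)
		fmt.Println(ap) // 192.0.2.10:3478
	}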
Fixes #5281
Signed-off-by: Kris Brandow <kris.brandow@gmail.com>
Like LLMNR, NetBIOS also adds resolution delays, and we don't support it
anyway, so just disable it on the interface.
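On Windows this can be done per interface via the NetBT registry key; a sketch with a placeholder interface GUID (the real code targets the Tailscale interface and may differ in mechanism):

	//go:build windows

	package main

	import (
		"log"

		"golang.org/x/sys/windows/registry"
	)

	func main() {
		// Placeholder interface GUID; NetbiosOptions=2 disables NetBIOS over
		// TCP/IP on that interface.
		const guid = "{00000000-0000-0000-0000-000000000000}"
		k, err := registry.OpenKey(registry.LOCAL_MACHINE,
			`SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces\Tcpip_`+guid,
			registry.SET_VALUE)
		if err != nil {
			log.Fatal(err)
		}
		defer k.Close()
		if err := k.SetDWordValue("NetbiosOptions", 2); err != nil {
			log.Fatal(err)
		}
	}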
Updates #1659
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Currently we forward unmatched queries to the default resolver on
Windows. This results in duplicate queries being issued to the same
resolver, which is just wasteful.
Updates #1659
Signed-off-by: Maisem Ali <maisem@tailscale.com>
Formatting a time.Time as RFC3339 is slow.
See https://go.dev/issue/54093
Now that we have efficient hashing of fixed-width integers,
just hash the time.Time as a binary value.
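In isolation, the idea looks like this (SHA-256 stands in for the real hasher; which time fields the real code hashes may differ):

	package main

	import (
		"crypto/sha256"
		"encoding/binary"
		"fmt"
		"time"
	)

	func main() {
		t := time.Now()
		h := sha256.New()

		// Hash the time as fixed-width binary values (seconds, nanoseconds,
		// zone offset) rather than formatting it as an RFC 3339 string.
		_, offset := t.Zone()
		binary.Write(h, binary.LittleEndian, t.Unix())              // int64
		binary.Write(h, binary.LittleEndian, int32(t.Nanosecond())) // int32
		binary.Write(h, binary.LittleEndian, int32(offset))         // int32

		fmt.Printf("%x\n", h.Sum(nil))
	}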
Performance:
name            old time/op  new time/op  delta
Hash-24         19.0µs ± 1%  18.6µs ± 1%   -2.03%  (p=0.000 n=10+9)
TailcfgNode-24  1.79µs ± 1%  1.40µs ± 1%  -21.74%  (p=0.000 n=10+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
It flakes more often than it runs. It provides no value and builds
failure blindness, making people get used to submitting on red.
Bye.
Change-Id: If5491c70737b4c9851c103733b1855af2a90a9e9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Switch deephash to use sha256x.Hash.
We add sha256x.HashString to efficiently hash a string.
It uses unsafe under the hood to convert a string to a []byte.
We also modify sha256x.Hash to export the underlying hash.Hash
for testing purposes so that we can intercept all hash.Hash calls.
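On current Go (1.20+), the zero-copy string-to-bytes conversion underlying such a HashString helper can be written like this (SHA-256 as a stand-in; the returned view must never be written to):

	package main

	import (
		"crypto/sha256"
		"fmt"
		"unsafe"
	)

	// hashString hashes a string without first copying it into a []byte.
	// unsafe.StringData/unsafe.Slice give a read-only view of the string's
	// bytes; the slice must not be mutated.
	func hashString(s string) [sha256.Size]byte {
		b := unsafe.Slice(unsafe.StringData(s), len(s))
		return sha256.Sum256(b)
	}

	func main() {
		fmt.Printf("%x\n", hashString("hello"))
	}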
Performance:
name                 old time/op  new time/op  delta
Hash-24              19.8µs ± 1%  19.2µs ± 1%  -3.01%  (p=0.000 n=10+10)
HashPacketFilter-24  2.61µs ± 0%  2.53µs ± 1%  -3.01%  (p=0.000 n=8+10)
HashMapAcyclic-24    31.3µs ± 1%  29.8µs ± 0%  -4.80%  (p=0.000 n=10+9)
TailcfgNode-24       1.83µs ± 1%  1.82µs ± 2%     ~    (p=0.305 n=10+10)
HashArray-24          344ns ± 2%   323ns ± 1%  -6.02%  (p=0.000 n=9+10)
The performance gain is not as dramatic as sha256x over sha256 because:
1. most of the hashing already occurs through the direct memory hashing logic, and
2. what does not go through direct memory hashing is slowed down by reflect.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I documented capver 37 in 4ee64681a but forgot to bump the actual
constant. I've done this previously too, so add a test to prevent
it from happening again.
Change-Id: I6f7659db1243d30672121a384beb386d9f9f5b98
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
In Go 1.19, the reflect.Value.MapRange method uses "function outlining"
so that the allocation of reflect.MapIter is inlinable by the caller.
If the iterator doesn't escape the caller, it can be stack allocated.
See https://go.dev/cl/400675
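For reference, the pattern that benefits is the ordinary MapRange loop; in Go 1.19 the iterator can be stack allocated when it doesn't escape:

	package main

	import (
		"fmt"
		"reflect"
	)

	func sumValues(m map[string]int) int {
		v := reflect.ValueOf(m)
		total := 0
		// In Go 1.19+, MapRange's iterator allocation can be inlined into the
		// caller; if the iterator does not escape, it is stack allocated.
		for iter := v.MapRange(); iter.Next(); {
			total += int(iter.Value().Int())
		}
		return total
	}

	func main() {
		fmt.Println(sumValues(map[string]int{"a": 1, "b": 2}))
	}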
Performance:
name               old time/op  new time/op  delta
HashMapAcyclic-24  31.9µs ± 2%  32.1µs ± 1%  ~  (p=0.075 n=10+10)

name               old alloc/op  new alloc/op  delta
HashMapAcyclic-24  0.00B         0.00B         ~  (all equal)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>