Update all code generation tools, and those that check for license
headers, to use the new standard header.
Also update the copyright statement in the LICENSE file.
Fixes #6865
Signed-off-by: Will Norris <will@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
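For reference, the new header has this shape (a sketch based on the
description above; the exact copyright-holder wording is whatever the
repo standardized on):

    // Copyright (c) Tailscale Inc & AUTHORS
    // SPDX-License-Identifier: BSD-3-Clause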
This commit was done almost entirely mechanically with perl,
followed by some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
Goal: one way for users to update Tailscale, downgrade, or switch
tracks, regardless of platform (Windows, most Linux distros, macOS, Synology).
This is a start.
Updates #755, etc
Change-Id: I23466da1ba41b45f0029ca79a17f5796c2eedd92
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Nodes that are expired, taking into account the time delta calculated
from MapResponse.ControlTime, have the newly-added Expired boolean set.
For additional defense-in-depth, also replicate what control does and
clear the Endpoints and DERP fields, and additionally set the node key
to a bogus value.
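A minimal sketch of that scrubbing, using a stand-in struct (the real
code operates on tailcfg.Node, and the helper name is hypothetical):

    // node stands in for tailcfg.Node; only the relevant fields are shown.
    type node struct {
        Expired   bool
        Endpoints []string
        DERP      string
        Key       [32]byte
    }

    // markExpired replicates what control does for expired nodes.
    func markExpired(n *node) {
        n.Expired = true
        n.Endpoints = nil  // clear endpoints
        n.DERP = ""        // clear DERP home
        n.Key = [32]byte{} // overwrite the node key with a bogus value
    }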
Updates #6932
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ia2bd6b56064416feee28aef5699ca7090940662a
Consider the following pattern:
err1 := foo()
err2 := bar()
err3 := baz()
return multierr.New(err1, err2, err3)
If err1, err2, and err3 are all nil, then multierr.New should not allocate.
Thus, modify the logic of New to count the number of distinct error values
and allocate exactly the slice needed. This also speeds up the non-empty
case, since repeatedly growing the slice with append is slow.
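A sketch of the two-pass shape (simplified; the real New has more to it,
e.g. handling duplicate and nested errors):

    // New aggregates errs, returning nil if every input is nil.
    func New(errs ...error) error {
        n := 0
        for _, err := range errs {
            if err != nil {
                n++
            }
        }
        if n == 0 {
            return nil // all-nil case: no allocation at all
        }
        dst := make([]error, 0, n) // exactly one allocation
        for _, err := range errs {
            if err != nil {
                dst = append(dst, err)
            }
        }
        return Error{errs: dst} // Error is the package's aggregate type
    }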
Performance:
name old time/op new time/op delta
Empty-24 41.8ns ± 2% 6.4ns ± 1% -84.73% (p=0.000 n=10+10)
NonEmpty-24 120ns ± 3% 69ns ± 1% -42.01% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Empty-24 64.0B ± 0% 0.0B -100.00% (p=0.000 n=10+10)
NonEmpty-24 168B ± 0% 88B ± 0% -47.62% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Empty-24 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
NonEmpty-24 3.00 ± 0% 2.00 ± 0% -33.33% (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Errors in Go are no longer viewed as a linear chain, but a tree.
See golang/go#53435.
Add a Range function that iterates through an error
in a pre-order, depth-first order.
This matches the iteration order of errors.As in Go 1.20.
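A sketch of that traversal (the signature is assumed from the
description above):

    // Range visits err and everything it wraps in pre-order,
    // depth-first order, stopping early if fn returns false.
    func Range(err error, fn func(error) bool) bool {
        if err == nil {
            return true
        }
        if !fn(err) {
            return false
        }
        switch err := err.(type) {
        case interface{ Unwrap() error }:
            return Range(err.Unwrap(), fn)
        case interface{ Unwrap() []error }:
            for _, err := range err.Unwrap() {
                if !Range(err, fn) {
                    return false
                }
            }
        }
        return true
    }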
This also adds the logic (currently commented out) for having
Error implement the multi-error version of Unwrap in Go 1.20.
It remains commented out for now since it causes "go vet"
to complain about having the "wrong" signature.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I added util/winutil/LookupPseudoUser, which essentially consists of the bits
that I am in the process of adding to Go's standard library.
We check the provided SID for "S-1-5-x" where 17 <= x <= 20 (which are the
known pseudo-users) and then manually populate an os/user.User struct with
the correct information.
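A sketch of the SID check (the helper name is hypothetical; the four
well-known SIDs are IUSR, LocalSystem, LocalService, and NetworkService):

    // isPseudoUserSID reports whether sid names one of the known
    // Windows pseudo-users, S-1-5-17 through S-1-5-20.
    func isPseudoUserSID(sid string) bool {
        for x := 17; x <= 20; x++ {
            if sid == fmt.Sprintf("S-1-5-%d", x) {
                return true
            }
        }
        return false
    }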
Fixes https://github.com/tailscale/tailscale/issues/869
Fixes https://github.com/tailscale/tailscale/issues/2894
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
We use this pattern in a number of places (in this repo and elsewhere),
and I was about to add a fourth to this repo, which was crossing the line.
Add this type instead so they're all the same.
Also, we have another Set type (SliceSet, which tracks its keys in
order) in another repo we can move to this package later.
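A sketch of the shape such a type takes (method names here are
assumptions, not necessarily the final API):

    // Set is a set of comparable elements, backed by a map.
    type Set[T comparable] map[T]struct{}

    func (s Set[T]) Add(e T)           { s[e] = struct{}{} }
    func (s Set[T]) Delete(e T)        { delete(s, e) }
    func (s Set[T]) Contains(e T) bool { _, ok := s[e]; return ok }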
Change-Id: Ibbdcdba5443fae9b6956f63990bdb9e9443cefa9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This sets the "com.apple.quarantine" flag on macOS, and the
"Zone.Identifier" alternate data stream on Windows.
Change-Id: If14f805467b0e2963067937d7f34e08ba1d1fa85
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
This is similar to the golang.org/x/tools/internal/fastwalk I'd
previously written but not recursive and using mem.RO.
The metrics package already had some Linux-specific directory reading
code in it. Move that out to a new general package that can be reused
by portlist too, which speeds up its scanning of all /proc files:
name old time/op new time/op delta
FindProcessNames-8 2.79ms ± 6% 2.45ms ± 7% -12.11% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
FindProcessNames-8 62.9kB ± 0% 33.5kB ± 0% -46.76% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
FindProcessNames-8 2.25k ± 0% 0.38k ± 0% -82.98% (p=0.000 n=9+10)
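A hypothetical usage sketch, assuming a shallow, callback-style walk
over mem.RO names (the exact signature is an assumption):

    // Count the entries in /proc without per-name string allocations.
    n := 0
    err := dirwalk.WalkShallow(mem.S("/proc"), func(name mem.RO, de fs.DirEntry) error {
        n++
        return nil
    })
    // err reports any readdir failure; names arrive as mem.RO views.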
Change-Id: I75db393032c328f12d95c39f71c9742c375f207a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The //go:build syntax was introduced in Go 1.17:
https://go.dev/doc/go1.17#build-lines
gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.
Done with:
perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')
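For example, a file that previously carried both forms:

    //go:build linux && amd64
    // +build linux,amd64

now keeps only the //go:build line.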
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It's normal for HKLM\SOFTWARE\Policies\Tailscale to not exist, but that
currently produces a lot of log spam.
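A sketch of the quieter handling, assuming golang.org/x/sys/windows/registry:

    // Treat an absent policy key as the normal case.
    k, err := registry.OpenKey(registry.LOCAL_MACHINE,
        `SOFTWARE\Policies\Tailscale`, registry.READ)
    if err == registry.ErrNotExist {
        return nil // no policies configured; don't log anything
    }
    if err != nil {
        return err // unexpected errors are still worth reporting
    }
    defer k.Close()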
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
This way we can pre-compress the .wasm file once (out of band, in the
GitHub action), instead of increasing the time of each deploy that uses
the package. .wasm is removed from the list of automatically
pre-compressed extensions; an OSS bump and a small change on the corp
side are needed to make use of this change.
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Sync with golang.org/x/sync/singleflight at commit
8fcdb60fdcc0539c5e357b2308249e4e752147f1
Fixes #5790
Signed-off-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
I added new functions to winutil to obtain the state of a service and all
its dependencies, serialize them to JSON, and write them to a Logf.
When tstun.New returns a wrapped ERROR_DEVICE_NOT_AVAILABLE, we know that wintun
installation failed. We then log the service graph rooted at "NetSetupSvc".
We are interested in that specific service because network devices will not
install if that service is not running.
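A sketch of the logging step (names here are hypothetical, not
winutil's actual API):

    // logServiceGraph logs the state of svc and everything it depends on.
    func logServiceGraph(logf logger.Logf, svc string) {
        states, err := queryServiceStates(svc) // hypothetical: walks dependencies
        if err != nil {
            logf("winutil: querying service graph %q: %v", svc, err)
            return
        }
        j, _ := json.Marshal(states)
        logf("service graph %q: %s", svc, j)
    }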
Updates https://github.com/tailscale/tailscale/issues/5531
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
* and move goroutine scrubbing code to its own package for reuse
* bump capver to 45
Change-Id: I9b4dfa5af44d2ecada6cc044cd1b5674ee427575
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We're adding two log IDs to facilitate data-plane audit logging: a node-specific
log ID, and a domain-specific log ID.
Updated util/deephash/deephash_test.go with revised expectations for tailcfg.Node.
Updates https://github.com/tailscale/corp/issues/6991
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
And put the rationale in the name too, to save callers the need for a comment.
Change-Id: I090f51b749a5a0641897ee89a8fb2e2080c8b782
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It is unclear whether the lack of nil-ness checking for slices
was an oversight or a deliberate feature.
Lacking a comment, the assumption is that it was an oversight.
Also, expand the logic to perform cycle detection for recursive slices.
We do this on a per-element basis since a slice is semantically
equivalent to a list of pointers.
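A sketch of the per-element treatment (names illustrative, not
deephash's actual internals):

    // hashSlice hashes nil-ness and length, then each element, doing
    // cycle detection per element since a slice is semantically a
    // list of pointers.
    func (h *hasher) hashSlice(v reflect.Value) {
        h.hashBool(v.IsNil())
        h.hashUint(uint64(v.Len()))
        for i := 0; i < v.Len(); i++ {
            e := v.Index(i)
            if h.markVisited(e.Addr().Pointer()) {
                continue // cycle: this element is already being hashed
            }
            h.hashValue(e)
        }
    }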
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I was working on my "dump iptables rules using only syscalls" branch and
had a bunch of C structure decoding to do. Rather than manually
calculating the padding or using unsafe trickery to actually cast
variable-length structures to Go types, I'd rather use a helper package
that deals with padding for me.
Padding rules were taken from the following article:
http://www.catb.org/esr/structure-packing/
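As a worked example of those rules (each field aligns to its own size,
and the struct pads out to its widest field):

    // C: struct { uint8_t a; uint32_t b; uint16_t c; }
    type example struct {
        A uint8   // offset 0
        _ [3]byte // padding so B is 4-byte aligned
        B uint32  // offset 4
        C uint16  // offset 8
        _ [2]byte // trailing padding; total size 12, alignment 4
    }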
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Add a new lookupTypeHasher function that is just a cached front-end
around the makeTypeHasher function.
We do not need to worry about the recursive type cycle issue that
made getTypeInfo more complicated since makeTypeHasher
is not directly recursive. All calls to itself happen lazily
through a sync.Once upon first use.
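A sketch of the arrangement (the cache's shape is assumed):

    var typeHasherCache sync.Map // map[reflect.Type]typeHasherFunc

    // lookupTypeHasher memoizes makeTypeHasher. Recursive types are
    // safe because makeTypeHasher resolves sub-hashers lazily via
    // sync.Once rather than calling back into itself eagerly.
    func lookupTypeHasher(t reflect.Type) typeHasherFunc {
        if f, ok := typeHasherCache.Load(t); ok {
            return f.(typeHasherFunc)
        }
        f, _ := typeHasherCache.LoadOrStore(t, makeTypeHasher(t))
        return f.(typeHasherFunc)
    }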
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The entry logic of Hash has extra complexity to make sure
we always have an addressable value on hand.
If not, we heap allocate the input.
For this reason, we documented that there are performance benefits
to always providing a pointer.
Rather than merely documenting this, enforce it through generics.
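The effect at call sites, assuming the new signature is
Hash[T any](v *T) Sum:

    var n tailcfg.Node
    sum := deephash.Hash(&n) // OK: the value is addressable via the pointer
    // deephash.Hash(n)      // no longer compiles: a pointer is required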
Also, delete the unused HasherForType function.
It's an interesting use of generics, but not well tested.
We can resurrect it from code history if there's a need for it.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
There is a minor adjustment where we hash the length of the map
to be more on the cautious side.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than having two copies of []fieldInfo,
just maintain one and perform the merging in the same pass.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Use of reflect.Value.SetXXX panics if the provided argument was
obtained from an unexported struct field.
Instead, pass an unsafe.Pointer around and convert to a
reflect.Value when necessary (i.e., for maps and interfaces).
Converting from unsafe.Pointer to reflect.Value guarantees that
none of the read-only bits will be populated.
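A sketch of that reconstruction:

    // valueAt rebuilds a reflect.Value of type t from its address p.
    // Values built via reflect.NewAt are addressable and carry no
    // read-only bits, even if p came from an unexported field.
    func valueAt(t reflect.Type, p unsafe.Pointer) reflect.Value {
        return reflect.NewAt(t, p).Elem()
    }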
When running in race mode, we attach type information to the pointer
so that we can type check every pointer operation.
This also type-checks that direct memory hashing is within
the valid range of a struct value.
We add test cases that previously caused deephash to panic,
but now pass.
Performance:
name old time/op new time/op delta
Hash 14.1µs ± 1% 14.1µs ± 1% ~ (p=0.590 n=10+9)
HashPacketFilter 2.53µs ± 2% 2.44µs ± 1% -3.79% (p=0.000 n=9+10)
TailcfgNode 1.45µs ± 1% 1.43µs ± 0% -1.36% (p=0.000 n=9+9)
HashArray 318ns ± 2% 318ns ± 2% ~ (p=0.541 n=10+10)
HashMapAcyclic 32.9µs ± 1% 31.6µs ± 1% -4.16% (p=0.000 n=10+9)
There is a slight performance gain due to the use of unsafe.Pointer
over reflect.Value methods. Also, passing an unsafe.Pointer (1 word)
on the stack is cheaper than passing a reflect.Value (3 words).
Performance gains are diminishing since SHA-256 hashing now dominates the runtime.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
When built with "deephash_debug", print the sequence of HashXXX calls.
Example usage:
$ go test -run=GetTypeHasher/string_slice -tags=deephash_debug
U64(2)+U64(3)+S("foo")+U64(3)+S("bar")+FIN
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than using separate functions to hash each kind,
just rely on the fact that these kinds are directly memory hashable,
thus simplifying the code.
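A sketch of what direct memory hashing means in practice (helper names
assumed):

    // hashDirect feeds a value's raw bytes to the digest. This is valid
    // only for kinds whose in-memory representation fully determines
    // their value (integers, bools, arrays of such, etc.).
    func hashDirect(h *hasher, p unsafe.Pointer, size uintptr) {
        h.HashBytes(unsafe.Slice((*byte)(p), size))
    }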
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Every implementation of typeHasherFunc always returns true,
which implies that the slow path is no longer executed.
Delete it.
h.hashValueWithType(v, ti, ...) is deleted as it is equivalent to:
ti.hasher()(h, v)
h.hashValue(v, ...) is deleted as it is equivalent to:
ti := getTypeInfo(v.Type())
ti.hasher()(h, v)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Add support for maps and interfaces to the fast path.
Add cycle-detection to the pointer handling logic.
This logic is mostly copied from the slow path.
A future commit will delete the slow path once
the fast path never falls back to the slow path.
Performance:
name old time/op new time/op delta
Hash-24 18.5µs ± 1% 14.9µs ± 2% -19.52% (p=0.000 n=10+10)
HashPacketFilter-24 2.54µs ± 1% 2.60µs ± 1% +2.19% (p=0.000 n=10+10)
HashMapAcyclic-24 31.6µs ± 1% 30.5µs ± 1% -3.42% (p=0.000 n=9+8)
TailcfgNode-24 1.44µs ± 2% 1.43µs ± 1% ~ (p=0.171 n=10+10)
HashArray-24 324ns ± 1% 324ns ± 2% ~ (p=0.425 n=9+9)
The additional cycle-detection logic doesn't incur much slowdown,
since it only activates if a type is recursive, which does not apply
to any of the types that we care about.
There is a notable performance boost since we switch from the fast path
to the slow path less often. Most notably, a struct with a field that
could not be handled by the fast path would previously cause
the entire struct to go through the slow path.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>