In order to improve our ability to understand the state of policies and
registry settings when troubleshooting, we enumerate all values in all subkeys.
x/sys/windows does not already offer this, so we need to call RegEnumValue
directly.
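For illustration, here's a rough sketch of this kind of enumeration using the
higher-level golang.org/x/sys/windows/registry helpers (the real change calls
RegEnumValue directly so it can also report each value's type; names here are
illustrative):

import (
	"fmt"

	"golang.org/x/sys/windows/registry"
)

// dumpValues logs every value name under the given subkey.
func dumpValues(root registry.Key, path string) error {
	k, err := registry.OpenKey(root, path, registry.READ)
	if err != nil {
		return err
	}
	defer k.Close()
	names, err := k.ReadValueNames(-1) // -1 means all values
	if err != nil {
		return err
	}
	for _, name := range names {
		fmt.Printf("%s\\%s\n", path, name)
	}
	return nil
}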
For now we're just logging this during startup; in a future PR I plan to
also trigger this code during a bugreport. I also want to log more than just
the registry.
Fixes #8141
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
We have two other types of Sets here. Add the basic obvious one too.
Needed for a change elsewhere.
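A minimal sketch of the obvious map-backed variety (method names are
assumptions, not necessarily the final API):

type Set[T comparable] map[T]struct{}

func (s Set[T]) Add(e T)           { s[e] = struct{}{} }
func (s Set[T]) Contains(e T) bool { _, ok := s[e]; return ok }
func (s Set[T]) Len() int          { return len(s) }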
Updates #cleanup
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I noticed cmd/{cloner,viewer} didn't support structs with embedded
fields while working on a change in another repo. This adds support.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This adds an initial and intentionally minimal configuration for
golang-ci, fixes the issues reported, and adds a GitHub Action to check
new pull requests against this linter configuration.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I8f38fbc315836a19a094d0d3e986758b9313f163
This is an exact copy of the files misc/set/set{,_test}.go from
tailscale/corp@a5415daa9c, plus the
license headers.
For use in #7877
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I712d09c6d1a180c6633abe3acf8feb59b27e2866
This makes `omitempty` actually work, and saves bytes in each map response.
Updates tailscale/corp#8020
Signed-off-by: Maisem Ali <maisem@tailscale.com>
A peer can have IsWireGuardOnly, which means it will not support DERP or
Disco, and it must have Endpoints filled in order to be usable.
In the present implementation, only the first Endpoint will be used as
the bestAddr.
Updates tailscale/corp#10351
Co-authored-by: Charlotte Brandhorst-Satzkorn <charlotte@tailscale.com>
Co-authored-by: James Tucker <james@tailscale.com>
Signed-off-by: James Tucker <james@tailscale.com>
Adds NewGaugeFunc and NewCounterFunc (inspired by expvar.Func), which
report a current value computed by a function. This allows some client
metric values to be computed on demand during uploading (at most every
15 seconds), instead of being continuously updated.
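A hedged sketch of what a function-backed metric might look like (the real
clientmetric types and names may differ):

import "sync/atomic"

type Metric struct {
	v atomic.Int64
	f func() int64 // non-nil for NewGaugeFunc/NewCounterFunc metrics
}

// Value reports the current value, calling f on demand if set.
func (m *Metric) Value() int64 {
	if m.f != nil {
		return m.f()
	}
	return m.v.Load()
}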
clientmetric uploading had a bunch of micro-optimizations for memory
access (#3331) which are not possible with this approach. However, any
performance hit from function-based metrics is contained to those metrics
only, and we expect to have very few.
Also adds a DisableDeltas() option for client metrics, so that absolute
values are always reported. This makes server-side processing of some
metrics easier to reason about.
Updates tailscale/corp#9230
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
I realized that a lot of the problems that we're seeing around migration and
LocalBackend state can be avoided if we drive Windows pref migration entirely
from within tailscaled. By doing it this way, tailscaled can automatically
perform the migration as soon as the connection with the client frontend is
established.
Since tailscaled is already running as LocalSystem, it already has access to
the user's local AppData directory. The profile manager already knows which
user is connected, so we simply need to resolve the user's prefs file and read
it from there.
Of course, to properly migrate this information we need to also check system
policies. I moved a bunch of policy resolution code out of the GUI and into
a new package in util/winutil/policy.
Updates #7626
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
This adds the util/sysresources package, which currently only contains a
function to return the total memory size of the current system.
Then, we modify magicsock to scale the number of buffered DERP messages
based on the system's available memory, ensuring that we never use a
value lower than the previous constant of 32.
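The shape of the scaling logic, as a sketch (the divisor is hypothetical;
only the floor of 32 comes from this change):

// derpBufferSize scales the DERP buffer with total system memory,
// never dropping below the previous constant of 32.
func derpBufferSize(totalMemory uint64) int {
	n := int(totalMemory / (64 << 20)) // hypothetical: one slot per 64 MiB
	if n < 32 {
		n = 32
	}
	return n
}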
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ib763c877de4d0d4ee88869078e7d512f6a3a148d
In addition to checking the total hostname length, validate the characters
used in each DNS label, as well as each label's length.
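A sketch of the per-label checks (RFC 1035-style; the exact rules in the
change may differ):

func validLabel(label string) bool {
	if len(label) == 0 || len(label) > 63 {
		return false
	}
	for i := 0; i < len(label); i++ {
		c := label[i]
		if !(c == '-' || '0' <= c && c <= '9' || 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z') {
			return false
		}
	}
	// Labels may not begin or end with a hyphen.
	return label[0] != '-' && label[len(label)-1] != '-'
}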
Updates https://github.com/tailscale/corp/issues/10012
Signed-off-by: Anton Tolchanov <anton@tailscale.com>
This only adds the field, to be used in a future commit.
Updates tailscale/corp#8020
Co-authored-by: Melanie Warrick <warrick@tailscale.com>
Signed-off-by: Maisem Ali <maisem@tailscale.com>
This package handles cases where we need to truncate human-readable text to fit
a length constraint without leaving "ragged" multi-byte rune fragments at the
end of the truncated value.
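The core idea, sketched (the real package may handle additional cases):

import "unicode/utf8"

// truncate cuts s to at most n bytes without splitting a multi-byte rune.
func truncate(s string, n int) string {
	if len(s) <= n {
		return s
	}
	for n > 0 && !utf8.RuneStart(s[n]) {
		n-- // back up past a partial rune fragment
	}
	return s[:n]
}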
Change-Id: Id972135d1880485f41b1fedfb65c2b8cc012d416
Signed-off-by: M. J. Fromberger <fromberger@tailscale.com>
Now that we're using rand.Shuffle in a few locations, create a generic
shuffle function and use it instead. While we're at it, move the
interleaveSlices function to the same package so it can be reused.
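The generic shuffle is essentially (a sketch):

import "math/rand"

func shuffle[T any](s []T) {
	rand.Shuffle(len(s), func(i, j int) { s[i], s[j] = s[j], s[i] })
}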
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I0b00920e5b3eea846b6cedc30bd34d978a049fd3
Also add some basic tests for this implementation.
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: I307ebb6db91d0c172657befb276b38ccb638f828
This isn't currently supported due to missing support in upstream
dependencies, and also we don't use this package anywhere right now.
Just conditionally skip this for now.
Fixes #7268
Change-Id: Ie7389c2c0816b39b410c02a7276051a4c18b6450
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
This package is an initial implementation of something that can read
netfilter and iptables rules from the Linux kernel without needing to
shell out to an external utility; it speaks directly to the kernel using
syscalls and parses the data returned.
Currently this is read-only since it only knows how to parse a subset of
the available data.
Signed-off-by: Andrew Dunham <andrew@tailscale.com>
Change-Id: Iccadf5dcc081b73268d8ccf8884c24eb6a6f1ff5
Now that Go 1.20 is released, multierr.Error can implement
Unwrap() []error
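which is roughly (assuming the underlying slice field is named errs):

func (e Error) Unwrap() []error { return e.errs }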
Updates #7123
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ic28c2579de6799801836c447afbca8cdcba732cf
Update all code generation tools, and those that check for license
headers to use the new standard header.
Also update copyright statement in LICENSE file.
Fixes #6865
Signed-off-by: Will Norris <will@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
Goal: one way for users to update Tailscale, downgrade, switch tracks,
regardless of platform (Windows, most Linux distros, macOS, Synology).
This is a start.
Updates #755, etc
Change-Id: I23466da1ba41b45f0029ca79a17f5796c2eedd92
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Nodes that are expired, taking into account the time delta calculated
from MapResponse.ControlTime, now have the newly added Expired boolean set.
For additional defense-in-depth, also replicate what control does and
clear the Endpoints and DERP fields, and additionally set the node key
to a bogus value.
Updates #6932
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Change-Id: Ia2bd6b56064416feee28aef5699ca7090940662a
Consider the following pattern:
err1 := foo()
err2 := bar()
err3 := baz()
return multierr.New(err1, err2, err3)
If err1, err2, and err3 are all nil, then multierr.New should not allocate.
Thus, modify the logic of New to count the number of distinct error values
and allocate a slice of exactly the needed size. This also speeds up the
non-empty case, since repeatedly growing a slice with append is slow.
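A simplified sketch of the sizing logic (the real New also flattens nested
multierr.Error values):

func New(errs ...error) error {
	n := 0
	for _, err := range errs {
		if err != nil {
			n++
		}
	}
	if n == 0 {
		return nil // common case: allocation-free
	}
	dst := make([]error, 0, n) // exactly sized; no append growth
	for _, err := range errs {
		if err != nil {
			dst = append(dst, err)
		}
	}
	return Error{errs: dst}
}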
Performance:
name old time/op new time/op delta
Empty-24 41.8ns ± 2% 6.4ns ± 1% -84.73% (p=0.000 n=10+10)
NonEmpty-24 120ns ± 3% 69ns ± 1% -42.01% (p=0.000 n=9+10)
name old alloc/op new alloc/op delta
Empty-24 64.0B ± 0% 0.0B -100.00% (p=0.000 n=10+10)
NonEmpty-24 168B ± 0% 88B ± 0% -47.62% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
Empty-24 1.00 ± 0% 0.00 -100.00% (p=0.000 n=10+10)
NonEmpty-24 3.00 ± 0% 2.00 ± 0% -33.33% (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Errors in Go are no longer viewed as a linear chain, but a tree.
See golang/go#53435.
Add a Range function that iterates through an error
in a pre-order, depth-first order.
This matches the iteration order of errors.As in Go 1.20.
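A sketch of that traversal (fn's result reports whether to keep iterating):

func Range(err error, fn func(error) bool) bool {
	if err == nil {
		return true
	}
	if !fn(err) {
		return false
	}
	switch err := err.(type) {
	case interface{ Unwrap() error }:
		return Range(err.Unwrap(), fn)
	case interface{ Unwrap() []error }:
		for _, e := range err.Unwrap() {
			if !Range(e, fn) {
				return false
			}
		}
	}
	return true
}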
This also adds the logic (currently commented out) for having
Error implement the multi-error version of Unwrap in Go 1.20.
It is commented out currently since it causes "go vet"
to complain about having the "wrong" signature.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I added util/winutil.LookupPseudoUser, which essentially consists of the bits
that I am in the process of adding to Go's standard library.
We check the provided SID for "S-1-5-x" where 17 <= x <= 20 (which are the
known pseudo-users) and then manually populate an os/user.User struct with
the correct information.
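The SID check is, in essence (a sketch using the string form; the real code
inspects the SID structurally):

import "golang.org/x/sys/windows"

func isPseudoUser(sid *windows.SID) bool {
	switch sid.String() {
	case "S-1-5-17", "S-1-5-18", "S-1-5-19", "S-1-5-20":
		return true
	}
	return false
}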
Fixes https://github.com/tailscale/tailscale/issues/869
Fixes https://github.com/tailscale/tailscale/issues/2894
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
We use this pattern in a number of places (in this repo and elsewhere)
and I was about to add a fourth to this repo, which crossed the line.
Add this type instead so they're all the same.
Also, we have another Set type (SliceSet, which tracks its keys in
order) in another repo we can move to this package later.
Change-Id: Ibbdcdba5443fae9b6956f63990bdb9e9443cefa9
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This sets the "com.apple.quarantine" flag on macOS, and the
"Zone.Identifier" alternate data stream on Windows.
Change-Id: If14f805467b0e2963067937d7f34e08ba1d1fa85
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
This is similar to the golang.org/x/tools/internal/fastwalk I'd
previously written but not recursive and using mem.RO.
The metrics package already had some Linux-specific directory reading
code in it. Move that out to a new general package that can be reused
by portlist too, which helps its scanning of all /proc files:
name old time/op new time/op delta
FindProcessNames-8 2.79ms ± 6% 2.45ms ± 7% -12.11% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
FindProcessNames-8 62.9kB ± 0% 33.5kB ± 0% -46.76% (p=0.000 n=9+10)
name old allocs/op new allocs/op delta
FindProcessNames-8 2.25k ± 0% 0.38k ± 0% -82.98% (p=0.000 n=9+10)
Change-Id: I75db393032c328f12d95c39f71c9742c375f207a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The //go:build syntax was introduced in Go 1.17:
https://go.dev/doc/go1.17#build-lines
gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.
Done with:
perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It's normal for HKLM\SOFTWARE\Policies\Tailscale to not exist but that
currently produces a lot of log spam.
Signed-off-by: Adrian Dewhurst <adrian@tailscale.com>
This way we can do that once (out of band, in the GitHub action),
instead of increasing the time of each deploy that uses the package.
.wasm is removed from the list of automatically pre-compressed
extensions; an OSS bump and a small change on the corp side are needed
to make use of this change.
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Sync with golang.org/x/sync/singleflight at commit
8fcdb60fdcc0539c5e357b2308249e4e752147f1
Fixes #5790
Signed-off-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
I added new functions to winutil to obtain the state of a service and all
its dependencies, serialize them to JSON, and write them to a Logf.
When tstun.New returns a wrapped ERROR_DEVICE_NOT_AVAILABLE, we know that wintun
installation failed. We then log the service graph rooted at "NetSetupSvc".
We are interested in that specific service because network devices will not
install if that service is not running.
Updates https://github.com/tailscale/tailscale/issues/5531
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
* and move goroutine scrubbing code to its own package for reuse
* bump capver to 45
Change-Id: I9b4dfa5af44d2ecada6cc044cd1b5674ee427575
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We're adding two log IDs to facilitate data-plane audit logging: a node-specific
log ID, and a domain-specific log ID.
Updated util/deephash/deephash_test.go with revised expectations for tailcfg.Node.
Updates https://github.com/tailscale/corp/issues/6991
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
And put the rationale in the name too, to save callers the need for a comment.
Change-Id: I090f51b749a5a0641897ee89a8fb2e2080c8b782
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It is unclear whether the lack of checking nil-ness of slices
was an oversight or a deliberate feature.
Lacking a comment, the assumption is that this was an oversight.
Also, expand the logic to perform cycle detection for recursive slices.
We do this on a per-element basis since a slice is semantically
equivalent to a list of pointers.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
I was working on my "dump iptables rules using only syscalls" branch and
had a bunch of C structure decoding to do. Rather than manually
calculating the padding or using unsafe trickery to actually cast
variable-length structures to Go types, I'd rather use a helper package
that deals with padding for me.
Padding rules were taken from the following article:
http://www.catb.org/esr/structure-packing/
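The central rule from that article: each field is padded so its offset is a
multiple of its alignment, i.e.

// pad rounds offset up to the next multiple of align (a power of two).
func pad(offset, align uintptr) uintptr {
	return (offset + align - 1) &^ (align - 1)
}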
Signed-off-by: Andrew Dunham <andrew@du.nham.ca>
Add a new lookupTypeHasher function that is just a cached front-end
around the makeTypeHasher function.
We do not need to worry about the recursive type cycle issue that
made getTypeInfo more complicated since makeTypeHasher
is not directly recursive. All calls to itself happen lazily
through a sync.Once upon first use.
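Roughly (a sketch; it assumes sync and reflect imports and the
typeHasherFunc/makeTypeHasher names from this commit):

var hasherCache sync.Map // map[reflect.Type]typeHasherFunc

func lookupTypeHasher(t reflect.Type) typeHasherFunc {
	if v, ok := hasherCache.Load(t); ok {
		return v.(typeHasherFunc)
	}
	fn := makeTypeHasher(t)
	v, _ := hasherCache.LoadOrStore(t, fn)
	return v.(typeHasherFunc)
}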
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The entry logic of Hash has extra complexity to make sure
we always have an addressable value on hand.
If not, we heap allocate the input.
For this reason we document that there are performance benefits
to always providing a pointer.
Rather than documenting this, just enforce it through generics.
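That is, the entry point now has the shape:

func Hash[T any](v *T) Sum

so the argument is always addressable through the pointer.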
Also, delete the unused HasherForType function.
It's an interesting use of generics, but not well tested.
We can resurrect it from code history if there's a need for it.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
There is a minor adjustment where we hash the length of the map
to be more on the cautious side.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than having two copies []fieldInfo,
just maintain one and perform merging in the same pass.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Use of reflect.Value.SetXXX panics if the provided argument was
obtained from an unexported struct field.
Instead, pass an unsafe.Pointer around and convert to a
reflect.Value when necessary (i.e., for maps and interfaces).
Converting from unsafe.Pointer to reflect.Value guarantees that
none of the read-only bits will be populated.
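The conversion back is essentially reflect.NewAt (a sketch):

import (
	"reflect"
	"unsafe"
)

// valueAt returns an addressable reflect.Value of type t for the value at p.
// Values constructed this way never carry the read-only flag bits.
func valueAt(t reflect.Type, p unsafe.Pointer) reflect.Value {
	return reflect.NewAt(t, p).Elem()
}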
When running in race mode, we attach type information to the pointer
so that we can type check every pointer operation.
This also type-checks that direct memory hashing is within
the valid range of a struct value.
We add test cases that previously caused deephash to panic,
but now pass.
Performance:
name old time/op new time/op delta
Hash 14.1µs ± 1% 14.1µs ± 1% ~ (p=0.590 n=10+9)
HashPacketFilter 2.53µs ± 2% 2.44µs ± 1% -3.79% (p=0.000 n=9+10)
TailcfgNode 1.45µs ± 1% 1.43µs ± 0% -1.36% (p=0.000 n=9+9)
HashArray 318ns ± 2% 318ns ± 2% ~ (p=0.541 n=10+10)
HashMapAcyclic 32.9µs ± 1% 31.6µs ± 1% -4.16% (p=0.000 n=10+9)
There is a slight performance gain due to the use of unsafe.Pointer
over reflect.Value methods. Also, passing an unsafe.Pointer (1 word)
on the stack is cheaper than passing a reflect.Value (3 words).
Performance gains are diminishing since SHA-256 hashing now dominates the runtime.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
When built with "deephash_debug", print the set of HashXXX methods.
Example usage:
$ go test -run=GetTypeHasher/string_slice -tags=deephash_debug
U64(2)+U64(3)+S("foo")+U64(3)+S("bar")+FIN
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than separate functions to hash each kind,
just rely on the fact that these are direct memory hashable,
thus simplifying the code.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Every implementation of typeHasherFunc always returns true,
which implies that the slow path is no longer executed.
Delete it.
h.hashValueWithType(v, ti, ...) is deleted as it is equivalent to:
ti.hasher()(h, v)
h.hashValue(v, ...) is deleted as it is equivalent to:
ti := getTypeInfo(v.Type())
ti.hasher()(h, v)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Add support for maps and interfaces to the fast path.
Add cycle-detection to the pointer handling logic.
This logic is mostly copied from the slow path.
A future commit will delete the slow path once
the fast path never falls back to the slow path.
Performance:
name old time/op new time/op delta
Hash-24 18.5µs ± 1% 14.9µs ± 2% -19.52% (p=0.000 n=10+10)
HashPacketFilter-24 2.54µs ± 1% 2.60µs ± 1% +2.19% (p=0.000 n=10+10)
HashMapAcyclic-24 31.6µs ± 1% 30.5µs ± 1% -3.42% (p=0.000 n=9+8)
TailcfgNode-24 1.44µs ± 2% 1.43µs ± 1% ~ (p=0.171 n=10+10)
HashArray-24 324ns ± 1% 324ns ± 2% ~ (p=0.425 n=9+9)
The additional cycle detection logic doesn't incur much slow down
since it only activates if a type is recursive, which does not apply
for any of the types that we care about.
There is a notable performance boost since we switch from the fast path
to the slow path less often. Most notably, a struct with a field that
could not be handled by the fast path would previously cause
the entire struct to go through the slow path.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
There are 5 types that we care about that implement AppendTo:
key.DiscoPublic
key.NodePublic
netip.Prefix
netipx.IPRange
netip.Addr
The key types are thin wrappers around [32]byte and are memory hashable.
The netip.Prefix and netipx.IPRange types are thin wrappers over netip.Addr
and are hashable by default if netip.Addr is hashable.
The netip.Addr type is the only one with a complex structure where
the default behavior of deephash does not hash it correctly due to the presence
of the intern.Value type.
Drop support for AppendTo and instead add specialized hashing for netip.Addr
that is semantically equivalent to == on netip.Addr values.
The AppendTo support was already broken prior to this change.
It was fully removed (intentionally or not) in #4870.
It was partially restored in #4858 for the fast path,
but still broken in the slow path.
Just drop support for it altogether.
This does mean we lack any ability for types to self-hash themselves.
In the future we can add support for types that implement:
interface { DeepHash() Sum }
Test and fuzz cases were added for the relevant types that
used to rely on the AppendTo method.
FuzzAddr has been executed on 1 billion samples without issues.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rename Hash as Block512 to indicate that this is a general-purpose
hash.Hash for any algorithm that operates on 512-bit block sizes.
While we rename the package as hashx in this commit,
a subsequent commit will move the sha256x package to hashx.
This is done separately to avoid confusing git.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Also, rename canMemHash to typeIsMemHashable to be consistent.
There are zero changes to the semantics.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Any type that is memory hashable must not be recursive since
there are definitely no pointers involved to make a cycle.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Put the t.Size() == 0 check first since this is applicable in all cases.
Drop the last struct field conditional since this is covered by the
sumFieldSize check at the end.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Hashing []any is slow since hashing of interfaces is slow.
Hashing of interfaces is slow since we pessimistically assume
that cycles can occur through them and start cycle tracking.
Drop the variadic signature of Update and fix callers to pass in
an anonymous struct so that we are hashing concrete types
near the root of the value tree.
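Callers now look roughly like this (the call site and field names here are
illustrative, and the exact Update signature is an assumption):

changed := deephash.Update(&lastSum, &struct {
	Hostname string
	Port     uint16
}{hostname, port})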
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Formatting a time.Time as RFC3339 is slow.
See https://go.dev/issue/54093
Now that we have efficient hashing of fixed-width integers,
just hash the time.Time as a binary value.
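Conceptually (a sketch; exactly which fields get hashed, e.g. the zone
offset, is an assumption):

import (
	"encoding/binary"
	"time"
)

// appendTime encodes t as fixed-width integers rather than RFC3339 text.
func appendTime(b []byte, t time.Time) []byte {
	b = binary.BigEndian.AppendUint64(b, uint64(t.Unix()))
	b = binary.BigEndian.AppendUint32(b, uint32(t.Nanosecond()))
	return b
}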
Performance:
Hash-24 19.0µs ± 1% 18.6µs ± 1% -2.03% (p=0.000 n=10+9)
TailcfgNode-24 1.79µs ± 1% 1.40µs ± 1% -21.74% (p=0.000 n=10+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Switch deephash to use sha256x.Hash.
We add sha256x.HashString to efficiently hash a string.
It uses unsafe under the hood to convert a string to a []byte.
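In modern Go that zero-copy conversion can be written as follows (a sketch;
this commit predates unsafe.StringData and used the reflect header types):

import "unsafe"

// bytesOf reinterprets s as a []byte without copying.
// The result must never be mutated.
func bytesOf(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}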
We also modify sha256x.Hash to export the underlying hash.Hash
for testing purposes so that we can intercept all hash.Hash calls.
Performance:
name old time/op new time/op delta
Hash-24 19.8µs ± 1% 19.2µs ± 1% -3.01% (p=0.000 n=10+10)
HashPacketFilter-24 2.61µs ± 0% 2.53µs ± 1% -3.01% (p=0.000 n=8+10)
HashMapAcyclic-24 31.3µs ± 1% 29.8µs ± 0% -4.80% (p=0.000 n=10+9)
TailcfgNode-24 1.83µs ± 1% 1.82µs ± 2% ~ (p=0.305 n=10+10)
HashArray-24 344ns ± 2% 323ns ± 1% -6.02% (p=0.000 n=9+10)
The performance gain is not as dramatic as sha256x over sha256 due to:
1. most of the hashing already occurring through the direct memory hashing logic, and
2. what does not go through direct memory hashing is slowed down by reflect.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
In Go 1.19, the reflect.Value.MapRange method uses "function outlining"
so that the allocation of reflect.MapIter is inlinable by the caller.
If the iterator doesn't escape the caller, it can be stack allocated.
See https://go.dev/cl/400675
Performance:
name old time/op new time/op delta
HashMapAcyclic-24 31.9µs ± 2% 32.1µs ± 1% ~ (p=0.075 n=10+10)
name old alloc/op new alloc/op delta
HashMapAcyclic-24 0.00B 0.00B ~ (all equal)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The hash.Hash provided by sha256.New is much more efficient
if we always provide it with data a multiple of the block size.
This avoids double-copying of data into the internal block
of sha256.digest.x. Effectively, we are managing a block ourselves
to ensure we only ever call hash.Hash.Write with full blocks.
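The buffering idea in miniature (a sketch, not the real Block512 code):

import "hash"

// writeFull buffers p into blk and flushes only full 64-byte blocks to h,
// so the SHA-256 implementation never re-copies partial input internally.
func writeFull(h hash.Hash, blk *[64]byte, n *int, p []byte) {
	for len(p) > 0 {
		c := copy(blk[*n:], p)
		*n += c
		p = p[c:]
		if *n == len(blk) {
			h.Write(blk[:])
			*n = 0
		}
	}
}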
Performance:
name old time/op new time/op delta
Hash 33.5µs ± 1% 20.6µs ± 1% -38.40% (p=0.000 n=10+9)
The logic has gone through CPU-hours of fuzzing.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The logic of deephash is both simpler and easier to reason about
if values are always addressable.
In Go, the composite kinds are slices, arrays, maps, structs,
interfaces, pointers, channels, and functions,
where we define "composite" as a Go value that encapsulates
some other Go value (e.g., a map is a collection of key-value entries).
In the cases of pointers and slices, the sub-values are always addressable.
In the cases of arrays and structs, the sub-values are always addressable
if and only if the parent value is addressable.
In the case of maps and interfaces, the sub-values are never addressable.
To make them addressable, we need to copy them onto the heap.
For the purposes of deephash, we do not care about channels and functions.
For all non-composite kinds (e.g., strings and ints), they are only addressable
if obtained from one of the composite kinds that produce addressable values
(i.e., pointers, slices, addressable arrays, and addressable structs).
A non-addressable, non-composite kind can be made addressable by
allocating it on the heap, obtaining a pointer to it, and dereferencing it.
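In reflect terms, that maneuver is (a sketch):

import "reflect"

// makeAddressable returns v if it is addressable,
// else an addressable copy of it.
func makeAddressable(v reflect.Value) reflect.Value {
	if v.CanAddr() {
		return v
	}
	p := reflect.New(v.Type()) // heap-allocate
	p.Elem().Set(v)            // shallow copy
	return p.Elem()            // now addressable
}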
Thus, if we can ensure that values are addressable at the entry points,
and shallow copy sub-values whenever we encounter an interface or map,
then we can ensure that all values are always addressable and
assume such property throughout all the logic.
Performance:
name old time/op new time/op delta
Hash-24 21.5µs ± 1% 19.7µs ± 1% -8.29% (p=0.000 n=9+9)
HashPacketFilter-24 2.61µs ± 1% 2.62µs ± 0% +0.29% (p=0.037 n=10+9)
HashMapAcyclic-24 30.8µs ± 1% 30.9µs ± 1% ~ (p=0.400 n=9+10)
TailcfgNode-24 1.84µs ± 1% 1.84µs ± 2% ~ (p=0.928 n=10+10)
HashArray-24 324ns ± 2% 332ns ± 2% +2.45% (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The Do function assists in calling functions that must succeed.
It only interacts well with functions that return (T, err).
Signatures with more return arguments are not supported.
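Its shape, per the description above:

// Do panics if err is non-nil; otherwise it returns v.
func Do[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

For example (assuming the package is named must):
u := must.Do(url.Parse("https://example.com"))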
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We have very similar code in corp; moving it to util/precompress allows
it to be reused.
Updates #5133
Signed-off-by: Mihai Parparita <mihai@tailscale.com>