This helper allows us to retrieve `DWORD` and `QWORD` values from the Tailscale key in the Windows registry.
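Roughly the shape of such a helper (a sketch only; the helper name, value name, and exact key path are illustrative, not the actual implementation):

    package main

    import (
        "fmt"

        "golang.org/x/sys/windows/registry"
    )

    // getRegInteger is a hypothetical helper: it reads a REG_DWORD or
    // REG_QWORD value named name from the Tailscale key.
    func getRegInteger(name string) (uint64, error) {
        k, err := registry.OpenKey(registry.LOCAL_MACHINE, `SOFTWARE\Tailscale IPN`, registry.READ)
        if err != nil {
            return 0, err
        }
        defer k.Close()
        // GetIntegerValue transparently handles both DWORD and QWORD values.
        val, _, err := k.GetIntegerValue(name)
        return val, err
    }

    func main() {
        v, err := getRegInteger("SomeValue") // value name is illustrative
        fmt.Println(v, err)
    }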
Signed-off-by: Aaron Klotz <aaron@tailscale.com>
The fully qualified name of the type is thisPkg.tname,
so write the args like that too.
Suggested-by: Joe Tsai <joetsai@digital-static.net>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is a package of shared utilities for writing codegen programs.
The inaugural API is for writing gofmt'd code to a file.
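A sketch of what writing gofmt'd output might look like (names are illustrative; only go/format and os from the standard library are assumed):

    package codegen

    import (
        "fmt"
        "go/format"
        "os"
    )

    // writeFormatted runs generated source through gofmt (go/format)
    // and writes the result to path.
    func writeFormatted(path string, src []byte) error {
        formatted, err := format.Source(src)
        if err != nil {
            // Write the unformatted source anyway so the broken output
            // can be inspected, but still report the error.
            os.WriteFile(path, src, 0644)
            return fmt.Errorf("formatting %s: %w", path, err)
        }
        return os.WriteFile(path, formatted, 0644)
    }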
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Unfortunately this test fails on certain architectures.
The problem comes down to inconsistencies in the Go escape analysis
where specific variables are marked as escaping on certain architectures.
The variables escaping to the heap are unfortunately in crypto/sha256,
which makes it impossible to fix this locally in deephash.
For now, fix the test by compensating for the allocations that
occur from calling sha256.digest.Sum.
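The compensation could look roughly like this (a sketch, not the actual test; hashSomething stands in for whatever the real test hashes):

    package sketch

    import (
        "crypto/sha256"
        "testing"
    )

    // TestHashAllocs sketches the compensation: measure how many
    // allocations sha256's Sum itself makes on this architecture
    // (golang/go#48055), then subtract that from the total.
    func TestHashAllocs(t *testing.T) {
        h := sha256.New()
        sumAllocs := testing.AllocsPerRun(1000, func() {
            var scratch [sha256.Size]byte
            h.Sum(scratch[:0])
        })

        totalAllocs := testing.AllocsPerRun(1000, func() {
            hashSomething() // hypothetical stand-in for the real deephash call
        })

        if extra := totalAllocs - sumAllocs; extra > 0 {
            t.Errorf("got %v allocs/run beyond the %v from sha256's Sum", extra, sumAllocs)
        }
    }

    func hashSomething() {} // placeholder; the real test hashes a fixture value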
See golang/go#48055
Fixes #2727
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The index for every struct field or slice element and
the number of fields for the struct are unnecessary.
The hashing of Go values is unambiguous because every type (except maps)
encodes in a parsable manner. So long as we know the type information,
we could theoretically decode every value (except for maps).
At a high level (a short sketch of the record format follows this list):
* numbers are encoded as fixed-width records according to precision.
* strings (and AppendTo output) are encoded with a fixed-width length,
followed by the contents of the buffer.
* slices are prefixed by a fixed-width length, followed by the encoding
of each value. So long as we know the type of each element, we could
theoretically decode each element.
* arrays are encoded just like slices, but elide the length
since it is determined from the Go type.
* maps are encoded first with a byte indicating whether the map is a cycle.
If a cycle, it is followed by a fixed-width index for the pointer,
otherwise followed by the SHA-256 hash of its contents. The encoding of maps
is not decodable, but a SHA-256 hash is sufficient to avoid ambiguities.
* interfaces are encoded first with a byte indicating whether the interface is nil.
If not nil, it is followed by a fixed-width index for the type,
and then the encoding for the underlying value. Having the type be encoded
first ensures that the value could theoretically be decoded next.
* pointers are encoded first with a byte indicating whether the pointer is
1) nil, 2) a cycle, or 3) newly seen. If a cycle, it is followed by
a fixed-width index for the pointer. If newly seen, it is followed by
the encoding for the pointed-at value.
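To make the record format above concrete, here is a rough sketch for strings and slices of strings (illustrative only; details such as byte order and the internal writer may differ from the real encoder):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // appendString encodes a string as a fixed-width length followed by
    // its contents.
    func appendString(b []byte, s string) []byte {
        var n [8]byte
        binary.LittleEndian.PutUint64(n[:], uint64(len(s)))
        return append(append(b, n[:]...), s...)
    }

    // appendStringSlice encodes a slice as a fixed-width length followed
    // by the encoding of each element; an array would simply drop the
    // leading length, since the Go type already determines it.
    func appendStringSlice(b []byte, ss []string) []byte {
        var n [8]byte
        binary.LittleEndian.PutUint64(n[:], uint64(len(ss)))
        b = append(b, n[:]...)
        for _, s := range ss {
            b = appendString(b, s)
        }
        return b
    }

    func main() {
        fmt.Printf("% x\n", appendStringSlice(nil, []string{"hi", "there"}))
    }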
Removing unnecessary details speeds up hashing:
name              old time/op  new time/op  delta
Hash-8            76.0µs ± 1%  55.8µs ± 2%  -26.62%  (p=0.000 n=10+10)
HashMapAcyclic-8  61.9µs ± 0%  62.0µs ± 0%     ~     (p=0.666 n=9+9)
TailcfgNode-8     10.2µs ± 1%   7.5µs ± 1%  -26.90%  (p=0.000 n=10+9)
HashArray-8       1.07µs ± 1%  0.70µs ± 1%  -34.67%  (p=0.000 n=10+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Instead of hashing the human-readable forms of numbers,
hash the native machine bits of the integers themselves.
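Roughly the difference, sketched outside of deephash:

    package main

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    func main() {
        h := sha256.New()
        v := int64(-42)

        // Before (roughly): hash the formatted decimal text of the number.
        fmt.Fprintf(h, "%d", v)

        // After (roughly): hash the fixed-width native bits directly,
        // skipping the formatting work.
        var buf [8]byte
        binary.LittleEndian.PutUint64(buf[:], uint64(v))
        h.Write(buf[:])

        fmt.Printf("%x\n", h.Sum(nil))
    }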
There is a small performance gain for this:
name              old time/op  new time/op  delta
Hash-8            75.7µs ± 1%  76.0µs ± 2%    ~      (p=0.315 n=10+9)
HashMapAcyclic-8  63.1µs ± 3%  61.3µs ± 1%  -2.77%   (p=0.000 n=10+10)
TailcfgNode-8     10.3µs ± 1%  10.2µs ± 1%  -1.48%   (p=0.000 n=10+10)
HashArray-8       1.07µs ± 1%  1.05µs ± 1%  -1.79%   (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The swapping of bufio.Writer between hasher and mapHasher is subtle.
Just embed a hasher in mapHasher to avoid complexity here.
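In Go terms the change is roughly this shape (types sketched, not the actual definitions):

    package sketch

    import "bufio"

    // hasher stands in for deephash's hasher type.
    type hasher struct {
        bw *bufio.Writer
        // ...other hashing state...
    }

    // mapHasher used to carry its own writer and swap it back and forth
    // with the parent hasher around each map entry. Embedding a hasher
    // means mapHasher *is* a hasher, so there is nothing to swap.
    type mapHasher struct {
        hasher
        // ...map-specific state...
    }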
No notable change in performance:
name              old time/op  new time/op  delta
Hash-8            76.7µs ± 1%  77.0µs ± 1%    ~      (p=0.182 n=9+10)
HashMapAcyclic-8  62.4µs ± 1%  62.5µs ± 1%    ~      (p=0.315 n=10+9)
TailcfgNode-8     10.3µs ± 1%  10.3µs ± 1%  -0.62%   (p=0.004 n=10+9)
HashArray-8       1.07µs ± 1%  1.06µs ± 1%  -0.98%   (p=0.001 n=8+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The previous algorithm used a map of all visited pointers.
The strength of this approach is that it quickly prunes any nodes
that we have ever visited before. The detriment of the approach
is that pruning is heavily dependent on the order in which pointers
were visited. This is especially relevant for hashing a map
where map entries are visited in a non-deterministic manner,
which would cause the map hash to be non-deterministic
(which defeats the point of a hash).
This new algorithm uses a stack of all visited pointers,
similar to how github.com/google/go-cmp performs cycle detection.
When we visit a pointer, we push it onto the stack, and when
we leave a pointer, we pop it from the stack.
Before visiting a pointer, we first check whether the pointer exists
anywhere in the stack. If yes, then we prune the node.
The detriment of this approach is that we may hash a node more often
than before since we do not prune as aggressively.
The set of visited pointers up until any node is only the
path of nodes up to that node and not any other pointers
that may have been visited elsewhere. This gives us
deterministic hashing regardless of visit order.
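Sketched in simplified form (the real code works on reflect values; pointer identity and the surrounding hashing are elided):

    package sketch

    // visitStack holds the pointers on the current path from the root.
    type visitStack []uintptr

    func (s visitStack) contains(p uintptr) bool {
        for _, v := range s {
            if v == p {
                return true
            }
        }
        return false
    }

    // hashPointer prunes only when the pointer is already on the current
    // path (a true cycle). Because the pointer is popped again on the way
    // out, pruning no longer depends on the order in which unrelated
    // pointers happened to be visited earlier.
    func hashPointer(s *visitStack, p uintptr, hashPointee func()) {
        if s.contains(p) {
            // Cycle: emit a fixed-width index for the pointer instead
            // of recursing (per the pointer encoding described above).
            return
        }
        *s = append(*s, p)    // push on entry
        hashPointee()
        *s = (*s)[:len(*s)-1] // pop on exit
    }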
We can now delete hashMapFallback and associated complexity,
which only exists because the previous approach was non-deterministic
in the presence of cycles.
This fixes a failure of the old algorithm where obviously different
values are treated as equal because the pruning was too aggressive.
See https://github.com/tailscale/tailscale/issues/2443#issuecomment-883653534
The new algorithm is slightly slower since it prunes less aggressively:
name              old time/op  new time/op  delta
Hash-8            66.1µs ± 1%  68.8µs ± 1%  +4.09%   (p=0.000 n=19+19)
HashMapAcyclic-8  63.0µs ± 1%  62.5µs ± 1%  -0.76%   (p=0.000 n=18+19)
TailcfgNode-8     9.79µs ± 2%  9.88µs ± 1%  +0.95%   (p=0.000 n=19+17)
HashArray-8        643ns ± 1%   653ns ± 1%  +1.64%   (p=0.000 n=19+19)
However, a slower but more correct algorithm seems
more favorable than a faster but incorrect algorithm.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
A Go interface may hold any number of different concrete types.
Just because two underlying values hash to the same thing
does not mean the two values are identical if they have different
concrete types. As such, include the type in the hash.
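A minimal illustration of the idea (not the actual deephash encoding; the string-based type tag and separators here are purely illustrative):

    package sketch

    import (
        "fmt"
        "io"
        "reflect"
    )

    // hashInterface writes something identifying the concrete type before
    // the value, so that e.g. int64(1) and uint64(1) stored in interfaces
    // can no longer hash identically just because their bytes match.
    func hashInterface(w io.Writer, v interface{}) {
        if v == nil {
            io.WriteString(w, "nil\n")
            return
        }
        io.WriteString(w, reflect.TypeOf(v).String()) // the concrete type...
        fmt.Fprintf(w, "/%v\n", v)                    // ...then the value
    }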
Seed the hash upon first use with the current time.
This ensures that the stability of the hash is bounded within
the lifetime of one program execution.
Hopefully, this prevents future bugs where someone assumes that
this hash is stable.
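A sketch of the seeding (illustrative; the real initialization details may differ):

    package sketch

    import (
        "crypto/sha256"
        "encoding/binary"
        "hash"
        "sync"
        "time"
    )

    var (
        seedOnce sync.Once
        seed     uint64
    )

    // newHasher mixes a time-derived seed into every hash, so results are
    // stable within one process but deliberately not across runs.
    func newHasher() hash.Hash {
        seedOnce.Do(func() { seed = uint64(time.Now().UnixNano()) })
        h := sha256.New()
        var buf [8]byte
        binary.LittleEndian.PutUint64(buf[:], seed)
        h.Write(buf[:])
        return h
    }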
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The fact that Hash returns a [sha256.Size]byte leaks details about
the underlying hash implementation. This could just as well be any other
hashing algorithm with a different digest size.
Abstract this implementation detail away by declaring an opaque type
that is comparable. While we are changing the signature of UpdateHash,
rename it to just Update to reduce stutter (e.g., deephash.Update).
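The shape of the change, roughly (sketched; the actual type name and fields may differ):

    package sketch

    import "crypto/sha256"

    // Sum is an opaque, comparable hash result. Callers can compare two
    // Sums with ==, while the algorithm and digest size behind it remain
    // an implementation detail that is free to change.
    type Sum struct {
        sum [sha256.Size]byte
    }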
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
DNS names consist of labels, but outside of length limits, DNS
itself permits any content within the labels. Some records require
labels to conform to hostname limitations (which is what we implemented
before), but not all.
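A sketch of the length-only check that DNS itself requires (63 bytes per label, 253 bytes total in presentation form); hostname-style character rules stay a per-record concern:

    package sketch

    import "strings"

    // validLengths checks only what DNS itself requires of a name:
    // each label is 1-63 bytes and the whole name is at most 253 bytes.
    func validLengths(name string) bool {
        name = strings.TrimSuffix(name, ".")
        if len(name) == 0 || len(name) > 253 {
            return false
        }
        for _, label := range strings.Split(name, ".") {
            if len(label) == 0 || len(label) > 63 {
                return false
            }
        }
        return true
    }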
Fixes #2024.
Signed-off-by: David Anderson <danderson@tailscale.com>
I based my estimate of the required timeout on locally
observed behavior. But CI machines are worse than my local machine.
16s was enough to reduce flakiness but not eliminate it. Bump it up again.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Signed-off-by: Sonia Appasamy <sonia@tailscale.com>
Consolidates the node display name logic from each of the clients into
tailcfg.Node. UI clients can use these names directly, rather than computing
them independently.
* show the DNS name instead of the hostname, removing the domain's common
MagicDNS suffix. Only show the hostname if there's no DNS name,
but still show shared devices' MagicDNS FQDNs.
* remove nerdy low-level details by default: endpoints, DERP relay,
public key. They're available in JSON mode still for those who need
them.
* only show an endpoint or DERP relay when it's active, with the goal of
making debugging easier (so it's easier for users to understand
what's happening). The asterisks are gone.
* remove Tx/Rx numbers by default for idle peers; only show them when
there's traffic.
* include peers' owner login names
* add CLI option to not show peers (matching --self=true, --peers= also
defaults to true)
* sort by DNS/host name, not public key
* reorder columns
Addresses #964
Still to be done:
- Figure out the correct logging lines in util/systemd
- Figure out if we need to slip the systemd.Status function anywhere
else
- Log util/systemd errors? (most of the errors are of the "you cannot do
anything about this, but it might be a bad idea to crash the program if
it errors" kind)
Assistance in getting this over the finish line would help a lot.
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: rename the nonlinux file to appease the magic
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: fix package name
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: fix review feedback from @mdlayher
Signed-off-by: Christine Dodrill <me@christine.website>
cmd/tailscale{,d}: update depaware manifests
Signed-off-by: Christine Dodrill <me@christine.website>
util/systemd: use sync.Once instead of func init
Signed-off-by: Christine Dodrill <me@christine.website>
control/controlclient: minor review feedback fixes
Signed-off-by: Christine Dodrill <me@christine.website>
{control,ipn,systemd}: fix review feedback
Signed-off-by: Christine Dodrill <me@christine.website>
review feedback fixes
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: fix sprintf call
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: make staticcheck less sad
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: print IP address in connected status
Signed-off-by: Christine Dodrill <me@christine.website>
ipn: review feedback
Signed-off-by: Christine Dodrill <me@christine.website>
final fixups
Signed-off-by: Christine Dodrill <me@christine.website>
The cornerstone API is a more memory-efficient Unmarshal.
The savings come from re-using a json.Decoder.
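A sketch of the pooled-decoder approach (not the actual implementation; the reuse condition at the end is the subtle part):

    package sketch

    import (
        "bytes"
        "encoding/json"
        "sync"
    )

    // decoder pairs a json.Decoder with the reader it was built on, so
    // both can be reused across calls.
    type decoder struct {
        r   bytes.Reader
        dec *json.Decoder
    }

    var decoderPool = sync.Pool{
        New: func() interface{} {
            d := new(decoder)
            d.dec = json.NewDecoder(&d.r)
            return d
        },
    }

    // Unmarshal decodes b into v without allocating a fresh json.Decoder
    // (and its internal buffers) on every call.
    func Unmarshal(b []byte, v interface{}) error {
        d := decoderPool.Get().(*decoder)
        d.r.Reset(b)
        err := d.dec.Decode(v)
        // Only reuse the decoder if it cleanly consumed exactly one value
        // and drained its input; otherwise stale buffered bytes could
        // leak into the next call, so let this one be collected instead.
        if err == nil && d.r.Len() == 0 && !d.dec.More() {
            decoderPool.Put(d)
        }
        return err
    }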
BenchmarkUnmarshal-8     4016418   288 ns/op     8 B/op   1 allocs/op
BenchmarkStdUnmarshal-8  4189261   283 ns/op   184 B/op   2 allocs/op
It also includes a Bytes type to reduce allocations
when unmarshalling a non-hex-encoded JSON string into a []byte.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This makes it easy to compact slices that contain duplicate elements
by sorting and then uniqing.
This is an alternative to constructing an intermediate map
and then extracting elements from it. It also provides
more control over equality than using a map key does.
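A sketch of the sort-then-uniq approach for a concrete []string (the real helper presumably parameterizes the element type and the comparison):

    package sketch

    import "sort"

    // SortedUniq sorts ss and then compacts equal neighbors in place,
    // returning the de-duplicated prefix. Unlike building a map keyed by
    // the element, this needs no intermediate allocation, and the equality
    // test below can be whatever the caller wants.
    func SortedUniq(ss []string) []string {
        sort.Strings(ss)
        out := ss[:0]
        for _, s := range ss {
            if len(out) == 0 || s != out[len(out)-1] {
                out = append(out, s)
            }
        }
        return out
    }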
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>