Commit Graph

145 Commits

Author SHA1 Message Date
Joe Tsai
9bf13fc3d1
util/deephash: remove getTypeInfo (#5469)
Add a new lookupTypeHasher function that is just a cached front-end
around the makeTypeHasher function.
We do not need to worry about the recursive type cycle issue that
made getTypeInfo more complicated since makeTypeHasher
is not directly recursive. All calls to itself happen lazily
through a sync.Once upon first use.
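
For illustration, a minimal sketch of that shape (typeHasherFunc and the hasher type are stand-ins; the real internals differ in detail):

	var typeHasherCache sync.Map // map[reflect.Type]typeHasherFunc

	// lookupTypeHasher is a cached front-end around makeTypeHasher.
	func lookupTypeHasher(t reflect.Type) typeHasherFunc {
		if v, ok := typeHasherCache.Load(t); ok {
			return v.(typeHasherFunc)
		}
		fn := makeTypeHasher(t)
		v, _ := typeHasherCache.LoadOrStore(t, fn)
		return v.(typeHasherFunc)
	}

	// makePointerHasher resolves the element's hasher lazily through a
	// sync.Once, so makeTypeHasher never calls itself while building a
	// hasher and recursive types cannot cause unbounded construction.
	func makePointerHasher(t reflect.Type) typeHasherFunc {
		var once sync.Once
		var hashElem typeHasherFunc
		return func(h *hasher, p unsafe.Pointer) {
			once.Do(func() { hashElem = lookupTypeHasher(t.Elem()) })
			if ptr := *(*unsafe.Pointer)(p); ptr != nil {
				hashElem(h, ptr)
			}
		}
	}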

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 17:39:51 -07:00
Joe Tsai
ab7e6f3f11
util/deephash: require pointer in API (#5467)
The entry logic of Hash has extra complexity to make sure
we always have an addressable value on hand.
If not, we heap allocate the input.
For this reason, we documented that there are performance benefits
to always providing a pointer.
Rather than just documenting this, enforce it through generics.
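
A rough sketch of the resulting entry-point shape (Sum is simplified here; the real API may differ in detail):

	type Sum struct{ sum [32]byte } // simplified

	// Hash returns the hash of the value that v points to.
	// Requiring a pointer guarantees the input is addressable,
	// so no defensive heap allocation of the input is needed.
	func Hash[T any](v *T) Sum {
		// ... hash *v ...
		return Sum{}
	}

	// Callers now must pass a pointer, enforced at compile time:
	//	var node tailcfg.Node
	//	sum := deephash.Hash(&node)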

Also, delete the unused HasherForType function.
It's an interesting use of generics, but not well tested.
We can resurrect it from code history if there's a need for it.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 16:08:31 -07:00
Joe Tsai
c5b1565337
util/deephash: move pointer and interface logic to separate function (#5465)
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:51:34 -07:00
Joe Tsai
d2e2d8438b
util/deephash: move map logic to separate function (#5464)
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.

As a minor adjustment, we now also hash the length of the map
to be on the cautious side.
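
A simplified sketch of the shape (helper names are assumptions, not the actual internals):

	// hashMap hashes a map value. With the logic in its own function,
	// pprof attributes map-hashing time to "hashMap" by name.
	func (h *hasher) hashMap(v reflect.Value) {
		// Hash the length first so that, for example, an empty map and
		// a map with a single zero-valued entry cannot collide.
		h.hashUint64(uint64(v.Len()))
		for iter := v.MapRange(); iter.Next(); {
			// ... hash iter.Key() and iter.Value(); the real code also
			// makes the sum independent of map iteration order ...
		}
	}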

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:49:26 -07:00
Joe Tsai
23c3831ff9
util/deephash: coalesce struct logic (#5466)
Rather than maintaining two copies of []fieldInfo,
just keep one and perform the merging in the same pass.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:39:46 -07:00
Joe Tsai
296b008b9f
util/deephash: move array and slice logic to separate function (#5463)
This helps pprof better identify which Go kinds take the most time
since the kind is always in the function name.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 15:37:36 -07:00
Joe Tsai
31bf3874d6
util/deephash: use unsafe.Pointer instead of reflect.Value (#5459)
Use of reflect.Value.SetXXX panics if the provided argument was
obtained from an unexported struct field.
Instead, pass an unsafe.Pointer around and convert to a
reflect.Value when necessary (i.e., for maps and interfaces).
Converting from unsafe.Pointer to reflect.Value guarantees that
none of the read-only bits will be populated.
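
For maps and interfaces, one way to go back from a pointer to a usable reflect.Value is reflect.NewAt, which yields an addressable value with no read-only bits set (a sketch, not the exact code):

	// pointerToValue reconstitutes a reflect.Value of type t from the
	// memory at p; the result is addressable and settable.
	func pointerToValue(t reflect.Type, p unsafe.Pointer) reflect.Value {
		return reflect.NewAt(t, p).Elem()
	}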

When running in race mode, we attach type information to the pointer
so that we can type-check every pointer operation.
This also checks that direct memory hashing stays within
the valid memory range of a struct value.

We add test cases that previously caused deephash to panic,
but now pass.

Performance:

	name              old time/op    new time/op    delta
	Hash              14.1µs ± 1%    14.1µs ± 1%    ~     (p=0.590 n=10+9)
	HashPacketFilter  2.53µs ± 2%    2.44µs ± 1%  -3.79%  (p=0.000 n=9+10)
	TailcfgNode       1.45µs ± 1%    1.43µs ± 0%  -1.36%  (p=0.000 n=9+9)
	HashArray         318ns ± 2%     318ns ± 2%    ~      (p=0.541 n=10+10)
	HashMapAcyclic    32.9µs ± 1%    31.6µs ± 1%  -4.16%  (p=0.000 n=10+9)

There is a slight performance gain due to the use of unsafe.Pointer
over reflect.Value methods. Also, passing an unsafe.Pointer (1 word)
on the stack is cheaper than passing a reflect.Value (3 words).

Performance gains are diminishing since SHA-256 hashing now dominates the runtime.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 12:30:35 -07:00
Joe Tsai
e0c5ac1f02
util/deephash: add debug printer (#5460)
When built with "deephash_debug", print the set of HashXXX methods.

Example usage:

	$ go test -run=GetTypeHasher/string_slice -tags=deephash_debug
	U64(2)+U64(3)+S("foo")+U64(3)+S("bar")+FIN
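
The usual shape of such a tag-gated printer is a pair of files; a minimal sketch (the function name here is an assumption):

	//go:build deephash_debug

	package deephash

	import "fmt"

	// debugf is compiled in only under the "deephash_debug" tag;
	// a sibling file with //go:build !deephash_debug provides a no-op.
	func debugf(format string, args ...any) { fmt.Printf(format, args...) }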

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-27 12:14:07 -07:00
Joe Tsai
70f9fc8c7a
util/deephash: rely on direct memory hashing for primitive kinds (#5457)
Rather than having separate functions to hash each kind,
just rely on the fact that these kinds are directly memory hashable,
thus simplifying the code.
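
The idea, roughly, is that a fixed-size pointer-free value can be hashed straight from its memory (hashBytes is an assumed helper):

	// hashMem hashes the n bytes of memory starting at p, replacing
	// per-kind hash functions for bools, ints, uints, floats, etc.
	func (h *hasher) hashMem(p unsafe.Pointer, n uintptr) {
		h.hashBytes(unsafe.Slice((*byte)(p), n))
	}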

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-26 20:50:56 -07:00
Joe Tsai
531ccca648
util/deephash: delete slow path (#5423)
Every implementation of typeHasherFunc always returns true,
which implies that the slow path is no longer executed.
Delete it.

h.hashValueWithType(v, ti, ...) is deleted as it is equivalent to:
	ti.hasher()(h, v)

h.hashValue(v, ...) is deleted as it is equivalent to:
	ti := getTypeInfo(v.Type())
	ti.hasher()(h, v)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-26 17:46:22 -07:00
Joe Tsai
3fc8683585
util/deephash: expand fast-path capabilities (#5404)
Add support for maps and interfaces to the fast path.
Add cycle-detection to the pointer handling logic.
This logic is mostly copied from the slow path.

A future commit will delete the slow path once
the fast path never falls back to the slow path.

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                18.5µs ± 1%    14.9µs ± 2%  -19.52%  (p=0.000 n=10+10)
	HashPacketFilter-24    2.54µs ± 1%    2.60µs ± 1%   +2.19%  (p=0.000 n=10+10)
	HashMapAcyclic-24      31.6µs ± 1%    30.5µs ± 1%   -3.42%  (p=0.000 n=9+8)
	TailcfgNode-24         1.44µs ± 2%    1.43µs ± 1%     ~     (p=0.171 n=10+10)
	HashArray-24            324ns ± 1%     324ns ± 2%     ~     (p=0.425 n=9+9)

The additional cycle detection logic doesn't incur much of a slowdown
since it only activates if a type is recursive, which does not apply
to any of the types that we care about.

There is a notable performance boost since we switch from the fast path
to the slow path less often. Most notably, a struct with a field that
could not be handled by the fast path would previously cause
the entire struct to go through the slow path.
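
Conceptually, the cycle detection is a visited set keyed by pointer, consulted only for types that the recursion check flags (a simplified sketch; field and helper names are assumptions):

	// visitStack records pointers currently being hashed so that cyclic
	// data structures terminate instead of recursing forever.
	type visitStack map[unsafe.Pointer]int

	func (h *hasher) hashPointerWithCycleCheck(p unsafe.Pointer, hashElem typeHasherFunc) {
		if idx, ok := h.visited[p]; ok {
			h.hashUint64(uint64(idx)) // back-reference to an in-progress value
			return
		}
		h.visited[p] = len(h.visited)
		defer delete(h.visited, p)
		hashElem(h, p)
	}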

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-24 01:31:01 -07:00
Tom DNetto
18edd79421 control/controlclient,tailcfg: [capver 40] create KeySignature field in tailcfg.Node
We carve out a space to put the node-key signature (used on tailnets where network lock is enabled).

Signed-off-by: Tom DNetto <tom@tailscale.com>
2022-08-22 11:25:41 -07:00
Joe Tsai
d32700c7b2
util/deephash: specialize for netip.Addr and drop AppendTo support (#5402)
There are 5 types that we care about that implement AppendTo:

	key.DiscoPublic
	key.NodePublic
	netip.Prefix
	netipx.IPRange
	netip.Addr

The key types are thin wrappers around [32]byte and are memory hashable.
The netip.Prefix and netipx.IPRange types are thin wrappers over netip.Addr
and are hashable by default if netip.Addr is hashable.
The netip.Addr type is the only one with a complex structure where
the default behavior of deephash does not hash it correctly due to the presence
of the intern.Value type.

Drop support for AppendTo and instead add specialized hashing for netip.Addr
that is semantically equivalent to == on netip.Addr values.
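
One way to hash a netip.Addr so that two addresses hash equal exactly when they are == (a sketch; the helper names are assumptions and the real code may hash different fields):

	func (h *hasher) hashAddr(a netip.Addr) {
		b := a.As16() // 128-bit form, also covering IPv4 as 4-in-6
		h.hashBytes(b[:])
		h.hashUint64(uint64(a.BitLen())) // 0, 32, or 128: distinguishes zero/IPv4/IPv6
		h.hashString(a.Zone())           // IPv6 zone, if any
	}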

The AppendTo support was already broken prior to this change.
It was fully removed (intentionally or not) in #4870.
It was partially restored in #4858 for the fast path,
but still broken in the slow path.
Just drop support for it altogether.

This does mean we lack any way for types to hash themselves.
In the future we can add support for types that implement:

	interface { DeepHash() Sum }

Test and fuzz cases were added for the relevant types that
used to rely on the AppendTo method.
FuzzAddr has been executed on 1 billion samples without issues.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-18 22:54:56 -07:00
Joe Tsai
03f7e4e577
util/hashx: move from sha256x (#5388) 2022-08-16 13:15:33 -07:00
Joe Tsai
f061d20c9d
util/sha256x: rename Hash as Block512 (#5351)
Rename Hash as Block512 to indicate that this is a general-purpose
hash.Hash for any algorithm that operates on 512-bit block sizes.

While this commit only renames the type,
a subsequent commit will move the sha256x package to hashx.
This is done separately to avoid confusing git.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-16 09:49:48 -07:00
Joe Tsai
44d62b65d0
util/deephash: move typeIsRecursive and canMemHash to types.go (#5386)
Also, rename canMemHash to typeIsMemHashable to be consistent.
There are zero changes to the semantics.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-16 09:31:19 -07:00
Joe Tsai
d53eb6fa11
util/deephash: simplify typeIsRecursive (#5385)
Any type that is memory hashable cannot be recursive, since
there are definitely no pointers involved to form a cycle.
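
The simplification amounts to a short-circuit of roughly this shape (a sketch):

	func typeIsRecursive(t reflect.Type) bool {
		// A memory-hashable type contains no pointers at all,
		// so it cannot participate in a reference cycle.
		if canMemHash(t) {
			return false
		}
		// ... otherwise walk the type graph looking for a cycle ...
		return false // walk elided in this sketch
	}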

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 23:29:06 -07:00
Joe Tsai
23ec3c104a
util/deephash: remove unused stack slice in typeIsRecursive (#5363)
No operation ever reads from this variable.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 22:36:08 -07:00
Joe Tsai
c200229f9e
util/deephash: simplify canMemHash (#5384)
Put the t.Size() == 0 check first since this is applicable in all cases.
Drop the last struct field conditional since this is covered by the
sumFieldSize check at the end.
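
A sketch of the check with the described ordering (not the exact code):

	func canMemHash(t reflect.Type) bool {
		if t.Size() == 0 {
			return true // applicable to every kind
		}
		switch t.Kind() {
		case reflect.Bool,
			reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
			reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
			reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128:
			return true
		case reflect.Array:
			return canMemHash(t.Elem())
		case reflect.Struct:
			var sumFieldSize uintptr
			for i := 0; i < t.NumField(); i++ {
				f := t.Field(i)
				if !canMemHash(f.Type) {
					return false
				}
				sumFieldSize += f.Type.Size()
			}
			// Equal sizes mean no padding, whose undefined contents
			// would make raw memory hashing non-deterministic.
			return sumFieldSize == t.Size()
		}
		return false
	}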

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 22:22:18 -07:00
Joe Tsai
32a1a3d1c0
util/deephash: avoid variadic argument for Update (#5372)
Hashing []any is slow since hashing of interfaces is slow.
Hashing of interfaces is slow since we pessimistically assume
that cycles can occur through them and start cycle tracking.

Drop the variadic signature of Update and fix callers to pass in
an anonymous struct so that we are hashing concrete types
near the root of the value tree.
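
An illustrative call-site shape after the change (the variable and struct field types here are hypothetical):

	// Before: Update(&sum, a, b) passed a []any, so hashing started with
	// slow interface hashing and cycle tracking at the root.
	// After: pass a single anonymous struct of concrete types.
	deephash.Update(&sum, &struct {
		A *AConfig
		B *BConfig
	}{a, b})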

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-15 11:22:28 -07:00
Maisem Ali
545639ee44 util/winutil: consolidate interface specific registry keys
Code movement to allow reuse in a follow up PR.

Updates #1659

Signed-off-by: Maisem Ali <maisem@tailscale.com>
2022-08-14 22:34:59 -07:00
Joe Tsai
548fa63e49
util/deephash: use binary encoding of time.Time (#5352)
Formatting a time.Time as RFC3339 is slow.
See https://go.dev/issue/54093

Now that we have efficient hashing of fixed-width integers,
just hash the time.Time as a binary value.
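
A sketch of hashing the time as fixed-width integers (helper names assumed; the exact fields hashed may differ):

	func (h *hasher) hashTime(t time.Time) {
		h.hashUint64(uint64(t.Unix()))       // seconds since epoch
		h.hashUint32(uint32(t.Nanosecond())) // sub-second component
		_, offset := t.Zone()
		h.hashUint32(uint32(offset)) // zone offset in seconds
	}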

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                19.0µs ± 1%    18.6µs ± 1%   -2.03%  (p=0.000 n=10+9)
	TailcfgNode-24         1.79µs ± 1%    1.40µs ± 1%  -21.74%  (p=0.000 n=10+9)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-12 14:42:51 -07:00
Joe Tsai
1f7479466e
util/deephash: use sha256x (#5339)
Switch deephash to use sha256x.Hash.

We add sha256x.HashString to efficiently hash a string.
It uses unsafe under the hood to convert a string to a []byte.
We also modify sha256x.Hash to export the underlying hash.Hash
for testing purposes so that we can intercept all hash.Hash calls.
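
In modern Go (1.20+), the zero-copy conversion can be written with unsafe.StringData; the original code predates that and uses an equivalent construction. A sketch:

	// HashString hashes the contents of s without copying it into a []byte.
	// The resulting slice is only ever read, never written.
	func (h *Hash) HashString(s string) {
		h.HashBytes(unsafe.Slice(unsafe.StringData(s), len(s)))
	}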

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                19.8µs ± 1%    19.2µs ± 1%  -3.01%  (p=0.000 n=10+10)
	HashPacketFilter-24    2.61µs ± 0%    2.53µs ± 1%  -3.01%  (p=0.000 n=8+10)
	HashMapAcyclic-24      31.3µs ± 1%    29.8µs ± 0%  -4.80%  (p=0.000 n=10+9)
	TailcfgNode-24         1.83µs ± 1%    1.82µs ± 2%    ~     (p=0.305 n=10+10)
	HashArray-24            344ns ± 2%     323ns ± 1%  -6.02%  (p=0.000 n=9+10)

The performance gain is not as dramatic as sha256x over sha256 due to:
1. most of the hashing already occurring through the direct memory hashing logic, and
2. what does not go through direct memory hashing being slowed down by reflect.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-11 17:44:09 -07:00
Joe Tsai
77a92f326d
util/deephash: avoid using sync.Pool for reflect.MapIter (#5333)
In Go 1.19, the reflect.Value.MapRange method uses "function outlining"
so that the allocation of its reflect.MapIter can be inlined into the caller.
If the iterator doesn't escape the caller, it can be stack allocated.
See https://go.dev/cl/400675
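
In practice that means a fresh iterator per call is fine, as long as it does not escape (a sketch):

	// Previously the iterator was recycled via a sync.Pool to avoid the
	// allocation. With Go 1.19's outlined MapRange, a fresh iterator
	// stays on the stack as long as it never escapes this function.
	for iter := v.MapRange(); iter.Next(); {
		// ... hash iter.Key() and iter.Value() ...
	}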

Performance:

	name               old time/op    new time/op    delta
	HashMapAcyclic-24    31.9µs ± 2%    32.1µs ± 1%   ~     (p=0.075 n=10+10)

	name               old alloc/op   new alloc/op   delta
	HashMapAcyclic-24     0.00B          0.00B        ~     (all equal)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-11 00:33:40 -07:00
Joe Tsai
1c3c6b5382
util/sha256x: make Hash.Sum non-escaping (#5338)
Since Hash is a concrete type, we can make it such that
calling Sum does not cause its input to escape.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-10 17:31:44 -07:00
Joe Tsai
76b0e578c5
util/sha256x: new package (#5337)
The hash.Hash provided by sha256.New is much more efficient
if we always provide it with data in multiples of its block size.
This avoids double-copying of data into the internal block buffer
sha256.digest.x. Effectively, we manage a block ourselves
to ensure we only ever call hash.Hash.Write with full blocks.
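
The gist of managing the block ourselves, simplified to a single 64-byte buffer (the real implementation differs in detail):

	// Block512 buffers writes and only ever passes full 64-byte (512-bit)
	// blocks to the underlying hash.Hash, avoiding sha256's internal
	// copy into its own partial block buffer.
	type Block512 struct {
		H  hash.Hash // e.g. sha256.New()
		x  [64]byte
		nx int
	}

	func (b *Block512) HashBytes(p []byte) {
		for len(p) > 0 {
			n := copy(b.x[b.nx:], p)
			b.nx += n
			p = p[n:]
			if b.nx == len(b.x) {
				b.H.Write(b.x[:]) // always a full block
				b.nx = 0
			}
		}
	}

	// Sum must flush any remaining b.x[:b.nx] before reading the digest
	// (elided here).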

Performance:

	name    old time/op    new time/op    delta
	Hash    33.5µs ± 1%    20.6µs ± 1%  -38.40%  (p=0.000 n=10+9)

The logic has gone through CPU-hours of fuzzing.

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-10 15:49:36 -07:00
Joe Tsai
539c5e44c5
util/deephash: always keep values addressable (#5328)
The logic of deephash is both simpler and easier to reason about
if values are always addressable.

In Go, the composite kinds are slices, arrays, maps, structs,
interfaces, pointers, channels, and functions,
where we define "composite" as a Go value that encapsulates
some other Go value (e.g., a map is a collection of key-value entries).

In the cases of pointers and slices, the sub-values are always addressable.

In the cases of arrays and structs, the sub-values are always addressable
if and only if the parent value is addressable.

In the case of maps and interfaces, the sub-values are never addressable.
To make them addressable, we need to copy them onto the heap.

For the purposes of deephash, we do not care about channels and functions.

Non-composite kinds (e.g., strings and ints) are only addressable
if obtained from one of the composite kinds that produce addressable values
(i.e., pointers, slices, addressable arrays, and addressable structs).
A non-addressable, non-composite kind can be made addressable by
allocating it on the heap, obtaining a pointer to it, and dereferencing it.

Thus, if we ensure that values are addressable at the entry points,
and shallow copy sub-values whenever we encounter an interface or map,
then all values are always addressable and we can
rely on that property throughout the logic.
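
The shallow copy for maps and interfaces amounts to the following (a sketch, not the exact code):

	// makeAddressable returns an addressable copy of v by allocating
	// a new value on the heap and copying v into it.
	func makeAddressable(v reflect.Value) reflect.Value {
		v2 := reflect.New(v.Type()).Elem() // heap-allocated, hence addressable
		v2.Set(v)
		return v2
	}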

Performance:

	name                 old time/op    new time/op    delta
	Hash-24                21.5µs ± 1%    19.7µs ± 1%  -8.29%  (p=0.000 n=9+9)
	HashPacketFilter-24    2.61µs ± 1%    2.62µs ± 0%  +0.29%  (p=0.037 n=10+9)
	HashMapAcyclic-24      30.8µs ± 1%    30.9µs ± 1%    ~     (p=0.400 n=9+10)
	TailcfgNode-24         1.84µs ± 1%    1.84µs ± 2%    ~     (p=0.928 n=10+10)
	HashArray-24            324ns ± 2%     332ns ± 2%  +2.45%  (p=0.000 n=10+10)

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-09 22:00:02 -07:00
Maisem Ali
40ec8617ac util/must: rename Do->Get, add Do
Signed-off-by: Maisem Ali <maisem@tailscale.com>
2022-08-06 09:30:10 -07:00
Brad Fitzpatrick
01e8ef7293 util/strs: add new package for string utility funcs
Updates #5309

Change-Id: I677cc6e01050b6e10d8d6907d961b11c7a787a05
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-08-05 15:06:08 -07:00
Joe Tsai
622d80c007
util/must: new package (#5307)
The Do function assists in calling functions that must succeed.
It only interacts well with functions that return (T, err).
Signatures with more return arguments are not supported.
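
Together with the rename commit listed above (Do -> Get, plus a new error-only Do), the package is roughly:

	package must

	// Get returns v as is; it panics if err is non-nil.
	func Get[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	// Do panics if err is non-nil.
	func Do(err error) {
		if err != nil {
			panic(err)
		}
	}

	// Usage: u := must.Get(url.Parse("https://example.com"))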

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
2022-08-05 12:20:50 -07:00
Maisem Ali
a9f6cd41fd all: use syncs.AtomicValue
Signed-off-by: Maisem Ali <maisem@tailscale.com>
2022-08-04 11:52:16 -07:00
Mihai Parparita
4aa88bc2c0 cmd/tsconnect,util/precompress: move precompression to its own package
We have very similar code in corp; moving it to util/precompress allows
it to be reused.

Updates #5133

Signed-off-by: Mihai Parparita <mihai@tailscale.com>
2022-08-03 17:44:57 -07:00
Brad Fitzpatrick
8725b14056 all: migrate more code to net/netip directly
Instead of going through the tailscale.com/net/netaddr transitional
wrappers.

Updates #5162

Change-Id: I3dafd1c2effa1a6caa9b7151ecf6edd1a3fda3dd
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-08-02 13:59:57 -07:00
Brad Fitzpatrick
116f55ff66 all: gofmt for Go 1.19
Updates #5210

Change-Id: Ib02cd5e43d0a8db60c1f09755a8ac7b140b670be
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-08-02 10:08:05 -07:00
Brad Fitzpatrick
04cf46a762 util/deephash: fix unexported time.Time hashing
Updates tailscale/corp#6311

Change-Id: I33cd7e4040966261c2f2eb3d32f29936aeb7f632
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-07-27 09:28:23 -07:00
Brad Fitzpatrick
a12aad6b47 all: convert more code to use net/netip directly
	perl -i -npe 's,netaddr.IPPrefixFrom,netip.PrefixFrom,' $(git grep -l -F netaddr.)
	perl -i -npe 's,netaddr.IPPortFrom,netip.AddrPortFrom,' $(git grep -l -F netaddr. )
	perl -i -npe 's,netaddr.IPPrefix,netip.Prefix,g' $(git grep -l -F netaddr. )
	perl -i -npe 's,netaddr.IPPort,netip.AddrPort,g' $(git grep -l -F netaddr. )
	perl -i -npe 's,netaddr.IP\b,netip.Addr,g' $(git grep -l -F netaddr. )
	perl -i -npe 's,netaddr.IPv6Raw\b,netip.AddrFrom16,g' $(git grep -l -F netaddr. )
	goimports -w .

Then delete some stuff from the net/netaddr shim package which is no
longer needed.

Updates #5162

Change-Id: Ia7a86893fe21c7e3ee1ec823e8aba288d4566cd8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-07-25 21:53:49 -07:00
Brad Fitzpatrick
6a396731eb all: use various net/netip parse funcs directly
Mechanical change with perl+goimports.

Changed {Must,}Parse{IP,IPPrefix,IPPort} to their netip variants, then
goimports -d .

Finally, removed the net/netaddr wrappers, to prevent future use.

Updates #5162

Change-Id: I59c0e38b5fbca5a935d701645789cddf3d7863ad
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-07-25 21:12:28 -07:00
Brad Fitzpatrick
7eaf5e509f net/netaddr: start migrating to net/netip via new netaddr adapter package
Updates #5162

Change-Id: Id7bdec303b25471f69d542f8ce43805328d56c12
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-07-25 16:20:43 -07:00
Brad Fitzpatrick
2a22ea3e83 util/deephash: generate type-specific hasher funcs
name                old time/op    new time/op    delta
Hash-8                71.1µs ± 2%    71.5µs ± 1%     ~     (p=0.114 n=9+8)
HashPacketFilter-8    8.39µs ± 1%    4.83µs ± 2%  -42.38%  (p=0.000 n=8+9)
HashMapAcyclic-8      56.2µs ± 1%    56.9µs ± 2%   +1.17%  (p=0.035 n=10+9)
TailcfgNode-8         6.49µs ± 2%    3.54µs ± 1%  -45.37%  (p=0.000 n=9+9)
HashArray-8            729ns ± 2%     566ns ± 3%  -22.30%  (p=0.000 n=10+10)

name                old alloc/op   new alloc/op   delta
Hash-8                 24.0B ± 0%     24.0B ± 0%     ~     (all equal)
HashPacketFilter-8     24.0B ± 0%     24.0B ± 0%     ~     (all equal)
HashMapAcyclic-8       0.00B          0.00B          ~     (all equal)
TailcfgNode-8          0.00B          0.00B          ~     (all equal)
HashArray-8            0.00B          0.00B          ~     (all equal)

name                old allocs/op  new allocs/op  delta
Hash-8                  1.00 ± 0%      1.00 ± 0%     ~     (all equal)
HashPacketFilter-8      1.00 ± 0%      1.00 ± 0%     ~     (all equal)
HashMapAcyclic-8        0.00           0.00          ~     (all equal)
TailcfgNode-8           0.00           0.00          ~     (all equal)
HashArray-8             0.00           0.00          ~     (all equal)

Change-Id: I34c4e786e748fe60280646d40cc63a2adb2ea6fe
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-07-19 11:33:13 -07:00
Mihai Parparita
e37167b3ef ipn/localapi: add API for uploading client metrics
Clients may have platform-specific metrics they would like uploaded
(e.g. extracted from MetricKit on iOS). Add a new local API endpoint
that allows metrics to be updated by a simple name/value JSON-encoded
struct.

Signed-off-by: Mihai Parparita <mihai@tailscale.com>
2022-07-11 09:51:09 -07:00
Brad Fitzpatrick
6b71568eb7 util/cloudenv: add Azure support & DNS IPs
Also rewrite cloud detection so that it makes at most one metadata
discovery request across all clouds, only issuing a first (or second) request
as confidence increases. Work remains for Windows, but this is a start.

Also add Cloud to tailcfg.Hostinfo, which helped with testing via
"tailscale debug hostinfo".

Updates #4983 (Linux only)
Updates #4984

Change-Id: Ib03337089122ce0cb38c34f724ba4b4812bc614e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-30 17:03:46 -07:00
Brad Fitzpatrick
aa37aece9c ipn/ipnlocal, net/dns*, util/cloudenv: add AWS DNS support
And remove the GCP special-casing from ipn/ipnlocal; do it only in the
forwarder for *.internal.

Fixes #4980
Fixes #4981

Change-Id: I5c481e96d91f3d51d274a80fbd37c38f16dfa5cb
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-29 20:37:44 -07:00
Brad Fitzpatrick
88c2afd1e3 ipn/ipnlocal, net/dns*, util/cloudenv: specialize DNS config on Google Cloud
This does three things:

* If you're on GCP, it adds a *.internal DNS split route to the
  metadata server, so we never break GCP DNS names. This lets people
  have some Tailscale nodes on GCP and some not (e.g. laptops at home)
  without having to add a Tailnet-wide *.internal DNS route.
  If you already have such a route, though, it won't overwrite it.

* If the 100.100.100.100 DNS forwarder has nowhere to forward to,
  it forwards it to the GCP metadata IP, which forwards to 8.8.8.8.
  This means there are never errNoUpstreams ("upstream nameservers not set")
  errors on GCP due to e.g. mangled /etc/resolv.conf (GCP default VMs
  don't have systemd-resolved, so it's likely a DNS supremacy fight)

* makes the DNS fallback mechanism use the GCP metadata IP as a
  fallback before our hosted HTTP-based fallbacks

I created a default GCP VM from their web wizard. It has no
systemd-resolved.

I then made its /etc/resolv.conf be empty and deleted its GCP
hostnames in /etc/hosts.

I then logged in to a tailnet with no global DNS settings.

With this, tailscaled writes /etc/resolv.conf (direct mode, as no
systemd-resolved) and sets it to 100.100.100.100, which then has
regular DNS via the metadata IP and *.internal DNS via the metadata IP
as well. If the tailnet configures explicit DNS servers, those are used
instead, except for *.internal.

This also adds a new util/cloudenv package based on version/distro
where the cloud type is only detected once. We'll likely expand it in
the future for other clouds, doing variants of this change for other
popular cloud environments.

Fixes #4911

RELNOTES=Google Cloud DNS improvements

Change-Id: I19f3c2075983669b2b2c0f29a548da8de373c7cf
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-29 17:39:13 -07:00
Brad Fitzpatrick
35782f891d util/deephash: add canMemHash func + typeInfo property
Currently unused. (breaking up a bigger change)

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-25 13:09:30 -07:00
Brad Fitzpatrick
7b9a901489 util/deephash: add packet filter benchmark
(breaking up parts of another change)

This adds a PacketFilter hashing benchmark with an input that both
contains every possible field and is somewhat representative of
the shape of real packet filters.

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-25 12:55:15 -07:00
Brad Fitzpatrick
8c5c87be26 util/deephash: fix collisions between different types
Updates #4883

Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-21 22:29:04 -07:00
Brad Fitzpatrick
59ed846277 util/singleflight: add fork of singleflight with generics
Forked from golang.org/x/sync/singleflight at
the x/sync repo's commit 67f06af15bc961c363a7260195bcd53487529a21

Updates golang/go#53427

Change-Id: Iec2b47b7777940017bb9b3db9bd7d93ba4a2e394
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-17 10:20:16 -07:00
Brad Fitzpatrick
757ecf7e80 util/deephash: fix map hashing when key & element have the same type
Regression from 09afb8e35b, in which the
same reflect.Value scratch value was being used as the map iterator
copy destination.

Also: make nil and empty maps hash differently, add test.

Fixes #4871

Co-authored-by: Josh Bleecher Snyder <josharian@gmail.com>
Change-Id: I67f42524bc81f694c1b7259d6682200125ea4a66
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-16 22:29:47 -07:00
Brad Fitzpatrick
f31588786f util/deephash: don't track cycles on non-recursive types
name              old time/op    new time/op    delta
Hash-8              67.3µs ±20%    76.5µs ±16%     ~     (p=0.143 n=10+10)
HashMapAcyclic-8    63.0µs ± 2%    56.3µs ± 1%  -10.65%  (p=0.000 n=10+8)
TailcfgNode-8       9.18µs ± 2%    6.52µs ± 3%  -28.96%  (p=0.000 n=9+10)
HashArray-8          732ns ± 3%     709ns ± 1%   -3.21%  (p=0.000 n=10+10)

name              old alloc/op   new alloc/op   delta
Hash-8               24.0B ± 0%     24.0B ± 0%     ~     (all equal)
HashMapAcyclic-8     0.00B          0.00B          ~     (all equal)
TailcfgNode-8        0.00B          0.00B          ~     (all equal)
HashArray-8          0.00B          0.00B          ~     (all equal)

name              old allocs/op  new allocs/op  delta
Hash-8                1.00 ± 0%      1.00 ± 0%     ~     (all equal)
HashMapAcyclic-8      0.00           0.00          ~     (all equal)
TailcfgNode-8         0.00           0.00          ~     (all equal)
HashArray-8           0.00           0.00          ~     (all equal)

Change-Id: I28642050d837dff66b2db54b2b0e6d272a930be8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-16 18:39:10 -07:00
Brad Fitzpatrick
36ea837736 util/deephash: fix map hashing to actually hash elements
Fixes #4868

Change-Id: I574fd139cb7f7033dd93527344e6aa0e625477c7
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
2022-06-16 11:32:12 -07:00