Instead of hashing the human-readable formatted form of a number,
hash the native machine bits of the integer itself.
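For illustration, a minimal sketch of the difference (helper names
hypothetical; not the actual deephash code):

    package hashsketch

    import (
        "crypto/sha256"
        "encoding/binary"
        "fmt"
    )

    // hashFormatted hashes the human-readable decimal form of v.
    func hashFormatted(v uint64) [sha256.Size]byte {
        return sha256.Sum256([]byte(fmt.Sprintf("%d", v)))
    }

    // hashNative hashes the native machine bits of v directly,
    // skipping the formatting step entirely.
    func hashNative(v uint64) [sha256.Size]byte {
        var b [8]byte
        binary.LittleEndian.PutUint64(b[:], v)
        return sha256.Sum256(b[:])
    }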
There is a small performance gain for this:
name              old time/op  new time/op  delta
Hash-8            75.7µs ± 1%  76.0µs ± 2%     ~     (p=0.315 n=10+9)
HashMapAcyclic-8  63.1µs ± 3%  61.3µs ± 1%  -2.77%   (p=0.000 n=10+10)
TailcfgNode-8     10.3µs ± 1%  10.2µs ± 1%  -1.48%   (p=0.000 n=10+10)
HashArray-8       1.07µs ± 1%  1.05µs ± 1%  -1.79%   (p=0.000 n=10+10)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The swapping of bufio.Writer between hasher and mapHasher is subtle.
Just embed a hasher in mapHasher to avoid complexity here.
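A rough sketch of the shape of the change (types simplified; not the
exact deephash definitions):

    package deephashsketch

    import "bufio"

    // hasher owns the buffered writer that hash input is written to.
    type hasher struct {
        bw *bufio.Writer
    }

    // mapHasher previously held its own *bufio.Writer that was swapped
    // with the hasher's writer around each map traversal. Embedding a
    // hasher gives mapHasher the same buffered writer directly, so no
    // swapping is needed.
    type mapHasher struct {
        hasher // embedded; shares the hasher's bufio.Writer
    }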
No notable change in performance:
name              old time/op  new time/op  delta
Hash-8            76.7µs ± 1%  77.0µs ± 1%     ~     (p=0.182 n=9+10)
HashMapAcyclic-8  62.4µs ± 1%  62.5µs ± 1%     ~     (p=0.315 n=10+9)
TailcfgNode-8     10.3µs ± 1%  10.3µs ± 1%  -0.62%   (p=0.004 n=10+9)
HashArray-8       1.07µs ± 1%  1.06µs ± 1%  -0.98%   (p=0.001 n=8+9)
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This is a simplified rate limiter geared to exactly our needs:
a fast, mono.Time-based rate limiter for use in tstun.
It was generated by stripping down the x/time/rate rate limiter
to just our needs and switching it to use mono.Time.
It removes one time.Now call per packet.
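A minimal sketch of such a limiter (a simplified token bucket, not
the actual tstun code; it assumes the mono package's Now and Sub
described in the package mono commit below):

    package tstunsketch

    import "tailscale.com/tstime/mono"

    // limiter is a token bucket whose refill is computed from
    // mono.Now, so admitting a packet costs no time.Now call.
    type limiter struct {
        rate   float64   // tokens added per second
        burst  float64   // bucket capacity
        tokens float64   // tokens currently available
        last   mono.Time // time of the last update
    }

    // Allow reports whether one more packet may be sent now.
    func (l *limiter) Allow() bool {
        now := mono.Now()
        l.tokens += now.Sub(l.last).Seconds() * l.rate
        l.last = now
        if l.tokens > l.burst {
            l.tokens = l.burst
        }
        if l.tokens < 1 {
            return false
        }
        l.tokens--
        return true
    }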
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
magicsock makes multiple calls to Now per packet.
Move to mono.Now. Changing some of the calls to
use package mono has a cascading effect,
causing non-per-packet call sites to also switch.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
There's a call to Now once per packet.
Move to mono.Now.
Though the current implementation provides high precision,
we document it as coarse, to preserve the ability
to switch to a coarse monotonic time later.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Package mono provides fast monotonic time.
Its primary advantage is that it is fast:
It is approximately twice as fast as time.Now.
This is because time.Now uses two clock calls,
one for wall time and one for monotonic time.
We ask for the current time 4-6 times per network packet.
At ~50ns per call to time.Now, that's enough to show
up in CPU profiles.
Package mono is a first step towards addressing that.
It is designed to be a near drop-in replacement for package time.
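A minimal sketch of the idea (not the exact implementation): capture
one baseline time.Time at startup, then derive Time from time.Since,
which uses only the monotonic clock when its argument carries a
monotonic reading.

    package mono

    import "time"

    // Time is a monotonic timestamp: nanoseconds since an arbitrary,
    // process-local baseline. It has no wall-clock meaning.
    type Time int64

    // baseline is read once at startup; it includes a monotonic reading.
    var baseline = time.Now()

    // Now returns the current monotonic time with a single clock read.
    func Now() Time {
        return Time(time.Since(baseline))
    }

    // Sub returns t-n as a time.Duration.
    func (t Time) Sub(n Time) time.Duration {
        return time.Duration(t - n)
    }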
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Go 1.17 switches to a register ABI on amd64 platforms.
Part of that switch is that go and defer calls use an argument-less
closure, which allocates. This means that we have an extra
alloc in some DNS work. That's unfortunate but not a showstopper,
and I don't see a clear path to fixing it.
The other performance benefits from the register ABI will all
but certainly outweigh this extra alloc.
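Illustration only (the function below is hypothetical, not the DNS
code in question):

    package abisketch

    import "context"

    // Under the Go 1.17 register ABI, a go (or defer) statement whose
    // call takes arguments is compiled as a call to an argument-less
    // closure; capturing ctx and name for that closure is what can
    // introduce the extra allocation.
    func lookupAsync(ctx context.Context, name string, lookup func(context.Context, string)) {
        go lookup(ctx, name) // effectively: go func() { lookup(ctx, name) }()
    }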
Fixes #2545
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The kr/pty module moved to creack/pty per the kr/pty README[1].
creack/pty brings in support for a number of OS/arch combos that
are lacking in kr/pty.
Run `go mod tidy` while here.
[1] https://github.com/kr/pty/blob/master/README.md
Signed-off-by: Aaron Bieber <aaron@bolddaemon.com>
I don't know how to get access to a real packet, so this commit is
based entirely on:
+------------+--------------+------------------------------+
| Field Name | Field Type | Description |
+------------+--------------+------------------------------+
| NAME | domain name | MUST be 0 (root domain) |
| TYPE | u_int16_t | OPT (41) |
| CLASS | u_int16_t | requestor's UDP payload size |
| TTL | u_int32_t | extended RCODE and flags |
| RDLEN | u_int16_t | length of all RDATA |
| RDATA | octet stream | {attribute,value} pairs |
+------------+--------------+------------------------------+
From https://datatracker.ietf.org/doc/html/rfc6891#section-6.1.2
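As a concrete sketch of that layout (hypothetical helper; byte-level
encoding only):

    package dnssketch

    // appendOPT appends a minimal EDNS0 OPT pseudo-RR (RFC 6891,
    // section 6.1.2) to msg, following the field layout above.
    func appendOPT(msg []byte, udpPayloadSize uint16) []byte {
        msg = append(msg, 0x00)       // NAME: 0 (root domain)
        msg = append(msg, 0x00, 0x29) // TYPE: OPT (41)
        msg = append(msg, byte(udpPayloadSize>>8), byte(udpPayloadSize)) // CLASS: requestor's UDP payload size
        msg = append(msg, 0x00, 0x00, 0x00, 0x00) // TTL: extended RCODE and flags (all zero here)
        msg = append(msg, 0x00, 0x00)             // RDLEN: 0; no {attribute,value} pairs follow
        return msg
    }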
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
The handoff between tstun.Wrap's Read and poll methods
is one of the per-packet hotspots. It shows up in pprof.
Making outbound buffered increases throughput.
It is hard to measure exactly how much, because the numbers
are highly variable, but I'd estimate it at about 1%,
using the best observed max throughput across three runs.
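A stripped-down illustration of the handoff (names and structure
hypothetical, not the tstun code):

    package main

    import "fmt"

    func main() {
        // Was effectively make(chan []byte). Giving outbound a
        // one-packet buffer lets the poll side fetch the next packet
        // while the Read side is still handling the previous one,
        // instead of blocking on an unbuffered rendezvous.
        outbound := make(chan []byte, 1)

        go func() { // the poll side: reads from the TUN device
            for i := 0; i < 3; i++ {
                outbound <- []byte{byte(i)}
            }
            close(outbound)
        }()

        for pkt := range outbound { // the Read side
            fmt.Println("packet", pkt)
        }
    }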
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Tested manually with:
$ go test -v ./net/dnscache/ -dial-test=bogusplane.dev.tailscale.com:80
Where bogusplane has three A records, only one of which works.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
A previously added float64 metric was being ignored in tsweb, which
previously accepted only int64 and int values. It can be handled in
the same way as ints.
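A sketch of that handling (simplified; not the actual tsweb code):

    package tswebsketch

    import (
        "fmt"
        "io"
    )

    // writeMetric writes one metric line. float64 values, previously
    // ignored, are handled the same way as the integer kinds.
    func writeMetric(w io.Writer, name string, v interface{}) {
        switch v := v.(type) {
        case int, int64:
            fmt.Fprintf(w, "%s %d\n", name, v)
        case float64:
            fmt.Fprintf(w, "%s %v\n", name, v)
        }
    }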
Signed-off-by: julianknodt <julianknodt@gmail.com>
Instead of blasting away at all upstream resolvers at the same time,
make a timing plan upon reconfiguration, giving each upstream an
associated start delay depending on the overall forwarding config.
So now if you have two or four upstream Google or Cloudflare DNS
servers (e.g. two IPv4 and two IPv6), we usually send only one
query, not four.
This is especially nice on iOS where we start fewer DoH queries and
thus fewer HTTP/1 requests (because we still disable HTTP/2 on iOS),
fewer sockets, fewer goroutines, and fewer associated HTTP buffers,
etc., reducing overall memory burstiness.
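A sketch of the timing plan (names and delay values illustrative,
not the actual forwarder code):

    package fwdsketch

    import "time"

    // resolverAndDelay pairs an upstream with the delay to wait
    // before querying it, computed once at reconfiguration time.
    type resolverAndDelay struct {
        addr       string
        startDelay time.Duration
    }

    // planDelays staggers the upstreams so the first is queried
    // immediately and the rest only if earlier ones are slow.
    func planDelays(upstreams []string) []resolverAndDelay {
        plan := make([]resolverAndDelay, 0, len(upstreams))
        for i, addr := range upstreams {
            plan = append(plan, resolverAndDelay{
                addr:       addr,
                startDelay: time.Duration(i) * 200 * time.Millisecond,
            })
        }
        return plan
    }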
Fixes #2436
Updates tailscale/corp#2250
Updates tailscale/corp#2238
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Add a place to hang state in a future change for #2436.
For now this just simplifies the send signature without
any functional change.
Updates #2436
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The previous algorithm used a map of all visited pointers.
The strength of this approach is that it quickly prunes any nodes
that we have ever visited before. The detriment of the approach
is that pruning is heavily dependent on the order in which pointers
were visited. This is especially relevant for hashing a map
where map entries are visited in a non-deterministic manner,
which would cause the map hash to be non-deterministic
(which defeats the point of a hash).
This new algorithm uses a stack of all visited pointers,
similar to how github.com/google/go-cmp performs cycle detection.
When we visit a pointer, we push it onto the stack, and when
we leave a pointer, we pop it from the stack.
Before visiting a pointer, we first check whether the pointer exists
anywhere in the stack. If yes, then we prune the node.
The detriment of this approach is that we may hash a node more often
than before since we do not prune as aggressively.
The set of visited pointers up until any node is only the
path of nodes up to that node and not any other pointers
that may have been visited elsewhere. This gives us
deterministic hashing regardless of visit order.
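In sketch form (simplified; not the exact deephash code):

    package cyclesketch

    import "unsafe"

    // visitStack holds the pointers on the current visit path.
    type visitStack []unsafe.Pointer

    func (s *visitStack) push(p unsafe.Pointer) { *s = append(*s, p) }
    func (s *visitStack) pop()                  { *s = (*s)[:len(*s)-1] }

    // contains reports whether p is already on the current path,
    // i.e. whether visiting it again would form a cycle.
    func (s visitStack) contains(p unsafe.Pointer) bool {
        for _, q := range s {
            if q == p {
                return true
            }
        }
        return false
    }

    // When hashing through a pointer p:
    //
    //     if visited.contains(p) { hash a cycle marker and prune }
    //     visited.push(p)
    //     hash the value that p points to
    //     visited.pop()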
We can now delete hashMapFallback and associated complexity,
which only exists because the previous approach was non-deterministic
in the presence of cycles.
This fixes a failure of the old algorithm where obviously different
values were treated as equal because the pruning was too aggressive.
See https://github.com/tailscale/tailscale/issues/2443#issuecomment-883653534
The new algorithm is slightly slower since it prunes less aggressively:
name              old time/op  new time/op  delta
Hash-8            66.1µs ± 1%  68.8µs ± 1%  +4.09%   (p=0.000 n=19+19)
HashMapAcyclic-8  63.0µs ± 1%  62.5µs ± 1%  -0.76%   (p=0.000 n=18+19)
TailcfgNode-8     9.79µs ± 2%  9.88µs ± 1%  +0.95%   (p=0.000 n=19+17)
HashArray-8        643ns ± 1%   653ns ± 1%  +1.64%   (p=0.000 n=19+19)
However, a slower but correct algorithm seems preferable
to a faster but incorrect one.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>