Changes made:
* Avoid "encoding/json" for JSON processing, and instead use
"github.com/go-json-experiment/json/jsontext".
Use jsontext.Value.IsValid for validation, which is much faster.
Use jsontext.AppendQuote instead of our own JSON escaping.
* In drainPending, use a different maxLen depending on lowMem.
In lowMem mode, it is better to perform multiple uploads
than it is to construct a large body that OOMs the process.
* In drainPending, if an error is encountered while draining,
construct an error message in the logtail JSON format
rather than something that is invalid JSON.
* In appendTextOrJSONLocked, use jsontext.Decoder to check
whether the input is a valid JSON object. This is faster than
the previous approach of unmarshaling into map[string]any and
then re-marshaling that data structure.
This is especially beneficial for network flow logging,
which produces relatively large JSON objects.
* In appendTextOrJSONLocked, enforce maxSize on the input.
Otherwise, an oversized entry could never be uploaded
because it would exceed the maximum body size
that the Tailscale logs service accepts.
* Use "tailscale.com/util/truncate" to properly truncate a string
on valid UTF-8 boundaries.
* In general, remove unnecessary spaces in JSON output.
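
A minimal sketch of the jsontext usage described above
(a standalone illustration, not the actual logtail code):

package main

import (
	"bytes"
	"fmt"

	"github.com/go-json-experiment/json/jsontext"
)

func main() {
	// Validate a JSON value without unmarshaling into map[string]any.
	v := jsontext.Value(`{"text":"hello","v":1}`)
	fmt.Println(v.IsValid()) // true

	// Quote arbitrary text as a JSON string instead of hand-rolled escaping.
	b, _ := jsontext.AppendQuote(nil, "multi\nline \"text\"")
	fmt.Printf("%s\n", b)

	// Check that the input is a JSON object by peeking at the first token.
	dec := jsontext.NewDecoder(bytes.NewReader(v))
	fmt.Println(dec.PeekKind() == '{') // true
}
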
Performance:
name       old time/op    new time/op    delta
WriteText     776ns ± 2%     596ns ± 1%   -23.24%  (p=0.000 n=10+10)
WriteJSON     110µs ± 0%       9µs ± 0%   -91.77%  (p=0.000 n=8+8)

name       old alloc/op   new alloc/op   delta
WriteText      448B ± 0%        0B       -100.00%  (p=0.000 n=10+10)
WriteJSON    37.9kB ± 0%     0.0kB ± 0%   -99.87%  (p=0.000 n=10+10)

name       old allocs/op  new allocs/op  delta
WriteText      1.00 ± 0%      0.00       -100.00%  (p=0.000 n=10+10)
WriteJSON     1.08k ± 0%     0.00k ± 0%   -99.91%  (p=0.000 n=10+10)
For text payloads, this is 1.30x faster.
For JSON payloads, this is 12.2x faster.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Buffer.Write has the exact same signature as io.Writer.Write.
The latter requires that implementations never retain
the provided input buffer, which is an expectation that most
users will have when they see a Write signature.
The current behavior of Buffer.Write where it does retain
the input buffer is a risky precedent to set.
Switch the behavior to match io.Writer.Write.
There are only two implementations of Buffer in existence:
* logtail.memBuffer
* filch.Filch
The former can be fixed by cloning the input to Write.
This will cause an extra allocation in every Write,
but we can fix that with pooling on the caller side
in a follow-up PR.
The latter only passes the input to os.File.Write,
which does respect the io.Writer.Write requirements.
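
A rough sketch of the memBuffer change (a simplified stand-in,
not the real type):

package logtail

import "bytes"

// memBuffer here is a simplified stand-in for the real implementation.
type memBuffer struct {
	pending [][]byte
}

// Write clones p before retaining it, so the caller may reuse its buffer
// after Write returns, as io.Writer requires.
func (b *memBuffer) Write(p []byte) (int, error) {
	b.pending = append(b.pending, bytes.Clone(p))
	return len(p), nil
}
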
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This is based on empirical testing using actual logs data.
FastestCompression only incurs a marginal <1% compression ratio hit
for a 2.25x reduction in memory use for small payloads
(which are common if log uploads happen at a decently high frequency).
The memory savings for large payloads are much lower
(less than 1.1x reduction).
LowMemory only incurs a marginal <5% hit on performance
for a 1.6-2.0x reduction in memory use for small or large payloads.
The memory gains for both settings justify the loss of benefits,
which are arguably minimal.
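
For reference, roughly equivalent settings expressed directly with
github.com/klauspost/compress/zstd (the actual code goes through the
zstdframe wrapper, whose option names may differ):

package main

import (
	"fmt"

	"github.com/klauspost/compress/zstd"
)

func main() {
	enc, err := zstd.NewWriter(nil,
		zstd.WithEncoderLevel(zstd.SpeedFastest), // fastest compression level
		zstd.WithLowmem(true),                    // trade some speed for lower memory use
	)
	if err != nil {
		panic(err)
	}
	defer enc.Close()

	// Stateless one-shot compression of an upload body.
	out := enc.EncodeAll([]byte(`{"text":"hello"}`), nil)
	fmt.Println(len(out))
}
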
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Rather than pass around a scratch buffer, put it on the Logger.
This is a baby step towards removing the background uploading
goroutine and starting it as needed.
Updates tailscale/corp#18514 (insofar as it led me to look at this code)
Change-Id: I6fd94581c28bde40fdb9fca788eb9590bcedae1b
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Use the zstdframe package where sensible instead of plumbing
around our own zstd.Encoder just for stateless operations.
This causes logtail to have a dependency on zstd,
but that's arguably okay since zstd support is implicit
to the protocol between a client and the logging service.
Also, virtually every caller to logger.NewLogger was
manually setting up a zstd.Encoder anyways,
meaning that zstd was functionally always a dependency.
Updates #cleanup
Updates tailscale/corp#18514
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
io.Writer says you need to write the full buffer on err=nil
(the result int should equal the input buffer length).
We weren't doing that. We used to, but at some point the verbose
filtering was modifying buf before the final return of len(buf).
We've probably been getting lucky that callers haven't looked at our
result and turned it into a short write error.
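
Roughly, the shape of the fix (Logger, filterVerbose, and send here are
hypothetical stand-ins, not the real code):

package logtail

// Logger is a simplified stand-in for the real type.
type Logger struct{}

func (l *Logger) filterVerbose(buf []byte) []byte { return buf }
func (l *Logger) send(buf []byte) error           { return nil }

func (l *Logger) Write(buf []byte) (int, error) {
	inLen := len(buf)          // remember the original length up front
	buf = l.filterVerbose(buf) // filtering may shrink or rewrite buf
	if err := l.send(buf); err != nil {
		return 0, err
	}
	return inLen, nil // err == nil implies n == len(original input)
}
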
Updates #cleanup
Updates tailscale/corp#15664
Change-Id: I01e765ba35b86b759819e38e0072eceb9d10d75c
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
I'm not saying it works, but it compiles.
Updates #5794
Change-Id: I2f3c99732e67fe57a05edb25b758d083417f083e
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The retry logic was pathological in the following ways:
* If we restarted the logging service, any pending uploads
would be placed in a retry-loop where it depended on backoff.Backoff,
which was too aggressive. It would retry failures within milliseconds,
taking at least 10 retries to hit a delay of 1 second.
* In the event where a logstream was rate limited,
the aggressive retry logic would severely exacerbate the problem
since each retry would also log an error message.
It is by chance that the rate of log error spam
does not happen to exceed the rate limit itself.
We modify the retry logic in the following ways:
* We now respect the "Retry-After" header sent by the logging service.
* Lacking a "Retry-After" header, we retry after a hard-coded period of
30 to 60 seconds. This avoids the thundering-herd effect when all nodes
try reconnecting to the logging service at the same time after a restart.
* We do not treat a status 400 as having been uploaded.
This is simply not the behavior of the logging service.
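
A sketch of the Retry-After handling described above (an illustrative
helper, not the real logtail code; only the delay-in-seconds form of
Retry-After is handled here):

package logtail

import (
	"math/rand/v2"
	"net/http"
	"strconv"
	"strings"
	"time"
)

// retryDelay reports how long to wait before retrying a failed upload.
func retryDelay(resp *http.Response) time.Duration {
	if s := resp.Header.Get("Retry-After"); s != "" {
		if n, err := strconv.Atoi(strings.TrimSpace(s)); err == nil && n > 0 {
			return time.Duration(n) * time.Second
		}
	}
	// No usable Retry-After header: back off for a random duration in
	// [30s, 60s) to avoid a thundering herd after a service restart.
	return 30*time.Second + rand.N(30*time.Second)
}
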
Updates tailscale/corp#11213
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
We're using it in more and more places, and it's not really specific to
our use of Wireguard (and does more than just link/interface monitoring).
Also removes the separate interface we had for it in sockstats -- it's
a small enough package (we already pull in all of its dependencies
via other paths) that it's not worth the extra complexity.
Updates #7621
Updates #7850
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Using log.Printf may end up being printed out to the console, which
is not desirable. I noticed this when I was investigating some client
logs with `sockstats: trace "NetcheckClient" was overwritten by another`.
That turns out to be harmless/expected (the netcheck client will fall back
to the DERP client in some cases, which does its own sockstats trace).
However, the log output could be visible to users if running the
`tailscale netcheck` CLI command, which would be needlessly confusing.
Updates tailscale/corp#9230
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Split apart polling of sockstats and logging them to disk. Add a 3
second delay before writing logs to disk to prevent an infinite upload
loop when uploading stats to logcatcher.
Fixes #7719
Signed-off-by: Will Norris <will@tailscale.com>
Effectively reverts #249, since the server side was fixed (with #251?)
to send a 200 OK/content-length 0 response.
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Makes it cheaper/simpler to persist values, and encourages reuse of
labels as opposed to generating an arbitrary number.
Updates tailscale/corp#9230
Updates #3363
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Share the same underlying implementation for both PrivateID and PublicID.
For the shared methods, declare them in the same order.
Only keep documentation on methods without obvious meaning.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Uses the hooks added by tailscale/go#45 to instrument the reads and
writes on the major code paths that do network I/O in the client. The
convention is to use "<package>.<type>:<label>" as the annotation for
the responsible code path.
Enabled on iOS, macOS and Android only, since mobile platforms are the
ones we're most interested in, and we are less sensitive to any
throughput degradation due to the per-I/O callback overhead (macOS is
also enabled for ease of testing during development).
For now just exposed as counters on a /v0/sockstats PeerAPI endpoint.
We also keep track of the current interface so that we can break out
the stats by interface.
Updates tailscale/corp#9230
Updates #3363
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
The log ID types were moved to a separate package so that
code that only depends on log ID types does not need to link
in the logic for the logtail client itself.
Not all code needs the logtail client.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The 255 byte limit was chosen more than 3 years ago (tailscale/corp@929635c9d9),
when iOS was operating under much more significant memory constraints.
With iOS 15 the network extension has an increased limit, so increasing
it to 4K should be fine.
The motivating factor was that the network interfaces being logged
by linkChange in wgengine/userspace.go were getting truncated, and it
would be useful to know why in some cases we're choosing the pdp_ip1
cell interface instead of the pdp_ip0 one.
Updates #7184
Updates #7188
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
This updates all source files to use a new standard header for copyright
and license declaration. Notably, copyright no longer includes a date,
and we now use the standard SPDX-License-Identifier header.
This commit was done almost entirely mechanically with perl, and then
some minimal manual fixes.
Updates #6865
Signed-off-by: Will Norris <will@tailscale.com>
Instead of a static FlushDelay configuration value, use a FlushDelayFn
function that we invoke every time we decide to send logs. This will allow
mobile clients to be more dynamic about when to send logs.
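
Something like the following shape (treat the exact names as
illustrative rather than the precise logtail API):

package logtail

import "time"

// Config sketches the relevant part of the logger configuration.
type Config struct {
	// FlushDelayFn, if non-nil, is consulted each time there are new
	// logs to send and returns how long to buffer before uploading.
	FlushDelayFn func() time.Duration
}
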
Updates #6768
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Many packages reference the logtail ID types,
but unfortunately pull in the transitive dependencies of logtail.
Fix this problem by putting the log ID types in their own package
with minimal dependencies.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
This function is no longer necessary as you can trivially rewrite:
logtail.MustParsePublicID(...)
with:
must.Get(logtail.ParsePublicID(...))
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The //go:build syntax was introduced in Go 1.17:
https://go.dev/doc/go1.17#build-lines
gofmt has kept the +build and go:build lines in sync since
then, but enough time has passed. Time to remove them.
Done with:
perl -i -npe 's,^// \+build.*\n,,' $(git grep -l -F '+build')
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Per chat. This is close enough to realtime but massively reduces
the number of HTTP requests (which you can verify with
TS_DEBUG_LOGTAIL_WAKES and watching tailscaled run at start).
By contrast, this is set to 2 minutes on mobile.
Change-Id: Id737c7924d452de5c446df3961f5e94a43a33f1f
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
The mobile implementation had a 2 minute ticker going all the time
to do a channel send. Instead, schedule it as needed based on activity.
Then we can be actually idle for long periods of time.
Updates #3363
Change-Id: I0dba4150ea7b94f74382fbd10db54a82f7ef6c29
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Upstream optimizations to the Go time package will make
unmarshaling of time.Time 3-6x faster. See:
* https://go.dev/cl/425116
* https://go.dev/cl/425197
* https://go.dev/cl/429862
The last optimization avoids a []byte -> string allocation
if the timestamp string is less than 32B.
Unfortunately, the presence of a timezone breaks that optimization.
Drop recording of timezone as this is non-essential information.
Most of the performance gains are upon unmarshal,
but there is also a slight performance benefit to
not marshaling the timezone as well.
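
To illustrate the effect of dropping the timezone (a standalone example,
not logtail code):

package main

import (
	"fmt"
	"time"
)

func main() {
	t := time.Date(2009, 11, 10, 23, 0, 0, 123456789, time.UTC)

	// In UTC the RFC 3339 form is 30 bytes, under the 32B fast path:
	fmt.Println(t.Format(time.RFC3339Nano)) // 2009-11-10T23:00:00.123456789Z

	// With a fixed zone offset it grows to 35 bytes, breaking the optimization:
	loc := time.FixedZone("PST", -8*60*60)
	fmt.Println(t.In(loc).Format(time.RFC3339Nano)) // 2009-11-10T15:00:00.123456789-08:00
}
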
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The copy ID operates similarly to a CC in email, where
a message is sent to both the primary ID and the copy ID.
A given log message is uploaded once, but the log server
records it twice, once under each ID.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
The io/ioutil package has been deprecated as of Go 1.16 [1]. This commit
replaces the existing io/ioutil functions with their new definitions in
io and os packages.
Reference: https://golang.org/doc/go1.16#ioutil
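
For reference, the typical replacements:

ioutil.ReadAll(r)        -> io.ReadAll(r)
ioutil.ReadFile(name)    -> os.ReadFile(name)
ioutil.WriteFile(name, data, perm) -> os.WriteFile(name, data, perm)
ioutil.TempFile(dir, p)  -> os.CreateTemp(dir, p)
ioutil.TempDir(dir, p)   -> os.MkdirTemp(dir, p)
ioutil.NopCloser(r)      -> io.NopCloser(r)
ioutil.Discard           -> io.Discard
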
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Initialize logtail and provide an uploader that works in the
browser (we make a no-cors cross-origin request to avoid having to
open up the logcatcher servers to CORS).
Fixes #5147
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
Just reading the code again in prep for some alloc reductions.
Change-Id: I065226ea794b7ec7144c2b15942d35131c9313a8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It is not idiomatic for Go code to panic for situations that
can be normal. For example, if a server receives a PrivateID
from a client, it is normal for the server to call
PrivateID.PublicID to validate that the PublicID matches.
However, doing so would panic prior to this change.
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Saves some allocs. Not hot, but because we can now.
And a const instead of a var.
Change-Id: Ieb2b64534ed38051c36b2c0aa2e82739d9d0e015
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>