Just reading the code again in prep for some alloc reductions.
Change-Id: I065226ea794b7ec7144c2b15942d35131c9313a8
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
It is not idiomatic for Go code to panic for situations that
can be normal. For example, if a server receives a PrivateID
from a client, it is normal for the server to call
PrivateID.PublicID to validate that the PublicID matches.
However, doing so would panic prior to this change.
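As a sketch of why this can be panic-free: deriving the public ID is
a pure hash computation. (Type names abbreviated; this illustrates
the idea, not the exact implementation.)

package logtail

import "crypto/sha256"

// Illustrative types; the real definitions live in logtail.
type (
	PrivateID [32]byte
	PublicID  [32]byte
)

// PublicID derives the public ID from the private one. It is a pure
// computation with no failure modes, so a server validating a
// client-supplied PrivateID can call it safely.
func (id PrivateID) PublicID() PublicID {
	return PublicID(sha256.Sum256(id[:]))
}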
Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Saves some allocs. Not hot, but because we can now.
And a const instead of a var.
Change-Id: Ieb2b64534ed38051c36b2c0aa2e82739d9d0e015
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Allows instances that are running with the same machine ID (due to
cloning) to be distinguished.
Also adds sequence numbers to detect duplicates.
For tailscale/corp#5244
Signed-off-by: Mihai Parparita <mihai@tailscale.com>
For people running self-hosted control planes who want a global
opt-out knob instead of running their own logcatcher.
Change-Id: I7f996c09f45850ff77b58bfd5a535e197971725a
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
e.g. the change to ipnlocal in this commit ultimately logs out:
{"logtail":{"client_time":"2022-02-17T20:40:30.511381153-08:00","server_time":"2022-02-18T04:40:31.057771504Z"},"type":"Hostinfo","val":{"GoArch":"amd64","Hostname":"tsdev","IPNVersion":"1.21.0-date.20220107","OS":"linux","OSVersion":"Debian 11.2 (bullseye); kernel=5.10.0-10-amd64"},"v":1}
Change-Id: I668646b19aeae4a2fed05170d7b279456829c844
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
For analysis of log spam.
Bandwidth is ~unchanged compared to not stripping the "[vN] " from
the text; it just gets restructured into the new "v":N field. I guess
it adds one byte.
Updates #1548
Change-Id: Ie00a4e0d511066a33d10dc38d765d92b0b044697
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
This started as an attempt to placate GitHub's code scanner,
but it's also probably generally a good idea.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This moves the Windows-only initialization of the filelogger into
logpolicy. Previously we only did it when babysitting the tailscaled
subprocess, but this meant that log messages from the service itself
never made it to disk. Examples that weren't logged to disk:
* logtail being unable to dial out
* DNS flush messages from the service
* svc.ChangeRequest messages (#3581)
This is basically the same fix as #3571, but staying in the Logf type
and avoiding a build-tagged file (which wasn't quite a goal, but
happened and seemed nice).
Fixes #3570
Co-authored-by: Aaron Klotz <aaron@tailscale.com>
Change-Id: Iacd80c4720b7218365ec80ae143339d030842702
There are a few remaining uses of testing.AllocsPerRun:
two in which we only log the number of allocations,
and one in which we dynamically calculate the allocation
target based on a different AllocsPerRun run.
This also allows us to tighten the "no allocs"
test in wgengine/filter.
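For reference, the shape of a tightened test built on
testing.AllocsPerRun; the loop body is a trivial stand-in for the
actual filter code under test.

var sink byte

func TestNoAllocs(t *testing.T) {
	buf := make([]byte, 64)
	// AllocsPerRun reports the average number of heap allocations
	// per call of the function across 1000 runs.
	if n := testing.AllocsPerRun(1000, func() {
		for _, b := range buf { // stand-in workload
			sink += b
		}
	}); n > 0 {
		t.Errorf("got %v allocs per run, want 0", n)
	}
}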
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The code goes to some effort to send a single JSON object
when there's only a single line and a JSON array when there
are multiple lines.
It makes the code more complex and more expensive;
when we add a second line, we have to use a second buffer
to duplicate the first one after adding a leading square bracket.
The savings come to two bytes. Instead, always send an array.
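A sketch of the simpler framing (function name hypothetical, imports
elided): every upload body is a JSON array of already-encoded
entries, even when there is exactly one.

// buildUploadBody frames already-encoded JSON log entries as one
// JSON array. A single entry costs "[" plus "]": two extra bytes
// versus sending a bare object.
func buildUploadBody(entries [][]byte) []byte {
	var buf bytes.Buffer
	buf.WriteByte('[')
	for i, e := range entries {
		if i > 0 {
			buf.WriteByte(',')
		}
		buf.Write(e)
	}
	buf.WriteByte(']')
	return buf.Bytes()
}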
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Scanning log lines is a frequent source of allocations.
Pre-allocate a reusable buffer.
This still doesn't help when there are giant log lines.
Those will still be problematic from an iOS memory perspective.
For more on that, see https://github.com/tailscale/corp/issues/2423.
(For those who cannot follow that link, it is a discussion
of particular problematic types of log lines for
particular categories of customers. The "categories of customers"
part is the reason that it is a private issue.)
There is also a latent bug here. If we ever encounter
a log line longer than bufio.MaxScanTokenSize,
then the bufio.Scanner will report bufio.ErrTooLong,
and we'll truncate the file and discard the rest of the log.
That's not good, but bufio.MaxScanTokenSize is really big,
so it probably doesn't matter much in practice now.
Unfortunately, it does prevent us from easily capping the potential
memory usage here, on pain of losing log entries.
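The relevant bufio API, as a sketch (scanLog and handleLine are
stand-in names):

func scanLog(r io.Reader, handleLine func([]byte)) error {
	scanner := bufio.NewScanner(r)
	// Hand the scanner one pre-allocated, reusable buffer instead
	// of letting it grow a fresh one per file. max stays at the
	// default bufio.MaxScanTokenSize (64 KiB).
	scanner.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), bufio.MaxScanTokenSize)
	for scanner.Scan() {
		handleLine(scanner.Bytes())
	}
	// An over-long line surfaces here as bufio.ErrTooLong: the
	// latent truncation case described above.
	return scanner.Err()
}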
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
When tailscaled starts up, these lines run:
func run() error {
	// ...
	pol := logpolicy.New("tailnode.log.tailscale.io")
	pol.SetVerbosityLevel(args.verbose)
	// ...
}
If there are old log entries present, they immediately start getting uploaded. This races with the call to pol.SetVerbosityLevel.
This manifested itself as a test failure in tailscale.com/tstest/integration
when run with -race:
WARNING: DATA RACE
Read at 0x00c0001bc970 by goroutine 24:
tailscale.com/logtail.(*Logger).Write()
/Users/josh/t/corp/oss/logtail/logtail.go:517 +0x27c
log.(*Logger).Output()
/Users/josh/go/ts/src/log/log.go:184 +0x2b8
log.Printf()
/Users/josh/go/ts/src/log/log.go:323 +0x94
tailscale.com/logpolicy.newLogtailTransport.func1()
/Users/josh/t/corp/oss/logpolicy/logpolicy.go:509 +0x36c
net/http.(*Transport).dial()
/Users/josh/go/ts/src/net/http/transport.go:1168 +0x238
net/http.(*Transport).dialConn()
/Users/josh/go/ts/src/net/http/transport.go:1606 +0x21d0
net/http.(*Transport).dialConnFor()
/Users/josh/go/ts/src/net/http/transport.go:1448 +0xe4
Previous write at 0x00c0001bc970 by main goroutine:
tailscale.com/logtail.(*Logger).SetVerbosityLevel()
/Users/josh/t/corp/oss/logtail/logtail.go:131 +0x98
tailscale.com/logpolicy.(*Policy).SetVerbosityLevel()
/Users/josh/t/corp/oss/logpolicy/logpolicy.go:463 +0x60
main.run()
/Users/josh/t/corp/oss/cmd/tailscaled/tailscaled.go:178 +0x50
main.main()
/Users/josh/t/corp/oss/cmd/tailscaled/tailscaled.go:163 +0x71c
Goroutine 24 (running) created at:
net/http.(*Transport).queueForDial()
/Users/josh/go/ts/src/net/http/transport.go:1417 +0x4d8
net/http.(*Transport).getConn()
/Users/josh/go/ts/src/net/http/transport.go:1371 +0x5b8
net/http.(*Transport).roundTrip()
/Users/josh/go/ts/src/net/http/transport.go:585 +0x7f4
net/http.(*Transport).RoundTrip()
/Users/josh/go/ts/src/net/http/roundtrip.go:17 +0x30
net/http.send()
/Users/josh/go/ts/src/net/http/client.go:251 +0x4f0
net/http.(*Client).send()
/Users/josh/go/ts/src/net/http/client.go:175 +0x148
net/http.(*Client).do()
/Users/josh/go/ts/src/net/http/client.go:717 +0x1d0
net/http.(*Client).Do()
/Users/josh/go/ts/src/net/http/client.go:585 +0x358
tailscale.com/logtail.(*Logger).upload()
/Users/josh/t/corp/oss/logtail/logtail.go:367 +0x334
tailscale.com/logtail.(*Logger).uploading()
/Users/josh/t/corp/oss/logtail/logtail.go:289 +0xec
Rather than complicate the logpolicy API,
allow the verbosity to be adjusted concurrently.
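One way to do that, sketched with sync/atomic (the field name is an
assumption, not necessarily what the code uses):

type Logger struct {
	verbose int64 // accessed atomically; set via SetVerbosityLevel
	// ...
}

func (l *Logger) SetVerbosityLevel(level int) {
	atomic.StoreInt64(&l.verbose, int64(level))
}

// The upload/write path loads the level atomically instead of
// reading a plain field, so concurrent adjustment is safe.
func (l *Logger) verbosityLevel() int {
	return int(atomic.LoadInt64(&l.verbose))
}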
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Part of overall effort to clean up, unify, use link monitoring more,
and make Tailscale quieter when all networks are down. This is especially
bad on macOS, where it seems we can get killed for not being polite.
(But we should be polite in any case.)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
+ add a test for parseAndRemoveLogLevel()
+ add a test for drainPendingMessages()
+ test JSON log encoding including several special cases
Other tests frequently send logs but a) don't check the result and
b) do so by happenstance, such that the code in encode() was not
consistently being exercised, leading to spurious changes in
code coverage. These tests attempt to more systematically test
the logging function.
This is the second attempt to add these tests, the first attempt
(in https://github.com/tailscale/tailscale/pull/1114) had two issues:
1. httptest.NewServer runs its handlers on multiple goroutines, and
logtail uploads from its own goroutines, but the first version had
no locking in the server to guard the shared data.
Data handling was moved into channels to get synchronization.
2. The channel to notify the test of the arrival of data had a depth
of 1; in cases where the Logger sent multiple uploads, it would
block the server.
This resulted in the first iteration of these tests being flaky,
and we reverted it.
This new version of the tests has passed with
go test -race -count=10000
and seems solid.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
This reverts commit e4f53e9b6f.
At least two of these tests are flaky; reverting until they can be
made more robust.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
* logtail: test parseAndRemoveLogLevel()
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
* logtail: test JSON log encoding.
Expand TestUploadMessages to also exercise the encoding functions
in logtail, like JSON logging and timestamps.
Other tests frequently send logs but a) don't check the result and
b) do so by happenstance, such that the lines in encode() were not
consistently being exercised, leading to spurious changes in
code coverage.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
* logtail: add a test for drainPendingMessages
Make the client buffer some messages before the upload server
becomes available.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
* logtail: use %q, raw strings, and io.WriteString
%q escapes binary characters for us.
Raw strings avoid a lot of backslash escaping.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
Right now TestFastShutdown tries to upload logs to localhost:1234,
which will most likely respond with an error. However, if one has an
actual service running on port 1234, it would receive a POST attempt
every time the unit test runs.
Start a local server and direct the upload there instead.
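A sketch of the pattern (names hypothetical): spin up an
httptest.Server, hand received bodies to the test over a buffered
channel, and point the logger's upload URL at srv.URL. The buffered
channel also avoids the server-blocking problem the earlier reverted
tests hit.

func newUploadServer(t *testing.T) (srv *httptest.Server, got chan []byte) {
	t.Helper()
	// Buffered so several back-to-back uploads don't block the handler.
	got = make(chan []byte, 16)
	srv = httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			body, _ := io.ReadAll(r.Body)
			got <- body
		}))
	t.Cleanup(srv.Close)
	return srv, got
}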
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
Start an HTTP server to accept POST requests, and upload some logs to
it. Check that uploaded logs were received.
Code in logtail:drainPending was not being reliably exercised by other
tests. This shows up in code coverage reports, as lines of code in
drainPending are alternately added to and subtracted from code coverage.
This test will reliably exercise and verify this code.
Signed-off-by: Denton Gentry <dgentry@tailscale.com>
Log levels can now be specified with "[v1] " or "[v2] " substrings
that are then stripped and filtered at the final logger. This follows
our existing "[unexpected]" etc convention and doesn't require a
wholesale reworking of our logging at the moment.
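The convention itself is a small prefix parse; roughly this (a
sketch, not the exact parseAndRemoveLogLevel code):

// parseLogLevel returns the verbosity encoded in a "[vN] " prefix
// and the message with the prefix stripped; unprefixed messages
// are level 0.
func parseLogLevel(msg []byte) (level int, rest []byte) {
	if len(msg) >= 5 && msg[0] == '[' && msg[1] == 'v' &&
		msg[2] >= '1' && msg[2] <= '9' &&
		msg[3] == ']' && msg[4] == ' ' {
		return int(msg[2] - '0'), msg[5:]
	}
	return 0, msg
}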
cmd/tailscaled then gets a new --verbose=N flag to take a log level
that controls what gets logged to stderr (and thus systemd, syslog,
etc). Logtail is unaffected by --verbose.
This commit doesn't add annotations to any existing log prints. That
is in the next commit.
Updates #924
Updates #282
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Add content length hints to headers.
The server can use these hints to more efficiently select buffers.
Stop attempting to compress tiny requests.
The bandwidth savings are negligible (and sometimes negative!),
and it makes extra work for the server.
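A sketch of the size gate; the 256-byte threshold is an assumption,
and gzip stands in here for logtail's actual compressor purely to
keep the example self-contained.

func maybeCompress(body []byte) (payload []byte, encoding string) {
	const minCompressSize = 256 // assumed cutoff for "tiny"
	if len(body) < minCompressSize {
		return body, "" // negligible (or negative) savings; skip
	}
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write(body)
	zw.Close()
	return buf.Bytes(), "gzip"
}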
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Also, a slight behavior change: on a non-nil err but an expired context,
don't reset the consecutive failure count. I don't think the old
behavior was intentional.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
We want to run bo.Backoff() after every upload, regardless. If
upload==true but err!=nil, we weren't backing off, which caused some
very-high-throughput log upload retries in bad network conditions.
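The invariant, as a minimal sketch; the function parameters are
stand-ins for logtail's uploading loop and the backoff package, not
the real signatures.

func uploading(ctx context.Context, upload func(context.Context) error,
	backOff func(context.Context, error)) {
	for ctx.Err() == nil {
		err := upload(ctx)
		// Run the backoff step unconditionally: it sleeps with
		// increasing delay on consecutive errors and resets to
		// zero on a nil error.
		backOff(ctx, err)
	}
}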
Updates #282.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
If a test calls log.Printf, 'go test' horrifyingly rearranges the
output to no longer be in chronological order, which makes debugging
virtually impossible. Let's stop that from happening by making
log.Printf panic if called from any module, no matter how deep, during
tests.
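Mechanically, that can be as simple as installing a panicking writer
as the global log output during tests (a sketch of the idea, not
necessarily the exact code):

type panicWriter struct{}

func (panicWriter) Write(p []byte) (int, error) {
	panic("unexpected log.Printf during test: " + string(p))
}

// Installed from test setup so any stray log.Printf, however deep,
// fails loudly instead of garbling 'go test' output.
func trapLogPrintf() {
	log.SetOutput(panicWriter{})
}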
This required us to change the default error handler in at least one
http.Server, as well as plumbing a bunch of logf functions around,
especially in magicsock and wgengine, but also in logtail and backoff.
To add insult to injury, 'go test' also rearranges the output when a
parent test has multiple sub-tests (all the sub-tests' t.Logf output is
always printed after all the parent test's t.Logf output), so we need to
screw around with a special Logf that can point at the "current" t
(current_t.Logf) in some places. Probably our entire way of using
subtests is wrong, since 'go test' would probably like to run them all
in parallel if you called t.Parallel(), but it definitely can't because
they're all manipulating the shared state created by the parent test. They should
probably all be separate toplevel tests instead, with common
setup/teardown logic. But that's a job for another time.
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
Some programs use frequent short-duration backoffs even under non-error
conditions. They can set this to avoid logging short backoffs when
things are operating normally, but still get messages when longer
backoffs kick in.
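A caller-side sketch; the NewBackoff signature and LogLongerThan
field name are assumptions based on the current backoff package.

bo := backoff.NewBackoff("dns", logf, 30*time.Second)
// Routine, short retries stay quiet; only log when a backoff
// stretches past this threshold.
bo.LogLongerThan = 5 * time.Second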
Signed-off-by: Avery Pennarun <apenwarr@tailscale.com>
We'll be fixing the server so this won't trigger in practice,
but it demos the connection reuse problem.
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Its semantics have changed slightly; this will let us use it to
drive batched logging in special circumstances.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
In particular, the Dup2 syscall is not defined in the syscall
package on arm64, but is defined in x/sys/unix.
Signed-off-by: David Anderson <dave@natulte.net>