Mirror of https://github.com/tailscale/tailscale.git (synced 2024-11-29 04:55:31 +00:00)

Commit 84c99fe0d9
The retry logic was pathological in the following ways:

* If we restarted the logging service, any pending uploads would be placed in a retry loop driven by backoff.Backoff, which was too aggressive. It would retry failures within milliseconds, taking at least 10 retries to hit a delay of 1 second.
* In the event that a logstream was rate limited, the aggressive retry logic would severely exacerbate the problem, since each retry would also log an error message. It is only by chance that the rate of log error spam does not exceed the rate limit itself.

We modify the retry logic in the following ways:

* We now respect the "Retry-After" header sent by the logging service.
* Lacking a "Retry-After" header, we retry after a hard-coded period of 30 to 60 seconds. This avoids the thundering-herd effect when all nodes try reconnecting to the logging service at the same time after a restart.
* We no longer treat a status 400 as meaning the upload succeeded. That is simply not the behavior of the logging service.

Updates tailscale/corp#11213

Signed-off-by: Joe Tsai <joetsai@digital-static.net>
Directories:

addlicense
cloner
containerboot
derper
derpprobe
dist
get-authkey
gitops-pusher
hello
k8s-operator
mkmanifest
mkpkg
mkversion
nardump
netlogfmt
nginx-auth
pgproxy
printdep
proxy-to-grafana
sniproxy
speedtest
ssh-auth-none-demo
stunc
sync-containers
tailscale
tailscaled
testcontrol
testwrapper
tsconnect
tsshd
viewer