In preparation for moving code out of LocalBackend.
Change-Id: I9725aa9c3ebc7275f8c40e040b326483c0340127
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
More work towards removing the massive ipnserver.Run and ipnserver.Options
and replacing them with composable pieces.
Work remains. (The getEngine retry loop on Windows complicates things.)
For now some duplicate code exists. Once the Windows side is fixed
to either not need the retry loop or to move that loop into a
custom wgengine.Engine wrapper, we can unify tailscaled_windows.go
too.
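A rough sketch of the direction, for illustration only: the full version
would wrap the wgengine.Engine interface itself, but this shows the retry
loop pulled out of ipnserver into one place. retryingEngine and the
one-second backoff are hypothetical, not the actual code.

```go
// Hypothetical sketch only: retry engine creation in one place so
// ipnserver.Run doesn't need its own Windows-specific retry loop.
package enginex

import (
	"time"

	"tailscale.com/types/logger"
	"tailscale.com/wgengine"
)

// retryingEngine (hypothetical name) calls create until it succeeds,
// logging and sleeping between attempts.
func retryingEngine(logf logger.Logf, create func() (wgengine.Engine, error)) wgengine.Engine {
	for {
		e, err := create()
		if err == nil {
			return e
		}
		logf("wgengine creation failed; retrying: %v", err)
		time.Sleep(time.Second) // illustrative backoff, not a real policy
	}
}
```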
Change-Id: If84d16e3cd15b54ead3c3bb301f27ae78d055f80
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
And add a --socks5-server flag.
And fix a race in SOCKS5 replies where the response header was written
concurrently with the copy from the backend.
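The flag is used along the lines of `tailscaled --socks5-server=localhost:1080`.
As for the race: both the success reply and the backend-to-client copy
write to the client connection, so the fix is to finish writing the reply
header before the copy starts. A minimal sketch of that shape follows;
handleConnect and replySuccess are illustrative names, not the real code.

```go
// Hypothetical sketch of the race fix: the SOCKS5 success reply is
// written before the backend->client copy starts, so the two writers
// can't interleave on the client connection.
package socksx

import (
	"io"
	"net"
)

// replySuccess is a minimal SOCKS5 "succeeded" reply:
// VER=5, REP=0 (succeeded), RSV=0, ATYP=1 (IPv4), BND.ADDR=0.0.0.0, BND.PORT=0.
var replySuccess = []byte{0x05, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0}

func handleConnect(client, backend net.Conn) error {
	// Write the reply header first, while we're still the only writer
	// on client. The racy version started the backend copy before this
	// write had completed.
	if _, err := client.Write(replySuccess); err != nil {
		return err
	}
	errc := make(chan error, 1)
	go func() {
		_, err := io.Copy(client, backend) // backend -> client, after the header
		errc <- err
	}()
	_, err := io.Copy(backend, client) // client -> backend
	<-errc
	return err
}
```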
Co-authored with Naman Sood.
Updates #707
Updates #504
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Not usefully functional yet (mostly a proof of concept), but submitting
it now as a base for some work @namansood is going to do atop this.
Updates #707
Updates #634
Updates #48
Updates #835
So a backend in the serve-an-error state (as used on Windows) can try to
create a new Engine again each time somebody reconnects, e.g. by
relaunching the GUI app.
(The proper fix is actually fixing the Windows issues, but this makes
things better in the short term.)
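A sketch of the shape of that change, assuming an engine factory that is
retried per connection; the names here (server, engineForConn, newEngine)
are hypothetical, not the actual ipnserver code.

```go
// Hypothetical sketch: keep the engine factory around so a backend
// that failed to get an Engine at startup can retry on each new
// client connection instead of serving the same error forever.
package ipnserverx

import (
	"sync"

	"tailscale.com/wgengine"
)

type server struct {
	mu        sync.Mutex
	eng       wgengine.Engine                 // nil until creation succeeds
	newEngine func() (wgengine.Engine, error) // retried per connection
}

// engineForConn (hypothetical name) is called when a client connects.
func (s *server) engineForConn() (wgengine.Engine, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.eng != nil {
		return s.eng, nil
	}
	e, err := s.newEngine()
	if err != nil {
		// Serve the error to this client; the next connection
		// (e.g. a relaunched GUI) triggers another attempt.
		return nil, err
	}
	s.eng = e
	return e, nil
}
```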
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Also stop logging data sent to/received from nodes we're not connected to (i.e. all those `x`s being logged in the `peers: ` line)
Signed-off-by: Wendi <wendi.yu@yahoo.ca>
Implement rate limiting on log messages
Addresses issue #317, where logs can get spammed with the same message
nonstop. This adds a rate-limiting closure over the logging functions,
which limits the number of messages logged per second, keyed on the
format string. To keep memory usage roughly constant, the previous
periodic cache purging has been replaced by an LRU that discards the
least recently used format string when the cache reaches capacity.
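A minimal sketch of the idea, not the exact implementation; RateLimited,
interval, and maxFormats are illustrative names and parameters.

```go
// Sketch: wrap a printf-style logger so each distinct format string is
// logged at most once per interval, with per-format state kept in a
// fixed-size LRU to bound memory use.
package logx

import (
	"container/list"
	"sync"
	"time"
)

type Logf func(format string, args ...interface{})

// RateLimited (hypothetical name) returns a Logf that drops messages
// whose format string was logged less than interval ago. When more
// than maxFormats distinct formats are tracked, the least recently
// used one is evicted.
func RateLimited(logf Logf, interval time.Duration, maxFormats int) Logf {
	type entry struct {
		format string
		last   time.Time
	}
	var (
		mu    sync.Mutex
		lru   = list.New() // front = most recently used
		byFmt = map[string]*list.Element{}
	)
	return func(format string, args ...interface{}) {
		mu.Lock()
		now := time.Now()
		if el, ok := byFmt[format]; ok {
			lru.MoveToFront(el)
			ent := el.Value.(*entry)
			if now.Sub(ent.last) < interval {
				mu.Unlock()
				return // drop: this format fired too recently
			}
			ent.last = now
		} else {
			byFmt[format] = lru.PushFront(&entry{format, now})
			if lru.Len() > maxFormats {
				oldest := lru.Back() // evict least recently used format
				lru.Remove(oldest)
				delete(byFmt, oldest.Value.(*entry).format)
			}
		}
		mu.Unlock()
		logf(format, args...)
	}
}
```

For example, `logf := RateLimited(log.Printf, time.Second, 64)` would let
each distinct format string through at most once per second.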
Signed-off-by: Wendi Yu <wendi.yu@yahoo.ca>