wgengine/netstack: disable RACK on all platforms

The gVisor RACK implementation appears to perform badly, particularly in
scenarios with a higher bandwidth-delay product (BDP). This may have gone
largely unnoticed because RACK is gated on SACK, which is not enabled by
default in upstream gVisor but is itself a substantial performance win.
Both the RACK and DACK implementations (which are now one) have
overlapping unfinished tasks in their work streams on the public tracker.

Updates #9707

Signed-off-by: James Tucker <james@tailscale.com>
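
For context, both knobs are gVisor transport protocol options. A minimal
sketch, assuming a *stack.Stack named ipstack as in the diff below (the
helper name configureTCPRecovery is hypothetical, not part of this commit):
enable SACK, then zero the recovery bitmask so RACK loss detection is off.

package netstack

import (
	"fmt"

	"gvisor.dev/gvisor/pkg/tcpip"
	"gvisor.dev/gvisor/pkg/tcpip/stack"
	"gvisor.dev/gvisor/pkg/tcpip/transport/tcp"
)

// configureTCPRecovery (hypothetical helper) enables SACK, which is a
// clear performance win, then clears the recovery bitmask so RACK loss
// detection is disabled on all platforms.
func configureTCPRecovery(ipstack *stack.Stack) error {
	sackOpt := tcpip.TCPSACKEnabled(true)
	if tcpipErr := ipstack.SetTransportProtocolOption(tcp.ProtocolNumber, &sackOpt); tcpipErr != nil {
		return fmt.Errorf("could not enable TCP SACK: %v", tcpipErr)
	}
	// RACK only engages when SACK is on, which is why the RACK regression
	// was easy to miss while SACK was off by default upstream.
	tcpRecoveryOpt := tcpip.TCPRecovery(0)
	if tcpipErr := ipstack.SetTransportProtocolOption(tcp.ProtocolNumber, &tcpRecoveryOpt); tcpipErr != nil {
		return fmt.Errorf("could not disable TCP RACK: %v", tcpipErr)
	}
	return nil
}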
James Tucker, 2025-02-03 16:18:07 -08:00
commit 83808029d8 (parent 431216017b)

@@ -317,16 +317,14 @@ func Create(logf logger.Logf, tundev *tstun.Wrapper, e wgengine.Engine, mc *magi
 	if tcpipErr != nil {
 		return nil, fmt.Errorf("could not enable TCP SACK: %v", tcpipErr)
 	}
-	if runtime.GOOS == "windows" {
-		// See https://github.com/tailscale/tailscale/issues/9707
-		// Windows w/RACK performs poorly. ACKs do not appear to be handled in a
-		// timely manner, leading to spurious retransmissions and a reduced
-		// congestion window.
-		tcpRecoveryOpt := tcpip.TCPRecovery(0)
-		tcpipErr = ipstack.SetTransportProtocolOption(tcp.ProtocolNumber, &tcpRecoveryOpt)
-		if tcpipErr != nil {
-			return nil, fmt.Errorf("could not disable TCP RACK: %v", tcpipErr)
-		}
-	}
+	// See https://github.com/tailscale/tailscale/issues/9707
+	// gVisor's RACK performs poorly. ACKs do not appear to be handled in a
+	// timely manner, leading to spurious retransmissions and a reduced
+	// congestion window.
+	tcpRecoveryOpt := tcpip.TCPRecovery(0)
+	tcpipErr = ipstack.SetTransportProtocolOption(tcp.ProtocolNumber, &tcpRecoveryOpt)
+	if tcpipErr != nil {
+		return nil, fmt.Errorf("could not disable TCP RACK: %v", tcpipErr)
+	}
 	err := setTCPBufSizes(ipstack)
 	if err != nil {
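
If one wanted to sanity-check the result, gVisor options can also be read
back. A hypothetical check, not part of this commit, written as it would
appear inside Create and assuming tcpip.TCPRecovery supports the
TransportProtocolOption getter path:

	// Hypothetical verification (not in this commit): read the recovery
	// option back and confirm the RACK loss-detection bit is cleared.
	var recovery tcpip.TCPRecovery
	if tcpipErr := ipstack.TransportProtocolOption(tcp.ProtocolNumber, &recovery); tcpipErr != nil {
		return nil, fmt.Errorf("could not read TCP recovery option: %v", tcpipErr)
	}
	if recovery&tcpip.TCPRACKLossDetection != 0 {
		return nil, fmt.Errorf("TCP RACK unexpectedly still enabled: %v", recovery)
	}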