control/controlclient: remove optimization that was more convoluted than useful

While working on #13390, I ran across this non-idiomatic
pointer-to-view and parallel sorted-slice accounting code, all of
which existed just to avoid a sort later.

But that later sort, done when building a new netmap.NetworkMap, is
already a drop in the bucket of CPU compared to the work & allocs of
mapSession.netmap and of LocalBackend spamming the full netmap
(potentially tens of thousands of peers, MBs of JSON) out to IPNBus
clients for any tiny little change (a node changing online status, etc).

Removing the parallel sorted slice lets everything be simpler to reason
about, so this does that. The sort might take a bit more CPU time now
in theory, but in practice, for any netmap size where it'd matter,
the quadratic netmap IPN bus spam (which we need to fix soon) will
overshadow that little sort.
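
For context, the sort-on-demand shape this moves to looks roughly like
the sketch below. It is a minimal illustration only: the struct is
trimmed to one field, and everything other than the sortedPeers name is
an assumption for the sketch, not the actual controlclient code.

package controlclient

import (
	"cmp"
	"slices"

	"tailscale.com/tailcfg"
)

// mapSession is trimmed here to the one field the sketch needs.
type mapSession struct {
	peers map[tailcfg.NodeID]tailcfg.NodeView // keyed by node ID, unsorted
}

// sortedPeers gathers the peer views and sorts them by node ID at call
// time, instead of keeping a second, always-sorted slice in sync with
// the map on every mutation.
func (ms *mapSession) sortedPeers() []tailcfg.NodeView {
	ret := make([]tailcfg.NodeView, 0, len(ms.peers))
	for _, nv := range ms.peers {
		ret = append(ret, nv)
	}
	slices.SortFunc(ret, func(a, b tailcfg.NodeView) int {
		return cmp.Compare(a.ID(), b.ID())
	})
	return ret
}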

Updates #13390
Updates #1909

Change-Id: I3092d7c67dc10b2a0f141496fe0e7e98ccc07712
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Author: Brad Fitzpatrick
Date: 2025-01-03 10:10:16 -08:00
Committed by: Brad Fitzpatrick
Parent: 1e2e319e7d
Commit: 402fc9d65f
2 changed files with 30 additions and 51 deletions

@@ -340,19 +340,18 @@ func TestUpdatePeersStateFromResponse(t *testing.T) {
 			}
 			ms := newTestMapSession(t, nil)
 			for _, n := range tt.prev {
-				mak.Set(&ms.peers, n.ID, ptr.To(n.View()))
+				mak.Set(&ms.peers, n.ID, n.View())
 			}
-			ms.rebuildSorted()
 			gotStats := ms.updatePeersStateFromResponse(tt.mapRes)
-			got := make([]*tailcfg.Node, len(ms.sortedPeers))
-			for i, vp := range ms.sortedPeers {
-				got[i] = vp.AsStruct()
-			}
 			if gotStats != tt.wantStats {
 				t.Errorf("got stats = %+v; want %+v", gotStats, tt.wantStats)
 			}
+			var got []*tailcfg.Node
+			for _, vp := range ms.sortedPeers() {
+				got = append(got, vp.AsStruct())
+			}
 			if !reflect.DeepEqual(got, tt.want) {
 				t.Errorf("wrong results\n got: %s\nwant: %s", formatNodes(got), formatNodes(tt.want))
 			}
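
As an aside for readers of the test diff: mak.Set (from
tailscale.com/util/mak) allocates the destination map on first write,
which is why the test can write into ms.peers without initializing it.
A standalone sketch under that assumption; the node values here are
illustrative, not taken from the test.

package main

import (
	"fmt"

	"tailscale.com/tailcfg"
	"tailscale.com/util/mak"
)

func main() {
	// mak.Set lazily allocates the map if it is still nil, so no
	// make() is needed before the first write.
	var peers map[tailcfg.NodeID]tailcfg.NodeView
	n := &tailcfg.Node{ID: 1, Name: "example-node"}
	mak.Set(&peers, n.ID, n.View())
	fmt.Println(len(peers), peers[1].Name()) // 1 example-node
}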