Mirror of https://github.com/tailscale/tailscale.git, synced 2025-07-30 07:43:42 +00:00

Adds a new reconciler for ProxyGroups of type kube-apiserver that will provision a Tailscale Service for each replica to advertise. Adds two new condition types to the ProxyGroup, TailscaleServiceValid and TailscaleServiceConfigured, to post updates on the state of that reconciler in a way that's consistent with the service-pg reconciler. The created Tailscale Service name is configurable via a new ProxyGroup field spec.kubeAPIServer.ServiceName, which expects a string of the form "svc:<dns-label>".

Lots of supporting changes were needed to implement this in a way that's consistent with other operator workflows, including:

* Pulled containerboot's ensureServicesUnadvertised and certManager into kube/ libraries to be shared with k8s-proxy. Use those in k8s-proxy to aid Service cert sharing between replicas and graceful Service shutdown.
* For certManager, added an initial wait to the cert loop until the domain appears in the device's netmap, to avoid a guaranteed error on the first issue attempt when it starts quickly.
* Made several methods in ingress-for-pg.go and svc-for-pg.go into functions to share with the new reconciler.
* Added a Resource struct to the owner refs stored in Tailscale Service annotations to be able to distinguish between Ingress- and ProxyGroup-based Services that need cleaning up in the Tailscale API.
* Added a ListVIPServices method to the internal tailscale client to aid cleaning up orphaned Services.
* Added support for reading config from a kube Secret, and partial support for config reloading, to avoid having to force Pod restarts when config changes.
* Fixed up the zap logger so it's possible to set debug log level.

Updates #13358

Change-Id: Ia9607441157dd91fb9b6ecbc318eecbef446e116
Signed-off-by: Tom Proctor <tomhjp@users.noreply.github.com>
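The "svc:&lt;dns-label&gt;" form mentioned above can be sketched as a small validator. This is a hypothetical helper for illustration only (`isValidServiceName` and the RFC 1123 label regex are assumptions, not the operator's actual validation code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// dnsLabel matches an RFC 1123 DNS label: lowercase alphanumerics and
// hyphens, starting and ending with an alphanumeric, at most 63 characters.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$`)

// isValidServiceName reports whether name has the "svc:<dns-label>" shape
// that spec.kubeAPIServer.ServiceName expects.
func isValidServiceName(name string) bool {
	label, ok := strings.CutPrefix(name, "svc:")
	if !ok {
		return false
	}
	return dnsLabel.MatchString(label)
}

func main() {
	for _, n := range []string{"svc:kube-api", "kube-api", "svc:", "svc:Bad_Name"} {
		fmt.Printf("%q valid=%v\n", n, isValidServiceName(n))
	}
}
```

Only names carrying the `svc:` prefix followed by a well-formed DNS label pass; a bare label, an empty label, or one with uppercase or underscores does not.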
64 lines · 2.1 KiB · Go
// Copyright (c) Tailscale Inc & AUTHORS
// SPDX-License-Identifier: BSD-3-Clause

// Package services manages graceful shutdown of Tailscale Services advertised
// by Kubernetes clients.
package services

import (
	"context"
	"fmt"
	"time"

	"tailscale.com/client/local"
	"tailscale.com/ipn"
	"tailscale.com/types/logger"
)

// EnsureServicesNotAdvertised is a function that gets called on containerboot
// or k8s-proxy termination and ensures that any currently advertised Services
// get unadvertised to give clients time to switch to another node before this
// one is shut down.
func EnsureServicesNotAdvertised(ctx context.Context, lc *local.Client, logf logger.Logf) error {
	prefs, err := lc.GetPrefs(ctx)
	if err != nil {
		return fmt.Errorf("error getting prefs: %w", err)
	}
	if len(prefs.AdvertiseServices) == 0 {
		return nil
	}

	logf("unadvertising services: %v", prefs.AdvertiseServices)
	if _, err := lc.EditPrefs(ctx, &ipn.MaskedPrefs{
		AdvertiseServicesSet: true,
		Prefs: ipn.Prefs{
			AdvertiseServices: nil,
		},
	}); err != nil {
		// EditPrefs only returns an error if it fails to _set_ its local prefs.
		// If it fails to _persist_ the prefs in state, we don't get an error
		// and we continue waiting below, as control will failover as usual.
		return fmt.Errorf("error setting prefs AdvertiseServices: %w", err)
	}

	// Services use the same (failover XOR regional routing) mechanism that
	// HA subnet routers use. Unfortunately we don't yet get a reliable signal
	// from control that it's responded to our unadvertisement, so the best we
	// can do is wait for 20 seconds, where 15s is the approximate maximum time
	// it should take for control to choose a new primary, and 5s is for buffer.
	//
	// Note: There is no guarantee that clients have been _informed_ of the new
	// primary no matter how long we wait. We would need a mechanism to await
	// netmap updates for peers to know for sure.
	//
	// See https://tailscale.com/kb/1115/high-availability for more details.
	// TODO(tomhjp): Wait for a netmap update instead of sleeping when control
	// supports that.
	select {
	case <-ctx.Done():
		return nil
	case <-time.After(20 * time.Second):
		return nil
	}
}