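// Package metrics provides a global, pluggable abstraction over a metrics
// backend. The package-level helper functions delegate to the implementation
// stored in M and are no-ops while M is nil.
//
// A minimal usage sketch from a caller's perspective (assuming an
// implementation has been assigned to M elsewhere, that ctx is the request
// context, and that the "zitadel" label value is only illustrative):
//
//	if err := metrics.RegisterCounter(metrics.ActiveSessionCounter, metrics.ActiveSessionCounterDescription); err != nil {
//		// handle registration error
//	}
//	if err := metrics.AddCount(ctx, metrics.ActiveSessionCounter, 1, map[string]attribute.Value{
//		metrics.Database: attribute.StringValue("zitadel"),
//	}); err != nil {
//		// handle measurement error
//	}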
package metrics

import (
	"context"
	"net/http"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)
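
// Metric names, their human-readable descriptions, and shared label keys.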
const (
	ActiveSessionCounter            = "zitadel.active_session_counter"
	ActiveSessionCounterDescription = "Active session counter"
	SpoolerDivCounter               = "zitadel.spooler_div_milliseconds"
	SpoolerDivCounterDescription    = "Spooler div from last successful run to now in milliseconds"
	Database                        = "database"
	ViewName                        = "view_name"
)
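
// Metrics abstracts the underlying metrics backend: it exposes an HTTP
// exporter and a meter provider, and allows registering and recording
// counters, histograms, and asynchronous observer instruments.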
type Metrics interface {
	GetExporter() http.Handler
	GetMetricsProvider() metric.MeterProvider
	RegisterCounter(name, description string) error
	AddCount(ctx context.Context, name string, value int64, labels map[string]attribute.Value) error
	AddHistogramMeasurement(ctx context.Context, name string, value float64, labels map[string]attribute.Value) error
	RegisterUpDownSumObserver(name, description string, callbackFunc metric.Int64Callback) error
	RegisterValueObserver(name, description string, callbackFunc metric.Int64Callback) error
	RegisterHistogram(name, description, unit string, buckets []float64) error
}
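
// M is the global Metrics implementation. The package-level helper functions
// below delegate to it and are no-ops (returning nil) while M is nil.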
var M Metrics
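
// GetExporter returns the HTTP handler of the registered Metrics
// implementation, or nil if none is set.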
func GetExporter() http.Handler {
	if M == nil {
		return nil
	}
	return M.GetExporter()
}
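
// GetMetricsProvider returns the meter provider of the registered Metrics
// implementation, or nil if none is set.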
func GetMetricsProvider() metric.MeterProvider {
	if M == nil {
		return nil
	}
	return M.GetMetricsProvider()
}
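
// RegisterCounter registers a counter instrument under the given name on the
// global Metrics implementation. It is a no-op if none is set.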
func RegisterCounter(name, description string) error {
	if M == nil {
		return nil
	}
	return M.RegisterCounter(name, description)
}
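
// AddCount adds value to the counter registered under name, attaching the
// given labels. It is a no-op if no Metrics implementation is set.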
func AddCount(ctx context.Context, name string, value int64, labels map[string]attribute.Value) error {
	if M == nil {
		return nil
	}
	return M.AddCount(ctx, name, value, labels)
}
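
// AddHistogramMeasurement records value in the histogram registered under
// name, attaching the given labels. It is a no-op if no Metrics
// implementation is set.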
func AddHistogramMeasurement(ctx context.Context, name string, value float64, labels map[string]attribute.Value) error {
	if M == nil {
		return nil
	}
	return M.AddHistogramMeasurement(ctx, name, value, labels)
}
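
// RegisterHistogram registers a histogram instrument with the given unit and
// bucket boundaries. It is a no-op if no Metrics implementation is set.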
func RegisterHistogram(name, description, unit string, buckets []float64) error {
	if M == nil {
		return nil
	}
	return M.RegisterHistogram(name, description, unit, buckets)
}
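
// RegisterUpDownSumObserver registers an asynchronous up/down sum observer
// that reports its value via callbackFunc. It is a no-op if no Metrics
// implementation is set.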
func RegisterUpDownSumObserver(name, description string, callbackFunc metric.Int64Callback) error {
	if M == nil {
		return nil
	}
	return M.RegisterUpDownSumObserver(name, description, callbackFunc)
}
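
// RegisterValueObserver registers an asynchronous observer instrument that
// reports values via callbackFunc. It is a no-op if no Metrics
// implementation is set.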
func RegisterValueObserver(name, description string, callbackFunc metric.Int64Callback) error {
	if M == nil {
		return nil
	}
	return M.RegisterValueObserver(name, description, callbackFunc)
}