Mirror of https://github.com/zitadel/zitadel.git (synced 2025-08-12 05:27:33 +00:00)
v4.0.0 (4167 Commits)

d537e86345
fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved
When authenticating with email or phone number in the login V1, users
were not able to request a password reset and would be given a "User not
found" error.
This was due to a check of the loginname of the auth request, which in
those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still
check the user again, since the ID might be a placeholder for an unknown
user and we do not want to disclose any information by omitting the check
and thereby reducing the response time.)
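A minimal sketch of the idea with hypothetical types (not the actual Zitadel code): the check compares the resolved user ID instead of the typed login name, so authenticating by email or phone no longer breaks the reset.

```go
package main

import "fmt"

// AuthRequest is a hypothetical, trimmed-down stand-in for the login v1 auth request.
type AuthRequest struct {
	LoginName string // what the user typed: login name, email, or phone number
	UserID    string // resolved user ID (may be a placeholder for an unknown user)
}

// User is a hypothetical stored user.
type User struct {
	ID        string
	LoginName string
}

// canRequestPasswordReset checks the resolved user ID instead of the typed
// login name, so a user who authenticated with email or phone is not rejected.
func canRequestPasswordReset(req *AuthRequest, user *User) bool {
	// Before: req.LoginName == user.LoginName, which fails for email/phone logins.
	return req.UserID == user.ID
}

func main() {
	req := &AuthRequest{LoginName: "user@example.com", UserID: "123"}
	user := &User{ID: "123", LoginName: "my-username"}
	fmt.Println(canRequestPasswordReset(req, user)) // true
}
```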
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit

b638ed528d
fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved
Customers reported that if the session / access token in Console
expired and they re-authenticated, the user list would be empty.
While reproducing the issue, we discovered that the necessary
organization information would be missing in the access token, since
it was already missing in the OIDC session creation when using an
id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request.
This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x
(cherry picked from commit

8aa7801f40
chore(deps): upgrade oidc and chi for dependabot alert (#10160)
# Which Problems Are Solved
Solve dependabot alerts for Go packages.
# How the Problems Are Solved
- Upgrade to latest github.com/zitadel/oidc, which already pulls the
fixed version of chi.
- Upgrade mapstructure
# Additional Changes
- none
# Additional Context
- https://github.com/zitadel/zitadel/security/dependabot/323
- https://github.com/zitadel/zitadel/security/dependabot/324
(cherry picked from commit

1d409f7959
fix(webauthn): allow to use "old" passkeys/u2f credentials on session API (#10150)
# Which Problems Are Solved
To prevent presenting unusable WebAuthN credentials to the user /
browser, we filtered out all credentials that do not match the
requested RP ID. Since credentials set up through Login V1 and Console
do not have an RP ID stored, they never matched. This was previously
intended, since the Login V2 could be served on a separate domain.
The problem is that if it is hosted on the same domain, the credentials
would also be filtered out and the user would not be able to log in.
# How the Problems Are Solved
Change the filtering to also return credentials if no RP ID is stored and
the requested RP ID matches the instance domain.
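A minimal sketch of the adjusted filter logic with hypothetical types (the real code lives in Zitadel's WebAuthN handling): credentials without a stored RP ID are now kept when the requested RP ID equals the instance domain.

```go
package main

import "fmt"

// credential is a hypothetical stand-in for a stored WebAuthN credential.
type credential struct {
	Name string
	RPID string // empty for credentials registered through Login V1 / Console
}

// filterCredentials keeps credentials whose stored RP ID matches the request,
// and additionally keeps credentials without an RP ID when the requested RP ID
// is the instance domain (login v2 served on the same domain).
func filterCredentials(creds []credential, requestedRPID, instanceDomain string) []credential {
	var out []credential
	for _, c := range creds {
		if c.RPID == requestedRPID || (c.RPID == "" && requestedRPID == instanceDomain) {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	creds := []credential{
		{Name: "old-passkey"},                         // set up via Login V1, no RP ID
		{Name: "new-passkey", RPID: "login.acme.com"}, // set up via Login V2
	}
	// Login V2 hosted on the instance domain: both credentials are usable.
	fmt.Println(filterCredentials(creds, "acme.com", "acme.com"))
	// Login V2 on a separate domain: only the credential with a matching RP ID.
	fmt.Println(filterCredentials(creds, "login.acme.com", "acme.com"))
}
```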
# Additional Changes
None
# Additional Context
Noted internally when testing the login v2
(cherry picked from commit

c2c49679cb
fix(scim): add type attribute to ScimEmail (#9690)
# Which Problems Are Solved
- SCIM PATCH operations for users from Entra ID for the `emails`
attribute fail due to the missing `type` subattribute
# How the Problems Are Solved
- Adds the `type` attribute to the `ScimUser` struct and sets the
default value to `"work"` in the `mapWriteModelToScimUser()` method.
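A rough sketch of what the change amounts to, using a simplified struct; field and function names here are illustrative, not the exact Zitadel code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ScimEmail is a simplified SCIM email resource; the new Type field lets
// filters like `emails[type eq "work"].value` resolve during PATCH.
type ScimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type,omitempty"`
}

// mapToScimEmail mirrors the idea of mapWriteModelToScimUser(): the type
// subattribute is not stored, so it is populated with the default "work".
func mapToScimEmail(address string, primary bool) ScimEmail {
	return ScimEmail{
		Value:   address,
		Primary: primary,
		Type:    "work", // default, keeps Entra ID style PATCH filters working
	}
}

func main() {
	out, _ := json.Marshal(mapToScimEmail("user@example.com", true))
	fmt.Println(string(out)) // {"value":"user@example.com","primary":true,"type":"work"}
}
```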
# Additional Changes
# Additional Context
The SCIM handlers for POST and PUT ignore multiple emails and only use
the primary email for a given user, or fall back to the first email if
none are marked as primary. PATCH operations, however, will attempt to
resolve the provided filter in `operations[].path`.
Some services, such as Entra ID, only support patching emails by
filtering for `emails[type eq "(work|home|other)"].value`, which fails
with Zitadel as the ScimUser struct (and thus the generated schema)
doesn't include the `type` field.
This commit adds the `type` field to work around this issue, while still
preserving compatibility with filters such as `emails[primary eq
true].value`.
- https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191
---------
Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no>
Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no>
(cherry picked from commit

056399bdb4
fix: enable opentelemetry metrics for river queue (#10044)
# Which Problems Are Solved
Right now we have no visibility into the river queue's job processing times
and queue sizes. This makes it difficult to reliably know whether
notifications are actually being published in a reasonable time and how
large the queue currently is.
# How the Problems Are Solved
Integrates River's OpenTelemetry middleware with Zitadel's metrics
system by adding the otelriver middleware to the queue configuration.
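Conceptually, the integration amounts to registering the otelriver middleware on the River client. The sketch below is illustrative only: the exact Zitadel wiring differs, and the `otelriver.NewMiddleware` / `river.Config.Middleware` names are assumptions based on the public River and rivercontrib packages.

```go
package queue

import (
	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/riverdriver/riverpgxv5"
	"github.com/riverqueue/river/rivertype"
	"github.com/riverqueue/rivercontrib/otelriver"
)

// newClient sketches how the OpenTelemetry middleware can be attached to the
// River client so that insert and work metrics (river_insert_count_total,
// river_work_duration_seconds, ...) end up on the configured meter provider.
func newClient(pool *pgxpool.Pool, workers *river.Workers) (*river.Client[pgx.Tx], error) {
	return river.NewClient(riverpgxv5.New(pool), &river.Config{
		Workers: workers,
		Queues: map[string]river.QueueConfig{
			"notification": {MaxWorkers: 10},
		},
		// Passing nil makes otelriver fall back to the globally registered
		// providers (assumption based on the package defaults).
		Middleware: []rivertype.Middleware{otelriver.NewMiddleware(nil)},
	})
}
```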
# Additional Changes
- Updated dependencies to include required `otelriver` package
# Additional Context
Example output from `/debug/metrics`
<details>
<summary>output</summary>
```
# HELP failed_deliveries_json_total Failed JSON message deliveries
# TYPE failed_deliveries_json_total counter
failed_deliveries_json_total{otel_scope_name="",otel_scope_version="",triggering_event_type="user.human.phone.code.added"}
2
# HELP go_gc_duration_seconds A summary of the wall-time pause
(stop-the-world) duration in garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8e-05
go_gc_duration_seconds{quantile="0.25"} 6.3916e-05
go_gc_duration_seconds{quantile="0.5"} 7.5584e-05
go_gc_duration_seconds{quantile="0.75"} 9.2584e-05
go_gc_duration_seconds{quantile="1"} 0.000204292
go_gc_duration_seconds_sum 0.003028502
go_gc_duration_seconds_count 34
# HELP go_gc_gogc_percent Heap size target percentage configured by the
user, otherwise 100. This value is set by the GOGC environment variable,
and the runtime/debug.SetGCPercent function. Sourced from
/gc/gogc:percent
# TYPE go_gc_gogc_percent gauge
go_gc_gogc_percent 100
# HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the
user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT
environment variable, and the runtime/debug.SetMemoryLimit function.
Sourced from /gc/gomemlimit:bytes
# TYPE go_gc_gomemlimit_bytes gauge
go_gc_gomemlimit_bytes 9.223372036854776e+18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 231
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.24.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and
currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 7.7565832e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated in
heap until now, even if released already. Equals to
/gc/heap/allocs:bytes.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 7.3319844e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the
profiling bucket hash table. Equals to
/memory/classes/profiling/buckets:bytes.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.63816e+06
# HELP go_memstats_frees_total Total number of heap objects frees.
Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.1496925e+07
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage
collection system metadata. Equals to
/memory/classes/metadata/other:bytes.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 5.182776e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and
currently in use, same as go_memstats_alloc_bytes. Equals to
/memory/classes/heap/objects:bytes.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 7.7565832e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be
used. Equals to /memory/classes/heap/released:bytes +
/memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.8179584e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in
use. Equals to /memory/classes/heap/objects:bytes +
/memory/classes/heap/unused:bytes
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 8.5868544e+07
# HELP go_memstats_heap_objects Number of currently allocated objects.
Equals to /gc/heap/objects:objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 573723
# HELP go_memstats_heap_released_bytes Number of heap bytes released to
OS. Equals to /memory/classes/heap/released:bytes.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 7.20896e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from
system. Equals to /memory/classes/heap/objects:bytes +
/memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes
+ /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.44048128e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of
last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.749491558214289e+09
# HELP go_memstats_mallocs_total Total number of heap objects allocated,
both live and gc-ed. Semantically a counter version for
go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects +
/gc/heap/tiny/allocs:objects.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.2070648e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache
structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 16912
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache
structures obtained from system. Equals to
/memory/classes/metadata/mcache/inuse:bytes +
/memory/classes/metadata/mcache/free:bytes.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 31408
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan
structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 1.3496e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan
structures obtained from system. Equals to
/memory/classes/metadata/mspan/inuse:bytes +
/memory/classes/metadata/mspan/free:bytes.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 2.18688e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage
collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.34730994e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system
allocations. Equals to /memory/classes/other:bytes.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 3.125168e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes obtained from
system for stack allocator in non-CGO environments. Equals to
/memory/classes/heap/stacks:bytes.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.752512e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system
for stack allocator. Equals to /memory/classes/heap/stacks:bytes +
/memory/classes/os-stacks:bytes.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.752512e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
Equals to /memory/classes/total:byte.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.58965032e+08
# HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS
setting, or the number of operating system threads that can execute
user-level Go code simultaneously. Sourced from
/sched/gomaxprocs:threads
# TYPE go_sched_gomaxprocs_threads gauge
go_sched_gomaxprocs_threads 14
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 25
# HELP grpc_server_grpc_status_code_total Grpc status code counter
# TYPE grpc_server_grpc_status_code_total counter
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version="",return_code="200"}
1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version="",return_code="200"}
2
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version="",return_code="200"}
1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version="",return_code="200"}
1
# HELP grpc_server_request_counter_total Grpc request counter
# TYPE grpc_server_request_counter_total counter
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version=""}
1
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version=""}
2
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version=""}
1
grpc_server_request_counter_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version=""}
1
# HELP grpc_server_total_request_counter_total Total grpc request
counter
# TYPE grpc_server_total_request_counter_total counter
grpc_server_total_request_counter_total{otel_scope_name="",otel_scope_version=""}
5
# HELP otel_scope_info Instrumentation Scope metadata
# TYPE otel_scope_info gauge
otel_scope_info{otel_scope_name="",otel_scope_version=""} 1
otel_scope_info{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version=""}
1
# HELP projection_events_processed_total Number of events reduced to
process projection updates
# TYPE projection_events_processed_total counter
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",success="true"}
1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.instance_features2",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.login_names3",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.notifications",success="true"}
1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.orgs1",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.user_metadata5",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.users14",success="true"}
0
# HELP projection_handle_timer_seconds Time taken to process a
projection update
# TYPE projection_handle_timer_seconds histogram
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.005"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.01"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.05"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="120"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"}
1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
0.007344541
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.005"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.01"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.05"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="120"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"}
1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
0.014258458
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
1
# HELP projection_state_latency_seconds When finishing processing a
batch of events, this track the age of the last events seen from current
time
# TYPE projection_state_latency_seconds histogram
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="300"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="600"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1800"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"}
1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
0.012979
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="300"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="600"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1800"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"}
1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
0.0199
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
1
# HELP promhttp_metric_handler_requests_in_flight Current number of
scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by
HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP river_insert_count_total Number of jobs inserted
# TYPE river_insert_count_total counter
river_insert_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_count_total Number of job batches inserted (all
jobs are inserted in a batch, but batches may be one job)
# TYPE river_insert_many_count_total counter
river_insert_many_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_duration_histogram_seconds Duration of job
batch insertion (histogram)
# TYPE river_insert_many_duration_histogram_seconds histogram
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="0"}
0
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="25"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="50"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="75"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="100"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="250"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="750"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="1000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="2500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="7500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="+Inf"}
1
river_insert_many_duration_histogram_seconds_sum{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
0.002905666
river_insert_many_duration_histogram_seconds_count{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_duration_seconds Duration of job batch
insertion
# TYPE river_insert_many_duration_seconds gauge
river_insert_many_duration_seconds{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
0.002905666
# HELP river_work_count_total Number of jobs worked
# TYPE river_work_count_total counter
river_work_count_total{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
river_work_count_total{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_histogram_seconds Duration of job being
worked (histogram)
# TYPE river_work_duration_histogram_seconds histogram
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"}
0
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1
river_work_duration_histogram_seconds_sum{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083
river_work_duration_histogram_seconds_count{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"}
0
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1
river_work_duration_histogram_seconds_sum{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
river_work_duration_histogram_seconds_count{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_seconds Duration of job being worked
# TYPE river_work_duration_seconds gauge
river_work_duration_seconds{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083
river_work_duration_seconds{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="ZITADEL",service_version="2025-06-09T13:52:29-04:00",telemetry_sdk_language="go",telemetry_sdk_name="opentelemetry",telemetry_sdk_version="1.35.0"}
1
```
</details>
Example grafana dashboard:

- Closes #10043
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit

fefeaea56a
perf: improve org and org domain creation (#10232)
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new organization (incl. generated domain), existing usernames are checked to determine whether the domain has been claimed. The query was not optimized for instances with many users and organizations.
# How the Problems Are Solved
- Replace the query, which was searching over the users projection (with computed loginnames), with a dedicated query checking the loginnames projection directly.
- All occurrences have been updated to use the new query.
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x

0598abe7e6
chore(login): fix close pr action (#10234)
# Which Problems Are Solved
The close PR action fails: https://github.com/zitadel/typescript/actions/runs/16196332400/job/45723668837?pr=511
# How the Problems Are Solved
A backtick is escaped.
# Additional Context
- Completes #10229

f9cad0f3e5
chore(typescript): improve close PR action (#10229)
# Which Problems Are Solved
The close PR action currently fails because of unescaped backticks.
# How the Problems Are Solved
Backticks are escaped.
# Additional Changes
- Adding a login remote immediately fetches for better UX.
- Adding a subtree is not necessary, as it is already added in the repo.
- Fix and clarify PR migration steps.
- Add workflow dispatch event

ffe6d41588
fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved
When authenticating with email or phone number in the login V1, users were not able to request a password reset and would be given a "User not found" error. This was due to a check of the loginname of the auth request, which in those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still check the user again, since the ID might be a placeholder for an unknown user and we do not want to disclose any information by omitting the check and thereby reducing the response time.)
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x

2821f41c3a
fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved
Customers reported that if the session / access token in Console expired and they re-authenticated, the user list would be empty. While reproducing the issue, we discovered that the necessary organization information would be missing in the access token, since it was already missing in the OIDC session creation when using an id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request. This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x

f937f90504
chore: update review comment (#10210)
Make the review comment clearer about what is expected.

0ceec60637
fix: sorting options of the ListInstanceTrustedDomains() gRPC endpoint (#10172)
# Which Problems Are Solved
1. The sorting columns in the gRPC endpoint `ListInstanceTrustedDomains()` are incorrect, and return the following error when invalid sorting options are chosen:
```
Unknown (2) ERROR: missing FROM-clause entry for table "instance_domains" (SQLSTATE 42P01)
```
The sorting columns that are valid to list `instance_trusted_domains` are
* `trusted_domain_field_name_unspecified`
* `trusted_domain_field_name_domain`
* `trusted_domain_field_name_creation_date`
However, the currently configured sorting columns are
* `domain_field_name_unspecified`
* `domain_field_name_domain`
* `domain_field_name_primary`
* `domain_field_name_generated`
* `domain_field_name_creation_date`
Configuring the actual columns of `instance_trusted_domains` makes this endpoint **backward incompatible**. Therefore, the fix in this PR is to no longer return an error when an invalid sorting column (non-existing column) is chosen and to sort the results by `creation_date` for invalid sorting columns.
2. This PR also fixes the `sorting_column` included in the responses of both `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints, as they currently point to the default option irrespective of the option chosen in the request, i.e.,
* `TRUSTED_DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceTrustedDomains()`, and
* `DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceDomains()`
# How the Problems Are Solved
* Map the sorting columns to valid columns of `instance_trusted_domain` (see the sketch below)
  - If the sorting column is not one of the columns, the mapping defaults to `creation_date`
* Set the `sorting_column` explicitly (from the request) in the `ListInstanceDomainsResponse` and `ListInstanceTrustedDomainsResponse`
# Additional Changes
A small fix to return the chosen `sorting_column` in the responses of the `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints
# Additional Context
- Closes #9839
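A minimal sketch of the fallback mapping described above (illustrative only, not the actual query layer code): requested sorting fields are mapped to real columns of the trusted-domains table, and anything unknown falls back to the creation date instead of producing a broken query.

```go
package main

import "fmt"

// TrustedDomainFieldName is a hypothetical stand-in for the proto enum used in the request.
type TrustedDomainFieldName int

const (
	TrustedDomainFieldNameUnspecified TrustedDomainFieldName = iota
	TrustedDomainFieldNameDomain
	TrustedDomainFieldNameCreationDate
)

// sortingColumn maps the requested field to an existing column of the
// instance trusted domains table; unknown values (e.g. columns that only
// exist on instance_domains) fall back to creation_date instead of erroring.
func sortingColumn(field TrustedDomainFieldName) string {
	switch field {
	case TrustedDomainFieldNameDomain:
		return "domain"
	case TrustedDomainFieldNameCreationDate:
		return "creation_date"
	default:
		return "creation_date"
	}
}

func main() {
	fmt.Println(sortingColumn(TrustedDomainFieldNameDomain)) // domain
	fmt.Println(sortingColumn(TrustedDomainFieldName(42)))   // creation_date (invalid input)
}
```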

4b7443ba78
chore(docs): add llms.txt (#10133)
This pull request enhances the documentation site configuration by introducing a new plugin and making minor adjustments to existing settings. The primary focus is on integrating the `@signalwire/docusaurus-plugin-llms-txt` plugin to improve content handling and adding relevant dependencies.
### Plugin Integration:
* [`docs/docusaurus.config.js`](diffhunk://#diff-28742c737e523f302e6de471b7fc27284dc8cf720be639e6afe4c17a550cd654R245-R255): Added the `@signalwire/docusaurus-plugin-llms-txt` plugin with configuration options, including a depth of 3, log level of 1, exclusion of certain routes, and enabling markdown file support.
* [`docs/package.json`](diffhunk://#diff-adfa337ce44dc2902621da20152a048dac41878cf3716dfc4cc56d03aa212a56R33): Included the `@signalwire/docusaurus-plugin-llms-txt` dependency (version `^1.2.0`) to support the new plugin integration.
### Configuration Adjustments:
* [`docs/docusaurus.config.js`](diffhunk://#diff-28742c737e523f302e6de471b7fc27284dc8cf720be639e6afe4c17a550cd654L221): Removed the `docItemComponent` property under the `module.exports` configuration.

253beb4d39
fix(login): encode formpost data to cookie (#10173)
This PR implements a SAML cookie which is used to save information to complete the form post. It is primarily used to avoid sending the information as URL search params, thereby reducing their length.

aa8edee50b
chore(docs): prevent readme overwrite (#10170)
# Which Problems Are Solved
To generate the docs, we rely on a protoc plugin to generate an openAPI definition from connectRPC / proto. Since the plugin is not available on buf.build, we currently download the released version. As the tar contains a licence and a readme, this overwrote existing internal files.
# How the Problems Are Solved
Download and extract the plugin in a separate folder and update buf.gen.yaml accordingly.
# Additional Changes
None
# Additional Context
relates to #9483

27cd1d8518
docs(api): add new beta services to api reference (#10018)
# Which Problems Are Solved
The unreleased new resource apis have been removed from the docs: https://github.com/zitadel/zitadel/pull/10015
# How the Problems Are Solved
Add them to the docs sidenav again, since they're now released.
# Additional Changes
none
# Additional Context
none
---------
Co-authored-by: Fabienne <fabienne.gerschwiler@gmail.com>
Co-authored-by: Marco Ardizzone <marco@zitadel.com>

8f0b7ebf02
BREAKING CHANGE: release candidate v4
BREAKING CHANGE: release candidate v4

c8d37ac5af
Merge remote-tracking branch 'origin/main' into next-rc

5403be7c4b
feat: user profile requests in resource APIs (#10151)
# Which Problems Are Solved
The commands for the resource based v2beta AuthorizationService API are added. Authorizations, previously known as user grants, give a user roles in a specific organization and project context. The project can be owned or granted. The given roles can be used to restrict access within the project's applications.
The commands for the resource based v2beta InternalPermissionService API are added. Administrators, previously known as memberships, give a user roles in a specific organization and project context. The project can be owned or granted. The given roles give the user permissions to manage different resources in Zitadel.
API definitions from https://github.com/zitadel/zitadel/issues/9165 are implemented.
Contains endpoints for user metadata.
# How the Problems Are Solved
### New Methods
- CreateAuthorization
- UpdateAuthorization
- DeleteAuthorization
- ActivateAuthorization
- DeactivateAuthorization
- ListAuthorizations
- CreateAdministrator
- UpdateAdministrator
- DeleteAdministrator
- ListAdministrators
- SetUserMetadata to set metadata on a user
- DeleteUserMetadata to delete metadata on a user
- ListUserMetadata to query for metadata of a user
## Deprecated Methods
### v1.ManagementService
- GetUserGrantByID
- ListUserGrants
- AddUserGrant
- UpdateUserGrant
- DeactivateUserGrant
- ReactivateUserGrant
- RemoveUserGrant
- BulkRemoveUserGrant
### v1.AuthService
- ListMyUserGrants
- ListMyProjectPermissions
# Additional Changes
- Permission checks for metadata functionality on query and command side
- correct existence checks for resources, for example you can only be an administrator on an existing project
- combined all member tables to a singular query for the administrators
- add permission checks for command and query side functionality
- combined functions on command side where necessary for easier maintainability
# Additional Context
Closes #9165
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>

1d13911a4a
Merge branch 'main' into next-rc
# Conflicts:
# cmd/defaults.yaml
# cmd/setup/config.go
# cmd/setup/setup.go
# cmd/start/start.go
# docs/yarn.lock
# go.mod
# go.sum
# internal/api/grpc/action/v2beta/execution.go
# internal/api/grpc/action/v2beta/query.go
# internal/api/grpc/action/v2beta/server.go
# internal/api/grpc/action/v2beta/target.go
# internal/api/grpc/feature/v2/converter.go
# internal/api/grpc/feature/v2/converter_test.go
# internal/api/grpc/feature/v2/integration_test/feature_test.go
# internal/api/grpc/feature/v2beta/converter.go
# internal/api/grpc/feature/v2beta/converter_test.go
# internal/api/grpc/feature/v2beta/integration_test/feature_test.go
# internal/api/oidc/key.go
# internal/api/oidc/op.go
# internal/command/idp_intent_test.go
# internal/command/instance_features.go
# internal/command/instance_features_test.go
# internal/command/system_features.go
# internal/command/system_features_test.go
# internal/feature/feature.go
# internal/feature/key_enumer.go
# internal/integration/client.go
# internal/query/instance_features.go
# internal/query/system_features.go
# internal/repository/feature/feature_v2/feature.go
# proto/zitadel/feature/v2/instance.proto
# proto/zitadel/feature/v2/system.proto
# proto/zitadel/feature/v2beta/instance.proto
# proto/zitadel/feature/v2beta/system.proto

9ebf2316c6
feat: exchange gRPC server implementation to connectRPC (#10145)
# Which Problems Are Solved
The currently maintained gRPC server in combination with a REST (grpc) gateway is getting harder and harder to maintain. Additionally, there have been and still are issues with supporting / displaying `oneOf`s correctly. We therefore decided to exchange the server implementation to connectRPC, which, apart from supporting connect as protocol, also allows "standard" gRPC clients as well as HTTP/1.1 / rest-like clients, e.g. curl, to directly call the server without any additional gateway.
# How the Problems Are Solved
- All v2 services are moved to the connectRPC implementation. (v1 services are still served as pure grpc servers; see the serving sketch below)
- All gRPC server interceptors were migrated / copied to a corresponding connectRPC interceptor.
- API.ListGrpcServices and API.ListGrpcMethods were changed to include the connect services and endpoints.
- gRPC server reflection was changed to a `StaticReflector` using the `ListGrpcServices` list.
- The `grpc.Server` interface was split into different combinations to be able to handle the different cases (grpc server and prefixed gateway, connect server with grpc gateway, connect server only, ...)
- Docs of services serving connectRPC only with no additional gateway (instance, webkey, project, app, org v2 beta) are changed to expose that
- since the plugin is not yet available on buf, we download it using the `postinstall` hook of the docs
# Additional Changes
- WebKey service is added as v2 service (in addition to the current v2beta)
# Additional Context
closes #9483
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
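The practical effect of the switch is that one HTTP handler tree can serve gRPC, gRPC-Web, Connect and plain HTTP/1.1 JSON clients. A minimal, generic sketch of that serving pattern (not Zitadel's actual server code; the connect-generated constructor mentioned in the comment is hypothetical):

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()

	// In the real setup, the handler and path come from the connect-generated
	// code of each v2 service, e.g. (hypothetical):
	//   path, handler := userv2connect.NewUserServiceHandler(&userServer{})
	//   mux.Handle(path, handler)
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	// h2c lets gRPC clients (which require HTTP/2) talk to the server without
	// TLS, while HTTP/1.1 clients like curl keep working on the same port.
	srv := &http.Server{
		Addr:    ":8080",
		Handler: h2c.NewHandler(mux, &http2.Server{}),
	}
	log.Fatal(srv.ListenAndServe())
}
```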

82cd1cee08
fix(service ping): correct endpoint, validate and randomize default interval (#10166)
# Which Problems Are Solved
The production endpoint of the service ping was wrong. Additionally, we discussed in the sprint review that we could randomize the default interval to prevent all systems from reporting data at the very same time, and also require a minimal interval.
# How the Problems Are Solved
- fixed the endpoint
- If the interval is set to @daily (default), we generate a random time (minute, hour) as a cron format (see the sketch below).
- Check if the interval is more than 30min and return an error if not.
- Fixed yaml indent on `ResourceCount`
# Additional Changes
None
# Additional Context
as discussed internally
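A small illustrative sketch of the two changes, using plain Go and the standard cron field order (minute hour day month weekday); function names are made up for the example:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

const minInterval = 30 * time.Minute

// randomDailyCron spreads the default @daily schedule over the day so that
// not every installation reports at the exact same time.
func randomDailyCron() string {
	minute := rand.Intn(60)
	hour := rand.Intn(24)
	return fmt.Sprintf("%d %d * * *", minute, hour)
}

// validateInterval rejects custom intervals below the required minimum.
func validateInterval(interval time.Duration) error {
	if interval < minInterval {
		return errors.New("service ping interval must be at least 30 minutes")
	}
	return nil
}

func main() {
	fmt.Println(randomDailyCron())                  // e.g. "17 4 * * *"
	fmt.Println(validateInterval(10 * time.Minute)) // error
	fmt.Println(validateInterval(12 * time.Hour))   // <nil>
}
```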

26ec29a513
chore(deps): upgrade oidc and chi for dependabot alert (#10160)
# Which Problems Are Solved
Solve dependabot alerts for Go packages.
# How the Problems Are Solved
- Upgrade to latest github.com/zitadel/oidc, which already pulls the fixed version of chi.
- Upgrade mapstructure
# Additional Changes
- none
# Additional Context
- https://github.com/zitadel/zitadel/security/dependabot/323
- https://github.com/zitadel/zitadel/security/dependabot/324

12656235e2
chore: fix login image with sha release (#10157)
# Which Problems Are Solved
Fixes the releasing of multi-architecture login images.
# How the Problems Are Solved
- The login-container workflow extends the bake definition with a file docker-bake-release.hcl which adds the platforms linux/arm and linux/amd to all relevant build targets. The used technique is similar to how the docker metadata action allows to extend the bake definitions.
- The local login tag is moved to the metadata bake target, which is always inherited and overwritten in the pipeline
- Packages write permission is added
# Additional Changes
- The MIT license is noted in container labels and annotations
- The image is built from root so that the local proto files are used
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

47f0486ee8
fix(login): email or phone query, session context from loginname (#10158)
This PR fixes an issue where the orQuery for phone and email was not correctly set.

8c39779533
chore(deps): bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 in /login/apps/login-test-acceptance/idp/oidc in the go_modules group across 1 directory (#10152)
Bumps the go_modules group with 1 update in the /login/apps/login-test-acceptance/idp/oidc directory: [github.com/go-chi/chi/v5](https://github.com/go-chi/chi).
Updates `github.com/go-chi/chi/v5` from 5.2.1 to 5.2.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/go-chi/chi/releases">github.com/go-chi/chi/v5's releases</a>.</em></p>
<blockquote>
<h2>v5.2.2</h2>
<h2>What's Changed</h2>
<ul>
<li>Use strings.Cut in a few places by <a href="https://github.com/JRaspass"><code>@JRaspass</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/971">go-chi/chi#971</a></li>
<li>Fix non-constant format strings in t.Fatalf by <a href="https://github.com/JRaspass"><code>@JRaspass</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/972">go-chi/chi#972</a></li>
<li>Apply fieldalignment fixes to optimize struct memory layout by <a href="https://github.com/pixel365"><code>@pixel365</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/974">go-chi/chi#974</a></li>
<li>go 1.24 by <a href="https://github.com/pkieltyka"><code>@pkieltyka</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/977">go-chi/chi#977</a></li>
<li>chore: delint ioutil usage by <a href="https://github.com/costela"><code>@costela</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/962">go-chi/chi#962</a></li>
<li>Fixed typo in Router interface definition by <a href="https://github.com/mithileshgupta12"><code>@mithileshgupta12</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/958">go-chi/chi#958</a></li>
<li>Add support for TinyGo by <a href="https://github.com/efraimbart"><code>@efraimbart</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/978">go-chi/chi#978</a></li>
<li>Exclude middleware/profiler.go in TinyGo, as there's no net/http/pprof pkg by <a href="https://github.com/cxjava"><code>@cxjava</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/982">go-chi/chi#982</a></li>
<li>Make use of strings.Cut by <a href="https://github.com/scop"><code>@scop</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/1005">go-chi/chi#1005</a></li>
<li>Change install command format to code block by <a href="https://github.com/sglkc"><code>@sglkc</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/1001">go-chi/chi#1001</a></li>
<li>Correct documentation by <a href="https://github.com/mrdomino"><code>@mrdomino</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/992">go-chi/chi#992</a></li>
</ul>
<h2>Security fix</h2>
<ul>
<li>Fixes <a href="https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93">GHSA-vrw8-fxc6-2r93</a> - "Host Header Injection Leads to Open Redirect in RedirectSlashes" <a href="

f93a35c7a8
feat: implement service ping (#10080)
This PR is still WIP and needs changes to at least the tests.
# Which Problems Are Solved
To be able to report analytical / telemetry data from deployed Zitadel systems back to a central endpoint, we designed a "service ping" functionality. See also https://github.com/zitadel/zitadel/issues/9706.
This PR adds the first implementation to allow collection of base data as well as reporting the amount of resources such as organizations, users per organization and more.
# How the Problems Are Solved
- Added a worker to handle the different `ReportType` variations (see the sketch below).
- Schedule a periodic job to start a `ServicePingReport`
- Configuration added to allow customization of what data will be reported
- Setup step to generate and store a `systemID`
# Additional Changes
None
# Additional Context
relates to #9869
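A condensed sketch of the worker idea; the type and report names below are illustrative stand-ins, not the actual implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// ReportType enumerates the kinds of data a ServicePingReport job can carry (illustrative).
type ReportType int

const (
	ReportTypeBaseInformation ReportType = iota
	ReportTypeResourceCounts
)

// ServicePingReport is a hypothetical queued job payload.
type ServicePingReport struct {
	SystemID   string
	ReportType ReportType
}

// work dispatches on the report type, mirroring the worker added for the service ping.
func work(report ServicePingReport) error {
	switch report.ReportType {
	case ReportTypeBaseInformation:
		fmt.Println("reporting base information for system", report.SystemID)
		return nil
	case ReportTypeResourceCounts:
		fmt.Println("reporting resource counts (orgs, users per org, ...) for system", report.SystemID)
		return nil
	default:
		return errors.New("unknown report type")
	}
}

func main() {
	_ = work(ServicePingReport{SystemID: "sys-1", ReportType: ReportTypeResourceCounts})
}
```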

71575e8d67
fix(webauthn): allow to use "old" passkeys/u2f credentials on session API (#10150)
# Which Problems Are Solved
To prevent presenting unusable WebAuthN credentials to the user / browser, we filtered out all credentials that do not match the requested RP ID. Since credentials set up through Login V1 and Console do not have an RP ID stored, they never matched. This was previously intended, since the Login V2 could be served on a separate domain. The problem is that if it is hosted on the same domain, the credentials would also be filtered out and the user would not be able to log in.
# How the Problems Are Solved
Change the filtering to also return credentials if no RP ID is stored and the requested RP ID matches the instance domain.
# Additional Changes
None
# Additional Context
Noted internally when testing the login v2
||
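A hedged sketch of the filtering rule described above (the types and helper are illustrative, not the actual query code): a credential is kept when its stored RP ID matches the request, or when it has no RP ID at all and the requested RP ID equals the instance domain.

```go
package webauthnsketch

// credential is an illustrative stand-in for a stored WebAuthN credential.
type credential struct {
	RPID string // empty for credentials registered via Login V1 / Console
}

// filterUsable keeps credentials that either match the requested RP ID, or
// have no RP ID stored while the request targets the instance domain.
func filterUsable(creds []credential, requestedRPID, instanceDomain string) []credential {
	usable := make([]credential, 0, len(creds))
	for _, c := range creds {
		switch {
		case c.RPID == requestedRPID:
			usable = append(usable, c)
		case c.RPID == "" && requestedRPID == instanceDomain:
			usable = append(usable, c)
		}
	}
	return usable
}
```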
![]() |
325aa1f184 |
fix(login): ensure correct i18n locale context (#10156)
This PR ensures that the correct locale context is set for the new login. |
||
![]() |
a02a534cd2 |
feat: initial admin PAT has IAM_LOGIN_CLIENT (#10143)
# Which Problems Are Solved We provide a seamless way to initialize Zitadel and the login together. # How the Problems Are Solved In addition to the `IAM_OWNER` role, a set-up admin user also gets the `IAM_LOGIN_CLIENT` role if it is a machine user with a PAT. # Additional Changes - Simplifies the load balancing example, as the intermediate configuration step is not needed anymore. # Additional Context - Depends on #10116 - Contributes to https://github.com/zitadel/zitadel-charts/issues/332 - Contributes to https://github.com/zitadel/zitadel/issues/10016 --------- Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com> |
||
![]() |
2928c6ac2b |
chore(login): migrate nextjs login to monorepo (#10134)
# Which Problems Are Solved We move the login code to the zitadel repo. # How the Problems Are Solved The login repo is added to ./login as a git subtree pulled from the dockerize-ci branch. Apart from the login code, this PR contains the changes from #10116 # Additional Context - Closes https://github.com/zitadel/typescript/issues/474 - Also merges #10116 - Merging is blocked by failing check because of: - https://github.com/zitadel/zitadel/pull/10134#issuecomment-3012086106 --------- Co-authored-by: Max Peintner <peintnerm@gmail.com> Co-authored-by: Max Peintner <max@caos.ch> Co-authored-by: Florian Forster <florian@zitadel.com> |
||
![]() |
fce9e770ac |
feat: App Keys API v2 (#10140)
# Which Problems Are Solved This PR *partially* addresses #9450. Specifically, it implements the resource based API for app keys. This PR, together with https://github.com/zitadel/zitadel/pull/10077, completes #9450. # How the Problems Are Solved - Implementation of the following endpoints: `CreateApplicationKey`, `DeleteApplicationKey`, `GetApplicationKey`, `ListApplicationKeys` - `ListApplicationKeys` can filter by project, app or organization ID. Sorting is also possible according to some criteria. - All endpoints use permissions V2 # TODO - [x] Deprecate old endpoints # Additional Context Closes #9450 |
||
![]() |
64a03fba28 |
fix(api): return typed saml form post data in idp intent (#10136)
# Which Problems Are Solved The current user V2 API returns a `[]byte` containing a whole HTML document including the form on `StartIdentifyProviderIntent` for intents based on form post (e.g. SAML POST bindings). This is not usable for most clients as they cannot handle that and render a whole page inside their app. For redirect based intents, the url to which the client needs to redirect is returned. # How the Problems Are Solved - Changed the returned type to a new `FormData` message containing the url and a `fields` map. - internal changes: - Session.GetAuth now returns an `Auth` interface and error instead of (content string, redirect bool) - Auth interface has two implementations: `RedirectAuth` and `FormAuth` (see the sketch after this entry) - All uses of the GetAuth function now type switch on the returned auth object - A template has been added to the login UI to execute the form post automatically (as is). # Additional Changes - Some intent integration tests did not check the redirect url and were wrongly configured. # Additional Context - relates to zitadel/typescript#410 |
||
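The commit above names a `GetAuth` that returns an `Auth` interface with `RedirectAuth` and `FormAuth` implementations; the exact shapes below are assumptions sketched from that description, not the real types in the codebase:

```go
package idpintentsketch

// Auth, RedirectAuth and FormAuth mirror the interface described in the PR;
// field names here are assumptions for illustration only.
type Auth interface{ isAuth() }

type RedirectAuth struct {
	RedirectURL string
}

type FormAuth struct {
	URL    string
	Fields map[string]string
}

func (RedirectAuth) isAuth() {}
func (FormAuth) isAuth()     {}

// intentResponse is a hypothetical helper showing how a caller could type
// switch on the returned auth object, as the handlers now do.
func intentResponse(a Auth) (redirectURL string, form *FormAuth) {
	switch auth := a.(type) {
	case RedirectAuth:
		return auth.RedirectURL, nil
	case FormAuth:
		return "", &auth
	default:
		return "", nil
	}
}
```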
![]() |
b7d447e313 |
docs(legal): Update account-lockout-policy.md (#10124)
Review finished for the account lockout policy. Main changes: - Revised wording - Removed free account from the policy scope - Fixed broken link to the support form in the customer portal --------- Co-authored-by: Maximilian <mpa@zitadel.com> |
||
![]() |
4cd52f33eb |
chore(oidc): remove feature flag for introspection triggers (#10132)
# Which Problems Are Solved Remove the feature flag that allowed triggers in introspection. This option was a fallback in case introspection would not function properly without triggers. The API documentation asked for anyone using this flag to raise an issue. No such issue was received, hence we concluded it is safe to remove it. # How the Problems Are Solved - Remove flags from the system and instance level feature APIs. - Remove trigger functions that are no longer used - Adjust tests that used the flag. # Additional Changes - none # Additional Context - Closes #10026 - Flag was introduced in #7356 --------- Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com> |
||
![]() |
14b45b58eb | chore: add inkeep search and ai to docs (#10119) | ||
![]() |
2691dae2b6 |
feat: App API v2 (#10077)
# Which Problems Are Solved This PR *partially* addresses #9450. Specifically, it implements the resource based API for the apps. APIs for app keys are not part of this PR. # How the Problems Are Solved - `CreateApplication`, `PatchApplication` (update) and `RegenerateClientSecret` endpoints are now unique for all app types: API, SAML and OIDC apps. - All new endpoints have integration tests - All new endpoints are using permission checks V2 # Additional Changes - The `ListApplications` endpoint allows to do sorting (see protobuf for details) and filtering by app type (see protobuf). - SAML and OIDC update endpoints can now receive requests for partial updates # Additional Context Partially addresses #9450 |
||
![]() |
016676e1dc |
chore(oidc): graduate webkey to stable (#10122)
# Which Problems Are Solved Stabilize the usage of webkeys. # How the Problems Are Solved - Remove all legacy signing key code from the OIDC API - Remove the webkey feature flag from proto - Remove the webkey feature flag from console - Cleanup documentation # Additional Changes - Resolved some canonical header linter errors in OIDC - Use the constant for `projections.lock` in the saml package. # Additional Context - Closes #10029 - After #10105 - After #10061 |
||
![]() |
1ebbe275b9 |
chore(oidc): remove legacy storage methods (#10061)
# Which Problems Are Solved Stabilize the optimized introspection code and clean up unused code. # How the Problems Are Solved - `oidc_legacy_introspection` feature flag is removed and reserved. - `OPStorage` methods which are no longer needed have their bodies removed. - The method definitions need to remain in place so the interface remains implemented. - A panic is thrown in case any such method is still called (see the sketch after this entry). # Additional Changes - A number of `OPStorage` methods related to token creation were already unused. These are also cleaned up. # Additional Context - Closes #10027 - #7822 --------- Co-authored-by: Livio Spring <livio.a@gmail.com> |
||
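A hedged illustration of the pattern described above (the method and type names are placeholders, not actual OPStorage members): the signature stays in place so the interface remains satisfied, while the body panics if legacy code still reaches it.

```go
package oidcsketch

import "context"

// legacyStorage is a stand-in for the struct implementing OPStorage.
type legacyStorage struct{}

// LegacyLookup is a placeholder for one of the emptied OPStorage methods:
// the definition remains so the interface stays implemented, but any call
// signals that a code path should already have been migrated.
func (s *legacyStorage) LegacyLookup(ctx context.Context, id string) (string, error) {
	panic("LegacyLookup is no longer supported; this path should be unreachable")
}
```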
![]() |
27f88a6390 |
docs(migration): Added step-by-step guide for the Auth0 to Zitadel migration (#10118)
Added a step-by-step guide for the Auth0 to Zitadel migration in preparation for the upcoming workshop. |
||
![]() |
5da5ccda5c |
fix: correct user v2 api docs for v3 (#10112)
# Which Problems Are Solved As the documentation is published from the main branch and the releases get created from another branch, the two are not always in sync. # How the Problems Are Solved Remove the unnecessary changes in the documentation for now, and create a second PR which can then be used to update the documentation. # Additional Changes Correct integration tests which also use the endpoints. # Additional Context Closes #10083 --------- Co-authored-by: Fabienne Bühler <fabienne@zitadel.com> |
||
![]() |
1719bbaba5 |
chore(docs): update docusaurus to 3.8.1 (#10115)
This pull request updates several dependencies in the `docs/package.json` file to their latest minor versions, ensuring compatibility and access to the latest features and fixes. Dependency updates: * Updated `@docusaurus/core`, `@docusaurus/faster`, `@docusaurus/preset-classic`, `@docusaurus/theme-mermaid`, and `@docusaurus/theme-search-algolia` from version `^3.8.0` to `^3.8.1` in the `dependencies` section. * Updated `@docusaurus/module-type-aliases` and `@docusaurus/types` from version `^3.8.0` to `^3.8.1` in the `devDependencies` section. Co-authored-by: Florian Forster <florian@zitadel> |
||
![]() |
fa9de9a0f1 |
feat: generate webkeys setup step (#10105)
# Which Problems Are Solved We are preparing to roll out and stabilize webkeys in the next version of Zitadel. Before removing legacy signing-key code, we must ensure all existing instances have their webkeys generated. # How the Problems Are Solved Add a setup step which generates 2 webkeys for each existing instance that didn't have webkeys yet (see the sketch after this entry). # Additional Changes Return an error from the config type-switch, when the type is unknown. # Additional Context - Part 1/2 of https://github.com/zitadel/zitadel/issues/10029 - Should be back-ported to v3 |
||
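A rough sketch of the setup step's shape (the instance listing and key generator interfaces are hypothetical; the real step lives in Zitadel's setup package): for every instance without webkeys, two keys are generated so the rollout can rely on them.

```go
package setupsketch

import "context"

// instanceLister and webkeyGenerator are stand-ins for the query and command
// sides used by the real setup step.
type instanceLister interface {
	InstanceIDsWithoutWebKeys(ctx context.Context) ([]string, error)
}

type webkeyGenerator interface {
	GenerateWebKey(ctx context.Context, instanceID string) error
}

// generateInitialWebKeys creates two webkeys per instance that has none yet,
// mirroring the behaviour described in the PR.
func generateInitialWebKeys(ctx context.Context, list instanceLister, gen webkeyGenerator) error {
	ids, err := list.InstanceIDsWithoutWebKeys(ctx)
	if err != nil {
		return err
	}
	for _, id := range ids {
		for i := 0; i < 2; i++ {
			if err := gen.GenerateWebKey(ctx, id); err != nil {
				return err
			}
		}
	}
	return nil
}
```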
![]() |
3a4298c179 |
fix(scim): add type attribute to ScimEmail (#9690)
# Which Problems Are Solved - SCIM PATCH operations for users from Entra ID for the `emails` attribute fail due to a missing `type` subattribute # How the Problems Are Solved - Adds the `type` attribute to the `ScimUser` struct and sets the default value to `"work"` in the `mapWriteModelToScimUser()` method (see the sketch after this entry). # Additional Changes # Additional Context The SCIM handlers for POST and PUT ignore multiple emails and only use the primary email for a given user, or fall back to the first email if none are marked as primary. PATCH operations, however, will attempt to resolve the provided filter in `operations[].path`. Some services, such as Entra ID, only support patching emails by filtering for `emails[type eq "(work|home|other)"].value`, which fails with Zitadel as the ScimUser struct (and thus the generated schema) doesn't include the `type` field. This commit adds the `type` field to work around this issue, while still preserving compatibility with filters such as `emails[primary eq true].value`. - https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191 --------- Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no> Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no> |
||
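A hedged sketch of the change described above (field and tag names are assumptions based on the SCIM email schema, not the exact Zitadel struct): the email entry gains a `type` subattribute defaulting to `"work"`, which lets filters such as `emails[type eq "work"].value` resolve.

```go
package scimsketch

// scimEmail is an illustrative version of the email entry in the ScimUser
// schema; the real struct lives in Zitadel's SCIM package.
type scimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type,omitempty"`
}

// newScimEmail defaults the type subattribute to "work", mirroring the
// default applied in mapWriteModelToScimUser().
func newScimEmail(value string, primary bool) scimEmail {
	return scimEmail{
		Value:   value,
		Primary: primary,
		Type:    "work",
	}
}
```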
![]() |
28f7218ea1 |
feat: Hosted login translation API (#10011)
# Which Problems Are Solved This PR implements https://github.com/zitadel/zitadel/issues/9850 # How the Problems Are Solved - New protobuf definition - Implementation of retrieval of system translations - Implementation of retrieval and persistence of organization and instance level translations # Additional Context - Closes #9850 # TODO - [x] Integration tests for Get and Set hosted login translation endpoints - [x] DB migration test - [x] Command function tests - [x] Command util functions tests - [x] Query function test - [x] Query util functions tests |
||
![]() |
cddbd3dd47 |
docs: Correct API docs of unlock user (#10064)
# Which Problems Are Solved The API docs of the unlock user endpoint show the description of the lock user endpoint. # How the Problems Are Solved The correct API docs for unlock user are added. |
||
![]() |
83839fc2ef |
fix: enable opentelemetry metrics for river queue (#10044)
# Which Problems Are Solved Right now we have no visibility into river queue's job processing times and queue sizes. This makes it difficult to reliably know if notifications are actually being published in a reasonable time and current queue size. # How the Problems Are Solved Integrates River's OpenTelemetry middleware with Zitadel's metrics system by adding the otelriver middleware to the queue configuration. # Additional Changes - Updated dependencies to include required `otelriver` package # Additional Context Example output from `/debug/metrics` <details> <summary>output</summary> # HELP failed_deliveries_json_total Failed JSON message deliveries # TYPE failed_deliveries_json_total counter failed_deliveries_json_total{otel_scope_name="",otel_scope_version="",triggering_event_type="user.human.phone.code.added"} 2 # HELP go_gc_duration_seconds A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles. # TYPE go_gc_duration_seconds summary go_gc_duration_seconds{quantile="0"} 3.8e-05 go_gc_duration_seconds{quantile="0.25"} 6.3916e-05 go_gc_duration_seconds{quantile="0.5"} 7.5584e-05 go_gc_duration_seconds{quantile="0.75"} 9.2584e-05 go_gc_duration_seconds{quantile="1"} 0.000204292 go_gc_duration_seconds_sum 0.003028502 go_gc_duration_seconds_count 34 # HELP go_gc_gogc_percent Heap size target percentage configured by the user, otherwise 100. This value is set by the GOGC environment variable, and the runtime/debug.SetGCPercent function. Sourced from /gc/gogc:percent # TYPE go_gc_gogc_percent gauge go_gc_gogc_percent 100 # HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment variable, and the runtime/debug.SetMemoryLimit function. Sourced from /gc/gomemlimit:bytes # TYPE go_gc_gomemlimit_bytes gauge go_gc_gomemlimit_bytes 9.223372036854776e+18 # HELP go_goroutines Number of goroutines that currently exist. # TYPE go_goroutines gauge go_goroutines 231 # HELP go_info Information about the Go environment. # TYPE go_info gauge go_info{version="go1.24.3"} 1 # HELP go_memstats_alloc_bytes Number of bytes allocated in heap and currently in use. Equals to /memory/classes/heap/objects:bytes. # TYPE go_memstats_alloc_bytes gauge go_memstats_alloc_bytes 7.7565832e+07 # HELP go_memstats_alloc_bytes_total Total number of bytes allocated in heap until now, even if released already. Equals to /gc/heap/allocs:bytes. # TYPE go_memstats_alloc_bytes_total counter go_memstats_alloc_bytes_total 7.3319844e+08 # HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. Equals to /memory/classes/profiling/buckets:bytes. # TYPE go_memstats_buck_hash_sys_bytes gauge go_memstats_buck_hash_sys_bytes 1.63816e+06 # HELP go_memstats_frees_total Total number of heap objects frees. Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects. # TYPE go_memstats_frees_total counter go_memstats_frees_total 1.1496925e+07 # HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. Equals to /memory/classes/metadata/other:bytes. # TYPE go_memstats_gc_sys_bytes gauge go_memstats_gc_sys_bytes 5.182776e+06 # HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and currently in use, same as go_memstats_alloc_bytes. Equals to /memory/classes/heap/objects:bytes. # TYPE go_memstats_heap_alloc_bytes gauge go_memstats_heap_alloc_bytes 7.7565832e+07 # HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. 
Equals to /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes. # TYPE go_memstats_heap_idle_bytes gauge go_memstats_heap_idle_bytes 5.8179584e+07 # HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes # TYPE go_memstats_heap_inuse_bytes gauge go_memstats_heap_inuse_bytes 8.5868544e+07 # HELP go_memstats_heap_objects Number of currently allocated objects. Equals to /gc/heap/objects:objects. # TYPE go_memstats_heap_objects gauge go_memstats_heap_objects 573723 # HELP go_memstats_heap_released_bytes Number of heap bytes released to OS. Equals to /memory/classes/heap/released:bytes. # TYPE go_memstats_heap_released_bytes gauge go_memstats_heap_released_bytes 7.20896e+06 # HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes. # TYPE go_memstats_heap_sys_bytes gauge go_memstats_heap_sys_bytes 1.44048128e+08 # HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection. # TYPE go_memstats_last_gc_time_seconds gauge go_memstats_last_gc_time_seconds 1.749491558214289e+09 # HELP go_memstats_mallocs_total Total number of heap objects allocated, both live and gc-ed. Semantically a counter version for go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects. # TYPE go_memstats_mallocs_total counter go_memstats_mallocs_total 1.2070648e+07 # HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. Equals to /memory/classes/metadata/mcache/inuse:bytes. # TYPE go_memstats_mcache_inuse_bytes gauge go_memstats_mcache_inuse_bytes 16912 # HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes + /memory/classes/metadata/mcache/free:bytes. # TYPE go_memstats_mcache_sys_bytes gauge go_memstats_mcache_sys_bytes 31408 # HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. Equals to /memory/classes/metadata/mspan/inuse:bytes. # TYPE go_memstats_mspan_inuse_bytes gauge go_memstats_mspan_inuse_bytes 1.3496e+06 # HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes + /memory/classes/metadata/mspan/free:bytes. # TYPE go_memstats_mspan_sys_bytes gauge go_memstats_mspan_sys_bytes 2.18688e+06 # HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. Equals to /gc/heap/goal:bytes. # TYPE go_memstats_next_gc_bytes gauge go_memstats_next_gc_bytes 1.34730994e+08 # HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations. Equals to /memory/classes/other:bytes. # TYPE go_memstats_other_sys_bytes gauge go_memstats_other_sys_bytes 3.125168e+06 # HELP go_memstats_stack_inuse_bytes Number of bytes obtained from system for stack allocator in non-CGO environments. Equals to /memory/classes/heap/stacks:bytes. # TYPE go_memstats_stack_inuse_bytes gauge go_memstats_stack_inuse_bytes 2.752512e+06 # HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. Equals to /memory/classes/heap/stacks:bytes + /memory/classes/os-stacks:bytes. 
# TYPE go_memstats_stack_sys_bytes gauge go_memstats_stack_sys_bytes 2.752512e+06 # HELP go_memstats_sys_bytes Number of bytes obtained from system. Equals to /memory/classes/total:byte. # TYPE go_memstats_sys_bytes gauge go_memstats_sys_bytes 1.58965032e+08 # HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. Sourced from /sched/gomaxprocs:threads # TYPE go_sched_gomaxprocs_threads gauge go_sched_gomaxprocs_threads 14 # HELP go_threads Number of OS threads created. # TYPE go_threads gauge go_threads 25 # HELP grpc_server_grpc_status_code_total Grpc status code counter # TYPE grpc_server_grpc_status_code_total counter grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version="",return_code="200"} 1 grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version="",return_code="200"} 2 grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version="",return_code="200"} 1 grpc_server_grpc_status_code_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version="",return_code="200"} 1 # HELP grpc_server_request_counter_total Grpc request counter # TYPE grpc_server_request_counter_total counter grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version=""} 1 grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version=""} 2 grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version=""} 1 grpc_server_request_counter_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version=""} 1 # HELP grpc_server_total_request_counter_total Total grpc request counter # TYPE grpc_server_total_request_counter_total counter grpc_server_total_request_counter_total{otel_scope_name="",otel_scope_version=""} 5 # HELP otel_scope_info Instrumentation Scope metadata # TYPE otel_scope_info gauge otel_scope_info{otel_scope_name="",otel_scope_version=""} 1 otel_scope_info{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version=""} 1 # HELP projection_events_processed_total Number of events reduced to process projection updates # TYPE projection_events_processed_total counter projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",success="true"} 1 projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.instance_features2",success="true"} 0 projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.login_names3",success="true"} 0 projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.notifications",success="true"} 1 projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.orgs1",success="true"} 0 projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.user_metadata5",success="true"} 0 
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.users14",success="true"} 0 # HELP projection_handle_timer_seconds Time taken to process a projection update # TYPE projection_handle_timer_seconds histogram projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.005"} 0 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.01"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.05"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="120"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"} 1 projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 0.007344541 projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.005"} 0 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.01"} 0 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.05"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="120"} 1 projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"} 1 projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 
0.014258458 projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 1 # HELP projection_state_latency_seconds When finishing processing a batch of events, this track the age of the last events seen from current time # TYPE projection_state_latency_seconds histogram projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.5"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="300"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="600"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1800"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"} 1 projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 0.012979 projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.5"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="300"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="600"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1800"} 1 projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"} 1 
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 0.0199 projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 1 # HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served. # TYPE promhttp_metric_handler_requests_in_flight gauge promhttp_metric_handler_requests_in_flight 1 # HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code. # TYPE promhttp_metric_handler_requests_total counter promhttp_metric_handler_requests_total{code="200"} 1 promhttp_metric_handler_requests_total{code="500"} 0 promhttp_metric_handler_requests_total{code="503"} 0 # HELP river_insert_count_total Number of jobs inserted # TYPE river_insert_count_total counter river_insert_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1 # HELP river_insert_many_count_total Number of job batches inserted (all jobs are inserted in a batch, but batches may be one job) # TYPE river_insert_many_count_total counter river_insert_many_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1 # HELP river_insert_many_duration_histogram_seconds Duration of job batch insertion (histogram) # TYPE river_insert_many_duration_histogram_seconds histogram river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="0"} 0 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="25"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="50"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="75"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="100"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="250"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="500"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="750"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="1000"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="2500"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5000"} 1 
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="7500"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10000"} 1 river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="+Inf"} 1 river_insert_many_duration_histogram_seconds_sum{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 0.002905666 river_insert_many_duration_histogram_seconds_count{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1 # HELP river_insert_many_duration_seconds Duration of job batch insertion # TYPE river_insert_many_duration_seconds gauge river_insert_many_duration_seconds{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 0.002905666 # HELP river_work_count_total Number of jobs worked # TYPE river_work_count_total counter river_work_count_total{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1 river_work_count_total{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1 # HELP river_work_duration_histogram_seconds Duration of job being worked (histogram) # TYPE river_work_duration_histogram_seconds histogram river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"} 0 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"} 1 
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"} 1 river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"} 1 river_work_duration_histogram_seconds_sum{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.029241083 river_work_duration_histogram_seconds_count{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"} 0 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"} 1 
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"} 1 river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"} 1 river_work_duration_histogram_seconds_sum{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.0408745 
river_work_duration_histogram_seconds_count{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1 # HELP river_work_duration_seconds Duration of job being worked # TYPE river_work_duration_seconds gauge river_work_duration_seconds{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.029241083 river_work_duration_seconds{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.0408745 # HELP target_info Target metadata # TYPE target_info gauge target_info{service_name="ZITADEL",service_version="2025-06-09T13:52:29-04:00",telemetry_sdk_language="go",telemetry_sdk_name="opentelemetry",telemetry_sdk_version="1.35.0"} 1 </details> Example grafana dashboard:  - Closes #10043 --------- Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com> |
||
![]() |
2488875364 |
feat: Display Authentication Method Name on Application Page (#9639)
# Which Problems Are Solved
The Authentication Method name is currently not displayed on the
Application Page; this screenshot is taken from the linked issue:
<img width="946" alt="417991175-a6c8497f-9c4f-4042-8ffa-c5f995ab5039"
src="https://github.com/user-attachments/assets/aea0e956-27e3-4e32-bcb1-0a7456480084"
/>
I can also add other fields if necessary, but the layout may need to be
redesigned to keep a good-looking UI, since there are already a lot of
fields.
# How the Problems Are Solved
Display the Authentication Method name between the `Status` and the `ID`
fields from either the `oidcConfig` or the `apiConfig` objects.
Here are some screenshots of the result:
None:

Private JWT:

Post:

API Basic:

# Additional Changes
None
# Additional Context
- Closes #9435
---------
Co-authored-by: Ramon <mail@conblem.me>
(cherry picked from commit
|
||
![]() |
6ed9f3da3b |
fix(FE): allow only enabled factors to be displayed on user page (#9313)
# Which Problems Are Solved
- Hides MFA options that are not allowed by the org policy from users.
- Fix for "ng test" across "console"
# How the Problems Are Solved
- Before displaying MFA options, we call "listMyMultiFactors" from the parent
component to filter the MFA options allowed by the org
# Additional Changes
- Dependency Injection was fixed around ng unit tests
# Additional Context
admin view
<img width="698" alt="Screenshot 2025-02-06 at 00 26 50"
src="https://github.com/user-attachments/assets/1b642c8a-a640-4bdd-a1ca-bde70c263567"
/>
user view
<img width="751" alt="Screenshot 2025-02-06 at 00 27 16"
src="https://github.com/user-attachments/assets/e1c99907-3226-46ce-b8bc-e993af4b4cae"
/>
test
<img width="1500" alt="Screenshot 2025-02-06 at 00 01 36"
src="https://github.com/user-attachments/assets/d2d8ead1-9f0f-4916-a2fc-f4db9c71cfa8"
/>
The issue: https://github.com/zitadel/zitadel/issues/9176
The bug report:
https://discord.com/channels/927474939156643850/1307006457815896094
---------
Co-authored-by: a k <rdyto1@macbook-pro-1.home>
Co-authored-by: a k <rdyto1@macbook-pro.home>
Co-authored-by: a k <rdyto1@macbook-pro-2.home>
Co-authored-by: Ramon <mail@conblem.me>
(cherry picked from commit
|