Mirror of https://github.com/zitadel/zitadel.git
Synced 2025-12-06 02:12:06 +00:00
Commit b892fc9b28751087898a97b4b11caf6879bb7087
2129 commits

f0fa89747d
fix: actions v2beta with api design for GA (#10303)
# Which Problems Are Solved
The Actions v2beta API does not fully adhere to the [API design](https://github.com/zitadel/zitadel/blob/main/API_DESIGN.md).
# How the Problems Are Solved
- Correct body usage for ListExecutions
- Correct REST path for ListTargets and ListExecutions
- Correct attribute names for ListTargetsResponse and ListExecutionsResponse
# Additional Changes
- Remove unused object import.
# Additional Context
Closes #10138
---------
Co-authored-by: Marco A. <marco@zitadel.com>

fe3ccc85d6
fix: invite code generation after multiple verification failures (#10323)
# Which Problems Are Solved
If a wrong verification code is used three or more times during verification, or if the verification code is expired, the user state is marked as [deleted](https://github.com/zitadel/zitadel/blob/main/internal/command/user_v2_invite_model.go#L69). This prevents the creation of a new code with the following [error](https://github.com/zitadel/zitadel/blob/main/internal/command/user_v2_invite.go#L60): `Errors.User.NotFound`. This PR fixes this bug.
# How the Problems Are Solved
The issue is solved by invalidating the previously issued invite code and setting `UserV2InviteWriteModel.CodeReturned` to `false`.
# Additional Changes
N/A
# Additional Context
- Closes #9860
- Follow-up: API doc update

8fff45d8f4
fix(scim): add a metadata config to ignore random password sent during SCIM create (#10296)
# Which Problems Are Solved
Okta sends a random password in the request to create a user during SCIM provisioning, irrespective of whether the `Sync Password` option is enabled or disabled on Okta, and this password does not comply with the default password complexity set in Zitadel. This PR adds a workaround to create users without issues in such cases.
# How the Problems Are Solved
- A new metadata configuration called `urn:zitadel:scim:ignorePasswordOnCreate` is added to the machine user that is used for provisioning
- During SCIM user creation requests, if `urn:zitadel:scim:ignorePasswordOnCreate` is set to `true` in the machine user's metadata, the password set in the create request is ignored
# Additional Changes
# Additional Context
The random password is ignored (if set in the metadata) only during user creation. This change does not affect SCIM password updates.
- Closes #10009
---------
Co-authored-by: Marco A. <marco@zitadel.com>

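A minimal sketch of the described behavior; the constant value comes from the PR, while the function and parameter names are illustrative, not Zitadel's actual internals:

```go
package scim

// Key checked on the provisioning machine user's metadata (from the PR).
const ignorePasswordOnCreateKey = "urn:zitadel:scim:ignorePasswordOnCreate"

// passwordForCreate returns the password to apply for a SCIM create request.
// If the provisioning user's metadata sets the ignore flag, the (possibly
// random, non-compliant) password sent by the IdP, e.g. Okta, is dropped.
func passwordForCreate(machineUserMetadata map[string]string, requestedPassword string) string {
	if machineUserMetadata[ignorePasswordOnCreateKey] == "true" {
		return ""
	}
	return requestedPassword
}
```
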
25adfd91a2
feat: add Turkish language support (#10198)
- Turkish language support is added.
- Updated other language files to add the Turkish selection.
# Which Problems Are Solved
- Zitadel did not support the Turkish language; it now does.
# How the Problems Are Solved
- Turkish language files are added, and other language files are updated in the paths below to add Turkish support:
  - /console/src/assets/i18n/
  - /internal/api/ui/login/static/i18n
  - /internal/notification/static/i18n
  - /internal/static/i18n
# Additional Changes
- Changed the following files for code/docs updates:
  - /console/src/app/utils/language.ts
  - /console/src/app/app.module.ts
  - /docs/docs/guides/manage/customize/texts.md
  - /internal/api/ui/login/static/templates/external_not_found_option.html
  - /internal/query/v2-default.json
  - /login/apps/login/src/lib/i18n.ts
---------
Co-authored-by: Marco A. <marco@zitadel.com>

870fefe3dc
fix(org): adding unique constraints to not allow an org to be added twice with the same id (#10243)
# Which Problems Are Solved
When adding two orgs with the same ID, you get a positive response from the API; later, when the org is projected, it errors because the ID is already in use.
# How the Problems Are Solved
Check that no org with the specified orgID already exists before adding events.
# Additional Changes
Added an additional test case for adding the same org with the same name twice.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10127
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>

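A minimal sketch of the existence check the PR describes, with lookup and push abstracted as functions since the real command/write-model types are not shown here:

```go
package orgcheck

import (
	"context"
	"errors"
)

// OrgState mirrors the idea of an org lifecycle state in a write model.
type OrgState int

const (
	OrgStateUnspecified OrgState = iota
	OrgStateActive
)

var ErrOrgAlreadyExists = errors.New("org with this ID already exists")

// AddOrg checks the write model for the requested ID first and only pushes
// the creation events when nothing exists yet, so the API fails fast instead
// of erroring later in the projection.
func AddOrg(
	ctx context.Context,
	lookup func(ctx context.Context, orgID string) (OrgState, error),
	push func(ctx context.Context, orgID, name string) error,
	orgID, name string,
) error {
	state, err := lookup(ctx, orgID)
	if err != nil {
		return err
	}
	if state != OrgStateUnspecified {
		return ErrOrgAlreadyExists
	}
	return push(ctx, orgID, name)
}
```
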
6d11145c77
fix(saml): Push AuthenticationSucceededOnApplication milestone for SAML sessions (#10263)
# Which Problems Are Solved
The SAML session (v2 login) currently does not push an `AuthenticationSucceededOnApplication` milestone upon the first successful SAML login. The changes in this PR address this issue.
# How the Problems Are Solved
Add a new function to set the appropriate milestone, and call this function after a successful SAML request.
# Additional Changes
N/A
# Additional Context
- Closes #9592
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>

40094bee87
fix: permission checks on session API
# Which Problems Are Solved
The session API allowed any authenticated user to update sessions by their ID without any further check. This was unintentionally introduced with version 2.53.0, when the requirement of providing the latest session token on every session update was removed and no other permission check (e.g. session.write) was ensured.
# How the Problems Are Solved
- Granted `session.write` to `IAM_OWNER` and `IAM_LOGIN_CLIENT` in the defaults.yaml
- Granted `session.read` to `IAM_ORG_MANAGER`, `IAM_USER_MANAGER` and `ORG_OWNER` in the defaults.yaml
- Pass the session token to the UpdateSession command.
- Check for `session.write` permission on session creation and update. Alternatively, the (latest) sessionToken can be used to update the session.
- Setting an auth request to failed on the OIDC Service `CreateCallback` endpoint now ensures it's either the same user as used to create the auth request (for backwards compatibility) or requires `session.link` permission.
- Setting a device auth request to failed on the OIDC Service `AuthorizeOrDenyDeviceAuthorization` endpoint now requires `session.link` permission.
- Setting an auth request to failed on the SAML Service `CreateResponse` endpoint now requires `session.link` permission.
# Additional Changes
none
# Additional Context
none
(cherry picked from commit

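A minimal sketch of the fixed rule (token or permission, not neither); names and the permission-check callback are illustrative:

```go
package session

import (
	"context"
	"crypto/subtle"
	"errors"
)

var ErrPermissionDenied = errors.New("missing session.write permission")

// CheckSessionUpdateAllowed allows an update if the caller either presents
// the latest session token or holds the session.write permission (granted to
// IAM_OWNER and IAM_LOGIN_CLIENT in the defaults.yaml).
func CheckSessionUpdateAllowed(
	ctx context.Context,
	providedToken, latestToken string,
	hasPermission func(ctx context.Context, permission string) bool,
) error {
	if providedToken != "" &&
		subtle.ConstantTimeCompare([]byte(providedToken), []byte(latestToken)) == 1 {
		return nil // the latest session token is still a valid proof
	}
	if hasPermission(ctx, "session.write") {
		return nil
	}
	return ErrPermissionDenied
}
```
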
c4e0342c5f
chore(tests): fix tests (#10267)
# Which Problems Are Solved
The latest merge on main corrupted some unit tests.
# How the Problems Are Solved
Fix them as intended in the PR.
# Additional Changes
None
# Additional Context
relates to

b76d8d37cb
fix: permission checks on session API
# Which Problems Are Solved
The session API allowed any authenticated user to update sessions by their ID without any further check. This was unintentionally introduced with version 2.53.0, when the requirement of providing the latest session token on every session update was removed and no other permission check (e.g. session.write) was ensured.
# How the Problems Are Solved
- Granted `session.write` to `IAM_OWNER` and `IAM_LOGIN_CLIENT` in the defaults.yaml
- Granted `session.read` to `IAM_ORG_MANAGER`, `IAM_USER_MANAGER` and `ORG_OWNER` in the defaults.yaml
- Pass the session token to the UpdateSession command.
- Check for `session.write` permission on session creation and update. Alternatively, the (latest) sessionToken can be used to update the session.
- Setting an auth request to failed on the OIDC Service `CreateCallback` endpoint now ensures it's either the same user as used to create the auth request (for backwards compatibility) or requires `session.link` permission.
- Setting a device auth request to failed on the OIDC Service `AuthorizeOrDenyDeviceAuthorization` endpoint now requires `session.link` permission.
- Setting an auth request to failed on the SAML Service `CreateResponse` endpoint now requires `session.link` permission.
# Additional Changes
none
# Additional Context
none
(cherry picked from commit

4c942f3477
Merge commit from fork
* fix: require permission to create and update session
* fix: require permission to fail auth requests
* merge main and fix integration tests
* fix merge
* fix integration tests
* fix integration tests
* fix saml permission check

d5d6d37a25
test(org): enhancing test for creating org with custom id (#10247)
# Which Problems Are Solved
Enhances the integration test for creating an org: currently the test does not check whether the created org has the assigned custom ID; this change resolves that.

79fcc2f2b6
chore(tests): name integration test packages correctly to let them run (#10242)
# Which Problems Are Solved
After changing some internal logic that should have failed the integration tests (but didn't), I noticed that some integration tests were never executed. The make command lists all `integration_test` packages, but some are named `integration`.
# How the Problems Are Solved
Correct the wrong integration test package names.
# Additional Changes
None
# Additional Context
- noticed internally
- backport to 3.x and 2.x

23d6d24bc8
fix(login): changed permission check for sending invite code on log in (#10197)
# Which Problems Are Solved
Fixes an issue where users would get an error message when attempting to resend the invitation code while logging in.
# How the Problems Are Solved
Changing the permission check from looking for `org.write` to `command.checkPermissionUpdateUser()`.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10100
- backport to 3.x

c787cdf7b4
fix(login v1): correctly auto-link users on organizations with suffixed usernames (#10205)
(cherry picked from commit

1b01fc6c40
fix(api): CORS for connectRPC and grpc-web (#10227)
# Which Problems Are Solved
The CORS handler for the new connectRPC handlers was missing, leading to unhandled preflight requests and an unusable API for browser-based calls, e.g. cross-domain gRPC-web requests.
# How the Problems Are Solved
- Added the http CORS middleware to the connectRPC handlers.
- Added `Grpc-Timeout`, `Connect-Protocol-Version`, `Connect-Timeout-Ms` to the default allowed headers (this also improves the old grpc-web handling)
- Added `Grpc-Status`, `Grpc-Message`, `Grpc-Status-Details-Bin` to the default exposed headers (this also improves the old grpc-web handling)
# Additional Changes
None
# Additional Context
noticed internally while testing other issues

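Zitadel ships its own HTTP middleware; as an illustration only, the same header setup can be sketched with the github.com/rs/cors package (the origin list is a placeholder):

```go
package main

import (
	"net/http"

	"github.com/rs/cors"
)

// corsWrappedHandler wraps a connectRPC handler with a CORS policy that
// allows the connect/gRPC-web request headers and exposes the gRPC status
// headers, mirroring the defaults described in the PR.
func corsWrappedHandler(connectHandler http.Handler) http.Handler {
	c := cors.New(cors.Options{
		AllowedOrigins: []string{"https://example.com"}, // placeholder origins
		AllowedMethods: []string{http.MethodGet, http.MethodPost, http.MethodOptions},
		AllowedHeaders: []string{"Authorization", "Content-Type",
			"Grpc-Timeout", "Connect-Protocol-Version", "Connect-Timeout-Ms"},
		ExposedHeaders: []string{"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin"},
	})
	return c.Handler(connectHandler)
}
```
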
8f61b24532
fix(login v1): correctly auto-link users on organizations with suffixed usernames (#10205)

42663a29bd
perf: improve org and org domain creation (#10232)
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new organization (incl. the generated domain), existing usernames are checked to determine whether the domain has been claimed. The query was not optimized for instances with many users and organizations.
# How the Problems Are Solved
- Replace the query, which searched over the users projection (with computed loginnames), with a dedicated query checking the loginnames projection directly.
- All occurrences have been updated to use the new query.
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit

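A sketch of the idea of querying the loginnames projection directly; the table name appears elsewhere in this log (`projections.login_names3`), but the column names and the exact predicate are assumptions for illustration:

```go
package query

import (
	"context"
	"database/sql"
)

// Assumed schema: the login_names projection stores one computed login name
// per row, so claimed-domain users can be found with a single suffix match.
const domainClaimedUsersQuery = `
SELECT user_id
FROM projections.login_names3
WHERE instance_id = $1
  AND login_name LIKE '%@' || $2`

// domainClaimedUsers returns the IDs of users whose login name ends in the
// claimed domain, without recomputing login names over the users projection.
func domainClaimedUsers(ctx context.Context, db *sql.DB, instanceID, domain string) ([]string, error) {
	rows, err := db.QueryContext(ctx, domainClaimedUsersQuery, instanceID, domain)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var userIDs []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		userIDs = append(userIDs, id)
	}
	return userIDs, rows.Err()
}
```
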
d537e86345
fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved
When authenticating with email or phone number in the login V1, users were not able to request a password reset and would be given a "User not found" error. This was due to a check of the loginname of the auth request, which in those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still check the user again, since the ID might be a placeholder for an unknown user and we do not want to disclose any information by omitting a check and reducing the response time.)
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit

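A minimal sketch of the switched check, assuming a resolved user ID is available on the auth request (names are illustrative):

```go
package login

import "errors"

var ErrUserNotFound = errors.New("Errors.User.NotFound")

// checkRequestedUser compares the resolved user ID from the auth request
// instead of the typed login name, which may be an email or phone number and
// therefore not match the stored login name. The ID may still be a
// placeholder for an unknown user, so the check is kept to avoid disclosing
// whether a user exists.
func checkRequestedUser(authReqUserID, resolvedUserID string) error {
	if authReqUserID == "" || authReqUserID != resolvedUserID {
		return ErrUserNotFound
	}
	return nil
}
```
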
b638ed528d
fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved
Customers reported that if the session / access token in Console expired and they re-authenticated, the user list would be empty. While reproducing the issue, we discovered that the necessary organization information would be missing in the access token, since it was already missing in the OIDC session creation when using an id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request. This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x
(cherry picked from commit

1d409f7959
fix(webauthn): allow to use "old" passkeys/u2f credentials on session API (#10150)
# Which Problems Are Solved
To prevent presenting unusable WebAuthN credentials to the user / browser, we filtered out all credentials which do not match the requested RP ID. Since credentials set up through Login V1 and Console do not have an RP ID stored, they never matched. This was previously intended, since the Login V2 could be served on a separate domain. The problem is that if it is hosted on the same domain, those credentials would also be filtered out and users would not be able to log in.
# How the Problems Are Solved
Change the filtering to return credentials if no RP ID is stored and the requested RP ID matches the instance domain.
# Additional Changes
None
# Additional Context
Noted internally when testing the login v2
(cherry picked from commit

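A minimal sketch of the relaxed filter, with a stand-in credential type since the real model is not shown here:

```go
package webauthn

// Credential is a minimal stand-in for a stored WebAuthn credential.
type Credential struct {
	ID   []byte
	RPID string // empty for credentials created through Login V1 / Console
}

// filterCredentials keeps credentials whose stored RP ID matches the request
// and additionally keeps "legacy" credentials without an RP ID when the
// requested RP ID equals the instance domain.
func filterCredentials(creds []Credential, requestedRPID, instanceDomain string) []Credential {
	matching := make([]Credential, 0, len(creds))
	for _, c := range creds {
		if c.RPID == requestedRPID ||
			(c.RPID == "" && requestedRPID == instanceDomain) {
			matching = append(matching, c)
		}
	}
	return matching
}
```
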
c2c49679cb
fix(scim): add type attribute to ScimEmail (#9690)
# Which Problems Are Solved
- SCIM PATCH operations for users from Entra ID for the `emails` attribute fail due to the missing `type` subattribute
# How the Problems Are Solved
- Adds the `type` attribute to the `ScimUser` struct and sets the default value to `"work"` in the `mapWriteModelToScimUser()` method.
# Additional Changes
# Additional Context
The SCIM handlers for POST and PUT ignore multiple emails and only use the primary email for a given user, falling back to the first email if none are marked as primary. PATCH operations, however, will attempt to resolve the provided filter in `operations[].path`. Some services, such as Entra ID, only support patching emails by filtering for `emails[type eq "(work|home|other)"].value`, which fails with Zitadel as the ScimUser struct (and thus the generated schema) doesn't include the `type` field. This commit adds the `type` field to work around this issue, while still preserving compatibility with filters such as `emails[primary eq true].value`.
- https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191
---------
Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no>
Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no>
(cherry picked from commit

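A sketch of the extended attribute; field names follow SCIM conventions and the exact Zitadel struct may differ:

```go
package scim

// ScimEmail models one entry of the SCIM "emails" attribute. The added Type
// subattribute lets PATCH filters such as emails[type eq "work"].value
// resolve against the generated schema.
type ScimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type"`
}

// mapEmail applies the default described in the PR: emails written back from
// the user model are typed as "work".
func mapEmail(address string, primary bool) ScimEmail {
	return ScimEmail{Value: address, Primary: primary, Type: "work"}
}
```
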
056399bdb4
fix: enable opentelemetry metrics for river queue (#10044)
# Which Problems Are Solved
Right now we have no visibility into the river queue's job processing times and queue sizes. This makes it difficult to reliably know whether notifications are actually being published in a reasonable time, or how large the queue currently is.
# How the Problems Are Solved
Integrates River's OpenTelemetry middleware with Zitadel's metrics system by adding the otelriver middleware to the queue configuration.
# Additional Changes
- Updated dependencies to include the required `otelriver` package
# Additional Context
Example output from `/debug/metrics`:
<details>
<summary>output</summary>
# HELP failed_deliveries_json_total Failed JSON message deliveries
# TYPE failed_deliveries_json_total counter
failed_deliveries_json_total{otel_scope_name="",otel_scope_version="",triggering_event_type="user.human.phone.code.added"} 2
# HELP go_gc_duration_seconds A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8e-05
go_gc_duration_seconds{quantile="0.25"} 6.3916e-05
go_gc_duration_seconds{quantile="0.5"} 7.5584e-05
go_gc_duration_seconds{quantile="0.75"} 9.2584e-05
go_gc_duration_seconds{quantile="1"} 0.000204292
go_gc_duration_seconds_sum 0.003028502
go_gc_duration_seconds_count 34
# HELP go_gc_gogc_percent Heap size target percentage configured by the user, otherwise 100. This value is set by the GOGC environment variable, and the runtime/debug.SetGCPercent function. Sourced from /gc/gogc:percent
# TYPE go_gc_gogc_percent gauge
go_gc_gogc_percent 100
# HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT environment variable, and the runtime/debug.SetMemoryLimit function. Sourced from /gc/gomemlimit:bytes
# TYPE go_gc_gomemlimit_bytes gauge
go_gc_gomemlimit_bytes 9.223372036854776e+18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 231
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.24.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 7.7565832e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated in heap until now, even if released already. Equals to /gc/heap/allocs:bytes.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 7.3319844e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table. Equals to /memory/classes/profiling/buckets:bytes.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.63816e+06
# HELP go_memstats_frees_total Total number of heap objects frees. Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.1496925e+07
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata. Equals to /memory/classes/metadata/other:bytes.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 5.182776e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and currently in use, same as go_memstats_alloc_bytes. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 7.7565832e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used. Equals to /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.8179584e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 8.5868544e+07
# HELP go_memstats_heap_objects Number of currently allocated objects. Equals to /gc/heap/objects:objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 573723
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS. Equals to /memory/classes/heap/released:bytes.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 7.20896e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system. Equals to /memory/classes/heap/objects:bytes + /memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes + /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.44048128e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.749491558214289e+09
# HELP go_memstats_mallocs_total Total number of heap objects allocated, both live and gc-ed. Semantically a counter version for go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.2070648e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 16912
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system. Equals to /memory/classes/metadata/mcache/inuse:bytes + /memory/classes/metadata/mcache/free:bytes.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 31408
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 1.3496e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system. Equals to /memory/classes/metadata/mspan/inuse:bytes + /memory/classes/metadata/mspan/free:bytes.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 2.18688e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.34730994e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations. Equals to /memory/classes/other:bytes.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 3.125168e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes obtained from system for stack allocator in non-CGO environments. Equals to /memory/classes/heap/stacks:bytes.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.752512e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator. Equals to /memory/classes/heap/stacks:bytes + /memory/classes/os-stacks:bytes.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.752512e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system. Equals to /memory/classes/total:byte.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.58965032e+08
# HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS setting, or the number of operating system threads that can execute user-level Go code simultaneously. Sourced from /sched/gomaxprocs:threads
# TYPE go_sched_gomaxprocs_threads gauge
go_sched_gomaxprocs_threads 14
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 25
# HELP grpc_server_grpc_status_code_total Grpc status code counter
# TYPE grpc_server_grpc_status_code_total counter
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version="",return_code="200"} 1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version="",return_code="200"} 2
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version="",return_code="200"} 1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version="",return_code="200"} 1
# HELP grpc_server_request_counter_total Grpc request counter
# TYPE grpc_server_request_counter_total counter
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version=""} 1
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version=""} 2
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version=""} 1
grpc_server_request_counter_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version=""} 1
# HELP grpc_server_total_request_counter_total Total grpc request counter
# TYPE grpc_server_total_request_counter_total counter
grpc_server_total_request_counter_total{otel_scope_name="",otel_scope_version=""} 5
# HELP otel_scope_info Instrumentation Scope metadata
# TYPE otel_scope_info gauge
otel_scope_info{otel_scope_name="",otel_scope_version=""} 1
otel_scope_info{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version=""} 1
# HELP projection_events_processed_total Number of events reduced to process projection updates
# TYPE projection_events_processed_total counter
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",success="true"} 1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.instance_features2",success="true"} 0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.login_names3",success="true"} 0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.notifications",success="true"} 1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.orgs1",success="true"} 0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.user_metadata5",success="true"} 0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.users14",success="true"} 0
# HELP projection_handle_timer_seconds Time taken to process a projection update
# TYPE projection_handle_timer_seconds histogram
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.005"} 0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.01"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.05"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="120"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"} 1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 0.007344541
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.005"} 0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.01"} 0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.05"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="120"} 1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"} 1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 0.014258458
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 1
# HELP projection_state_latency_seconds When finishing processing a batch of events, this track the age of the last events seen from current time
# TYPE projection_state_latency_seconds histogram
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.5"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="300"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="600"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1800"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"} 1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 0.012979
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.5"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="300"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="600"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1800"} 1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"} 1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 0.0199
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"} 1
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP river_insert_count_total Number of jobs inserted
# TYPE river_insert_count_total counter
river_insert_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1
# HELP river_insert_many_count_total Number of job batches inserted (all jobs are inserted in a batch, but batches may be one job)
# TYPE river_insert_many_count_total counter
river_insert_many_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1
# HELP river_insert_many_duration_histogram_seconds Duration of job batch insertion (histogram)
# TYPE river_insert_many_duration_histogram_seconds histogram
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="0"} 0
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="25"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="50"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="75"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="100"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="250"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="500"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="750"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="1000"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="2500"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5000"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="7500"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10000"} 1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="+Inf"} 1
river_insert_many_duration_histogram_seconds_sum{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 0.002905666
river_insert_many_duration_histogram_seconds_count{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1
# HELP river_insert_many_duration_seconds Duration of job batch insertion
# TYPE river_insert_many_duration_seconds gauge
river_insert_many_duration_seconds{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 0.002905666
# HELP river_work_count_total Number of jobs worked
# TYPE river_work_count_total counter
river_work_count_total{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1
river_work_count_total{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1
# HELP river_work_duration_histogram_seconds Duration of job being worked (histogram)
# TYPE river_work_duration_histogram_seconds histogram
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"} 0
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"} 1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"} 1
river_work_duration_histogram_seconds_sum{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.029241083
river_work_duration_histogram_seconds_count{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"} 0
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"} 1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"} 1
river_work_duration_histogram_seconds_sum{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.0408745
river_work_duration_histogram_seconds_count{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1
# HELP river_work_duration_seconds Duration of job being worked
# TYPE river_work_duration_seconds gauge
river_work_duration_seconds{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.029241083
river_work_duration_seconds{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 0.0408745
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="ZITADEL",service_version="2025-06-09T13:52:29-04:00",telemetry_sdk_language="go",telemetry_sdk_name="opentelemetry",telemetry_sdk_version="1.35.0"} 1
</details>
Example Grafana dashboard:

- Closes #10043
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit

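A sketch of the wiring, assuming river's `Config.Middleware` field and `otelriver.NewMiddleware` as documented in the river and rivercontrib READMEs; queue names and worker setup are placeholders:

```go
package queue

import (
	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/riverdriver/riverpgxv5"
	"github.com/riverqueue/river/rivertype"
	"github.com/riverqueue/rivercontrib/otelriver"
)

// NewInstrumentedClient registers otelriver's middleware on the river client
// so job insert/work counts and durations are emitted via OpenTelemetry and
// show up on the metrics endpoint (see the example /debug/metrics output).
func NewInstrumentedClient(pool *pgxpool.Pool, workers *river.Workers) (*river.Client[pgx.Tx], error) {
	return river.NewClient(riverpgxv5.New(pool), &river.Config{
		Queues: map[string]river.QueueConfig{
			"notification": {MaxWorkers: 10}, // placeholder queue sizing
		},
		Workers: workers,
		// An empty MiddlewareConfig falls back to the global meter provider.
		Middleware: []rivertype.Middleware{
			otelriver.NewMiddleware(&otelriver.MiddlewareConfig{}),
		},
	})
}
```
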
fefeaea56a
perf: improve org and org domain creation (#10232)
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new organization (incl. the generated domain), existing usernames are checked to determine whether the domain has been claimed. The query was not optimized for instances with many users and organizations.
# How the Problems Are Solved
- Replace the query, which searched over the users projection (with computed loginnames), with a dedicated query checking the loginnames projection directly.
- All occurrences have been updated to use the new query.
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x

ffe6d41588
fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved
When authenticating with email or phone number in the login V1, users were not able to request a password reset and would be given a "User not found" error. This was due to a check of the loginname of the auth request, which in those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still check the user again, since the ID might be a placeholder for an unknown user and we do not want to disclose any information by omitting a check and reducing the response time.)
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x

2821f41c3a
fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved
Customers reported that if the session / access token in Console expired and they re-authenticated, the user list would be empty. While reproducing the issue, we discovered that the necessary organization information would be missing in the access token, since it was already missing in the OIDC session creation when using an id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request. This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x

0ceec60637
fix: sorting options of the ListInstanceTrustedDomains() gRPC endpoint (#10172)
# Which Problems Are Solved
1. The sorting columns in the gRPC endpoint `ListInstanceTrustedDomains()` are incorrect and return the following error when invalid sorting options are chosen:
```
Unknown (2) ERROR: missing FROM-clause entry for table "instance_domains" (SQLSTATE 42P01)
```
The sorting columns that are valid to list `instance_trusted_domains` are
* `trusted_domain_field_name_unspecified`
* `trusted_domain_field_name_domain`
* `trusted_domain_field_name_creation_date`
However, the currently configured sorting columns are
* `domain_field_name_unspecified`
* `domain_field_name_domain`
* `domain_field_name_primary`
* `domain_field_name_generated`
* `domain_field_name_creation_date`
Configuring the actual columns of `instance_trusted_domains` would make this endpoint **backward incompatible**. Therefore, the fix in this PR is to no longer return an error when an invalid sorting column (non-existing column) is chosen and to sort the results by `creation_date` for invalid sorting columns.
2. This PR also fixes the `sorting_column` included in the responses of both the `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints, as they currently point to the default option irrespective of the option chosen in the request, i.e.
* `TRUSTED_DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceTrustedDomains()`, and
* `DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceDomains()`
# How the Problems Are Solved
* Map the sorting columns to valid columns of `instance_trusted_domain`; if the sorting column is not one of them, the mapping defaults to `creation_date`
* Set the `sorting_column` explicitly (from the request) in the `ListInstanceDomainsResponse` and `ListInstanceTrustedDomainsResponse`
# Additional Changes
A small fix to return the chosen `sorting_column` in the responses of the `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints
# Additional Context
- Closes #9839

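A minimal sketch of the backward-compatible mapping; the enum values and column names are illustrative stand-ins for the generated proto enums:

```go
package query

// TrustedDomainField is a stand-in for the proto sorting-field enum.
type TrustedDomainField int

const (
	TrustedDomainFieldUnspecified TrustedDomainField = iota
	TrustedDomainFieldDomain
	TrustedDomainFieldCreationDate
)

// trustedDomainColumn maps a requested sorting field onto an actual column
// of the trusted-domains projection. Anything unknown or unspecified falls
// back to creation_date instead of producing a SQL error.
func trustedDomainColumn(field TrustedDomainField) string {
	switch field {
	case TrustedDomainFieldDomain:
		return "domain"
	case TrustedDomainFieldCreationDate:
		return "creation_date"
	default:
		return "creation_date"
	}
}
```
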
c8d37ac5af
Merge remote-tracking branch 'origin/main' into next-rc

5403be7c4b
feat: user profile requests in resource APIs (#10151)
# Which Problems Are Solved
The commands for the resource-based v2beta AuthorizationService API are added. Authorizations, previously known as user grants, give a user roles in a specific organization and project context. The project can be owned or granted. The given roles can be used to restrict access within the project's applications.
The commands for the resource-based v2beta InternalPermissionService API are added. Administrators, previously known as memberships, give a user roles in a specific organization and project context. The project can be owned or granted. The given roles give the user permissions to manage different resources in Zitadel.
API definitions from https://github.com/zitadel/zitadel/issues/9165 are implemented.
Contains endpoints for user metadata.
# How the Problems Are Solved
### New Methods
- CreateAuthorization
- UpdateAuthorization
- DeleteAuthorization
- ActivateAuthorization
- DeactivateAuthorization
- ListAuthorizations
- CreateAdministrator
- UpdateAdministrator
- DeleteAdministrator
- ListAdministrators
- SetUserMetadata to set metadata on a user
- DeleteUserMetadata to delete metadata on a user
- ListUserMetadata to query for metadata of a user
## Deprecated Methods
### v1.ManagementService
- GetUserGrantByID
- ListUserGrants
- AddUserGrant
- UpdateUserGrant
- DeactivateUserGrant
- ReactivateUserGrant
- RemoveUserGrant
- BulkRemoveUserGrant
### v1.AuthService
- ListMyUserGrants
- ListMyProjectPermissions
# Additional Changes
- Permission checks for metadata functionality on query and command side
- Correct existence checks for resources; for example, you can only be an administrator on an existing project
- Combined all member tables into a single query for the administrators
- Added permission checks for command- and query-side functionality
- Combined functions on the command side where necessary for easier maintainability
# Additional Context
Closes #9165
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>

1d13911a4a
Merge branch 'main' into next-rc
# Conflicts:
# cmd/defaults.yaml
# cmd/setup/config.go
# cmd/setup/setup.go
# cmd/start/start.go
# docs/yarn.lock
# go.mod
# go.sum
# internal/api/grpc/action/v2beta/execution.go
# internal/api/grpc/action/v2beta/query.go
# internal/api/grpc/action/v2beta/server.go
# internal/api/grpc/action/v2beta/target.go
# internal/api/grpc/feature/v2/converter.go
# internal/api/grpc/feature/v2/converter_test.go
# internal/api/grpc/feature/v2/integration_test/feature_test.go
# internal/api/grpc/feature/v2beta/converter.go
# internal/api/grpc/feature/v2beta/converter_test.go
# internal/api/grpc/feature/v2beta/integration_test/feature_test.go
# internal/api/oidc/key.go
# internal/api/oidc/op.go
# internal/command/idp_intent_test.go
# internal/command/instance_features.go
# internal/command/instance_features_test.go
# internal/command/system_features.go
# internal/command/system_features_test.go
# internal/feature/feature.go
# internal/feature/key_enumer.go
# internal/integration/client.go
# internal/query/instance_features.go
# internal/query/system_features.go
# internal/repository/feature/feature_v2/feature.go
# proto/zitadel/feature/v2/instance.proto
# proto/zitadel/feature/v2/system.proto
# proto/zitadel/feature/v2beta/instance.proto
# proto/zitadel/feature/v2beta/system.proto

|
|
9ebf2316c6 |
feat: exchange gRPC server implementation to connectRPC (#10145)
# Which Problems Are Solved
The currently maintained gRPC server in combination with a REST (grpc) gateway is getting harder and harder to maintain. Additionally, there have been and still are issues with supporting / displaying `oneOf`s correctly. We therefore decided to exchange the server implementation to connectRPC, which, apart from supporting connect as a protocol, also allows "standard" gRPC clients as well as HTTP/1.1 / REST-like clients (e.g. curl) to directly call the server without any additional gateway.
# How the Problems Are Solved
- All v2 services are moved to the connectRPC implementation. (v1 services are still served as pure gRPC servers)
- All gRPC server interceptors were migrated / copied to a corresponding connectRPC interceptor.
- API.ListGrpcServices and API.ListGrpcMethods were changed to include the connect services and endpoints.
- gRPC server reflection was changed to a `StaticReflector` using the `ListGrpcServices` list.
- The `grpc.Server` interface was split into different combinations to be able to handle the different cases (grpc server and prefixed gateway, connect server with grpc gateway, connect server only, ...)
- Docs of services serving connectRPC only with no additional gateway (instance, webkey, project, app, org v2 beta) are changed to expose that
- Since the plugin is not yet available on buf, we download it using the `postinstall` hook of the docs
# Additional Changes
- WebKey service is added as a v2 service (in addition to the current v2beta)
# Additional Context
closes #9483
A minimal serving sketch follows this entry.
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
|
||
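To illustrate the serving model described above, here is a minimal connect-go setup that serves gRPC, Connect and plain HTTP/1.1 clients on one port without a gateway. The generated handler wiring is shown as a comment because the service names are placeholders:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	// With generated Connect stubs the wiring would look like
	// (service and package names are placeholders):
	//   path, handler := userv2connect.NewUserServiceHandler(&userServer{})
	//   mux.Handle(path, handler)
	srv := &http.Server{
		Addr: ":8080",
		// h2c serves HTTP/2 without TLS, which standard gRPC clients need on a
		// plaintext endpoint; Connect and REST-like clients use HTTP/1.1 on
		// the same port.
		Handler: h2c.NewHandler(mux, &http2.Server{}),
	}
	log.Fatal(srv.ListenAndServe())
}
```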
|
|
82cd1cee08 |
fix(service ping): correct endpoint, validate and randomize default interval (#10166)
# Which Problems Are Solved
The production endpoint of the service ping was wrong. Additionally, we discussed in the sprint review that we could randomize the default interval to prevent all systems from reporting data at the very same time, and also require a minimal interval.
# How the Problems Are Solved
- Fixed the endpoint
- If the interval is set to @daily (default), we generate a random time (minute, hour) as a cron format.
- Check if the interval is more than 30min and return an error if not.
- Fixed yaml indent on `ResourceCount`
A sketch of the randomization and validation follows this entry.
# Additional Changes
None
# Additional Context
as discussed internally
|
||
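A minimal sketch of the described randomization and validation, using only the standard library; function names are illustrative and the exact boundary check in the implementation may differ:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// randomDailyCron spreads the @daily default across the day by picking a
// random minute and hour as a cron expression.
func randomDailyCron() string {
	return fmt.Sprintf("%d %d * * *", rand.Intn(60), rand.Intn(24))
}

// validateInterval enforces the minimal interval described above.
func validateInterval(interval time.Duration) error {
	if interval < 30*time.Minute {
		return errors.New("service ping interval must be at least 30 minutes")
	}
	return nil
}

func main() {
	fmt.Println(randomDailyCron())                  // e.g. "42 7 * * *"
	fmt.Println(validateInterval(10 * time.Minute)) // error
}
```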
|
|
f93a35c7a8 |
feat: implement service ping (#10080)
This PR is still WIP and needs changes to at least the tests.
# Which Problems Are Solved
To be able to report analytical / telemetry data from deployed Zitadel systems back to a central endpoint, we designed a "service ping" functionality. See also https://github.com/zitadel/zitadel/issues/9706. This PR adds the first implementation to allow collecting base data as well as reporting the amount of resources such as organizations, users per organization and more.
# How the Problems Are Solved
- Added a worker to handle the different `ReportType` variations.
- Schedule a periodic job to start a `ServicePingReport`
- Configuration added to allow customization of what data will be reported
- Setup step to generate and store a `systemID`
A dispatch sketch follows this entry.
# Additional Changes
None
# Additional Context
relates to #9869
|
||
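A rough sketch of how such a worker can fan out over report types; the type and case names below are assumptions for illustration, not the actual implementation:

```go
package main

import "fmt"

// ReportType enumerates what a ServicePingReport job should send
// (names are illustrative).
type ReportType int

const (
	ReportTypeBaseInformation ReportType = iota
	ReportTypeResourceCounts
)

func handleReport(t ReportType) error {
	switch t {
	case ReportTypeBaseInformation:
		// e.g. system ID and version
		return sendBaseInformation()
	case ReportTypeResourceCounts:
		// e.g. organizations and users per organization
		return sendResourceCounts()
	default:
		return fmt.Errorf("unknown report type %d", t)
	}
}

func sendBaseInformation() error { return nil } // stub
func sendResourceCounts() error  { return nil } // stub

func main() {
	fmt.Println(handleReport(ReportTypeResourceCounts))
}
```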
|
|
71575e8d67 |
fix(webauthn): allow to use "old" passkeys/u2f credentials on session API (#10150)
# Which Problems Are Solved
To prevent presenting unusable WebAuthN credentials to the user / browser, we filtered out all credentials which do not match the requested RP ID. Since credentials set up through Login V1 and Console do not have an RP ID stored, they never matched. This was previously intended, since the Login V2 could be served on a separate domain. The problem is that if it is hosted on the same domain, the credentials would also be filtered out and the user would not be able to log in.
# How the Problems Are Solved
Change the filtering to return credentials if no RP ID is stored and the requested RP ID matches the instance domain. A filtering sketch follows this entry.
# Additional Changes
None
# Additional Context
Noted internally when testing the login v2
|
||
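A minimal sketch of the adjusted filtering; the credential type and function are hypothetical stand-ins for the internal code:

```go
package main

import "fmt"

type credential struct {
	ID   string
	RPID string // empty for credentials created through Login V1 / Console
}

// usableCredentials keeps credentials whose RP ID matches the request, plus
// legacy credentials without a stored RP ID when the requested RP ID equals
// the instance domain.
func usableCredentials(creds []credential, requestedRPID, instanceDomain string) []credential {
	var out []credential
	for _, c := range creds {
		if c.RPID == requestedRPID || (c.RPID == "" && requestedRPID == instanceDomain) {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	creds := []credential{{ID: "legacy"}, {ID: "v2", RPID: "login.example.com"}}
	fmt.Println(usableCredentials(creds, "example.com", "example.com")) // keeps "legacy"
}
```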
|
|
a02a534cd2 |
feat: initial admin PAT has IAM_LOGIN_CLIENT (#10143)
# Which Problems Are Solved
We provide a seamless way to initialize Zitadel and the login together.
# How the Problems Are Solved
In addition to the `IAM_OWNER` role, a set-up admin user also gets the `IAM_LOGIN_CLIENT` role if it is a machine user with a PAT.
# Additional Changes
- Simplifies the load balancing example, as the intermediate configuration step is not needed anymore.
# Additional Context
- Depends on #10116
- Contributes to https://github.com/zitadel/zitadel-charts/issues/332
- Contributes to https://github.com/zitadel/zitadel/issues/10016
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
|
||
|
|
2928c6ac2b |
chore(login): migrate nextjs login to monorepo (#10134)
# Which Problems Are Solved
We move the login code to the zitadel repo.
# How the Problems Are Solved
The login repo is added to ./login as a git subtree pulled from the dockerize-ci branch. Apart from the login code, this PR contains the changes from #10116
# Additional Context
- Closes https://github.com/zitadel/typescript/issues/474
- Also merges #10116
- Merging is blocked by a failing check because of:
  - https://github.com/zitadel/zitadel/pull/10134#issuecomment-3012086106
---------
Co-authored-by: Max Peintner <peintnerm@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Florian Forster <florian@zitadel.com>
|
||
|
|
fce9e770ac |
feat: App Keys API v2 (#10140)
# Which Problems Are Solved
This PR *partially* addresses #9450. Specifically, it implements the resource based API for app keys. This PR, together with https://github.com/zitadel/zitadel/pull/10077, completes #9450.
# How the Problems Are Solved
- Implementation of the following endpoints: `CreateApplicationKey`, `DeleteApplicationKey`, `GetApplicationKey`, `ListApplicationKeys`
- `ListApplicationKeys` can filter by project, app or organization ID. Sorting is also possible according to some criteria.
- All endpoints use permissions V2
# TODO
- [x] Deprecate old endpoints
# Additional Context
Closes #9450
|
||
|
|
64a03fba28 |
fix(api): return typed saml form post data in idp intent (#10136)
# Which Problems Are Solved
The current user V2 API returns a `[]byte` containing a whole HTML document, including the form, on `StartIdentifyProviderIntent` for intents based on form post (e.g. SAML POST bindings). This is not usable for most clients as they cannot handle that and would render a whole page inside their app. For redirect based intents, the url to which the client needs to redirect is returned.
# How the Problems Are Solved
- Changed the returned type to a new `FormData` message containing the url and a `fields` map.
- Internal changes:
  - Session.GetAuth now returns an `Auth` interface and error instead of (content string, redirect bool)
  - The Auth interface has two implementations: `RedirectAuth` and `FormAuth`
  - All use of the GetAuth function now type switches on the returned auth object
  - A template has been added to the login UI to execute the form post automatically (as is). A template sketch follows this entry.
# Additional Changes
- Some intent integration tests did not check the redirect url and were wrongly configured.
# Additional Context
- relates to zitadel/typescript#410
|
||
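A self-contained sketch of an auto-submitting form post template in Go's html/template, mirroring the new `FormData` shape (a url plus a `fields` map); everything else is illustrative:

```go
package main

import (
	"html/template"
	"os"
)

type formData struct {
	URL    string
	Fields map[string]string
}

// The form submits itself on load, as the login UI template does for
// form-post based intents; the noscript button is a manual fallback.
var formTmpl = template.Must(template.New("form").Parse(`<!DOCTYPE html>
<html><body onload="document.forms[0].submit()">
<form method="post" action="{{.URL}}">
{{range $name, $value := .Fields}}<input type="hidden" name="{{$name}}" value="{{$value}}">
{{end}}<noscript><button type="submit">Continue</button></noscript>
</form></body></html>`))

func main() {
	_ = formTmpl.Execute(os.Stdout, formData{
		URL:    "https://idp.example.com/saml/sso",
		Fields: map[string]string{"SAMLRequest": "base64-payload", "RelayState": "state-123"},
	})
}
```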
|
|
4cd52f33eb |
chore(oidc): remove feature flag for introspection triggers (#10132)
# Which Problems Are Solved
Remove the feature flag that allowed triggers in introspection. This option was a fallback in case introspection would not function properly without triggers. The API documentation asked anyone using this flag to raise an issue. No such issue was received, hence we concluded it is safe to remove it.
# How the Problems Are Solved
- Remove flags from the system and instance level feature APIs.
- Remove trigger functions that are no longer used
- Adjust tests that used the flag.
# Additional Changes
- none
# Additional Context
- Closes #10026
- Flag was introduced in #7356
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
|
||
|
|
2691dae2b6 |
feat: App API v2 (#10077)
# Which Problems Are Solved
This PR *partially* addresses #9450. Specifically, it implements the resource based API for the apps. APIs for app keys are not part of this PR.
# How the Problems Are Solved
- `CreateApplication`, `PatchApplication` (update) and `RegenerateClientSecret` endpoints are now unique for all app types: API, SAML and OIDC apps.
- All new endpoints have integration tests
- All new endpoints are using permission checks V2
# Additional Changes
- The `ListApplications` endpoint allows sorting (see protobuf for details) and filtering by app type (see protobuf).
- SAML and OIDC update endpoints can now receive requests for partial updates
# Additional Context
Partially addresses #9450
|
||
|
|
016676e1dc |
chore(oidc): graduate webkey to stable (#10122)
# Which Problems Are Solved
Stabilize the usage of webkeys.
# How the Problems Are Solved
- Remove all legacy signing key code from the OIDC API
- Remove the webkey feature flag from proto
- Remove the webkey feature flag from console
- Cleanup documentation
# Additional Changes
- Resolved some canonical header linter errors in OIDC
- Use the constant for `projections.lock` in the saml package.
# Additional Context
- Closes #10029
- After #10105
- After #10061
|
||
|
|
1ebbe275b9 |
chore(oidc): remove legacy storage methods (#10061)
# Which Problems Are Solved
Stabilize the optimized introspection code and clean up unused code.
# How the Problems Are Solved
- The `oidc_legacy_introspection` feature flag is removed and reserved.
- `OPStorage` methods which are no longer needed have their bodies removed.
  - The method definitions need to remain in place so the interface remains implemented.
  - A panic is thrown in case any such method is still called. A stub sketch follows this entry.
# Additional Changes
- A number of `OPStorage` methods related to token creation were already unused. These are also cleaned up.
# Additional Context
- Closes #10027
- #7822
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
|
||
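A tiny sketch of the pattern described above: the interface stays satisfied while any accidental call to a removed legacy method fails loudly. Interface and method names are invented for illustration:

```go
package main

import "context"

// legacySigner stands in for the retired subset of OPStorage.
type legacySigner interface {
	SigningKey(ctx context.Context) (string, error)
}

type storage struct{}

// SigningKey keeps the interface implemented, but its body is removed: any
// remaining caller fails fast instead of silently using stale behavior.
func (storage) SigningKey(ctx context.Context) (string, error) {
	panic("legacy OPStorage method SigningKey must no longer be called")
}

// compile-time check that the interface is still satisfied
var _ legacySigner = storage{}

func main() {}
```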
|
|
fa9de9a0f1 |
feat: generate webkeys setup step (#10105)
# Which Problems Are Solved
We are preparing to roll out and stabilize webkeys in the next version of Zitadel. Before removing legacy signing-key code, we must ensure all existing instances have their webkeys generated.
# How the Problems Are Solved
Add a setup step which generates 2 webkeys for each existing instance that didn't have webkeys yet. A sketch of the step's shape follows this entry.
# Additional Changes
Return an error from the config type-switch when the type is unknown.
# Additional Context
- Part 1/2 of https://github.com/zitadel/zitadel/issues/10029
- Should be back-ported to v3
|
||
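A rough shape of such a setup step, assuming hypothetical helpers for listing instances without webkeys and creating a key:

```go
package main

import (
	"context"
	"fmt"
)

// generateInitialWebKeys creates 2 webkeys for every instance that has none
// yet, matching the setup step described above; createWebKey is a stub.
func generateInitialWebKeys(ctx context.Context, instanceIDs []string) error {
	for _, id := range instanceIDs {
		for i := 0; i < 2; i++ {
			if err := createWebKey(ctx, id); err != nil {
				return fmt.Errorf("instance %s: %w", id, err)
			}
		}
	}
	return nil
}

func createWebKey(ctx context.Context, instanceID string) error { return nil } // stub

func main() {
	_ = generateInitialWebKeys(context.Background(), []string{"inst-1", "inst-2"})
}
```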
|
|
3a4298c179 |
fix(scim): add type attribute to ScimEmail (#9690)
# Which Problems Are Solved
- SCIM PATCH operations for users from Entra ID for the `emails` attribute fail due to the missing `type` subattribute
# How the Problems Are Solved
- Adds the `type` attribute to the `ScimUser` struct and sets the default value to `"work"` in the `mapWriteModelToScimUser()` method. A struct sketch follows this entry.
# Additional Changes
# Additional Context
The SCIM handlers for POST and PUT ignore multiple emails and only use the primary email for a given user, or fall back to the first email if none are marked as primary. PATCH operations, however, will attempt to resolve the provided filter in `operations[].path`. Some services, such as Entra ID, only support patching emails by filtering for `emails[type eq "(work|home|other)"].value`, which fails with Zitadel as the ScimUser struct (and thus the generated schema) doesn't include the `type` field. This commit adds the `type` field to work around this issue, while still preserving compatibility with filters such as `emails[primary eq true].value`.
- https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191
---------
Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no>
Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no>
|
||
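An illustrative sketch of the added subattribute; the exact struct in the codebase may differ, but the shape is what Entra ID's type-based path filters need:

```go
package main

import (
	"encoding/json"
	"os"
)

type scimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type,omitempty"` // "work" | "home" | "other"
}

// mapEmail mirrors the default described above: mapped emails get type
// "work", so filters like emails[type eq "work"].value can resolve.
func mapEmail(value string, primary bool) scimEmail {
	return scimEmail{Value: value, Primary: primary, Type: "work"}
}

func main() {
	_ = json.NewEncoder(os.Stdout).Encode(mapEmail("user@example.com", true))
}
```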
|
|
28f7218ea1 |
feat: Hosted login translation API (#10011)
# Which Problems Are Solved
This PR implements https://github.com/zitadel/zitadel/issues/9850
# How the Problems Are Solved
- New protobuf definition
- Implementation of retrieval of system translations
- Implementation of retrieval and persistence of organization and instance level translations
A merge-order sketch follows this entry.
# Additional Context
- Closes #9850
# TODO
- [x] Integration tests for Get and Set hosted login translation endpoints
- [x] DB migration test
- [x] Command function tests
- [x] Command util functions tests
- [x] Query function test
- [x] Query util functions tests
|
||
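As a consumption-side sketch, a merge helper for level-based translations. The precedence here (organization over instance over system) is an assumption for illustration, not confirmed by the entry above:

```go
package main

import "fmt"

// effectiveTranslations merges translation maps from least to most specific;
// the system -> instance -> organization order is an assumed precedence.
func effectiveTranslations(system, instance, org map[string]string) map[string]string {
	out := make(map[string]string, len(system))
	for _, layer := range []map[string]string{system, instance, org} {
		for k, v := range layer {
			out[k] = v
		}
	}
	return out
}

func main() {
	fmt.Println(effectiveTranslations(
		map[string]string{"login.title": "Welcome"},
		map[string]string{"login.title": "Welcome to ACME"},
		map[string]string{},
	))
}
```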
|
|
83839fc2ef |
fix: enable opentelemetry metrics for river queue (#10044)
# Which Problems Are Solved
Right now we have no visibility into the river queue's job processing
times and queue sizes. This makes it difficult to reliably know whether
notifications are actually being published in a reasonable time, and
what the current queue size is.
# How the Problems Are Solved
Integrates River's OpenTelemetry middleware with Zitadel's metrics
system by adding the otelriver middleware to the queue configuration.
A wiring sketch follows this entry.
# Additional Changes
- Updated dependencies to include the required `otelriver` package
# Additional Context
Example output from `/debug/metrics`
<details>
<summary>output</summary>
The full dump includes the standard go_* runtime, grpc_server_*, projection_*, promhttp_* and otel_scope/target_info series; the river-specific families added by this change are:
# HELP river_insert_count_total Number of jobs inserted
# TYPE river_insert_count_total counter
# HELP river_insert_many_count_total Number of job batches inserted (all jobs are inserted in a batch, but batches may be one job)
# TYPE river_insert_many_count_total counter
# HELP river_insert_many_duration_histogram_seconds Duration of job batch insertion (histogram)
# TYPE river_insert_many_duration_histogram_seconds histogram
# HELP river_insert_many_duration_seconds Duration of job batch insertion
# TYPE river_insert_many_duration_seconds gauge
# HELP river_work_count_total Number of jobs worked
# TYPE river_work_count_total counter
# HELP river_work_duration_histogram_seconds Duration of job being worked (histogram)
# TYPE river_work_duration_histogram_seconds histogram
# HELP river_work_duration_seconds Duration of job being worked
# TYPE river_work_duration_seconds gauge
Each river series is labeled with otel_scope_name="github.com/riverqueue/rivercontrib/otelriver" and job attributes such as attempt, kind, queue, priority, status and tag.
</details>
Example grafana dashboard: (screenshot)
- Closes #10043
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
|
||
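A minimal wiring sketch following the otelriver README; pool creation, workers and queue configuration are elided, and the surrounding zitadel plumbing is not shown:

```go
package main

import (
	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/riverdriver/riverpgxv5"
	"github.com/riverqueue/river/rivertype"
	"github.com/riverqueue/rivercontrib/otelriver"
)

// newQueueClient attaches the OpenTelemetry middleware so job inserts and
// worked jobs are recorded as river_* metrics on the global meter provider.
func newQueueClient(pool *pgxpool.Pool) (*river.Client[pgx.Tx], error) {
	return river.NewClient(riverpgxv5.New(pool), &river.Config{
		Middleware: []rivertype.Middleware{
			otelriver.NewMiddleware(nil), // nil config falls back to otel globals
		},
	})
}

func main() {}
```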
|
|
3022ca9e76 |
feat: JWT IdP intent (#9966)
# Which Problems Are Solved
The login v1 allowed using JWTs as an IdP via the JWT IDP. The login V2
uses idp intents for such cases, which were not yet able to handle JWT
IdPs.
# How the Problems Are Solved
- Added handling of JWT IdPs in `StartIdPIntent` and `RetrieveIdPIntent`
- The redirect returned by the start uses the existing `authRequestID`
and `userAgentID` parameter names for compatibility reasons.
- Added a `/idps/jwt` endpoint to handle the proxied (callback) request,
which extracts and validates the JWT against the configured endpoint.
A rough callback sketch follows this entry.
# Additional Changes
None
# Additional Context
- closes #9758
(cherry picked from commit
|
||
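A rough, hypothetical sketch of the proxied callback handling; the header name, validation helper and wiring are stand-ins, not the actual implementation:

```go
package main

import (
	"context"
	"errors"
	"net/http"
)

// handleJWTIdPCallback extracts the JWT from the proxied request and
// validates it before the intent may succeed.
func handleJWTIdPCallback(w http.ResponseWriter, r *http.Request) {
	token := r.Header.Get("x-zitadel-jwt") // header name is a placeholder
	if token == "" {
		http.Error(w, "missing JWT", http.StatusBadRequest)
		return
	}
	if err := validateJWT(r.Context(), token); err != nil {
		http.Error(w, "invalid JWT", http.StatusUnauthorized)
		return
	}
	// on success the intent succeeds and the user agent is redirected back
	// using the authRequestID / userAgentID parameters (wiring omitted here)
}

// validateJWT is a stub: the real handler verifies signature and claims
// against the JWT IdP's configured endpoint.
func validateJWT(ctx context.Context, token string) error {
	if token == "" {
		return errors.New("empty token")
	}
	return nil
}

func main() {
	http.HandleFunc("/idps/jwt", handleJWTIdPCallback)
}
```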
|
|
8ac4b61ee6 |
perf(query): reduce user query duration (#10037)
# Which Problems Are Solved
The resource usage to query user(s) on the database was high and
therefore could have had a performance impact.
# How the Problems Are Solved
Database queries involving the users and loginnames tables were improved
and an index was added for the user-by-email query.
# Additional Changes
- spellchecks
- updated apis on load tests
# Additional Info
needs cherry pick to v3
(cherry picked from commit
|
||
|
|
77f0a10c1e | fix(import/export): fix for deactivated user/organization being imported as active (#9992) | ||
|
|
4df138286b |
perf(query): reduce user query duration (#10037)
# Which Problems Are Solved
The resource usage to query user(s) on the database was high and therefore could have had a performance impact.
# How the Problems Are Solved
Database queries involving the users and loginnames tables were improved and an index was added for the user-by-email query.
# Additional Changes
- spellchecks
- updated apis on load tests
# Additional Info
needs cherry pick to v3
|
||
|
|
647b3b57cf |
fix: correct id filter for project service (#10035)
# Which Problems Are Solved IDs filter definition was changed in another PR and not changed in the Project service. # How the Problems Are Solved Correctly use the IDs filter. # Additional Changes Add timeout to the integration tests. # Additional Context None |