# Which Problems Are Solved
[Permissions v2](https://github.com/zitadel/zitadel/issues/9972) is not
possible in the current implementation.
# How the Problems Are Solved
We remove Permissions v2 from the project-grant-related API calls to alleviate this problem.
This results in the removal of some tests and implementation code, and removes the related performance impact.
# Additional Changes
None
# Additional Context
None
# Which Problems Are Solved
The context information sent to Actions v2 does not contain any application information.
# How the Problems Are Solved
Add application information to the context information sent to Actions
v2, to give more information about the execution.
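For illustration, the enriched context could look roughly like the following JSON payload. The struct and field names below are assumptions for this sketch, not the actual Actions v2 contract:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// executionContext is a hypothetical shape of the context info sent to an
// Actions v2 target; only the idea of an added application section comes
// from this change, the field names are illustrative.
type executionContext struct {
	InstanceID  string           `json:"instance_id"`
	OrgID       string           `json:"org_id"`
	UserID      string           `json:"user_id"`
	Application *applicationInfo `json:"application,omitempty"` // newly added information
}

type applicationInfo struct {
	ID       string `json:"id"`
	Name     string `json:"name"`
	ClientID string `json:"client_id"`
}

func main() {
	ctx := executionContext{
		InstanceID: "instance-1",
		OrgID:      "org-1",
		UserID:     "user-1",
		Application: &applicationInfo{
			ID:       "app-1",
			Name:     "my-app",
			ClientID: "client-1",
		},
	}
	b, _ := json.MarshalIndent(ctx, "", "  ")
	fmt.Println(string(b))
}
```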
# Additional Changes
None
# Additional Context
Closes #9377
# Which Problems Are Solved
The OIDC end_session_endpoint allows passing an `id_token_hint` to identify
the session to terminate. Some applications are not able to pass it; for
example, Console currently allows multiple sessions to be open but only
stores the id_token of the current session. Allowing the `logout_hint` to be
passed to identify the user adds some new possibilities.
# How the Problems Are Solved
If the end_session_endpoint is called without an `id_token_hint` but with a
`logout_hint`, and the v2 login UI is configured, the information is passed
on to the login UI as the `login_hint` parameter, allowing the login UI to
determine the session to be terminated or to let the user decide.
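As a rough illustration of how such a redirect could be assembled, a minimal sketch follows; the login UI logout path and the exact parameter handling are assumptions, not Zitadel's implementation:
```go
package main

import (
	"fmt"
	"net/url"
)

// buildLogoutRedirect sketches forwarding the logout_hint (as login_hint)
// and ui_locales to a v2 login UI when no id_token_hint was provided.
// The "/logout" path is an illustrative assumption.
func buildLogoutRedirect(loginUIBase, logoutHint, uiLocales, postLogoutRedirect string) (string, error) {
	u, err := url.Parse(loginUIBase + "/logout")
	if err != nil {
		return "", err
	}
	q := u.Query()
	if logoutHint != "" {
		// forwarded as login_hint so the login UI can resolve the session
		q.Set("login_hint", logoutHint)
	}
	if uiLocales != "" {
		q.Set("ui_locales", uiLocales)
	}
	if postLogoutRedirect != "" {
		q.Set("post_logout_redirect_uri", postLogoutRedirect)
	}
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	redirect, _ := buildLogoutRedirect("https://login.example.com", "minnie@example.com", "de-CH en", "https://app.example.com")
	fmt.Println(redirect)
}
```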
# Additional Changes
Also added the `ui_locales` as parameter to handle and pass to the V2
login UI.
# Dependencies ⚠️
~These changes depend on https://github.com/zitadel/oidc/pull/774~
# Additional Context
Closes #9847
---------
Co-authored-by: Marco Ardizzone <marco@zitadel.com>
Changed `externalUser()` to `externalUser` in the external-authentication.md
file.
# Which Problems Are Solved
- The External Authentication documentation states that to retrieve the
external user in the post-authentication flow, `externalUser()` should be
used. However, it needs to be `externalUser` (the method).
# How the Problems Are Solved
- Changing `externalUser()` to `externalUser` in the
external-authentication.md file solves this issue.
# Additional Changes
- No additional changes were made.
# Additional Context
- Closes #9893
# Which Problems Are Solved
The Actions v2beta API does not fully adhere to the [API
design](https://github.com/zitadel/zitadel/blob/main/API_DESIGN.md).
# How the Problems Are Solved
- Correct body usage for ListExecutions
- Correct REST path for ListTargets and ListExecutions
- Correct attribute names for ListTargetsResponse and
ListExecutionsResponse
# Additional Changes
- Remove unused object import.
# Additional Context
Closes #10138
---------
Co-authored-by: Marco A. <marco@zitadel.com>
# Which Problems Are Solved
Since #10305 we have the following two files in `/apps/login`:
- /apps/login/README.md
- /apps/login/readme.md
This confuses case-insensitive file systems, causing strange Git
behavior.
# How the Problems Are Solved
We remove the obsolete /apps/login/README.md file.
# Which Problems Are Solved
- The previous monorepo in monorepo structure for the login app and its
related packages was fragmented, complicated and buggy.
- The process for building and testing the login container was
inconsistent between local development and CI.
- Lack of clear documentation as well as easy and reliable ways for
non-frontend developers to reproduce and fix failing PR checks locally.
# How the Problems Are Solved
- Consolidated the login app and its related npm packages by moving the
main package to `apps/login/apps/login` and merging
`apps/login/packages/integration` and `apps/login/packages/acceptance`
into the main `apps/login` package.
- Migrated from Docker Compose-based test setups to dev container-based
setups, adding support for multiple dev container configurations:
- `.devcontainer/base`
- `.devcontainer/turbo-lint-unit`
- `.devcontainer/turbo-lint-unit-debug`
- `.devcontainer/login-integration`
- `.devcontainer/login-integration-debug`
- Added npm scripts to run the new dev container setups, enabling exact
reproduction of GitHub PR checks locally, and updated the pipeline to
use these containers.
- Cleaned up Dockerfiles and docker-bake.hcl files to only build the
production image for the login app.
- Cleaned up compose files to focus on dev environments in dev
containers.
- Updated `CONTRIBUTING.md` with guidance on running and debugging PR
checks locally using the new dev container approach.
- Introduced separate Dockerfiles for the login app to distinguish
between using published client packages and building clients from local
protos.
- Ensured the login container is always built in the pipeline for use in
integration and acceptance tests.
- Updated Makefile and GitHub Actions workflows to use
`--frozen-lockfile` for installing pnpm packages, ensuring reproducible
installs.
- Disabled GitHub release creation by the changeset action.
- Refactored the `/build` directory structure for clarity and
maintainability.
- Added a `clean` command to `docs/package.json`.
- Experimentally added `knip` to the `zitadel-client` package for
improved linting of dependencies and exports.
# Additional Changes
- Fixed Makefile commands for consistency and reliability.
- Improved the structure and clarity of the `/build` directory to
support seamless integration of the login build.
- Enhanced documentation and developer experience for running and
debugging CI checks locally.
# Additional Context
- See updated `CONTRIBUTING.md` for new local development and debugging
instructions.
- These changes are a prerequisite for further improvements to the CI
pipeline and local development workflow.
- Closes #10276
# Which Problems Are Solved
Okta sends a random password in the request to create a user during SCIM
provisioning, irrespective of whether the `Sync Password` option is
enabled or disabled on Okta, and this password does not comply with the
default password complexity set in Zitadel. This PR adds a workaround to
create users without issues in such cases.
# How the Problems Are Solved
- A new metadata configuration called
`urn:zitadel:scim:ignorePasswordOnCreate` is added to the Machine User
that is used for provisioning
- During SCIM user creation requests, if the
`urn:zitadel:scim:ignorePasswordOnCreate` is set to `true` in the
Machine User's metadata, the password set in the create request is
ignored
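As a rough illustration of the metadata check described above (key name taken from this change; the surrounding names and structure are assumptions, not Zitadel's actual code):
```go
package scim

// ignorePasswordKey is the metadata key introduced by this change.
const ignorePasswordKey = "urn:zitadel:scim:ignorePasswordOnCreate"

// ignorePasswordOnCreate reports whether the password sent in a SCIM
// create request should be dropped, based on the provisioning Machine
// User's metadata.
func ignorePasswordOnCreate(metadata map[string][]byte) bool {
	v, ok := metadata[ignorePasswordKey]
	return ok && string(v) == "true"
}

// During user creation the handler would then do something like:
//
//	if ignorePasswordOnCreate(machineUserMetadata) {
//	    scimUser.Password = "" // discard the random password sent by Okta
//	}
```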
# Additional Changes
# Additional Context
The random password is ignored (if set in the metadata) only during
user creation. This change does not affect SCIM password updates.
- Closes #10009
---------
Co-authored-by: Marco A. <marco@zitadel.com>
# Which Problems Are Solved
- Broken or incorrect links on the "SDK Examples" introduction page. The
links to the new client libraries section all reference the "java"
section. This fixes it.
# How the Problems Are Solved
- Fixed the links to ensure they correctly point to the relevant
sections in the documentation.
# Additional Changes
None.
# Additional Context
None.
This PR changes the cookie settings for the SAML POST bindings. It sets
`secure: true` and `SameSite` to `Strict` for production environments.
It also removes the fallback serialization, as we have proven it is no
longer required.
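A minimal sketch of the described cookie settings using the Go standard library; the cookie name and the non-production fallback are illustrative assumptions:
```go
package saml

import "net/http"

// setPostBindingCookie sketches the cookie used for the SAML POST
// binding: Secure and SameSite=Strict in production environments.
func setPostBindingCookie(w http.ResponseWriter, value string, production bool) {
	c := &http.Cookie{
		Name:     "zitadel.saml.post", // illustrative name
		Value:    value,
		Path:     "/",
		HttpOnly: true,
	}
	if production {
		c.Secure = true
		c.SameSite = http.SameSiteStrictMode
	}
	http.SetCookie(w, c)
}
```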
# Which Problems Are Solved
The recently released client libraries were missing documentation, which
made it difficult for developers to understand and use the new features.
# How the Problems Are Solved
This pull request introduces the necessary documentation for the new
client libraries, covering their installation and basic usage.
# Additional Changes
None.
# Additional Context
This documentation supports the recent client library release.
- Turkish language support is added.
- Other language files are updated to include the Turkish selection.
# Which Problems Are Solved
- Zitadel did not support the Turkish language; it now does.
# How the Problems Are Solved
- Turkish language files are added, and other language files in the paths
below are updated to include Turkish:
- /console/src/assets/i18n/
- /internal/api/ui/login/static/i18n
- /internal/notification/static/i18n
- /internal/static/i18n
# Additional Changes
- The following files were changed for code and docs updates:
- /console/src/app/utils/language.ts
- /console/src/app/app.module.ts
- /docs/docs/guides/manage/customize/texts.md
- /internal/api/ui/login/static/templates/external_not_found_option.html
- /internal/query/v2-default.json
- /login/apps/login/src/lib/i18n.ts
---------
Co-authored-by: Marco A. <marco@zitadel.com>
# Which Problems Are Solved
- Role deletion or update API returns `404 Not Found` when the role key
contains a slash (`/`), even if URL encoded.
- This breaks management of hierarchical role keys like
`admin/org/reader`.
# How the Problems Are Solved
- Updated the HTTP binding in the protobuf definition for the affected
endpoints to use `{role_key=**}` instead of `{role_key}`.
- This change enables proper decoding and handling of slashes in role
keys as a single path variable.
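As a rough client-side illustration of why the wildcard matters: a role key containing slashes must be path-escaped and, with `{role_key=**}`, is accepted as a single path variable. The route below is illustrative, not the exact Zitadel path:
```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// A hierarchical role key containing slashes.
	roleKey := "admin/org/reader"

	// url.PathEscape encodes the slashes (-> admin%2Forg%2Freader); with the
	// {role_key=**} binding this is now handled as one path variable instead
	// of producing a 404. The route itself is illustrative.
	path := "https://zitadel.example.com/management/v1/projects/my-project/roles/" +
		url.PathEscape(roleKey)

	req, _ := http.NewRequest(http.MethodDelete, path, nil)
	fmt.Println(req.Method, req.URL.String())
}
```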
# Additional Changes
None
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/9948
Co-authored-by: Masum Patel <patelmasum98@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
# Which Problems Are Solved
When adding two orgs with the same ID, the API returns a positive response;
later, when the org is projected, it errors because the ID is already in
use.
# How the Problems Are Solved
Check that no org with the specified orgID already exists before adding the
events.
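A rough sketch of the kind of pre-check described above; the lookup and push callbacks are hypothetical placeholders, not Zitadel's actual command code:
```go
package command

import (
	"context"
	"errors"
)

// errOrgAlreadyExists is returned when the requested org ID is taken.
var errOrgAlreadyExists = errors.New("org with the given ID already exists")

// orgExistsFunc abstracts the lookup against the event store / write model.
type orgExistsFunc func(ctx context.Context, orgID string) (bool, error)

// addOrgWithID verifies the ID is unused before pushing any org events, so
// the API fails fast instead of the projection erroring later.
func addOrgWithID(ctx context.Context, orgID string, exists orgExistsFunc, push func(ctx context.Context) error) error {
	taken, err := exists(ctx, orgID)
	if err != nil {
		return err
	}
	if taken {
		return errOrgAlreadyExists
	}
	return push(ctx) // push org.added and related events as before
}
```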
# Additional Changes
Added an additional test case for adding the same org with the same name twice.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10127
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
# Which Problems Are Solved
The SAML session (v2 login) currently does not push an
`AuthenticationSucceededOnApplication` milestone upon the first successful
SAML login. The changes in this PR address this issue.
# How the Problems Are Solved
Add a new function to set the appropriate milestone, and call this
function after a successful SAML request.
# Additional Changes
N/A
# Additional Context
- Closes #9592
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
# Which Problems Are Solved
- Dependabot creates noisy PRs to the mirror repo zitadel/typescript.
# How the Problems Are Solved
- We mark the dependabot file as an example, effectively disabling
dependabot.
- For cases where this isn't intuitive enough, we add a guiding sentence to
the README.md.
- Dependabot for the login [is already enabled in the zitadel
repo](https://github.com/zitadel/zitadel/blob/main/.github/dependabot.yml#L25-L37).
# Additional Changes
- Updates the CONTRIBUTING.md with instructions about how to submit
changes related to the mirror repo.
- @stebenz please dismiss the relevant Vanta checks if necessary.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
# Which Problems Are Solved
Action runs on PRs from forks can't authenticate with Depot.
# How the Problems Are Solved
- The GitHub secret DEPOT_TOKEN is statically passed as env variable to
the steps that use the depot CLI, as described
[here](https://github.com/depot/setup-action#authentication).
- Removed the oidc argument from the depot/setup-action, as we pass the
env statically to the relevant steps.
- The `id-token: write` permission is removed from all workflows, as
it's not needed anymore.
# Additional Changes
Removed the obsolete comment
```yaml
# latest if branch is main, otherwise image version which is the pull request number
```
# Additional Context
Required by these approved PRs so their checks can be executed:
- https://github.com/zitadel/zitadel/pull/9982
- https://github.com/zitadel/zitadel/pull/9958
# Which Problems Are Solved
The session API allowed any authenticated user to update sessions by their ID without any further check.
This was unintentionally introduced with version 2.53.0 when the requirement of providing the latest session token on every session update was removed and no other permission check (e.g. session.write) was ensured.
# How the Problems Are Solved
- Granted `session.write` to `IAM_OWNER` and `IAM_LOGIN_CLIENT` in the defaults.yaml
- Granted `session.read` to `IAM_ORG_MANAGER`, `IAM_USER_MANAGER` and `ORG_OWNER` in the defaults.yaml
- Pass the session token to the UpdateSession command.
- Check for `session.write` permission on session creation and update.
- Alternatively, the (latest) sessionToken can be used to update the session.
- Setting an auth request to failed on the OIDC Service `CreateCallback` endpoint now ensures it's either the same user as used to create the auth request (for backwards compatibility) or requires `session.link` permission.
- Setting a device auth request to failed on the OIDC Service `AuthorizeOrDenyDeviceAuthorization` endpoint now requires `session.link` permission.
- Setting an auth request to failed on the SAML Service `CreateResponse` endpoint now requires `session.link` permission.
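A rough sketch of the check order described above; function and field names are illustrative, not Zitadel's actual command implementation:
```go
package command

import (
	"context"
	"errors"
)

var errSessionAccessDenied = errors.New("session token invalid and session.write permission missing")

// checkSessionWriteAccess sketches the new rule: a session update is allowed
// either with the (latest) session token or, alternatively, with the
// session.write permission (granted e.g. to IAM_OWNER and IAM_LOGIN_CLIENT
// in defaults.yaml). Real code would verify the token cryptographically
// rather than with a plain comparison.
func checkSessionWriteAccess(
	ctx context.Context,
	providedToken, currentToken string,
	hasPermission func(ctx context.Context, permission string) bool,
) error {
	if providedToken != "" && providedToken == currentToken {
		return nil // a valid session token still works as before
	}
	if hasPermission(ctx, "session.write") {
		return nil
	}
	return errSessionAccessDenied
}
```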
# Additional Changes
none
# Additional Context
none
(cherry picked from commit 4c942f3477)
# Which Problems Are Solved
The latest merge on main corrupted some unit tests.
# How the Problems Are Solved
Fix them as intended in the original PR.
# Additional Changes
None
# Additional Context
relates to
4c942f3477
# Which Problems Are Solved
When changes are pulled from or pushed to a login repository, they can't be
merged into zitadel, because the commit histories differ.
# How the Problems Are Solved
Changed the commands to allow diverging commit histories.
Pulling like this takes a lot of commits into the zitadel repo branch.
This is fine, as we squash-merge PRs to a single commit anyway, so we don't
care about a branch's commit history.
# Additional Changes
Added an exception to the close-pr.yml workflow so sync PRs are not
auto-closed.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Florian Forster <florian@zitadel.com>
Co-authored-by: Max Peintner <peintnerm@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
# Which Problems Are Solved
Enhance the integration test for creating an org: currently the test does
not check whether the created org has the assigned custom ID; this PR
resolves that.
# Which Problems Are Solved
After changing some internal logic, which should have failed the
integration tests but didn't, I noticed that some integration tests were
never executed. The make command lists all `integration_test` packages, but
some are named `integration`.
# How the Problems Are Solved
Correct wrong integration test package names.
# Additional Changes
None
# Additional Context
- noticed internally
- backport to 3.x and 2.x
# Which Problems Are Solved
Fixes an issue where users would get an error message when attempting to
resend the invitation code while logging in.
# How the Problems Are Solved
Changed the permission check from `org.write` to
`command.checkPermissionUpdateUser()`.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10100
- backport to 3.x
# Which Problems Are Solved
The CORS handler for the new connectRPC handlers was missing, leading to
unhandled preflight requests and an unusable API for browser-based calls,
e.g. cross-domain gRPC-web requests.
# How the Problems Are Solved
- Added the http CORS middleware to the connectRPC handlers.
- Added `Grpc-Timeout`, `Connect-Protocol-Version` and `Connect-Timeout-Ms`
to the default allowed headers (this also improves the old gRPC-web
handling).
- Added `Grpc-Status`, `Grpc-Message` and `Grpc-Status-Details-Bin` to the
default exposed headers (this also improves the old gRPC-web handling).
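For illustration, a CORS middleware carrying the headers listed above could be wrapped around the handlers roughly like this, shown here with github.com/rs/cors; Zitadel's actual middleware wiring may differ:
```go
package main

import (
	"log"
	"net/http"

	"github.com/rs/cors"
)

// withCORS wraps a handler mux (e.g. the registered connectRPC handlers)
// with CORS handling, including the newly added default headers.
func withCORS(next http.Handler) http.Handler {
	c := cors.New(cors.Options{
		AllowedOrigins: []string{"*"},
		AllowedMethods: []string{http.MethodGet, http.MethodPost, http.MethodOptions},
		AllowedHeaders: []string{
			"Authorization", "Content-Type",
			// defaults added by this change
			"Grpc-Timeout", "Connect-Protocol-Version", "Connect-Timeout-Ms",
		},
		ExposedHeaders: []string{
			// defaults added by this change
			"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin",
		},
	})
	return c.Handler(next)
}

func main() {
	mux := http.NewServeMux()
	// mux.Handle(connectPath, connectHandler) // register connectRPC handlers here
	log.Fatal(http.ListenAndServe(":8080", withCORS(mux)))
}
```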
# Additional Changes
None
# Additional Context
noticed internally while testing other issues
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new
organization (incl. the generated domain), existing usernames are checked to
determine whether the domain has been claimed.
The query was not optimized for instances with many users and
organizations.
# How the Problems Are Solved
- Replaced the query, which searched over the users projection (with
computed loginnames), with a dedicated query that checks the loginnames
projection directly.
- All occurrences have been updated to use the new query.
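A rough sketch of the idea behind the dedicated query; table and column names are assumptions for illustration, not Zitadel's actual schema:
```go
package query

import (
	"context"
	"database/sql"
)

// usernamesClaimedByDomain looks up login names ending in the newly verified
// domain directly in the loginnames projection, instead of recomputing
// login names from the users projection.
func usernamesClaimedByDomain(ctx context.Context, db *sql.DB, instanceID, domain string) ([]string, error) {
	rows, err := db.QueryContext(ctx,
		`SELECT login_name
		   FROM projections.login_names3
		  WHERE instance_id = $1
		    AND login_name LIKE '%@' || $2`,
		instanceID, domain,
	)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var names []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, err
		}
		names = append(names, name)
	}
	return names, rows.Err()
}
```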
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit fefeaea56a)
# Which Problems Are Solved
When authenticating with email or phone number in the login V1, users
were not able to request a password reset and would be given a "User not
found" error.
This was due to a check of the loginname of the auth request, which in
those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still
check the user again, since the ID might be a placeholder for an unknown
user, and we do not want to disclose any information by omitting the check
or reducing the response time.)
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit ffe6d41588)
# Which Problems Are Solved
Customers reported, that if the session / access token in Console
expired and they re-authenticated, the user list would be empty.
While reproducing the issue, we discovered that the necessary organization
information would be missing in the access token, since it was already
missing in the OIDC session creation when using an id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request.
This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x
(cherry picked from commit 2821f41c3a)
# Which Problems Are Solved
To prevent presenting unusable WebAuthN credentials to the user /
browser, we filtered out all credentials, which do not match the
requested RP ID. Since credentials set up through Login V1 and Console
do not have an RP ID stored, they never matched. This was previously
intended, since the Login V2 could be served on a separate domain.
The problem is that if it is hosted on the same domain, the credentials
would also be filtered out and the user would not be able to log in.
# How the Problems Are Solved
Change the filtering to return credentials if no RP ID is stored and
the requested RP ID matches the instance domain.
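A rough sketch of the adjusted filter; the types and field names are illustrative stand-ins for Zitadel's internal WebAuthN model:
```go
package webauthn

// credential is a simplified stand-in for a stored WebAuthN credential.
type credential struct {
	ID   []byte
	RPID string // empty for credentials set up through Login V1 / Console
}

// filterForRPID keeps credentials whose stored RP ID matches the request,
// and, per this change, also credentials without a stored RP ID when the
// requested RP ID equals the instance domain.
func filterForRPID(creds []credential, requestedRPID, instanceDomain string) []credential {
	var usable []credential
	for _, c := range creds {
		switch {
		case c.RPID == requestedRPID:
			usable = append(usable, c)
		case c.RPID == "" && requestedRPID == instanceDomain:
			usable = append(usable, c)
		}
	}
	return usable
}
```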
# Additional Changes
None
# Additional Context
Noted internally when testing the login v2
(cherry picked from commit 71575e8d67)
# Which Problems Are Solved
- SCIM PATCH operations for users from Entra ID for the `emails`
attribute fails due to missing `type` subattribute
# How the Problems Are Solved
- Adds the `type` attribute to the `ScimUser` struct and sets the
default value to `"work"` in the `mapWriteModelToScimUser()` method.
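A minimal sketch of the added field and its default; the struct shape is illustrative, and only the `type` subattribute and the `"work"` default come from this change:
```go
package scim

// scimEmail is a simplified version of the email resource returned for a
// user. The new Type field lets filters such as
// emails[type eq "work"].value resolve.
type scimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type"`
}

// mapEmail mirrors the idea of mapWriteModelToScimUser(): emails without an
// explicit type default to "work".
func mapEmail(value string, primary bool) scimEmail {
	return scimEmail{
		Value:   value,
		Primary: primary,
		Type:    "work", // default value set by this change
	}
}
```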
# Additional Changes
# Additional Context
The SCIM handlers for POST and PUT ignore multiple emails and only use
the primary email for a given user, falling back to the first email if
none are marked as primary. PATCH operations, however, will attempt to
resolve the provided filter in `operations[].path`.
Some services, such as Entra ID, only support patching emails by
filtering for `emails[type eq "(work|home|other)"].value`, which fails
with Zitadel as the ScimUser struct (and thus the generated schema)
doesn't include the `type` field.
This commit adds the `type` field to work around this issue, while still
preserving compatibility with filters such as `emails[primary eq
true].value`.
- https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191
---------
Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no>
Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no>
(cherry picked from commit 3a4298c179)
# Which Problems Are Solved
Right now we have no visibility into the river queue's job processing times
and queue sizes. This makes it difficult to reliably know whether
notifications are actually being published in a reasonable time and what the
current queue size is.
# How the Problems Are Solved
Integrates River's OpenTelemetry middleware with Zitadel's metrics
system by adding the otelriver middleware to the queue configuration.
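A rough sketch of what wiring the middleware into the queue configuration can look like; the `otelriver` constructor and the River `Middleware` field shown here are assumed from the upstream docs and should be verified before use:
```go
package queue

import (
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/rivertype"
	"github.com/riverqueue/rivercontrib/otelriver"
)

// newQueueConfig sketches attaching River's OpenTelemetry middleware so job
// insert/work counts and durations show up in Zitadel's metrics.
// NOTE: otelriver.NewMiddleware / MiddlewareConfig are assumptions based on
// the rivercontrib package; verify the exact constructor signature.
func newQueueConfig(workers *river.Workers) *river.Config {
	return &river.Config{
		Workers: workers,
		Middleware: []rivertype.Middleware{
			otelriver.NewMiddleware(&otelriver.MiddlewareConfig{}),
		},
	}
}
```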
# Additional Changes
- Updated dependencies to include required `otelriver` package
# Additional Context
Example output from `/debug/metrics`
<details>
<summary>output</summary>

```
# HELP failed_deliveries_json_total Failed JSON message deliveries
# TYPE failed_deliveries_json_total counter
failed_deliveries_json_total{otel_scope_name="",otel_scope_version="",triggering_event_type="user.human.phone.code.added"}
2
# HELP go_gc_duration_seconds A summary of the wall-time pause
(stop-the-world) duration in garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8e-05
go_gc_duration_seconds{quantile="0.25"} 6.3916e-05
go_gc_duration_seconds{quantile="0.5"} 7.5584e-05
go_gc_duration_seconds{quantile="0.75"} 9.2584e-05
go_gc_duration_seconds{quantile="1"} 0.000204292
go_gc_duration_seconds_sum 0.003028502
go_gc_duration_seconds_count 34
# HELP go_gc_gogc_percent Heap size target percentage configured by the
user, otherwise 100. This value is set by the GOGC environment variable,
and the runtime/debug.SetGCPercent function. Sourced from
/gc/gogc:percent
# TYPE go_gc_gogc_percent gauge
go_gc_gogc_percent 100
# HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the
user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT
environment variable, and the runtime/debug.SetMemoryLimit function.
Sourced from /gc/gomemlimit:bytes
# TYPE go_gc_gomemlimit_bytes gauge
go_gc_gomemlimit_bytes 9.223372036854776e+18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 231
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.24.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and
currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 7.7565832e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated in
heap until now, even if released already. Equals to
/gc/heap/allocs:bytes.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 7.3319844e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the
profiling bucket hash table. Equals to
/memory/classes/profiling/buckets:bytes.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.63816e+06
# HELP go_memstats_frees_total Total number of heap objects frees.
Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.1496925e+07
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage
collection system metadata. Equals to
/memory/classes/metadata/other:bytes.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 5.182776e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and
currently in use, same as go_memstats_alloc_bytes. Equals to
/memory/classes/heap/objects:bytes.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 7.7565832e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be
used. Equals to /memory/classes/heap/released:bytes +
/memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.8179584e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in
use. Equals to /memory/classes/heap/objects:bytes +
/memory/classes/heap/unused:bytes
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 8.5868544e+07
# HELP go_memstats_heap_objects Number of currently allocated objects.
Equals to /gc/heap/objects:objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 573723
# HELP go_memstats_heap_released_bytes Number of heap bytes released to
OS. Equals to /memory/classes/heap/released:bytes.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 7.20896e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from
system. Equals to /memory/classes/heap/objects:bytes +
/memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes
+ /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.44048128e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of
last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.749491558214289e+09
# HELP go_memstats_mallocs_total Total number of heap objects allocated,
both live and gc-ed. Semantically a counter version for
go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects +
/gc/heap/tiny/allocs:objects.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.2070648e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache
structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 16912
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache
structures obtained from system. Equals to
/memory/classes/metadata/mcache/inuse:bytes +
/memory/classes/metadata/mcache/free:bytes.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 31408
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan
structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 1.3496e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan
structures obtained from system. Equals to
/memory/classes/metadata/mspan/inuse:bytes +
/memory/classes/metadata/mspan/free:bytes.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 2.18688e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage
collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.34730994e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system
allocations. Equals to /memory/classes/other:bytes.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 3.125168e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes obtained from
system for stack allocator in non-CGO environments. Equals to
/memory/classes/heap/stacks:bytes.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.752512e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system
for stack allocator. Equals to /memory/classes/heap/stacks:bytes +
/memory/classes/os-stacks:bytes.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.752512e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
Equals to /memory/classes/total:byte.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.58965032e+08
# HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS
setting, or the number of operating system threads that can execute
user-level Go code simultaneously. Sourced from
/sched/gomaxprocs:threads
# TYPE go_sched_gomaxprocs_threads gauge
go_sched_gomaxprocs_threads 14
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 25
# HELP grpc_server_grpc_status_code_total Grpc status code counter
# TYPE grpc_server_grpc_status_code_total counter
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version="",return_code="200"}
1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version="",return_code="200"}
2
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version="",return_code="200"}
1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version="",return_code="200"}
1
# HELP grpc_server_request_counter_total Grpc request counter
# TYPE grpc_server_request_counter_total counter
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version=""}
1
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version=""}
2
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version=""}
1
grpc_server_request_counter_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version=""}
1
# HELP grpc_server_total_request_counter_total Total grpc request
counter
# TYPE grpc_server_total_request_counter_total counter
grpc_server_total_request_counter_total{otel_scope_name="",otel_scope_version=""}
5
# HELP otel_scope_info Instrumentation Scope metadata
# TYPE otel_scope_info gauge
otel_scope_info{otel_scope_name="",otel_scope_version=""} 1
otel_scope_info{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version=""}
1
# HELP projection_events_processed_total Number of events reduced to
process projection updates
# TYPE projection_events_processed_total counter
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",success="true"}
1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.instance_features2",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.login_names3",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.notifications",success="true"}
1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.orgs1",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.user_metadata5",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.users14",success="true"}
0
# HELP projection_handle_timer_seconds Time taken to process a
projection update
# TYPE projection_handle_timer_seconds histogram
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.005"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.01"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.05"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="120"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"}
1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
0.007344541
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.005"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.01"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.05"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="120"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"}
1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
0.014258458
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
1
# HELP projection_state_latency_seconds When finishing processing a
batch of events, this track the age of the last events seen from current
time
# TYPE projection_state_latency_seconds histogram
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="300"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="600"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1800"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"}
1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
0.012979
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="300"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="600"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1800"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"}
1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
0.0199
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
1
# HELP promhttp_metric_handler_requests_in_flight Current number of
scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by
HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP river_insert_count_total Number of jobs inserted
# TYPE river_insert_count_total counter
river_insert_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_count_total Number of job batches inserted (all
jobs are inserted in a batch, but batches may be one job)
# TYPE river_insert_many_count_total counter
river_insert_many_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_duration_histogram_seconds Duration of job
batch insertion (histogram)
# TYPE river_insert_many_duration_histogram_seconds histogram
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="0"}
0
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="25"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="50"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="75"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="100"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="250"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="750"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="1000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="2500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="7500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="+Inf"}
1
river_insert_many_duration_histogram_seconds_sum{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
0.002905666
river_insert_many_duration_histogram_seconds_count{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_duration_seconds Duration of job batch
insertion
# TYPE river_insert_many_duration_seconds gauge
river_insert_many_duration_seconds{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
0.002905666
# HELP river_work_count_total Number of jobs worked
# TYPE river_work_count_total counter
river_work_count_total{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
river_work_count_total{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_histogram_seconds Duration of job being
worked (histogram)
# TYPE river_work_duration_histogram_seconds histogram
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"}
0
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1
river_work_duration_histogram_seconds_sum{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083
river_work_duration_histogram_seconds_count{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"}
0
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1
river_work_duration_histogram_seconds_sum{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
river_work_duration_histogram_seconds_count{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_seconds Duration of job being worked
# TYPE river_work_duration_seconds gauge
river_work_duration_seconds{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083
river_work_duration_seconds{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="ZITADEL",service_version="2025-06-09T13:52:29-04:00",telemetry_sdk_language="go",telemetry_sdk_name="opentelemetry",telemetry_sdk_version="1.35.0"}
1
```

</details>
Example grafana dashboard:

- Closes #10043
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit 83839fc2ef)
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new
organization (incl. the generated domain), existing usernames are checked to
determine whether the domain has been claimed.
The query was not optimized for instances with many users and
organizations.
# How the Problems Are Solved
- Replaced the query, which searched over the users projection (with
computed loginnames), with a dedicated query that checks the loginnames
projection directly.
- All occurrences have been updated to use the new query.
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
# Which Problems Are Solved
The close PR action currently fails because of unescaped backticks.
# How the Problems Are Solved
Backticks are escaped.
# Additional Changes
- Adding a login remote now immediately fetches it, for better UX.
- Adding a subtree is not necessary, as it is already added in the repo.
- Fix and clarify the PR migration steps.
- Add a workflow dispatch event.