Mirror of https://github.com/zitadel/zitadel.git
Synced 2025-11-03 12:14:20 +00:00
07887487b52f8a74f9722c3b1af39b48387e0381
4289 Commits

968b08e041
fix(login): saml cookie settings (#10266)
This PR changes the cookie settings for the SAML POST bindings. It sets `secure: true` and `SameSite: Strict` for production environments. It removes the fallback serialization, as we have proven it is no longer required.
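For concreteness, a minimal sketch of the hardened cookie settings in Go (the login itself is a TypeScript app, so the function name, cookie name, and environment check here are illustrative only):

```go
package example

import (
	"net/http"
	"os"
)

// setSAMLFormCookie stores SAML POST-binding form data in a cookie.
// It mirrors the settings described in the commit: Secure and
// SameSite=Strict in production, relaxed settings otherwise.
func setSAMLFormCookie(w http.ResponseWriter, value string) {
	prod := os.Getenv("NODE_ENV") == "production" // illustrative env check
	sameSite := http.SameSiteLaxMode
	if prod {
		sameSite = http.SameSiteStrictMode // never sent cross-site
	}
	http.SetCookie(w, &http.Cookie{
		Name:     "saml-form",
		Value:    value,
		Path:     "/",
		HttpOnly: true,
		Secure:   prod, // only sent over HTTPS in production
		SameSite: sameSite,
	})
}
```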

63b894908c
docs: add initial docs for the new client-libraries (#10230)
# Which Problems Are Solved
The recently released client libraries were missing documentation, which made it difficult for developers to understand and use the new features.
# How the Problems Are Solved
This pull request introduces the necessary documentation for the new client libraries, covering their installation and basic usage.
# Additional Changes
None.
# Additional Context
This documentation supports the recent client library release.

25adfd91a2
feat: add Turkish language support (#10198)
- Turkish language support is added.
- Other language files are updated to add the Turkish selection.
# Which Problems Are Solved
- Zitadel did not support Turkish; now it does.
# How the Problems Are Solved
- Turkish language files are added, and other language files in the paths below are updated to add Turkish support:
- /console/src/assets/i18n/
- /internal/api/ui/login/static/i18n
- /internal/notification/static/i18n
- /internal/static/i18n
# Additional Changes
- Changed the files below for code/docs updates:
- /console/src/app/utils/language.ts
- /console/src/app/app.module.ts
- /docs/docs/guides/manage/customize/texts.md
- /internal/api/ui/login/static/templates/external_not_found_option.html
- /internal/query/v2-default.json
- /login/apps/login/src/lib/i18n.ts
---------
Co-authored-by: Marco A. <marco@zitadel.com>

1a24b10702
fix(mgmt_api): role deletion/update fails when role key contains a slash (#9958)
# Which Problems Are Solved
- Role deletion or update API returns `404 Not Found` when the role key
contains a slash (`/`), even if URL encoded.
- This breaks management of hierarchical role keys like
`admin/org/reader`.
# How the Problems Are Solved
- Updated the HTTP binding in the protobuf definition for the affected
endpoints to use `{role_key=**}` instead of `{role_key}`.
- This change enables proper decoding and handling of slashes in role
keys as a single path variable.
# Additional Changes
None
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/9948
Co-authored-by: Masum Patel <patelmasum98@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>

870fefe3dc
fix(org): adding unique constraints to not allow an org to be added twice with same id (#10243)
# Which Problems Are Solved
When adding two orgs with the same ID, you get a positive response from the API; later, when the org is projected, it errors because the ID is already in use.
# How the Problems Are Solved
Check that no org with the specified orgID already exists before adding events.
# Additional Changes
Added an additional test case for adding the same org with the same name twice.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10127
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
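A minimal sketch of a check-before-append guard, with illustrative stand-ins for ZITADEL's command and eventstore types:

```go
package example

import (
	"context"
	"errors"
)

var errAlreadyExists = errors.New("org ID already in use")

// orgExists is an illustrative stand-in for a read on the org write model.
func orgExists(ctx context.Context, orgID string) (bool, error) { return false, nil }

// pushOrgAddedEvents is an illustrative stand-in for appending the events.
func pushOrgAddedEvents(ctx context.Context, orgID, name string) error { return nil }

// AddOrg fails fast with "already exists" instead of returning a
// positive response and erroring later during projection.
func AddOrg(ctx context.Context, orgID, name string) error {
	exists, err := orgExists(ctx, orgID)
	if err != nil {
		return err
	}
	if exists {
		return errAlreadyExists
	}
	return pushOrgAddedEvents(ctx, orgID, name)
}
```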

312b7b6010
chore: 🚀 Migrate monorepo from Yarn to pnpm + Turbo integration + Configuration cleanup (#10165)
This PR modernizes the ZITADEL monorepo build system by migrating from Yarn to pnpm, introducing Turbo for improved build orchestration, and cleaning up configuration inconsistencies across all apps and packages.
### 🎯 Key Improvements
#### 📦 **Package Manager Migration (Yarn → pnpm)**
- **Performance**: Faster installs with pnpm's efficient symlink-based node_modules structure
- **Disk space**: Significant reduction in disk usage through content-addressable storage
- **Lockfile**: More reliable dependency resolution with pnpm-lock.yaml
- **Workspace support**: Better monorepo dependency management
#### ⚡ **Turbo Integration**
- **Build orchestration**: Dependency-aware task execution across the monorepo
- **Intelligent caching**: Dramatically faster builds on CI/CD and local development
- **Parallel execution**: Optimal task scheduling based on dependency graphs
- **Vercel optimization**: Enhanced build performance and caching on Vercel deployments
#### 🧹 **Configuration Cleanup & Unification**
- **Removed config packages**: Eliminated `@zitadel/*-config` packages and inlined configurations
- **Simplified dependencies**: Reduced complexity in package.json files across all apps
- **Consistent tooling**: Unified prettier, ESLint, and TypeScript configurations
- **Standalone support**: Improved prepare-standalone.js script for subtree deployments
### 📋 Detailed Changes
#### **🔧 Build System & Dependencies**
- ✅ Updated all package.json scripts to use `pnpm` instead of `yarn`
- ✅ Replaced `yarn.lock` with pnpm-lock.yaml and regenerated dependencies
- ✅ Added Turbo configuration (turbo.json) to root and individual packages
- ✅ Configured proper dependency chains: `@zitadel/proto#generate` → `@zitadel/client#build` → `console#build`
- ✅ Added missing `@bufbuild/protobuf` dependency to console app for TypeScript compilation
#### **🚀 CI/CD & Workflows**
- ✅ Updated all GitHub Actions workflows to use `pnpm/action-setup@v4`
- ✅ Migrated build processes to use Turbo with directory-based filters (`--filter=./console`)
- ✅ **New**: Added `docs.yml` workflow for building documentation locally (helpful for contributors without Vercel access)
- ✅ Fixed dependency resolution issues in lint workflows
- ✅ Ensured proto generation always runs before builds and linting
#### **📚 Documentation & Proto Generation**
- ✅ **Robust plugin management**: Enhanced plugin-download.sh with retry logic and error handling
- ✅ **Vercel compatibility**: Fixed protoc-gen-connect-openapi plugin availability in Vercel builds
- ✅ **API docs generation**: Resolved Docusaurus build errors with OpenAPI plugin configuration
- ✅ **Type safety**: Improved TypeScript type extraction patterns in Angular components
#### **🛠️ Developer Experience**
- ✅ Updated all README files to reference pnpm commands
- ✅ Improved Makefile targets to use Turbo for consistent builds
- ✅ Enhanced standalone build process for login app subtree deployments
- ✅ Added debug utilities for troubleshooting build issues
#### **🗂️ File Structure & Cleanup**
- ✅ Removed obsolete configuration packages and their references
- ✅ Cleaned up Docker files to remove non-existent package copies
- ✅ Updated workspace references and import paths
- ✅ Streamlined turbo.json configurations across all packages
### 🎉 Benefits
1. **⚡ Faster Builds**: Turbo's caching and parallel execution significantly reduce build times
2. **🔄 Better Caching**: Improved cache hits on Vercel and CI/CD environments
3. **🛠️ Simplified Maintenance**: Unified tooling and configuration management
4. **📈 Developer Productivity**: Faster local development with optimized dependency resolution
5. **🚀 Enhanced CI/CD**: More reliable and faster automated builds and deployments
6. **📖 Better Documentation**: Comprehensive build documentation and troubleshooting guides
### 🧪 Testing
- ✅ All apps build successfully with new pnpm + Turbo setup
- ✅ Proto generation works correctly across console, login, and docs
- ✅ GitHub Actions workflows pass with new configuration
- ✅ Vercel deployments work with enhanced plugin management
- ✅ Local development workflow verified and documented
This migration sets a solid foundation for future development while maintaining backward compatibility and improving the overall developer experience.
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>

6d11145c77
fix(saml): Push AuthenticationSucceededOnApplication milestone for SAML sessions (#10263)
# Which Problems Are Solved
The SAML session (v2 login) currently does not push an `AuthenticationSucceededOnApplication` milestone upon the first successful SAML login. The changes in this PR address this issue.
# How the Problems Are Solved
Add a new function to set the appropriate milestone, and call this function after a successful SAML request.
# Additional Changes
N/A
# Additional Context
- Closes #9592
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>

e1f112d59b
chore: disable dependabot on login (#10265)
# Which Problems Are Solved
- Dependabot creates noisy PRs to the mirror repo zitadel/typescript.
# How the Problems Are Solved
- We mark the dependabot file as an example, effectively disabling dependabot.
- For cases where this isn't intuitive enough, we add a guiding sentence to the README.md.
- Dependabot for the login [is already enabled in the zitadel repo](https://github.com/zitadel/zitadel/blob/main/.github/dependabot.yml#L25-L37).
# Additional Changes
- Updates the CONTRIBUTING.md with instructions on how to submit changes related to the mirror repo.
- @stebenz please dismiss the relevant Vanta checks if necessary.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

ee13d4be7d
chore: use DEPOT_TOKEN secret (#10237)
# Which Problems Are Solved
Action runs on PRs from forks can't authenticate at depot.
# How the Problems Are Solved
- The GitHub secret DEPOT_TOKEN is statically passed as an env variable to the steps that use the depot CLI, as described [here](https://github.com/depot/setup-action#authentication).
- Removed the oidc argument from the depot/setup-action, as we pass the env statically to the relevant steps.
- The `id-token: write` permission is removed from all workflows, as it's not needed anymore.
# Additional Changes
Removed the obsolete comment
```yaml
# latest if branch is main, otherwise image version which is the pull request number
```
# Additional Context
Required by these approved PRs so their checks can be executed:
- https://github.com/zitadel/zitadel/pull/9982
- https://github.com/zitadel/zitadel/pull/9958

40094bee87
fix: permission checks on session API
# Which Problems Are Solved
The session API allowed any authenticated user to update sessions by their ID without any further check.
This was unintentionally introduced with version 2.53.0 when the requirement of providing the latest session token on every session update was removed and no other permission check (e.g. session.write) was ensured.
# How the Problems Are Solved
- Granted `session.write` to `IAM_OWNER` and `IAM_LOGIN_CLIENT` in the defaults.yaml
- Granted `session.read` to `IAM_ORG_MANAGER`, `IAM_USER_MANAGER` and `ORG_OWNER` in the defaults.yaml
- Pass the session token to the UpdateSession command.
- Check for `session.write` permission on session creation and update.
- Alternatively, the (latest) sessionToken can be used to update the session.
- Setting an auth request to failed on the OIDC Service `CreateCallback` endpoint now ensures it's either the same user as used to create the auth request (for backwards compatibility) or requires `session.link` permission.
- Setting a device auth request to failed on the OIDC Service `AuthorizeOrDenyDeviceAuthorization` endpoint now requires `session.link` permission.
- Setting an auth request to failed on the SAML Service `CreateResponse` endpoint now requires `session.link` permission.
# Additional Changes
none
# Additional Context
none
(cherry picked from commit …)
v4.0.0-rc.2
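A minimal sketch of the resulting authorization rule, using illustrative helper names rather than ZITADEL's actual command code:

```go
package example

import (
	"context"
	"errors"
)

// Illustrative stubs; ZITADEL's real command layer differs.
func verifySessionToken(sessionID, token string) error { return errors.New("invalid token") }
func checkPermission(ctx context.Context, permission, resourceID string) error {
	return errors.New("permission denied")
}

// authorizeSessionUpdate allows the update if the caller presents the
// latest session token; otherwise the caller needs the session.write
// permission (granted to IAM_OWNER and IAM_LOGIN_CLIENT by default).
func authorizeSessionUpdate(ctx context.Context, sessionID, sessionToken string) error {
	if sessionToken != "" && verifySessionToken(sessionID, sessionToken) == nil {
		return nil // the latest token still authorizes the update
	}
	return checkPermission(ctx, "session.write", sessionID)
}
```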

c4e0342c5f
chore(tests): fix tests (#10267)
# Which Problems Are Solved
The latest merge on main corrupted some unit tests.
# How the Problems Are Solved
Fix them as intended in the PR.
# Additional Changes
None
# Additional Context
relates to

b76d8d37cb
fix: permission checks on session API
# Which Problems Are Solved
The session API allowed any authenticated user to update sessions by their ID without any further check.
This was unintentionally introduced with version 2.53.0 when the requirement of providing the latest session token on every session update was removed and no other permission check (e.g. session.write) was ensured.
# How the Problems Are Solved
- Granted `session.write` to `IAM_OWNER` and `IAM_LOGIN_CLIENT` in the defaults.yaml
- Granted `session.read` to `IAM_ORG_MANAGER`, `IAM_USER_MANAGER` and `ORG_OWNER` in the defaults.yaml
- Pass the session token to the UpdateSession command.
- Check for `session.write` permission on session creation and update.
- Alternatively, the (latest) sessionToken can be used to update the session.
- Setting an auth request to failed on the OIDC Service `CreateCallback` endpoint now ensures it's either the same user as used to create the auth request (for backwards compatibility) or requires `session.link` permission.
- Setting a device auth request to failed on the OIDC Service `AuthorizeOrDenyDeviceAuthorization` endpoint now requires `session.link` permission.
- Setting an auth request to failed on the SAML Service `CreateResponse` endpoint now requires `session.link` permission.
# Additional Changes
none
# Additional Context
none
(cherry picked from commit …)
v3.3.2

4c942f3477
Merge commit from fork
* fix: require permission to create and update session
* fix: require permission to fail auth requests
* merge main and fix integration tests
* fix merge
* fix integration tests
* fix integration tests
* fix saml permission check

91487a0b23
chore: fix login sync (#10250)
# Which Problems Are Solved
When changes are pulled or pushed from or to a login repository, they can't be merged into zitadel because the commit histories differ.
# How the Problems Are Solved
Changed the commands to allow diverging commit histories. Pulling brings a lot of commits into the zitadel repo branch this way. This is fine, as we squash-merge PRs to a single commit anyway, so we don't care about a branch's commit history.
# Additional Changes
Added an exception to the close-pr.yml workflow so sync PRs are not auto-closed.
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Florian Forster <florian@zitadel.com>
Co-authored-by: Max Peintner <peintnerm@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

14a5946db8
fix(login): better error handling for saml cookie serialization (#10259)
Fixes issues where SAML identity provider authentication would fail silently, leaving users unable to complete the login flow through external SAML providers.
Changes in `saml.ts`:
- Enhanced `setSAMLFormCookie()` with proper error handling and logging
- Improved `getSAMLFormCookie()` with detailed error reporting
- Added cookie size validation and warnings
Changes in `zitadel.ts`:
- Enhanced `startIdentityProviderFlow()` with robust form data handling
- Added detailed logging for protobuf object structure analysis
- Implemented safe fallback serialization for complex objects
- Added comprehensive error handling for JSON operations

d5d6d37a25
test(org): enhancing test for creating org with custom id (#10247)
# Which Problems Are Solved
Enhances the integration test for creating an org: currently the test does not check that the created org has the assigned custom ID; this change resolves that.

79fcc2f2b6
chore(tests): name integration test packages correctly to let them run (#10242)
# Which Problems Are Solved
After changing some internal logic, which should have failed the integration tests but didn't, I noticed that some integration tests were never executed. The make command lists all `integration_test` packages, but some are named `integration`.
# How the Problems Are Solved
Correct the wrong integration test package names.
# Additional Changes
None
# Additional Context
- noticed internally
- backport to 3.x and 2.x

23d6d24bc8
fix(login): changed permission check for sending invite code on log in (#10197)
# Which Problems Are Solved
Fixes an issue where users would get an error message when attempting to resend an invitation code while logging in.
# How the Problems Are Solved
Changing the permission check from looking for `org.write` to `command.checkPermissionUpdateUser()`.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10100
- backport to 3.x

c787cdf7b4
fix(login v1): correctly auto-link users on organizations with suffixed usernames (#10205)
(cherry picked from commit …)
v3.3.1

1b01fc6c40
fix(api): CORS for connectRPC and grpc-web (#10227)
# Which Problems Are Solved
The CORS handler for the new connectRPC handlers was missing, leading to unhandled preflight requests and an unusable API for browser-based calls, e.g. cross-domain gRPC-web requests.
# How the Problems Are Solved
- Added the http CORS middleware to the connectRPC handlers.
- Added `Grpc-Timeout`, `Connect-Protocol-Version`, `Connect-Timeout-Ms` to the default allowed headers (this also improves the old grpc-web handling)
- Added `Grpc-Status`, `Grpc-Message`, `Grpc-Status-Details-Bin` to the default exposed headers (this also improves the old grpc-web handling)
# Additional Changes
None
# Additional Context
noticed internally while testing other issues
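A sketch of such a CORS setup, using the rs/cors package purely for illustration; ZITADEL wires its own HTTP middleware, but the header lists below are the ones named in the commit:

```go
package example

import (
	"net/http"

	"github.com/rs/cors"
)

// withCORS wraps connect/gRPC-web handlers so browser preflight
// requests are answered and gRPC status trailers are readable.
func withCORS(h http.Handler) http.Handler {
	return cors.New(cors.Options{
		AllowedMethods: []string{http.MethodGet, http.MethodPost},
		AllowedHeaders: []string{
			"Content-Type", "Authorization",
			// Needed for gRPC-web and the Connect protocol:
			"Grpc-Timeout", "Connect-Protocol-Version", "Connect-Timeout-Ms",
		},
		ExposedHeaders: []string{
			// Let browsers read trailer-mapped gRPC status info:
			"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin",
		},
	}).Handler(h)
}
```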

8f61b24532
fix(login v1): correctly auto-link users on organizations with suffixed usernames (#10205)

42663a29bd
perf: improve org and org domain creation (#10232)
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new
organization (incl. generated domain), existing usernames are checked to
determine if the domain has been claimed.
The query was not optimized for instances with many users and
organizations.
# How the Problems Are Solved
- Replace the query, which was searching over the users projection (with
computed loginnames), with a dedicated query checking the loginnames
projection directly.
- All occurrences have been updated to use the new query.
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit …)
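A hedged sketch of the query shape described, with illustrative projection and column names (the real SQL differs):

```go
package example

import (
	"context"
	"database/sql"
)

// domainClaimsExistingUsernames asks the login-names projection directly
// whether any existing login name is suffixed with the newly verified
// domain, instead of scanning the users projection and computing login
// names per user. Table and column names are illustrative.
func domainClaimsExistingUsernames(ctx context.Context, db *sql.DB, domain string) (bool, error) {
	var exists bool
	err := db.QueryRowContext(ctx,
		`SELECT EXISTS (
			SELECT 1 FROM projections.login_names
			WHERE login_name LIKE '%@' || $1
		)`, domain).Scan(&exists)
	return exists, err
}
```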

d537e86345
fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved
When authenticating with email or phone number in the login V1, users
were not able to request a password reset and would be given a "User not
found" error.
This was due to a check of the loginname of the auth request, which in
those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still
check the user again, since the ID might be a placeholder for an unknown
user and we do not want to disclose any information by omitting a check
and reduce the response time.)
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x
(cherry picked from commit …)

b638ed528d
fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved
Customers reported that if the session / access token in Console
expired and they re-authenticated, the user list would be empty.
While reproducing the issue, we discovered that the necessary
organization information would be missing in the access token, since it
was already missing in the OIDC session creation when using an
id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request.
This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x
(cherry picked from commit …)

8aa7801f40
chore(deps): upgrade oidc and chi for dependabot alert (#10160)
# Which Problems Are Solved
Solve dependabot alerts for Go packages.
# How the Problems Are Solved
- Upgrade to latest github.com/zitadel/oidc, which already pulls the
fixed version of chi.
- Upgrade mapstructure
# Additional Changes
- none
# Additional Context
- https://github.com/zitadel/zitadel/security/dependabot/323
- https://github.com/zitadel/zitadel/security/dependabot/324
(cherry picked from commit …)

1d409f7959
fix(webauthn): allow to use "old" passkeys/u2f credentials on session API (#10150)
# Which Problems Are Solved
To prevent presenting unusable WebAuthN credentials to the user /
browser, we filtered out all credentials, which do not match the
requested RP ID. Since credentials set up through Login V1 and Console
do not have an RP ID stored, they never matched. This was previously
intended, since the Login V2 could be served on a separate domain.
The problem is that if it is hosted on the same domain, the credentials
would also be filtered out and the user would not be able to log in.
# How the Problems Are Solved
Change the filtering to return credentials, if no RP ID is stored and
the requested RP ID matches the instance domain.
# Additional Changes
None
# Additional Context
Noted internally when testing the login v2
(cherry picked from commit …)
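A minimal sketch of the adjusted filter, with an illustrative credential type (ZITADEL's credential model differs):

```go
package example

// Credential is an illustrative stand-in for a stored WebAuthn credential.
type Credential struct {
	RPID string // empty for credentials set up via Login V1 / Console
}

// usableCredentials keeps credentials whose stored RP ID matches the
// request, and additionally keeps legacy credentials without an RP ID
// when the requested RP ID equals the instance domain.
func usableCredentials(creds []Credential, requestedRPID, instanceDomain string) []Credential {
	var out []Credential
	for _, c := range creds {
		switch {
		case c.RPID == requestedRPID:
			out = append(out, c)
		case c.RPID == "" && requestedRPID == instanceDomain:
			out = append(out, c) // legacy credential on the instance domain
		}
	}
	return out
}
```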

c2c49679cb
fix(scim): add type attribute to ScimEmail (#9690)
# Which Problems Are Solved
- SCIM PATCH operations for users from Entra ID for the `emails`
attribute fails due to missing `type` subattribute
# How the Problems Are Solved
- Adds the `type` attribute to the `ScimUser` struct and sets the
default value to `"work"` in the `mapWriteModelToScimUser()` method.
# Additional Changes
# Additional Context
The SCIM handlers for POST and PUT ignore multiple emails and only use
the primary email for a given user, or falls back to the first email if
none are marked as primary. PATCH operations however, will attempt to
resolve the provided filter in `operations[].path`.
Some services, such as Entra ID, only support patching emails by
filtering for `emails[type eq "(work|home|other)"].value`, which fails
with Zitadel as the ScimUser struct (and thus the generated schema)
doesn't include the `type` field.
This commit adds the `type` field to work around this issue, while still
preserving compatibility with filters such as `emails[primary eq
true].value`.
- https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191
---------
Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no>
Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no>
(cherry picked from commit …)
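A simplified sketch of the schema change; the real `ScimUser` model has more fields, and the names here are illustrative:

```go
package example

// ScimEmail mirrors the SCIM "emails" sub-attributes; the new Type
// field is what Entra ID's path filter `emails[type eq "work"].value`
// needs to resolve. Struct shown simplified.
type ScimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type"`
}

// mapEmail defaults Type to "work", as the commit describes for
// mapWriteModelToScimUser().
func mapEmail(value string, primary bool) ScimEmail {
	return ScimEmail{
		Value:   value,
		Primary: primary,
		Type:    "work", // default value
	}
}
```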

056399bdb4
fix: enable opentelemetry metrics for river queue (#10044)
# Which Problems Are Solved
Right now we have no visibility into river queue's job processing times
and queue sizes. This makes it difficult to reliably know whether
notifications are actually being published in a reasonable time, or what
the current queue size is.
# How the Problems Are Solved
Integrates River's OpenTelemetry middleware with Zitadel's metrics
system by adding the otelriver middleware to the queue configuration.
# Additional Changes
- Updated dependencies to include required `otelriver` package
# Additional Context
Example output from `/debug/metrics`
<details>
<summary>output</summary>
# HELP failed_deliveries_json_total Failed JSON message deliveries
# TYPE failed_deliveries_json_total counter
failed_deliveries_json_total{otel_scope_name="",otel_scope_version="",triggering_event_type="user.human.phone.code.added"}
2
# HELP go_gc_duration_seconds A summary of the wall-time pause
(stop-the-world) duration in garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.8e-05
go_gc_duration_seconds{quantile="0.25"} 6.3916e-05
go_gc_duration_seconds{quantile="0.5"} 7.5584e-05
go_gc_duration_seconds{quantile="0.75"} 9.2584e-05
go_gc_duration_seconds{quantile="1"} 0.000204292
go_gc_duration_seconds_sum 0.003028502
go_gc_duration_seconds_count 34
# HELP go_gc_gogc_percent Heap size target percentage configured by the
user, otherwise 100. This value is set by the GOGC environment variable,
and the runtime/debug.SetGCPercent function. Sourced from
/gc/gogc:percent
# TYPE go_gc_gogc_percent gauge
go_gc_gogc_percent 100
# HELP go_gc_gomemlimit_bytes Go runtime memory limit configured by the
user, otherwise math.MaxInt64. This value is set by the GOMEMLIMIT
environment variable, and the runtime/debug.SetMemoryLimit function.
Sourced from /gc/gomemlimit:bytes
# TYPE go_gc_gomemlimit_bytes gauge
go_gc_gomemlimit_bytes 9.223372036854776e+18
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 231
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.24.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated in heap and
currently in use. Equals to /memory/classes/heap/objects:bytes.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 7.7565832e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated in
heap until now, even if released already. Equals to
/gc/heap/allocs:bytes.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 7.3319844e+08
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the
profiling bucket hash table. Equals to
/memory/classes/profiling/buckets:bytes.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.63816e+06
# HELP go_memstats_frees_total Total number of heap objects frees.
Equals to /gc/heap/frees:objects + /gc/heap/tiny/allocs:objects.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.1496925e+07
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage
collection system metadata. Equals to
/memory/classes/metadata/other:bytes.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 5.182776e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and
currently in use, same as go_memstats_alloc_bytes. Equals to
/memory/classes/heap/objects:bytes.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 7.7565832e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be
used. Equals to /memory/classes/heap/released:bytes +
/memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 5.8179584e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in
use. Equals to /memory/classes/heap/objects:bytes +
/memory/classes/heap/unused:bytes
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 8.5868544e+07
# HELP go_memstats_heap_objects Number of currently allocated objects.
Equals to /gc/heap/objects:objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 573723
# HELP go_memstats_heap_released_bytes Number of heap bytes released to
OS. Equals to /memory/classes/heap/released:bytes.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 7.20896e+06
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from
system. Equals to /memory/classes/heap/objects:bytes +
/memory/classes/heap/unused:bytes + /memory/classes/heap/released:bytes
+ /memory/classes/heap/free:bytes.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 1.44048128e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of
last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.749491558214289e+09
# HELP go_memstats_mallocs_total Total number of heap objects allocated,
both live and gc-ed. Semantically a counter version for
go_memstats_heap_objects gauge. Equals to /gc/heap/allocs:objects +
/gc/heap/tiny/allocs:objects.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.2070648e+07
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache
structures. Equals to /memory/classes/metadata/mcache/inuse:bytes.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 16912
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache
structures obtained from system. Equals to
/memory/classes/metadata/mcache/inuse:bytes +
/memory/classes/metadata/mcache/free:bytes.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 31408
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan
structures. Equals to /memory/classes/metadata/mspan/inuse:bytes.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 1.3496e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan
structures obtained from system. Equals to
/memory/classes/metadata/mspan/inuse:bytes +
/memory/classes/metadata/mspan/free:bytes.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 2.18688e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage
collection will take place. Equals to /gc/heap/goal:bytes.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 1.34730994e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system
allocations. Equals to /memory/classes/other:bytes.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 3.125168e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes obtained from
system for stack allocator in non-CGO environments. Equals to
/memory/classes/heap/stacks:bytes.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.752512e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system
for stack allocator. Equals to /memory/classes/heap/stacks:bytes +
/memory/classes/os-stacks:bytes.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.752512e+06
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
Equals to /memory/classes/total:byte.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 1.58965032e+08
# HELP go_sched_gomaxprocs_threads The current runtime.GOMAXPROCS
setting, or the number of operating system threads that can execute
user-level Go code simultaneously. Sourced from
/sched/gomaxprocs:threads
# TYPE go_sched_gomaxprocs_threads gauge
go_sched_gomaxprocs_threads 14
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 25
# HELP grpc_server_grpc_status_code_total Grpc status code counter
# TYPE grpc_server_grpc_status_code_total counter
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version="",return_code="200"}
1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version="",return_code="200"}
2
grpc_server_grpc_status_code_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version="",return_code="200"}
1
grpc_server_grpc_status_code_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version="",return_code="200"}
1
# HELP grpc_server_request_counter_total Grpc request counter
# TYPE grpc_server_request_counter_total counter
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserChanges",otel_scope_name="",otel_scope_version=""}
1
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ListUserMetadata",otel_scope_name="",otel_scope_version=""}
2
grpc_server_request_counter_total{grpc_method="/zitadel.management.v1.ManagementService/ResendHumanPhoneVerification",otel_scope_name="",otel_scope_version=""}
1
grpc_server_request_counter_total{grpc_method="/zitadel.user.v2.UserService/GetUserByID",otel_scope_name="",otel_scope_version=""}
1
# HELP grpc_server_total_request_counter_total Total grpc request
counter
# TYPE grpc_server_total_request_counter_total counter
grpc_server_total_request_counter_total{otel_scope_name="",otel_scope_version=""}
5
# HELP otel_scope_info Instrumentation Scope metadata
# TYPE otel_scope_info gauge
otel_scope_info{otel_scope_name="",otel_scope_version=""} 1
otel_scope_info{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version=""}
1
# HELP projection_events_processed_total Number of events reduced to
process projection updates
# TYPE projection_events_processed_total counter
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",success="true"}
1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.instance_features2",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.login_names3",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.notifications",success="true"}
1
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.orgs1",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.user_metadata5",success="true"}
0
projection_events_processed_total{otel_scope_name="",otel_scope_version="",projection="projections.users14",success="true"}
0
# HELP projection_handle_timer_seconds Time taken to process a
projection update
# TYPE projection_handle_timer_seconds histogram
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.005"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.01"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.05"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="120"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"}
1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
0.007344541
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.005"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.01"}
0
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.05"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="120"}
1
projection_handle_timer_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"}
1
projection_handle_timer_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
0.014258458
projection_handle_timer_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
1
# HELP projection_state_latency_seconds When finishing processing a
batch of events, this track the age of the last events seen from current
time
# TYPE projection_state_latency_seconds histogram
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="0.5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="10"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="30"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="60"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="300"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="600"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="1800"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler",le="+Inf"}
1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
0.012979
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.execution_handler"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="0.5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="5"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="10"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="30"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="60"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="300"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="600"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="1800"}
1
projection_state_latency_seconds_bucket{otel_scope_name="",otel_scope_version="",projection="projections.notifications",le="+Inf"}
1
projection_state_latency_seconds_sum{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
0.0199
projection_state_latency_seconds_count{otel_scope_name="",otel_scope_version="",projection="projections.notifications"}
1
# HELP promhttp_metric_handler_requests_in_flight Current number of
scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by
HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
# HELP river_insert_count_total Number of jobs inserted
# TYPE river_insert_count_total counter
river_insert_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_count_total Number of job batches inserted (all
jobs are inserted in a batch, but batches may be one job)
# TYPE river_insert_many_count_total counter
river_insert_many_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_duration_histogram_seconds Duration of job
batch insertion (histogram)
# TYPE river_insert_many_duration_histogram_seconds histogram
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="0"}
0
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="25"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="50"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="75"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="100"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="250"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="750"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="1000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="2500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="5000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="7500"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="10000"}
1
river_insert_many_duration_histogram_seconds_bucket{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok",le="+Inf"}
1
river_insert_many_duration_histogram_seconds_sum{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
0.002905666
river_insert_many_duration_histogram_seconds_count{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
1
# HELP river_insert_many_duration_seconds Duration of job batch
insertion
# TYPE river_insert_many_duration_seconds gauge
river_insert_many_duration_seconds{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"}
0.002905666
# HELP river_work_count_total Number of jobs worked
# TYPE river_work_count_total counter
river_work_count_total{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
river_work_count_total{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_histogram_seconds Duration of job being
worked (histogram)
# TYPE river_work_duration_histogram_seconds histogram
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"}
0
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"}
1
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1
river_work_duration_histogram_seconds_sum{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083
river_work_duration_histogram_seconds_count{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="0"}
0
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="25"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="50"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="75"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="100"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="250"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="750"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="1000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="2500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="5000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="7500"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="10000"}
1
river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1
river_work_duration_histogram_seconds_sum{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
river_work_duration_histogram_seconds_count{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_seconds Duration of job being worked
# TYPE river_work_duration_seconds gauge
river_work_duration_seconds{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083
river_work_duration_seconds{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
# HELP target_info Target metadata
# TYPE target_info gauge
target_info{service_name="ZITADEL",service_version="2025-06-09T13:52:29-04:00",telemetry_sdk_language="go",telemetry_sdk_name="opentelemetry",telemetry_sdk_version="1.35.0"}
1
</details>
Example Grafana dashboard: (screenshot not preserved in mirror)
- Closes #10043
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit …)
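A minimal sketch of wiring the middleware, assuming the `otelriver.NewMiddleware` API documented in the rivercontrib README; ZITADEL's actual queue configuration sets more fields:

```go
package example

import (
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/rivertype"
	"github.com/riverqueue/rivercontrib/otelriver"
)

// queueConfig attaches River's OpenTelemetry middleware, which is what
// exports the river_insert_* and river_work_* metrics seen in the
// example output above.
func queueConfig() *river.Config {
	return &river.Config{
		Middleware: []rivertype.Middleware{
			otelriver.NewMiddleware(&otelriver.MiddlewareConfig{}),
		},
	}
}
```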

fefeaea56a
perf: improve org and org domain creation (#10232)
# Which Problems Are Solved
When an organization domain is verified, e.g. also when creating a new organization (incl. generated domain), existing usernames are checked to determine if the domain has been claimed. The query was not optimized for instances with many users and organizations.
# How the Problems Are Solved
- Replace the query, which was searching over the users projection (with computed loginnames), with a dedicated query checking the loginnames projection directly.
- All occurrences have been updated to use the new query.
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x

0598abe7e6
chore(login): fix close pr action (#10234)
# Which Problems Are Solved
The close PR action fails: https://github.com/zitadel/typescript/actions/runs/16196332400/job/45723668837?pr=511
# How the Problems Are Solved
A backtick is escaped.
# Additional Context
- Completes #10229

f9cad0f3e5
chore(typescript): improve close PR action (#10229)
# Which Problems Are Solved
The close PR action currently fails because of unescaped backticks.
# How the Problems Are Solved
Backticks are escaped.
# Additional Changes
- Adding a login remote immediately fetches for better UX.
- Adding a subtree is not necessary, as it is already added in the repo.
- Fix and clarify PR migration steps.
- Add workflow dispatch event

ffe6d41588
fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved
When authenticating with an email or phone number in the login V1, users were not able to request a password reset and would be given a "User not found" error. This was due to a check of the loginname of the auth request, which in those cases would not match the user's stored loginname.
# How the Problems Are Solved
Switch to a check of the resolved userID in the auth request. (We still check the user again, since the ID might be a placeholder for an unknown user, and we do not want to disclose any information by omitting the check and thereby reducing the response time.)
# Additional Changes
None
# Additional Context
- reported through support
- requires backport to v3.x |
||
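A minimal sketch of the switched check; the types and names are illustrative stand-ins, not the actual login v1 code:

```go
package login

import "context"

type user struct{ ID string }

type authRequest struct {
	LoginName string // what the user typed; may be an email or phone number
	UserID    string // resolved during user discovery
}

// userForPasswordReset looks the user up by the ID resolved on the auth
// request instead of comparing the typed loginname, which may not match the
// stored loginname when authenticating with email or phone number.
func userForPasswordReset(ctx context.Context, req *authRequest, userByID func(context.Context, string) (*user, error)) (*user, error) {
	// Still perform the lookup: the ID might be a placeholder for an
	// unknown user, and skipping the check would disclose that through a
	// shorter response time.
	return userByID(ctx, req.UserID)
}
```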
|
|
2821f41c3a |
fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved
Customers reported that if the session / access token in Console expired and they re-authenticated, the user list would be empty. While reproducing the issue, we discovered that the necessary organization information was missing from the access token, since it was already missing in the OIDC session creation when using an id_token_hint.
# How the Problems Are Solved
- Ensure the user's organization is set in the login v1 auth request. This is used to create the OIDC and token information.
# Additional Changes
None
# Additional Context
- reported by customers
- requires backport to v3.x |
||
|
|
f937f90504 |
chore: update review comment (#10210)
Make the review comment clearer about what is expected. |
||
|
|
0ceec60637 |
fix: sorting options of the ListInstanceTrustedDomains() gRPC endpoint (#10172)
# Which Problems Are Solved
1. The sorting columns in the gRPC endpoint `ListInstanceTrustedDomains()` are incorrect and return the following error when invalid sorting options are chosen:
```
Unknown (2) ERROR: missing FROM-clause entry for table "instance_domains" (SQLSTATE 42P01)
```
The sorting columns that are valid to list `instance_trusted_domains` are
* `trusted_domain_field_name_unspecified`
* `trusted_domain_field_name_domain`
* `trusted_domain_field_name_creation_date`
However, the currently configured sorting columns are
* `domain_field_name_unspecified`
* `domain_field_name_domain`
* `domain_field_name_primary`
* `domain_field_name_generated`
* `domain_field_name_creation_date`
Configuring the actual columns of `instance_trusted_domains` would make this endpoint **backward incompatible**. Therefore, the fix in this PR is to no longer return an error when an invalid sorting column (non-existing column) is chosen and to sort the results by `creation_date` for invalid sorting columns.
2. This PR also fixes the `sorting_column` included in the responses of both the `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints, as they currently point to the default option irrespective of the option chosen in the request, i.e.
* `TRUSTED_DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceTrustedDomains()`, and
* `DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceDomains()`
# How the Problems Are Solved
* Map the sorting columns to valid columns of `instance_trusted_domain` - If the sorting column is not one of the columns, the mapping defaults to `creation_date`
* Set the `sorting_column` explicitly (from the request) in the `ListInstanceDomainsResponse` and `ListInstanceTrustedDomainsResponse`
# Additional Changes
A small fix to return the chosen `sorting_column` in the responses of the `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints
# Additional Context
- Closes #9839 |
||
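The described fallback could look roughly like this; the enum values and column names are assumptions for illustration:

```go
package query

// TrustedDomainFieldName mirrors the request's sorting field enum (assumed).
type TrustedDomainFieldName int

const (
	TrustedDomainFieldNameUnspecified TrustedDomainFieldName = iota
	TrustedDomainFieldNameDomain
	TrustedDomainFieldNameCreationDate
)

// trustedDomainColumn maps the requested sorting field to an existing column
// of the trusted-domains projection. Unknown or unspecified fields fall back
// to creation_date instead of producing a SQL error, keeping the endpoint
// backward compatible.
func trustedDomainColumn(field TrustedDomainFieldName) string {
	switch field {
	case TrustedDomainFieldNameDomain:
		return "domain"
	case TrustedDomainFieldNameCreationDate:
		return "creation_date"
	default:
		return "creation_date"
	}
}
```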
|
|
4b7443ba78 |
chore(docs): add llms.txt (#10133)
This pull request enhances the documentation site configuration by introducing a new plugin and making minor adjustments to existing settings. The primary focus is on integrating the `@signalwire/docusaurus-plugin-llms-txt` plugin to improve content handling and on adding the relevant dependencies.
### Plugin Integration:
* `docs/docusaurus.config.js`: Added the `@signalwire/docusaurus-plugin-llms-txt` plugin with configuration options, including a depth of 3, a log level of 1, exclusion of certain routes, and enabling markdown file support.
* `docs/package.json`: Included the `@signalwire/docusaurus-plugin-llms-txt` dependency (version `^1.2.0`) to support the new plugin integration.
### Configuration Adjustments:
* `docs/docusaurus.config.js`: Removed the `docItemComponent` property under the `module.exports` configuration. |
||
|
|
253beb4d39 |
fix(login): encode formpost data to cookie (#10173)
This PR implements a SAML cookie which is used to save the information needed to complete the form post. It is primarily used to avoid sending the information as URL search params, thereby reducing the URL length. |
||
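A sketch of the idea in a Go HTTP handler; the cookie name and encoding are assumptions, not the actual implementation:

```go
package saml

import (
	"encoding/base64"
	"net/http"
)

// setFormPostCookie stores the form-post payload in a cookie so it does not
// have to travel as URL search params, keeping the URL short.
func setFormPostCookie(w http.ResponseWriter, payload []byte, prod bool) {
	http.SetCookie(w, &http.Cookie{
		Name:     "zitadel.saml.formpost", // cookie name is an assumption
		Value:    base64.RawURLEncoding.EncodeToString(payload),
		Path:     "/",
		HttpOnly: true,
		Secure:   prod, // true in production environments
		SameSite: http.SameSiteStrictMode,
	})
}
```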
|
|
aa8edee50b |
chore(docs): prevent readme overwrite (#10170)
# Which Problems Are Solved
To generate the docs, we rely on a protoc plugin to generate an OpenAPI definition from connectRPC / proto. Since the plugin is not available on buf.build, we currently download the released version. As the tar contains a license and a readme, extracting it overwrote existing internal files.
# How the Problems Are Solved
Download and extract the plugin into a separate folder and update buf.gen.yaml accordingly.
# Additional Changes
None
# Additional Context
relates to #9483 |
||
|
|
27cd1d8518 |
docs(api): add new beta services to api reference (#10018)
# Which Problems Are Solved
The unreleased new resource apis have been removed from the docs: https://github.com/zitadel/zitadel/pull/10015
# How the Problems Are Solved
Add them to the docs sidenav again, since they're now released.
# Additional Changes
none
# Additional Context
none
---------
Co-authored-by: Fabienne <fabienne.gerschwiler@gmail.com>
Co-authored-by: Marco Ardizzone <marco@zitadel.com> |
||
|
|
8f0b7ebf02 |
BREAKING CHANGE: release candidate v4
BREAKING CHANGE: release candidate v4 (v4.0.0-rc.1) |
||
|
|
c8d37ac5af | Merge remote-tracking branch 'origin/main' into next-rc | ||
|
|
5403be7c4b |
feat: user profile requests in resource APIs (#10151)
# Which Problems Are Solved
The commands for the resource-based v2beta AuthorizationService API are added. Authorizations, previously known as user grants, give a user roles in a specific organization and project context. The project can be owned or granted. The given roles can be used to restrict access within the project's applications.
The commands for the resource-based v2beta InternalPermissionService API are added. Administrators, previously known as memberships, give a user roles in a specific organization and project context. The project can be owned or granted. The given roles give the user permissions to manage different resources in Zitadel.
API definitions from https://github.com/zitadel/zitadel/issues/9165 are implemented. Contains endpoints for user metadata.
# How the Problems Are Solved
### New Methods
- CreateAuthorization
- UpdateAuthorization
- DeleteAuthorization
- ActivateAuthorization
- DeactivateAuthorization
- ListAuthorizations
- CreateAdministrator
- UpdateAdministrator
- DeleteAdministrator
- ListAdministrators
- SetUserMetadata to set metadata on a user
- DeleteUserMetadata to delete metadata on a user
- ListUserMetadata to query for metadata of a user
## Deprecated Methods
### v1.ManagementService
- GetUserGrantByID
- ListUserGrants
- AddUserGrant
- UpdateUserGrant
- DeactivateUserGrant
- ReactivateUserGrant
- RemoveUserGrant
- BulkRemoveUserGrant
### v1.AuthService
- ListMyUserGrants
- ListMyProjectPermissions
# Additional Changes
- Permission checks for metadata functionality on the query and command side
- Correct existence checks for resources, for example you can only be an administrator on an existing project
- Combined all member tables into a single query for the administrators
- Added permission checks for command and query side functionality
- Combined functions on the command side where necessary for easier maintainability
# Additional Context
Closes #9165
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com> |
||
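To illustrate the new surface, a hypothetical call to one of the listed methods; the request shape and client interface are stubs assumed from the method names above, not the real proto messages:

```go
package example

import (
	"context"
	"fmt"
)

// CreateAuthorizationRequest is a stub assumed from the method list above.
type CreateAuthorizationRequest struct {
	UserID         string
	OrganizationID string
	ProjectID      string
	RoleKeys       []string
}

// AuthorizationClient is a stub of the v2beta AuthorizationService client.
type AuthorizationClient interface {
	CreateAuthorization(ctx context.Context, req *CreateAuthorizationRequest) (id string, err error)
}

// grantProjectRole gives a user a role on a project, replacing the
// deprecated AddUserGrant flow.
func grantProjectRole(ctx context.Context, c AuthorizationClient) error {
	id, err := c.CreateAuthorization(ctx, &CreateAuthorizationRequest{
		UserID:    "user-id",
		ProjectID: "project-id",
		RoleKeys:  []string{"project.reader"},
	})
	if err != nil {
		return err
	}
	fmt.Println("created authorization", id)
	return nil
}
```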
|
|
1d13911a4a |
Merge branch 'main' into next-rc
# Conflicts:
#	cmd/defaults.yaml
#	cmd/setup/config.go
#	cmd/setup/setup.go
#	cmd/start/start.go
#	docs/yarn.lock
#	go.mod
#	go.sum
#	internal/api/grpc/action/v2beta/execution.go
#	internal/api/grpc/action/v2beta/query.go
#	internal/api/grpc/action/v2beta/server.go
#	internal/api/grpc/action/v2beta/target.go
#	internal/api/grpc/feature/v2/converter.go
#	internal/api/grpc/feature/v2/converter_test.go
#	internal/api/grpc/feature/v2/integration_test/feature_test.go
#	internal/api/grpc/feature/v2beta/converter.go
#	internal/api/grpc/feature/v2beta/converter_test.go
#	internal/api/grpc/feature/v2beta/integration_test/feature_test.go
#	internal/api/oidc/key.go
#	internal/api/oidc/op.go
#	internal/command/idp_intent_test.go
#	internal/command/instance_features.go
#	internal/command/instance_features_test.go
#	internal/command/system_features.go
#	internal/command/system_features_test.go
#	internal/feature/feature.go
#	internal/feature/key_enumer.go
#	internal/integration/client.go
#	internal/query/instance_features.go
#	internal/query/system_features.go
#	internal/repository/feature/feature_v2/feature.go
#	proto/zitadel/feature/v2/instance.proto
#	proto/zitadel/feature/v2/system.proto
#	proto/zitadel/feature/v2beta/instance.proto
#	proto/zitadel/feature/v2beta/system.proto |
||
|
|
9ebf2316c6 |
feat: exchange gRPC server implementation to connectRPC (#10145)
# Which Problems Are Solved
The currently maintained gRPC server in combination with a REST (grpc) gateway is getting harder and harder to maintain. Additionally, there have been and still are issues with supporting / displaying `oneOf`s correctly. We therefore decided to exchange the server implementation for connectRPC, which, apart from supporting connect as a protocol, also allows "standard" gRPC clients as well as HTTP/1.1 / REST-like clients (e.g. curl) to directly call the server without any additional gateway.
# How the Problems Are Solved
- All v2 services are moved to the connectRPC implementation. (v1 services are still served as pure grpc servers)
- All gRPC server interceptors were migrated / copied to a corresponding connectRPC interceptor.
- API.ListGrpcServices and API.ListGrpcMethods were changed to include the connect services and endpoints.
- gRPC server reflection was changed to a `StaticReflector` using the `ListGrpcServices` list.
- The `grpc.Server` interface was split into different combinations to be able to handle the different cases (grpc server and prefixed gateway, connect server with grpc gateway, connect server only, ...)
- Docs of services serving connectRPC only with no additional gateway (instance, webkey, project, app, org v2 beta) are changed to expose that
- Since the plugin is not yet available on buf, we download it using the `postinstall` hook of the docs
# Additional Changes
- WebKey service is added as a v2 service (in addition to the current v2beta)
# Additional Context
closes #9483
---------
Co-authored-by: Elio Bischof <elio@zitadel.com> |
||
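As a sketch of why no extra gateway is needed: a connect-go generated handler serves Connect, gRPC and gRPC-Web on one path, and h2c lets gRPC's HTTP/2 share the same listener. The generated package in the comment is an assumption, not ZITADEL's actual wiring:

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	// With generated connect-go code this would be:
	//   path, handler := userv2connect.NewUserServiceHandler(&userServer{})
	//   mux.Handle(path, handler)
	// Plain HTTP/1.1 clients like curl can then POST JSON to the same path.

	// h2c serves HTTP/2 without TLS, so gRPC clients work on this listener too.
	http.ListenAndServe(":8080", h2c.NewHandler(mux, &http2.Server{}))
}
```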
|
|
82cd1cee08 |
fix(service ping): correct endpoint, validate and randomize default interval (#10166)
# Which Problems Are Solved
The production endpoint of the service ping was wrong. Additionally, we discussed in the sprint review that we could randomize the default interval to prevent all systems from reporting data at the very same time, and also require a minimal interval.
# How the Problems Are Solved
- Fixed the endpoint
- If the interval is set to @daily (the default), we generate a random time (minute, hour) in cron format.
- Check that the interval is more than 30min and return an error if not.
- Fixed YAML indent on `ResourceCount`
# Additional Changes
None
# Additional Context
as discussed internally |
||
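The described behavior could be sketched like this; the function name and exact validation are assumptions:

```go
package ping

import (
	"fmt"
	"math/rand"
	"time"
)

// servicePingSchedule turns the configured interval into a cron expression.
// The @daily default gets a random minute and hour so that not all systems
// report at the very same time; intervals of 30 minutes or less are rejected.
func servicePingSchedule(interval string) (string, error) {
	if interval == "@daily" {
		return fmt.Sprintf("%d %d * * *", rand.Intn(60), rand.Intn(24)), nil
	}
	d, err := time.ParseDuration(interval)
	if err != nil {
		return "", err
	}
	if d <= 30*time.Minute {
		return "", fmt.Errorf("service ping interval must be more than 30m, got %s", d)
	}
	return fmt.Sprintf("@every %s", d), nil
}
```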
|
|
26ec29a513 |
chore(deps): upgrade oidc and chi for dependabot alert (#10160)
# Which Problems Are Solved
Solve dependabot alerts for Go packages.
# How the Problems Are Solved
- Upgrade to the latest github.com/zitadel/oidc, which already pulls the fixed version of chi.
- Upgrade mapstructure
# Additional Changes
- none
# Additional Context
- https://github.com/zitadel/zitadel/security/dependabot/323
- https://github.com/zitadel/zitadel/security/dependabot/324 |
||
|
|
12656235e2 |
chore: fix login image with sha release (#10157)
# Which Problems Are Solved
Fixes the releasing of multi-architecture login images.
# How the Problems Are Solved
- The login-container workflow extends the bake definition with a file docker-bake-release.hcl which adds the platforms linux/arm and linux/amd to all relevant build targets. The technique used is similar to how the docker metadata action allows extending the bake definitions.
- The local login tag is moved to the metadata bake target, which is always inherited and overwritten in the pipeline
- Packages write permission is added
# Additional Changes
- The MIT license is noted in container labels and annotations
- The image is built from root so that the local proto files are used
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> |
||
|
|
47f0486ee8 |
fix(login): email or phone query, session context from loginname (#10158)
This PR fixes an issue where the orQuery for phone and email was not correctly set. |
||
|
|
8c39779533 |
chore(deps): bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 in /login/apps/login-test-acceptance/idp/oidc in the go_modules group across 1 directory (#10152)
Bumps the go_modules group with 1 update in the /login/apps/login-test-acceptance/idp/oidc directory: [github.com/go-chi/chi/v5](https://github.com/go-chi/chi). Updates `github.com/go-chi/chi/v5` from 5.2.1 to 5.2.2 <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/go-chi/chi/releases">github.com/go-chi/chi/v5's releases</a>.</em></p> <blockquote> <h2>v5.2.2</h2> <h2>What's Changed</h2> <ul> <li>Use strings.Cut in a few places by <a href="https://github.com/JRaspass"><code>@JRaspass</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/971">go-chi/chi#971</a></li> <li>Fix non-constant format strings in t.Fatalf by <a href="https://github.com/JRaspass"><code>@JRaspass</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/972">go-chi/chi#972</a></li> <li>Apply fieldalignment fixes to optimize struct memory layout by <a href="https://github.com/pixel365"><code>@pixel365</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/974">go-chi/chi#974</a></li> <li>go 1.24 by <a href="https://github.com/pkieltyka"><code>@pkieltyka</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/977">go-chi/chi#977</a></li> <li>chore: delint ioutil usage by <a href="https://github.com/costela"><code>@costela</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/962">go-chi/chi#962</a></li> <li>Fixed typo in Router interface definition by <a href="https://github.com/mithileshgupta12"><code>@mithileshgupta12</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/958">go-chi/chi#958</a></li> <li>Add support for TinyGo by <a href="https://github.com/efraimbart"><code>@efraimbart</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/978">go-chi/chi#978</a></li> <li>Exclude middleware/profiler.go in TinyGo, as there's no net/http/pprof pkg by <a href="https://github.com/cxjava"><code>@cxjava</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/982">go-chi/chi#982</a></li> <li>Make use of strings.Cut by <a href="https://github.com/scop"><code>@scop</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/1005">go-chi/chi#1005</a></li> <li>Change install command format to code block by <a href="https://github.com/sglkc"><code>@sglkc</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/1001">go-chi/chi#1001</a></li> <li>Correct documentation by <a href="https://github.com/mrdomino"><code>@mrdomino</code></a> in <a href="https://redirect.github.com/go-chi/chi/pull/992">go-chi/chi#992</a></li> </ul> <h2>Security fix</h2> <ul> <li>Fixes <a href="https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93">GHSA-vrw8-fxc6-2r93</a> - "Host Header Injection Leads to Open Redirect in RedirectSlashes" <a href=" |
||
|
|
f93a35c7a8 |
feat: implement service ping (#10080)
This PR is still WIP and needs changes to at least the tests.
# Which Problems Are Solved
To be able to report analytical / telemetry data from deployed Zitadel systems back to a central endpoint, we designed a "service ping" functionality. See also https://github.com/zitadel/zitadel/issues/9706. This PR adds the first implementation to allow collecting base data as well as reporting the amount of resources such as organizations, users per organization, and more.
# How the Problems Are Solved
- Added a worker to handle the different `ReportType` variations.
- Schedule a periodic job to start a `ServicePingReport`
- Configuration added to allow customization of what data will be reported
- Setup step to generate and store a `systemID`
# Additional Changes
None
# Additional Context
relates to #9869 |
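For illustration, scheduling such a periodic report with River could look roughly like this; the job kind and args type are assumptions, not the actual ZITADEL types:

```go
package ping

import (
	"time"

	"github.com/riverqueue/river"
)

// ServicePingReportArgs is an assumed job payload; the real worker handles
// the different ReportType variations.
type ServicePingReportArgs struct{}

func (ServicePingReportArgs) Kind() string { return "service_ping_report" }

// periodicReport schedules a ServicePingReport once per day.
func periodicReport() *river.PeriodicJob {
	return river.NewPeriodicJob(
		river.PeriodicInterval(24*time.Hour),
		func() (river.JobArgs, *river.InsertOpts) {
			return ServicePingReportArgs{}, nil
		},
		&river.PeriodicJobOpts{RunOnStart: false},
	)
}
```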