Commit Graph

1898 Commits

Author SHA1 Message Date
Gayathri Vijayan
fe3ccc85d6 fix: invite code generation after multiple verification failures (#10323)

# Which Problems Are Solved

If a wrong verification code is used three or more times during
verification, or if the verification code is expired, the user state is
marked as
[deleted](https://github.com/zitadel/zitadel/blob/main/internal/command/user_v2_invite_model.go#L69).
This prevents the creation of a new code with the following
[error](https://github.com/zitadel/zitadel/blob/main/internal/command/user_v2_invite.go#L60):
`Errors.User.NotFound`.
This PR aims to fix this bug.  

# How the Problems Are Solved

This issue is solved by invalidating the previously issued invite code
and setting `UserV2InviteWriteModel.CodeReturned` to `false`, as
sketched below.
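
A minimal sketch of the idea, using hypothetical write-model and helper names (the real command code in `user_v2_invite.go` differs in detail):

```go
package invitesketch

import "context"

// inviteWriteModel is an illustrative stand-in for UserV2InviteWriteModel.
type inviteWriteModel struct {
	CodeReturned bool
}

// createInviteCode sketches the fix: instead of treating a user with three
// failed (or expired) verifications as not found, the previously issued code
// is invalidated and CodeReturned is reset, so a new code can be generated.
func createInviteCode(ctx context.Context, wm *inviteWriteModel, pushInvalidated func(context.Context) error) error {
	if wm.CodeReturned {
		if err := pushInvalidated(ctx); err != nil { // invalidate the old code
			return err
		}
		wm.CodeReturned = false
	}
	// ... push the event that creates and returns the new invite code
	return nil
}
```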

# Additional Changes
N/A

# Additional Context
- Closes #9860 
- Follow-up: API doc update
2025-07-24 21:09:48 +02:00
Gayathri Vijayan
8fff45d8f4 fix(scim): add a metadata config to ignore random password sent during SCIM create (#10296)

# Which Problems Are Solved

Okta sends a random password in the request to create a user during SCIM
provisioning, irrespective of whether the `Sync Password` option is
enabled or disabled on Okta, and this password does not comply with the
default password complexity set in Zitadel. This PR adds a workaround to
create users without issues in such cases.

# How the Problems Are Solved

- A new metadata configuration called
`urn:zitadel:scim:ignorePasswordOnCreate` is added to the Machine User
that is used for provisioning
- During SCIM user creation requests, if
`urn:zitadel:scim:ignorePasswordOnCreate` is set to `true` in the
Machine User's metadata, the password set in the create request is
ignored (see the sketch below)
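
A minimal sketch of the server-side check; the request type and metadata lookup helper are illustrative stand-ins, not the actual zitadel SCIM handler API. Only the metadata key is taken from this PR:

```go
package scimsketch

import "context"

// Key introduced by this PR; value "true" on the provisioning machine user's
// metadata makes Zitadel ignore the password sent in SCIM create requests.
const ignorePasswordOnCreateKey = "urn:zitadel:scim:ignorePasswordOnCreate"

// scimCreateRequest and lookupMachineUserMetadata are illustrative stubs.
type scimCreateRequest struct{ Password string }

func lookupMachineUserMetadata(ctx context.Context, key string) string { return "true" }

func sanitizeCreateRequest(ctx context.Context, req *scimCreateRequest) {
	if lookupMachineUserMetadata(ctx, ignorePasswordOnCreateKey) == "true" {
		// Okta may send a random password regardless of its "Sync Password"
		// setting; drop it so it is never checked against the password policy.
		req.Password = ""
	}
}
```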

# Additional Changes

# Additional Context

The random password is ignored (if set in the metadata) only during
user creation. This change does not affect SCIM password updates.

- Closes #10009

---------

Co-authored-by: Marco A. <marco@zitadel.com>
2025-07-23 10:47:05 +02:00
Recep YILDIZ
25adfd91a2 feat: add Turkish language support (#10198)
- Turkish language support is added. 
- Updated other language files to add Turkish selection.

# Which Problems Are Solved

- Zitadel did not support the Turkish language. Now it does.

# How the Problems Are Solved

- Turkish language files are added, and other language files in the
paths below are updated to add Turkish support:
    - /console/src/assets/i18n/
    - /internal/api/ui/login/static/i18n
    - /internal/notification/static/i18n
    - /internal/static/i18n

# Additional Changes

- Made code/docs changes in the files below:
    - /console/src/app/utils/language.ts
    - /console/src/app/app.module.ts
    - /docs/docs/guides/manage/customize/texts.md
    - /internal/api/ui/login/static/templates/external_not_found_option.html
    - /internal/query/v2-default.json
    - /login/apps/login/src/lib/i18n.ts

---------

Co-authored-by: Marco A. <marco@zitadel.com>
2025-07-18 14:18:22 +02:00
Iraq
870fefe3dc fix(org): adding unique constraints to not allow an org to be added twice with the same id (#10243)
# Which Problems Are Solved

When adding two orgs with the same ID, the API returns a positive
response; later, when the org is projected, it errors because the ID is
already in use.

# How the Problems Are Solved

Check that no org with the specified orgID already exists before adding
events (see the sketch below).
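
A hedged sketch of the guard on the command side; the existence check and event helpers are illustrative stubs, not zitadel's actual command layer:

```go
package orgsketch

import (
	"context"
	"errors"
)

var errOrgAlreadyExists = errors.New("Errors.Org.AlreadyExisting")

// orgExists and pushOrgAddedEvents stand in for the real command-side helpers.
func orgExists(ctx context.Context, orgID string) (bool, error)        { return false, nil }
func pushOrgAddedEvents(ctx context.Context, orgID, name string) error { return nil }

// addOrg checks for an existing org with the given ID before pushing events,
// so a duplicate ID fails at the API instead of later in the projection.
func addOrg(ctx context.Context, orgID, name string) error {
	exists, err := orgExists(ctx, orgID)
	if err != nil {
		return err
	}
	if exists {
		return errOrgAlreadyExists
	}
	return pushOrgAddedEvents(ctx, orgID, name)
}
```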

# Additional Changes

Added additional test case for adding same org with same name twice


# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/10127

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-07-16 10:07:12 +00:00
Gayathri Vijayan
6d11145c77 fix(saml): Push AuthenticationSucceededOnApplication milestone for SAML sessions (#10263)
# Which Problems Are Solved

The SAML session (v2 login) currently does not push an
`AuthenticationSucceededOnApplication` milestone upon the first
successful SAML login. The changes in this PR address this issue.

# How the Problems Are Solved

Add a new function to set the appropriate milestone, and call this
function after a successful SAML request.

# Additional Changes

N/A

# Additional Context

- Closes #9592

---------

Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
2025-07-15 16:03:47 +00:00
Livio Spring
c4e0342c5f chore(tests): fix tests (#10267)
# Which Problems Are Solved

The latest merge on main corrupted some unit tests.

# How the Problems Are Solved

Fix them as intended on the PR.

# Additional Changes

None

# Additional Context

relates to
4c942f3477
2025-07-15 13:09:22 +00:00
Livio Spring
4c942f3477 Merge commit from fork
* fix: require permission to create and update session

* fix: require permission to fail auth requests

* merge main and fix integration tests

* fix merge

* fix integration tests

* fix integration tests

* fix saml permission check
2025-07-15 13:38:00 +02:00
Iraq
d5d6d37a25 test(org): enhancing test for creating org with custom id (#10247)
# Which Problems Are Solved

Enhancing the integration test for creating an org: currently the test
does not check whether the created org has the assigned custom ID; this
change resolves that.
2025-07-14 18:43:50 +02:00
Livio Spring
79fcc2f2b6 chore(tests): name integration test packages correctly to let them run (#10242)
# Which Problems Are Solved

After changing some internal logic that should have failed the
integration tests but didn't, I noticed that some integration tests were
never executed: the make command lists all `integration_test` packages,
but some were named `integration`.

# How the Problems Are Solved

Correct the wrongly named integration test packages (see the sketch below).
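
For illustration, a package clause following the naming the make target keys on; the `integration` build tag is an assumption about how these tests are gated:

```go
//go:build integration

// Package org_integration_test ends in "integration_test" so the make
// command's package listing includes it; packages named just "integration"
// were silently skipped.
package org_integration_test
```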

# Additional Changes

None

# Additional Context

- noticed internally
- backport to 3.x and 2.x
2025-07-14 08:01:36 +00:00
Iraq
23d6d24bc8 fix(login): changed permission check for sending invite code on log in (#10197)
# Which Problems Are Solved

Fixes an issue where users would get an error message when attempting to
resend the invitation code while logging in.

# How the Problems Are Solved

Changing the permission check from `org.write` to
`command.checkPermissionUpdateUser()`, as sketched below.
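
A hedged sketch of the change; the resend handler and the permission helper shown here are illustrative stand-ins for the real command code:

```go
package loginsketch

import "context"

type commands struct{}

// checkPermissionUpdateUser stands in for the user-level permission check
// this PR switches to; previously the handler required org.write instead.
func (c *commands) checkPermissionUpdateUser(ctx context.Context, resourceOwner, userID string) error {
	return nil
}

func (c *commands) resendInviteCode(ctx context.Context, resourceOwner, userID string) error {
	// Before: a check for org.write, which normal end users lack when
	// resending their own invitation code during login.
	if err := c.checkPermissionUpdateUser(ctx, resourceOwner, userID); err != nil {
		return err
	}
	// ... generate and send the new invite code
	return nil
}
```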

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/10100
- backport to 3.x
2025-07-14 09:19:50 +02:00
Livio Spring
1b01fc6c40 fix(api): CORS for connectRPC and grpc-web (#10227)
# Which Problems Are Solved

The CORS handler for the new connectRPC handlers was missing, leading to
unhandled preflight requests and an unusable API for browser-based
calls, e.g. cross-domain gRPC-web requests.

# How the Problems Are Solved

- Added the http CORS middleware to the connectRPC handlers (see the
sketch below).
- Added `Grpc-Timeout`, `Connect-Protocol-Version`, `Connect-Timeout-Ms`
to the default allowed headers (this also improves the old grpc-web
handling)
- Added `Grpc-Status`, `Grpc-Message`, `Grpc-Status-Details-Bin` to the
default exposed headers (this also improves the old grpc-web handling)
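
A minimal sketch of wrapping a connect handler mux with an HTTP CORS middleware carrying the headers listed above; the use of `github.com/rs/cors` is an assumption for illustration, not necessarily the middleware zitadel uses:

```go
package apisketch

import (
	"net/http"

	"github.com/rs/cors"
)

// wrapCORS answers preflight requests for connectRPC / gRPC-web calls by
// allowing and exposing the gRPC- and connect-specific headers.
func wrapCORS(connectMux http.Handler) http.Handler {
	c := cors.New(cors.Options{
		AllowedHeaders: []string{
			"Authorization", "Content-Type",
			"Grpc-Timeout", "Connect-Protocol-Version", "Connect-Timeout-Ms",
		},
		ExposedHeaders: []string{
			"Grpc-Status", "Grpc-Message", "Grpc-Status-Details-Bin",
		},
	})
	return c.Handler(connectMux)
}
```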

# Additional Changes

None

# Additional Context

noticed internally while testing other issues
2025-07-11 09:55:01 +00:00
Livio Spring
8f61b24532 fix(login v1): correctly auto-link users on organizations with suffixed usernames (#10205) 2025-07-11 05:29:27 -04:00
Livio Spring
fefeaea56a perf: improve org and org domain creation (#10232)
# Which Problems Are Solved

When an organization domain is verified, e.g. also when creating a new
organization (incl. the generated domain), existing usernames are
checked to determine whether the domain has been claimed.
The query was not optimized for instances with many users and
organizations.

# How the Problems Are Solved

- Replace the query, which searched over the users projection (with
computed loginnames), with a dedicated query that checks the loginnames
projection directly.
- All occurrences have been updated to use the new query.

# Additional Changes

None

# Additional Context

- reported through support
- requires backport to v3.x
2025-07-10 15:17:49 +00:00
Livio Spring
ffe6d41588 fix(login v1): handle password reset when authenticating with email or phone number (#10228)
# Which Problems Are Solved

When authenticating with email or phone number in the login V1, users
were not able to request a password reset and would be given a "User not
found" error.
This was due to a check of the loginname of the auth request, which in
those cases would not match the user's stored loginname.

# How the Problems Are Solved

Switch to a check of the resolved userID in the auth request. (We still
check the user again, since the ID might be a placeholder for an unknown
user, and we do not want to disclose any information by omitting the
check and thereby reducing the response time.)

# Additional Changes

None

# Additional Context

- reported through support
- requires backport to v3.x
2025-07-10 09:29:26 +02:00
Livio Spring
2821f41c3a fix(login v1): ensure the user's organization is always set into the token context (#10221)
# Which Problems Are Solved

Customers reported that if the session / access token in Console
expired and they re-authenticated, the user list would be empty.
While reproducing the issue, we discovered that the necessary
organization information would be missing in the access token, since it
was already missing in the OIDC session creation when using an
id_token_hint.

# How the Problems Are Solved

- Ensure the user's organization is set in the login v1 auth request.
This is used to create the OIDC and token information.
 
# Additional Changes

None

# Additional Context

- reported by customers
- requires backport to v3.x
2025-07-09 16:51:13 +02:00
Gayathri Vijayan
0ceec60637 fix: sorting options of the ListInstanceTrustedDomains() gRPC endpoint (#10172)

# Which Problems Are Solved

1. The sorting columns in the gRPC endpoint
`ListInstanceTrustedDomains()` are incorrect, and return the following
error when invalid sorting options are chosen:
```
Unknown (2)
ERROR: missing FROM-clause entry for table "instance_domains" (SQLSTATE 42P01)
```

The sorting columns that are valid to list `instance_trusted_domains`
are
* `trusted_domain_field_name_unspecified`
* `trusted_domain_field_name_domain` 
* `trusted_domain_field_name_creation_date`

However, the currently configured sorting columns are 
* `domain_field_name_unspecified`
* `domain_field_name_domain`
* `domain_field_name_primary`
* `domain_field_name_generated`
* `domain_field_name_creation_date`

Configuring the actual columns of `instance_trusted_domains` makes this
endpoint **backward incompatible**. Therefore, the fix in this PR is to
no longer return an error when an invalid sorting column (non-existing
column) is chosen and to sort the results by `creation_date` for invalid
sorting columns.

2. This PR also fixes the `sorting_column` included in the responses of
both the `ListInstanceTrustedDomains()` and `ListInstanceDomains()`
endpoints, as they previously always pointed to the default option
irrespective of the option chosen in the request, i.e.,
* `TRUSTED_DOMAIN_FIELD_NAME_UNSPECIFIED` in case of
`ListInstanceTrustedDomains()`, and
* `DOMAIN_FIELD_NAME_UNSPECIFIED` in case of `ListInstanceDomains()`

# How the Problems Are Solved

* Map the sorting columns to valid columns of `instance_trusted_domains`
(see the mapping sketch below)
- If the sorting column is not one of those columns, the mapping defaults
to `creation_date`
* Set the `sorting_column` explicitly (from the request) in the
`ListInstanceDomainsResponse` and `ListInstanceTrustedDomainsResponse`
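
A hedged sketch of the mapping with the creation-date fallback; the field names are written out illustratively, not as the exact proto enum values used in the code:

```go
package domainsketch

// trustedDomainColumn maps the requested sorting field to a column of the
// instance_trusted_domains projection, defaulting to creation_date for any
// unknown value (e.g. a legacy domain_field_name_* option) instead of
// producing a SQL error.
func trustedDomainColumn(field string) string {
	switch field {
	case "trusted_domain_field_name_domain":
		return "domain"
	case "trusted_domain_field_name_creation_date",
		"trusted_domain_field_name_unspecified":
		return "creation_date"
	default:
		return "creation_date"
	}
}
```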

# Additional Changes

A small fix to return the chosen `sorting_column` in the responses of
the `ListInstanceTrustedDomains()` and `ListInstanceDomains()` endpoints

# Additional Context
- Closes #9839
2025-07-08 16:47:43 +02:00
Stefan Benz
5403be7c4b feat: user profile requests in resource APIs (#10151)
# Which Problems Are Solved

The commands for the resource based v2beta AuthorizationService API are
added.
Authorizations, previously known as user grants, give a user roles in a
specific organization and project context.
The project can be owned or granted.
The given roles can be used to restrict access within the project's
applications.

The commands for the resource based v2beta InternalPermissionService API
are added.
Administrators, previously known as memberships, give a user roles in a
specific organization and project context.
The project can be owned or granted.
The given roles give the user permissions to manage different resources
in Zitadel.

API definitions from https://github.com/zitadel/zitadel/issues/9165 are
implemented.

Contains endpoints for user metadata.

# How the Problems Are Solved

### New Methods

- CreateAuthorization
- UpdateAuthorization
- DeleteAuthorization
- ActivateAuthorization
- DeactivateAuthorization
- ListAuthorizations
- CreateAdministrator
- UpdateAdministrator
- DeleteAdministrator
- ListAdministrators
- SetUserMetadata to set metadata on a user
- DeleteUserMetadata to delete metadata on a user
- ListUserMetadata to query for metadata of a user

## Deprecated Methods

### v1.ManagementService
- GetUserGrantByID
- ListUserGrants
- AddUserGrant
- UpdateUserGrant
- DeactivateUserGrant
- ReactivateUserGrant
- RemoveUserGrant
- BulkRemoveUserGrant

### v1.AuthService
- ListMyUserGrants
- ListMyProjectPermissions

# Additional Changes

- Permission checks for metadata functionality on query and command side
- correct existence checks for resources, for example you can only be an
administrator on an existing project
- combined all member tables into a single query for the administrators
- add permission checks for command and query side functionality
- combined functions on the command side where necessary for easier
maintainability

# Additional Context

Closes #9165

---------

Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-07-04 18:12:59 +02:00
Livio Spring
9ebf2316c6 feat: exchange gRPC server implementation to connectRPC (#10145)
# Which Problems Are Solved

The currently maintained gRPC server in combination with a REST (grpc)
gateway is getting harder and harder to maintain. Additionally, there
have been and still are issues with supporting / displaying `oneOf`s
correctly.
We therefore decided to exchange the server implementation for
connectRPC, which, apart from supporting connect as a protocol, also
lets "standard" gRPC clients as well as HTTP/1.1 / REST-like clients
(e.g. curl) call the server directly without any additional gateway.

# How the Problems Are Solved

- All v2 services are moved to connectRPC implementation. (v1 services
are still served as pure grpc servers)
- All gRPC server interceptors were migrated / copied to a corresponding
connectRPC interceptor.
- API.ListGrpcServices and API.ListGrpcMethods were changed to include
the connect services and endpoints.
- gRPC server reflection was changed to a `StaticReflector` using the
`ListGrpcServices` list.
- The `grpc.Server` interface was split into different combinations to
be able to handle the different cases (grpc server and prefixed gateway,
connect server with grpc gateway, connect server only, ...); a minimal
mounting sketch follows below.
- Docs of services serving connectRPC only with no additional gateway
(instance, webkey, project, app, org v2 beta) are changed to expose that
- since the plugin is not yet available on buf, we download it using the
`postinstall` hook of the docs
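
A minimal sketch of mounting a connect handler so gRPC, connect and plain HTTP/1.1 clients reach the same server; the generated constructor name and the use of h2c are illustrative assumptions, not the exact zitadel wiring:

```go
package serversketch

import (
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

// newUserServiceHandler stands in for a connect-go generated constructor
// (e.g. a NewUserServiceHandler); it returns the mount path and an
// http.Handler that speaks connect, gRPC and gRPC-web on one port.
func newUserServiceHandler() (string, http.Handler) {
	return "/zitadel.user.v2.UserService/", http.NotFoundHandler()
}

func serve() error {
	mux := http.NewServeMux()
	path, handler := newUserServiceHandler()
	mux.Handle(path, handler)
	// h2c lets cleartext HTTP/2 gRPC clients and HTTP/1.1 clients (e.g. curl)
	// reach the same server without a separate gateway.
	return http.ListenAndServe(":8080", h2c.NewHandler(mux, &http2.Server{}))
}
```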

# Additional Changes

- WebKey service is added as v2 service (in addition to the current
v2beta)

# Additional Context

closes #9483

---------

Co-authored-by: Elio Bischof <elio@zitadel.com>
2025-07-04 14:06:20 +00:00
Livio Spring
82cd1cee08 fix(service ping): correct endpoint, validate and randomize default interval (#10166)
# Which Problems Are Solved

The production endpoint of the service ping was wrong.
Additionally, we discussed in the sprint review that we could randomize
the default interval to prevent all systems from reporting data at the
very same time, and also require a minimal interval.

# How the Problems Are Solved

- fixed the endpoint
- If the interval is set to @daily (default), we generate a random time
(minute, hour) in cron format, as sketched below.
- Check that the interval is more than 30min and return an error if not.
- Fixed yaml indent on `ResourceCount`
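
A hedged sketch of the interval handling, assuming the configured interval is either `@daily` or a Go duration string; the real config parsing differs:

```go
package pingsketch

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// cronFromInterval turns "@daily" into a cron entry at a random minute/hour
// so not every system reports at the same time, and rejects anything shorter
// than 30 minutes.
func cronFromInterval(interval string) (string, error) {
	if interval == "@daily" {
		return fmt.Sprintf("%d %d * * *", rand.Intn(60), rand.Intn(24)), nil
	}
	d, err := time.ParseDuration(interval)
	if err != nil {
		return "", err
	}
	if d < 30*time.Minute {
		return "", errors.New("service ping interval must be at least 30 minutes")
	}
	return fmt.Sprintf("@every %s", d), nil
}
```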

# Additional Changes

None

# Additional Context

as discussed internally
2025-07-04 13:45:15 +00:00
Livio Spring
f93a35c7a8 feat: implement service ping (#10080)
This PR is still WIP and needs changes to at least the tests.

# Which Problems Are Solved

To be able to report analytical / telemetry data from deployed Zitadel
systems back to a central endpoint, we designed a "service ping"
functionality. See also https://github.com/zitadel/zitadel/issues/9706.
This PR adds the first implementation, allowing collection of base data
as well as reporting the amount of resources such as organizations,
users per organization and more.

# How the Problems Are Solved

- Added a worker to handle the different `ReportType` variations. 
- Schedule a periodic job to start a `ServicePingReport`
- Configuration added to allow customization of what data will be
reported
- Setup step to generate and store a `systemID`

# Additional Changes

None

# Additional Context

relates to #9869
2025-07-02 13:57:41 +02:00
Livio Spring
71575e8d67 fix(webauthn): allow to use "old" passkeys/u2f credentials on session API (#10150)
# Which Problems Are Solved

To prevent presenting unusable WebAuthN credentials to the user /
browser, we filtered out all credentials that do not match the
requested RP ID. Since credentials set up through Login V1 and Console
do not have an RP ID stored, they never matched. This was previously
intended, since the Login V2 could be served on a separate domain.
The problem is that if it is hosted on the same domain, the credentials
would also be filtered out and the user would not be able to log in.

# How the Problems Are Solved

Change the filtering to also return credentials if no RP ID is stored and
the requested RP ID matches the instance domain (see the sketch below).
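
A minimal sketch of the adjusted filter; the credential type is an illustrative stand-in for the stored WebAuthN token:

```go
package webauthnsketch

type credential struct {
	RPID string // empty for passkeys/U2F set up through Login V1 or Console
}

// filterCredentials keeps credentials whose stored RP ID matches the request,
// and now also legacy credentials without an RP ID when the requested RP ID
// is the instance domain itself.
func filterCredentials(creds []credential, requestedRPID, instanceDomain string) []credential {
	var usable []credential
	for _, c := range creds {
		if c.RPID == requestedRPID || (c.RPID == "" && requestedRPID == instanceDomain) {
			usable = append(usable, c)
		}
	}
	return usable
}
```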

# Additional Changes

None

# Additional Context

Noted internally when testing the login v2
2025-07-02 11:04:59 +00:00
Elio Bischof
a02a534cd2 feat: initial admin PAT has IAM_LOGIN_CLIENT (#10143)
# Which Problems Are Solved

We provide a seamless way to initialize Zitadel and the login together.

# How the Problems Are Solved

In addition to the `IAM_OWNER` role, a set-up admin user also gets the
`IAM_LOGIN_CLIENT` role if it is a machine user with a PAT.

# Additional Changes

- Simplifies the load balancing example, as the intermediate
configuration step is not needed anymore.

# Additional Context

- Depends on #10116 
- Contributes to https://github.com/zitadel/zitadel-charts/issues/332
- Contributes to https://github.com/zitadel/zitadel/issues/10016

---------

Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
2025-07-02 09:14:36 +00:00
Elio Bischof
2928c6ac2b chore(login): migrate nextjs login to monorepo (#10134)
# Which Problems Are Solved

We move the login code to the zitadel repo.

# How the Problems Are Solved

The login repo is added to ./login as a git subtree pulled from the
dockerize-ci branch.
Apart from the login code, this PR contains the changes from #10116

# Additional Context

- Closes https://github.com/zitadel/typescript/issues/474
- Also merges #10116  
- Merging is blocked by failing check because of:
- https://github.com/zitadel/zitadel/pull/10134#issuecomment-3012086106

---------

Co-authored-by: Max Peintner <peintnerm@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Florian Forster <florian@zitadel.com>
2025-07-02 10:04:19 +02:00
Marco A.
fce9e770ac feat: App Keys API v2 (#10140)
# Which Problems Are Solved

This PR *partially* addresses #9450 . Specifically, it implements the
resource based API for app keys.

This PR, together with https://github.com/zitadel/zitadel/pull/10077
completes #9450 .

# How the Problems Are Solved

- Implementation of the following endpoints: `CreateApplicationKey`,
`DeleteApplicationKey`, `GetApplicationKey`, `ListApplicationKeys`
- `ListApplicationKeys` can filter by project, app or organization ID.
Sorting is also possible according to some criteria.
  - All endpoints use permissions V2

# TODO

 - [x] Deprecate old endpoints

# Additional Context

Closes #9450
2025-07-02 07:34:19 +00:00
Livio Spring
64a03fba28 fix(api): return typed saml form post data in idp intent (#10136)

# Which Problems Are Solved

The current user V2 API returns a `[]byte` containing a whole HTML
document including the form on `StartIdentityProviderIntent` for intents
based on form post (e.g. SAML POST bindings). This is not usable for
most clients, as they cannot handle that and render a whole page inside
their app.
For redirect based intents, the URL to which the client needs to
redirect is returned.

# How the Problems Are Solved

- Changed the returned type to a new `FormData` message containing the
url and a `fields` map.
- internal changes:
- Session.GetAuth now returns an `Auth` interface and an error instead of
(content string, redirect bool)
- Auth interface has two implementations: `RedirectAuth` and `FormAuth`
(see the sketch below)
- All uses of the GetAuth function now type switch on the returned auth
object
- A template has been added to the login UI to execute the form post
automatically (as is).
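
A hedged sketch of the interface and type switch described above; the type and field names follow the description but are illustrative, not the exact internal definitions:

```go
package idpsketch

// Auth is returned by Session.GetAuth instead of (content string, redirect bool).
type Auth interface{ isAuth() }

// RedirectAuth carries the URL the client should redirect to.
type RedirectAuth struct{ RedirectURL string }

// FormAuth carries the form-post target URL and the fields to submit,
// matching the new FormData message returned by the API.
type FormAuth struct {
	URL    string
	Fields map[string]string
}

func (RedirectAuth) isAuth() {}
func (FormAuth) isAuth()     {}

// handleAuth shows the type switch callers now perform (sketch only).
func handleAuth(a Auth) (redirectURL string, form *FormAuth) {
	switch t := a.(type) {
	case RedirectAuth:
		return t.RedirectURL, nil
	case FormAuth:
		return "", &t
	default:
		return "", nil
	}
}
```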

# Additional Changes

- Some intent integration tests did not check the redirect URL and were
wrongly configured.

# Additional Context

- relates to zitadel/typescript#410
2025-06-30 15:07:33 +00:00
Tim Möhlmann
4cd52f33eb chore(oidc): remove feature flag for introspection triggers (#10132)
# Which Problems Are Solved

Remove the feature flag that allowed triggers in introspection. This
option was a fallback in case introspection would not function properly
without triggers. The API documentation asked for anyone using this flag
to raise an issue. No such issue was received, hence we concluded it is
safe to remove it.

# How the Problems Are Solved

- Remove flags from the system and instance level feature APIs.
- Remove trigger functions that are no longer used
- Adjust tests that used the flag.

# Additional Changes

- none

# Additional Context

- Closes #10026 
- Flag was introduced in #7356

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-06-30 05:48:04 +00:00
Marco A.
2691dae2b6 feat: App API v2 (#10077)
# Which Problems Are Solved

This PR *partially* addresses #9450. Specifically, it implements the
resource based API for the apps. APIs for app keys are not part of this
PR.

# How the Problems Are Solved

- `CreateApplication`, `PatchApplication` (update) and
`RegenerateClientSecret` endpoints are now unified across all app types:
API, SAML and OIDC apps.
  - All new endpoints have integration tests
  - All new endpoints are using permission checks V2

# Additional Changes

- The `ListApplications` endpoint allows sorting (see protobuf for
details) and filtering by app type (see protobuf).
- The SAML and OIDC update endpoints can now receive requests for partial
updates

# Additional Context

Partially addresses #9450
2025-06-27 17:25:44 +02:00
Tim Möhlmann
016676e1dc chore(oidc): graduate webkey to stable (#10122)
# Which Problems Are Solved

Stabilize the usage of webkeys.

# How the Problems Are Solved

- Remove all legacy signing key code from the OIDC API
- Remove the webkey feature flag from proto
- Remove the webkey feature flag from console
- Cleanup documentation

# Additional Changes

- Resolved some canonical header linter errors in OIDC
- Use the constant for `projections.lock` in the saml package.

# Additional Context

- Closes #10029
- After #10105
- After #10061
2025-06-26 19:17:45 +03:00
Tim Möhlmann
1ebbe275b9 chore(oidc): remove legacy storage methods (#10061)
# Which Problems Are Solved

Stabilize the optimized introspection code and cleanup unused code.

# How the Problems Are Solved

- `oidc_legacy_introspection` feature flag is removed and reserved.
- `OPStorage` methods which are no longer needed have their bodies removed.
- The method definitions need to remain in place so the interface
remains implemented.
  - A panic is thrown in case any such method is still called (see the
sketch below)
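
A sketch of the pattern; the method name and signature are illustrative, not an exact zitadel `OPStorage` method:

```go
package oidcsketch

import "context"

// Server and AuthRequest stand in for the OIDC OP storage implementation and
// its types.
type (
	Server      struct{}
	AuthRequest interface{}
)

// AuthRequestByCode keeps its definition so the OPStorage interface remains
// implemented, but the legacy body is gone: any remaining caller panics.
func (s *Server) AuthRequestByCode(ctx context.Context, code string) (AuthRequest, error) {
	panic("OPStorage.AuthRequestByCode must not be called anymore")
}
```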

# Additional Changes

- A number of `OPStorage` methods related to token creation were already
unused. These are also cleaned up.

# Additional Context

- Closes #10027 
- #7822

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-06-26 08:08:37 +00:00
Tim Möhlmann
fa9de9a0f1 feat: generate webkeys setup step (#10105)
# Which Problems Are Solved

We are preparing to roll out and stabilize webkeys in the next version
of Zitadel. Before removing legacy signing-key code, we must ensure all
existing instances have their webkeys generated.

# How the Problems Are Solved

Add a setup step which generates 2 webkeys for each existing instance
that didn't have webkeys yet.

# Additional Changes

Return an error from the config type-switch, when the type is unknown.

# Additional Context

- Part 1/2 of https://github.com/zitadel/zitadel/issues/10029
- Should be back-ported to v3
2025-06-24 11:41:41 +02:00
Trong Huu Nguyen
3a4298c179 fix(scim): add type attribute to ScimEmail (#9690)
# Which Problems Are Solved

- SCIM PATCH operations for users from Entra ID for the `emails`
attribute fail due to the missing `type` subattribute

# How the Problems Are Solved

- Adds the `type` attribute to the `ScimUser` struct and sets the
default value to `"work"` in the `mapWriteModelToScimUser()` method
(see the sketch below).
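
An illustrative sketch of the email type with its `"work"` default; the struct shape here is a simplification, not the exact zitadel definition:

```go
package scimemailsketch

// ScimEmail sketch: the new Type subattribute lets Entra ID PATCH filters
// like emails[type eq "work"].value resolve.
type ScimEmail struct {
	Value   string `json:"value"`
	Primary bool   `json:"primary"`
	Type    string `json:"type"`
}

// mapEmail mirrors the defaulting done in mapWriteModelToScimUser.
func mapEmail(value string, primary bool) ScimEmail {
	return ScimEmail{Value: value, Primary: primary, Type: "work"}
}
```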

# Additional Changes

# Additional Context

The SCIM handlers for POST and PUT ignore multiple emails and only use
the primary email for a given user, or fall back to the first email if
none are marked as primary. PATCH operations, however, will attempt to
resolve the provided filter in `operations[].path`.

Some services, such as Entra ID, only support patching emails by
filtering for `emails[type eq "(work|home|other)"].value`, which fails
with Zitadel as the ScimUser struct (and thus the generated schema)
doesn't include the `type` field.

This commit adds the `type` field to work around this issue, while still
preserving compatibility with filters such as `emails[primary eq
true].value`.

- https://discord.com/channels/927474939156643850/927866013545025566/1356556668527448191

---------

Co-authored-by: Christer Edvartsen <christer.edvartsen@nav.no>
Co-authored-by: Thomas Siegfried Krampl <thomas.siegfried.krampl@nav.no>
2025-06-19 09:42:44 +00:00
Marco A.
28f7218ea1 feat: Hosted login translation API (#10011)
# Which Problems Are Solved

This PR implements https://github.com/zitadel/zitadel/issues/9850

# How the Problems Are Solved

  - New protobuf definition
  - Implementation of retrieval of system translations
- Implementation of retrieval and persistence of organization and
instance level translations

# Additional Context

- Closes #9850

# TODO

- [x] Integration tests for Get and Set hosted login translation
endpoints
- [x] DB migration test
- [x] Command function tests
- [x] Command util functions tests
- [x] Query function test
- [x] Query util functions tests
2025-06-18 13:24:39 +02:00
Abhinav Sethi
83839fc2ef fix: enable opentelemetry metrics for river queue (#10044)
# Which Problems Are Solved

Right now we have no visibility into the river queue's job processing
times and queue sizes. This makes it difficult to reliably know whether
notifications are actually being published in a reasonable time and
what the current queue size is.

# How the Problems Are Solved
Integrates River's OpenTelemetry middleware with Zitadel's metrics
system by adding the otelriver middleware to the queue configuration.
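
A hedged sketch of the wiring; the `otelriver.NewMiddleware` constructor and the `Middleware` field on `river.Config` are assumptions based on the otelriver and River documentation, not a copy of zitadel's queue setup:

```go
package queuesketch

import (
	"github.com/riverqueue/river"
	"github.com/riverqueue/river/rivertype"
	"github.com/riverqueue/rivercontrib/otelriver"
)

// newQueueConfig sketches adding the otelriver middleware to the river queue
// configuration so job counts and durations show up under /debug/metrics.
func newQueueConfig() *river.Config {
	return &river.Config{
		Middleware: []rivertype.Middleware{
			otelriver.NewMiddleware(&otelriver.MiddlewareConfig{}),
		},
	}
}
```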


# Additional Changes
- Updated dependencies to include required `otelriver` package

# Additional Context

Example output from `/debug/metrics`

<details>
  <summary>output</summary>

# HELP failed_deliveries_json_total Failed JSON message deliveries
# TYPE failed_deliveries_json_total counter
failed_deliveries_json_total{otel_scope_name="",otel_scope_version="",triggering_event_type="user.human.phone.code.added"} 2

[... standard Go runtime (go_*), gRPC server (grpc_server_*), projection (projection_*) and promhttp metrics omitted for brevity ...]

# HELP river_insert_count_total Number of jobs inserted
# TYPE river_insert_count_total counter
river_insert_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1
# HELP river_insert_many_count_total Number of job batches inserted (all jobs are inserted in a batch, but batches may be one job)
# TYPE river_insert_many_count_total counter
river_insert_many_count_total{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 1
# HELP river_insert_many_duration_seconds Duration of job batch insertion
# TYPE river_insert_many_duration_seconds gauge
river_insert_many_duration_seconds{otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",status="ok"} 0.002905666
# HELP river_work_count_total Number of jobs worked
# TYPE river_work_count_total counter
river_work_count_total{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1
river_work_count_total{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"} 1
# HELP river_work_duration_histogram_seconds Duration of job being worked (histogram)
# TYPE river_work_duration_histogram_seconds histogram

[... river insert/work duration histogram buckets omitted; the output is truncated at this point in the original ...]

</details>
1

river_work_duration_histogram_seconds_bucket{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]",le="+Inf"}
1

river_work_duration_histogram_seconds_sum{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745

river_work_duration_histogram_seconds_count{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
1
# HELP river_work_duration_seconds Duration of job being worked
# TYPE river_work_duration_seconds gauge

river_work_duration_seconds{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.029241083

river_work_duration_seconds{attempt="2",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="error",tag="[]"}
0.0408745
# HELP target_info Target metadata
# TYPE target_info gauge

target_info{service_name="ZITADEL",service_version="2025-06-09T13:52:29-04:00",telemetry_sdk_language="go",telemetry_sdk_name="opentelemetry",telemetry_sdk_version="1.35.0"}
1

</details>
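
For reference, the per-attempt averages can be read straight from the `_sum`/`_count` pairs above: the single attempt 1 run took about 29 ms and the attempt 2 run about 41 ms (each count is 1, so the sum equals the average).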

Example Grafana dashboard:
![Screenshot 2025-06-11 at 11 30 06 AM](https://github.com/user-attachments/assets/a2c9b377-8ddd-40b9-a506-7df3b31941da)

- Closes #10043

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-06-12 13:03:25 -04:00
Iraq
77f0a10c1e fix(import/export): fix for deactivated user/organization being imported as active (#9992) 2025-06-11 12:50:31 +01:00
Silvan
4df138286b perf(query): reduce user query duration (#10037)
# Which Problems Are Solved

Querying user(s) on the database consumed a lot of resources and
therefore could impact performance.

# How the Problems Are Solved

Database queries involving the users and loginnames tables were improved,
and an index was added for the user-by-email query.

# Additional Changes

- spellchecks
- updated apis on load tests

# Additional Info

Needs a cherry-pick to v3.
2025-06-06 08:48:29 +00:00
Stefan Benz
647b3b57cf fix: correct id filter for project service (#10035)
# Which Problems Are Solved

The IDs filter definition was changed in another PR but not updated in
the Project service.

# How the Problems Are Solved

Correctly use the IDs filter.

# Additional Changes

Add timeout to the integration tests.

# Additional Context

None
2025-06-05 13:50:21 +00:00
Silvan
6c309d65c6 fix(fields): project by id and resource owner (#10034)
# Which Problems Are Solved

If the `IMPROVED_PERFORMANCE_PROJECT` feature flag was enabled, it was
no longer possible to remove organizations, because the project was
searched in the `eventstore.fields` table without the resource owner.

# How the Problems Are Solved

The search now includes the resource owner.
2025-06-05 11:42:59 +00:00
Iraq
7df4f76f3c feat(api): reworking AddOrganization() API call to return all admins (#9900) 2025-06-05 09:05:35 +00:00
Stefan Benz
85e3b7449c fix: correct permissions for projects on v2 api (#9973)
# Which Problems Are Solved

Permission checks in the project v2beta API did not cover projects and
granted projects correctly.

# How the Problems Are Solved

Correctly add the v1 permission checks to the list queries and add the
correct v2 permission checks for projects.

# Additional Changes

Correct the pre-checks for project grants so that the right resource
owner is used.

# Additional Context

The v2 permission checks for project grants are still outstanding under
#9972.
2025-06-04 11:46:10 +00:00
Elio Bischof
8fc11a7366 feat: user api requests to resource API (#9794)
# Which Problems Are Solved

This pull request addresses a significant gap in the user service v2
API, which currently lacks methods for managing machine users.

# How the Problems Are Solved

This PR adds new API endpoints to the user service v2 to manage machine
users, including their secret, keys, and personal access tokens.
Additionally, there are now `CreateUser` and `UpdateUser` endpoints which
allow creating and updating either a human or a machine user. The
existing `CreateHumanUser` endpoint has been deprecated along with the
corresponding management service endpoints. For details, check the
additional context section.

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9349

## More details
- API changes: https://github.com/zitadel/zitadel/pull/9680
- Implementation: https://github.com/zitadel/zitadel/pull/9763
- Tests: https://github.com/zitadel/zitadel/pull/9771

## Follow-ups

- Metadata: support managing user metadata using resource API
https://github.com/zitadel/zitadel/pull/10005
- Machine token type: support managing the machine token type (migrate
to new enum with zero value unspecified?)

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-06-04 07:17:23 +00:00
Tim Möhlmann
b9c1cdf4ad feat(projections): resource counters (#9979)
# Which Problems Are Solved

Add the ability to keep track of the current counts of projection
resources. We want to prevent calling `SELECT COUNT(*)` on tables, as
that forces a full scan and causes sudden spikes in DB resource usage.

# How the Problems Are Solved

- A `resource_counts` table is added
- Triggers that increment and decrement the counted values on inserts
and deletes
- Triggers that delete all counts of a table when the source table is
TRUNCATEd. This is not in the business logic, but it prevents wrong
counts in case someone wants to force a re-projection.
- Triggers that delete all counts if the parent resource is deleted
- A script to pre-populate the `resource_counts` table when a new source
table is added.

The triggers are reusable for any type of resource, in case we choose to
add more in the future.
Counts are aggregated by a given parent. Currently only `instance` and
`organization` are defined as possible parents. This can later be
extended to other types, such as `project`, should the need arise. A
minimal read-side sketch follows the examples below.

I deliberately chose to use `parent_id` to distinguish it from the
de-facto `resource_owner`, which is usually an organization ID. For
example:

- For users the parent is an organization and the `parent_id` matches
`resource_owner`.
- For organizations the parent is an instance, but the `resource_owner`
is the `org_id`. In this case the `parent_id` is the `instance_id`.
- Applications would have a similar problem, where the parent is a
project, but the `resource_owner` is the `org_id`.
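
To illustrate the read side, here is a minimal sketch of looking up a pre-aggregated count instead of running `SELECT COUNT(*)`. The `resource_counts` table and `parent_id` column come from the description above; the `resource` and `amount` column names and the driver wiring are assumptions for the sake of the example, not the actual schema.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/jackc/pgx/v5/stdlib" // PostgreSQL driver
)

// userCountForOrg reads the pre-aggregated number of users of one organization
// from the hypothetical resource_counts schema instead of scanning the users table.
func userCountForOrg(ctx context.Context, db *sql.DB, orgID string) (int64, error) {
	var n int64
	err := db.QueryRowContext(ctx,
		`SELECT amount FROM resource_counts WHERE parent_id = $1 AND resource = 'user'`,
		orgID,
	).Scan(&n)
	if err == sql.ErrNoRows {
		return 0, nil // no counter row yet simply means zero resources
	}
	return n, err
}

func main() {
	db, err := sql.Open("pgx", "postgres://localhost:5432/zitadel?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	count, err := userCountForOrg(context.Background(), db, "275352271775236102")
	fmt.Println(count, err)
}
```

Because the counts are maintained by the triggers, such a read stays cheap regardless of how large the counted table grows.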


# Additional Context

Closes https://github.com/zitadel/zitadel/issues/9957
2025-06-03 14:15:30 +00:00
Livio Spring
15902f5bc7 fix(cache): prevent org cache overwrite by other instances (#10012)
# Which Problems Are Solved

A customer reported that certain login flows, such as the automatic
redirect to the only configured IdP, would randomly not work. During the
investigation it was discovered that they used the same primary domain
on two different instances. As they used the domain for preselecting the
organization, one would always overwrite the other in the cache. Since
the organization and especially its policies could not be retrieved on
the other instance, it would fall back to the default organization
settings, where the external login and the corresponding IdP were not
configured.

# How the Problems Are Solved

Include the instance ID in the cache key for organizations to prevent
overwrites.
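
A minimal sketch of the idea, with an illustrative key layout (the actual cache connector and key format in ZITADEL may differ): the instance ID becomes part of the organization cache key, so two instances sharing a primary domain no longer collide.

```go
package main

import "fmt"

// orgByDomainCacheKey builds the cache key for looking up an organization by
// its primary domain. Including the instance ID prevents instances that use
// the same primary domain from overwriting each other's cache entries.
func orgByDomainCacheKey(instanceID, primaryDomain string) string {
	return fmt.Sprintf("org:%s:primary_domain:%s", instanceID, primaryDomain)
}

func main() {
	// Same domain, different instances: two distinct cache keys.
	fmt.Println(orgByDomainCacheKey("instance-A", "login.customer.com"))
	fmt.Println(orgByDomainCacheKey("instance-B", "login.customer.com"))
}
```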

# Additional Changes

None

# Additional Context

- found because of a support request
- requires backport to 2.70.x, 2.71.x and 3.x
2025-06-03 14:48:15 +02:00
Iraq
ae1a2e93c1 feat(api): moving organization API resourced based (#9943) 2025-06-02 16:27:53 +00:00
Iraq
b46c41e4bf fix(settings): fix for setting restricted languages (#9947)
# Which Problems Are Solved

Zitadel encounters a migration error when setting `restricted languages`
and fails to start.

# How the Problems Are Solved

The problem is that there is a check verifying that at least one of
the restricted languages is the same as the `default language`; however,
the `default language` in the `authz instance` (where it is pulled from)
is never set.

I've added code to set the `default language` in the `authz instance`.
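
A minimal sketch of the check involved, using plain language codes and illustrative names (ZITADEL's actual implementation works on its instance model): if the default language was never populated, the check can never succeed, which is what happened before this fix.

```go
package main

import "fmt"

// validateRestrictedLanguages requires the instance's default language to be
// part of the allowed (restricted) languages.
func validateRestrictedLanguages(defaultLanguage string, allowed []string) error {
	for _, lang := range allowed {
		if lang == defaultLanguage {
			return nil
		}
	}
	return fmt.Errorf("default language %q must be contained in the restricted languages %v", defaultLanguage, allowed)
}

func main() {
	// Bug: the default language on the authz instance was never set (empty),
	// so the check failed during migration even for a valid configuration.
	fmt.Println(validateRestrictedLanguages("", []string{"en", "de"}))

	// Fix: with the default language populated, the same configuration passes.
	fmt.Println(validateRestrictedLanguages("en", []string{"en", "de"}))
}
```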

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9787

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-06-02 08:40:19 +00:00
Silvan
b660d6ab9a fix(queue): reset projection list before each Register call (#10001)
# Which Problems Are Solved

If Zitadel was started using `start-from-init` or `start-from-setup`,
there were rare cases where a panic occurred when
`Notifications.LegacyEnabled` was set to false. The cause was a list
which was not reset before refilling.

# How the Problems Are Solved

The list is now reset before every refill.
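
A minimal sketch of the pattern (names are illustrative, not the actual queue code): when Register is called more than once, as happens with `start-from-init`/`start-from-setup`, the shared slice has to be reset before it is refilled, otherwise stale entries accumulate.

```go
package main

import "fmt"

// projections is a package-level list that Register refills on every call.
var projections []string

// Register wires the given projection handlers into the queue.
func Register(handlers ...string) {
	projections = projections[:0] // the fix: reset before refilling
	projections = append(projections, handlers...)
}

func main() {
	Register("notification_request")
	Register("notification_request", "notification_retry_request")
	// Without the reset, "notification_request" would appear twice here.
	fmt.Println(projections)
}
```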

# Additional Changes

Ensure all contexts are canceled for the init and setup functions of the
`start-from-init` and `start-from-setup` commands.

# Additional Context

none
2025-06-02 08:16:13 +00:00
Silvan
131f70db34 fix(eventstore): use decimal, correct mirror (#9914)
# Eventstore fixes

- `event.Position` previously used float64, which can lead to [precision
loss](https://github.com/golang/go/issues/47300). The type got replaced
by [a type without precision
loss](https://github.com/jackc/pgx-shopspring-decimal); see the sketch
below.
- The handler reported the wrong error if the current state was updated
and therefore took longer to retry failed events.
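
As a quick illustration of the precision issue (the position value below is made up): float64 cannot represent a high-resolution position exactly, while a decimal type keeps every digit. ZITADEL wires the decimal type into pgx via the linked github.com/jackc/pgx-shopspring-decimal package.

```go
package main

import (
	"fmt"
	"strconv"

	"github.com/shopspring/decimal"
)

func main() {
	// Illustrative event position with sub-microsecond resolution.
	const pos = "1748462058.123456789"

	// float64 offers only ~15-17 significant digits, so the trailing digits
	// of the position are rounded away.
	f, _ := strconv.ParseFloat(pos, 64)
	fmt.Printf("float64: %.9f\n", f)

	// A decimal type preserves the value exactly.
	d, _ := decimal.NewFromString(pos)
	fmt.Println("decimal:", d)
}
```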

# Mirror fixes

- The max age of auth requests can be configured to speed up copying data
from the `auth.auth_requests` table. Auth requests last updated before
the set age will be ignored. The default is 1 month.
- Notification projections are skipped because notifications should be
sent by the source system. The projections are set to the latest
position.
- Ensure that mirror can be executed multiple times.

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-05-28 21:54:18 +00:00
Livio Spring
c097887bc5 fix: validate proto header and provide https enforcement (#9975)
# Which Problems Are Solved

ZITADEL uses the Forwarded or X-Forwarded-Proto header of the
notification-triggering request to build the button link sent in emails
for confirming a password reset with the emailed code. If this header is
overwritten and a user clicks the link to a malicious site in the email,
the secret code can be retrieved and used to reset the user's password
and take over their account.

Accounts with MFA or Passwordless enabled cannot be taken over by this
attack.

# How the Problems Are Solved

- The `X-Forwarded-Proto` header and the `proto` directive of the
Forwarded header are validated (http / https).
- Additionally, when exposing ZITADEL through https, an overwrite to
http is no longer possible (see the sketch below).
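
A minimal sketch of the validation logic described above (function name and signature are illustrative, not ZITADEL's actual code): only http and https are accepted, and a downgrade to http is refused when the instance is exposed over https.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateForwardedProto checks the proto taken from X-Forwarded-Proto or the
// Forwarded header. externalSecure reflects whether ZITADEL is exposed via https.
func validateForwardedProto(proto string, externalSecure bool) (string, error) {
	switch strings.ToLower(strings.TrimSpace(proto)) {
	case "https":
		return "https", nil
	case "http":
		if externalSecure {
			return "", errors.New("https is enforced, refusing downgrade to http")
		}
		return "http", nil
	default:
		return "", fmt.Errorf("invalid forwarded proto %q", proto)
	}
}

func main() {
	fmt.Println(validateForwardedProto("https", true))
	fmt.Println(validateForwardedProto("http", true))                 // downgrade rejected
	fmt.Println(validateForwardedProto("evil://attacker.test", true)) // invalid value rejected
}
```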

# Additional Changes

None

# Additional Context

None
2025-05-28 10:12:27 +02:00
Connor
77b433367e fix(login): Copy to clipboard button in MFA login step now compatible in non-chrome browser (#9880)
related to issue [#9379](https://github.com/zitadel/zitadel/issues/9379)

# Which Problems Are Solved

The copy-to-clipboard button was not compatible with WebKit/Firefox
browsers.

# How the Problems Are Solved

The previous function used addEventListener without a callback function
as the second argument. I simply added the callback function and left
the existing code intact to fix the bug.

# Additional Changes

Added `type=button` to prevent submitting the form when clicking the
button.

# Additional Context

none

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-05-28 06:06:27 +00:00
Livio Spring
4d66a786c8 feat: JWT IdP intent (#9966)
# Which Problems Are Solved

The login v1 allowed using JWTs as an IdP via the JWT IdP. The login v2
uses IdP intents for such cases, which were not yet able to handle JWT
IdPs.

# How the Problems Are Solved

- Added handling of JWT IdPs in `StartIdPIntent` and `RetrieveIdPIntent`
- The redirect returned by the start uses the existing `authRequestID`
and `userAgentID` parameter names for compatibility reasons.
- Added a `/idps/jwt` endpoint to handle the proxied (callback) request,
which extracts and validates the JWT against the configured endpoint (a
minimal handler sketch follows).
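
A rough sketch of such a callback handler, with an assumed header name and a pluggable validation step (the real endpoint verifies the token against the JWT IdP's configured endpoint before succeeding the intent):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// jwtIdPCallback returns a handler for the proxied JWT IdP callback. The header
// name is configured per IdP; "x-custom-jwt" below is only an example value.
func jwtIdPCallback(headerName string, validate func(rawJWT string) error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		raw := r.Header.Get(headerName)
		// A compact JWS consists of three dot-separated parts.
		if raw == "" || strings.Count(raw, ".") != 2 {
			http.Error(w, "missing or malformed JWT", http.StatusBadRequest)
			return
		}
		// The real implementation verifies the signature and claims against the
		// IdP's configured endpoint before succeeding the intent.
		if err := validate(raw); err != nil {
			http.Error(w, "token validation failed", http.StatusUnauthorized)
			return
		}
		fmt.Fprintln(w, "intent succeeded")
	}
}

func main() {
	http.Handle("/idps/jwt", jwtIdPCallback("x-custom-jwt", func(string) error { return nil }))
	_ = http.ListenAndServe(":8080", nil)
}
```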

# Additional Changes

None

# Additional Context

- closes #9758
2025-05-27 16:26:46 +02:00
Livio Spring
833f6279e1 fix: allow invite codes for users with verified mails (#9962)
# Which Problems Are Solved

Users who started the invitation code verification but haven't set up
any authentication method need to be able to do so. This might require
a new invitation code, which was previously not possible, since creation
was prevented for users with verified emails.

# How the Problems Are Solved

- Allow creation of invitation emails for users with verified emails.
- Merged the creation and resend into a single method, defaulting the
urlTemplate, applicationName and authRequestID from the previous code (if
one exists); see the defaulting sketch below. On the user service API,
the `ResendInviteCode` endpoint has been deprecated in favor of
`CreateInviteCode`.
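
A small sketch of the defaulting behaviour described above (struct and function names are illustrative, not the actual command code): empty fields in a new request fall back to the values of the previously issued code.

```go
package main

import "fmt"

// InviteCodeRequest holds the optional template values of an invite code request.
type InviteCodeRequest struct {
	URLTemplate     string
	ApplicationName string
	AuthRequestID   string
}

// withDefaultsFromPrevious fills empty fields from the previously issued code, if any.
func withDefaultsFromPrevious(req InviteCodeRequest, previous *InviteCodeRequest) InviteCodeRequest {
	if previous == nil {
		return req
	}
	if req.URLTemplate == "" {
		req.URLTemplate = previous.URLTemplate
	}
	if req.ApplicationName == "" {
		req.ApplicationName = previous.ApplicationName
	}
	if req.AuthRequestID == "" {
		req.AuthRequestID = previous.AuthRequestID
	}
	return req
}

func main() {
	previous := &InviteCodeRequest{
		URLTemplate:     "https://login.example.com/invite?code={{.Code}}",
		ApplicationName: "Example App",
	}
	fmt.Printf("%+v\n", withDefaultsFromPrevious(InviteCodeRequest{}, previous))
}
```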

# Additional Changes

None

# Additional Context

- Noticed while investigating something internally.
- requires backport to 2.x and 3.x
2025-05-26 13:59:20 +02:00