# Which Problems Are Solved
Currently, ZITADEL supports RP-initiated logout for clients. Back-channel
logout ensures that user sessions are terminated across all connected
applications, even if the user closes their browser or loses
connectivity, providing a more secure alternative for certain use cases.
# How the Problems Are Solved
If the feature is activated and the client used for authentication
has a `back_channel_logout_uri` configured, a
`session_logout.back_channel` will be registered. Once a user terminates
their session, a (notification) handler sends a SET (Security Event
Token, as a form POST) to the registered URI, containing a logout_token
with the user's ID and session ID.
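For illustration, a minimal sketch of a client-side receiver for such a
notification (endpoint path, port, and the elided JWT validation are
placeholders, not part of this change):

```go
package main

import (
	"log"
	"net/http"
)

// backChannelLogout receives the form POST sent on session termination.
// Token validation is elided: a real client must verify the logout_token
// JWT against the issuer's JWKS and check its `events`, `sid` and `sub`
// claims before terminating the local session.
func backChannelLogout(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "invalid form", http.StatusBadRequest)
		return
	}
	token := r.PostFormValue("logout_token")
	if token == "" {
		http.Error(w, "missing logout_token", http.StatusBadRequest)
		return
	}
	log.Printf("received logout_token (%d bytes), terminating local session", len(token))
	w.WriteHeader(http.StatusOK)
}

func main() {
	// The path below is whatever was registered as back_channel_logout_uri.
	http.HandleFunc("/logout/backchannel", backChannelLogout)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```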
- A new feature "back_channel_logout" is added on system and instance
level
- A `back_channel_logout_uri` can be managed on OIDC applications
- Added a `session_logout` aggregate to register and inform about sent
`back_channel` notifications
- Added a `SecurityEventToken` channel and `Form` message type in the
notification handlers
- Added `TriggeredAtOrigin` fields to `HumanSignedOut` and
`TerminateSession` events for notification handling
- Exported various functions and types in the `oidc` package so they can
be reused for token signing in the back_channel notifier.
- To prevent already existing session termination events from being
handled, a setup step sets the `current_states` for the
`projections.notifications_back_channel_logout` projection to the
current position
- [x] requires https://github.com/zitadel/oidc/pull/671
# Additional Changes
- Updated all OTEL dependencies to v1.29.0, since OIDC already updated
some of them to that version.
- The Single Session Termination feature is now correctly checked (fixed
feature mapping)
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/8467
- TODO:
  - Documentation
  - UI to be done: https://github.com/zitadel/zitadel/issues/8469
---------
Co-authored-by: Hidde Wieringa <hidde@hiddewieringa.nl>
# Which Problems Are Solved
When executing many concurrent authentication requests on a single
machine user, we observed performance issues. As the same aggregate was
being searched and written to concurrently, we traced the problem down
to a locking issue on the index in use.
We already optimized the token endpoint by creating a separate OIDC
aggregate.
At the time, we decided to still push a single event to the user
aggregate for the user audit log. See [technical advisory
10010](https://zitadel.com/docs/support/advisory/a10010) for more
details.
However, a recent security fix introduced an additional search query on
the user aggregate, causing the locking issue we found.
# How the Problems Are Solved
Add a feature flag which disables pushing of the `user.token.v2.added`
event. The event carries no essential information and was only added for
informational purposes on the user objects. The
`oidc_session.access_token.added` event is the actual payload event; it
is pushed on the OIDC session aggregate and can still be used for the
audit trail.
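Purely as illustration, a sketch of the event gating this flag
introduces (the `Command` type and flag wiring below are hypothetical
stand-ins, not ZITADEL's internals):

```go
package main

import "fmt"

// Command stands in for an eventstore command; all names here are
// hypothetical, for illustration only.
type Command struct{ Type string }

// buildCommands mirrors the described behavior: the OIDC session event is
// always pushed, the informational user event only when the flag is unset.
func buildCommands(disableUserTokenEvent bool) []Command {
	cmds := []Command{{Type: "oidc_session.access_token.added"}}
	if !disableUserTokenEvent {
		cmds = append(cmds, Command{Type: "user.token.v2.added"})
	}
	return cmds
}

func main() {
	fmt.Println(buildCommands(false)) // both events
	fmt.Println(buildCommands(true))  // only the OIDC session event
}
```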
# Additional Changes
- Fix an event mapper type for
`SystemOIDCSingleV1SessionTerminationEventType`
# Additional Context
- Reported by support request
- https://github.com/zitadel/zitadel/pull/7822 changed the token
aggregate
- https://github.com/zitadel/zitadel/pull/8631 introduced the user state
check
Load test trace graph with `user.token.v2.added` **enabled**. Query
times are steadily increasing:
![image](https://github.com/user-attachments/assets/4aa25055-8721-4e93-b695-625560979909)
Load test trace graph with `user.token.v2.added` **disabled**. Query
times remain constant:
![image](https://github.com/user-attachments/assets/a7657f6c-0c55-401b-8291-453da5d5caf9)
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
The end_session_endpoint currently always requires the userAgent cookie
to be able to terminate sessions created through the hosted login UI.
Only tokens issued through the Login V2 can be used to directly
terminate a specific session without the need of a cookie.
This PR adds the possibility to terminate a single V1 session, or all V1
sessions belonging to the same user agent, without the userAgent cookie,
by providing an id_token as `id_token_hint` that contains the ID of a V1
session as `sid`.
# How the Problems Are Solved
- #8525 added the `sid` claim for id_tokens issued through the login UI
- The `sid` is now checked for the `V1_` prefix; if present, the
userAgentID is queried and, depending on the
`OIDCSingleV1SessionTermination` flag, also the userIDs of all active
sessions from the same user agent (see the sketch after this list)
- The `OIDCSingleV1SessionTermination` flag is added with default value
`false` to keep the existing behavior of terminating all sessions, even
when an id_token_hint is provided
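A minimal sketch of the prefix check described above (names are
illustrative, not ZITADEL's actual internals):

```go
package main

import (
	"fmt"
	"strings"
)

// splitV1SessionID reports whether the sid claim references a V1 session
// and, if so, returns the bare session ID without the prefix.
func splitV1SessionID(sid string) (string, bool) {
	const v1Prefix = "V1_"
	if !strings.HasPrefix(sid, v1Prefix) {
		return "", false
	}
	return strings.TrimPrefix(sid, v1Prefix), true
}

func main() {
	if id, ok := splitV1SessionID("V1_224168838467913963"); ok {
		// Query the session's userAgentID here and, depending on the
		// OIDCSingleV1SessionTermination flag, terminate one or all
		// sessions of that user agent.
		fmt.Println("V1 session:", id)
	}
}
```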
# Additional Changes
- pass `context.Context` into session view functions for querying the
database with that context
# Additional Context
- relates to #8499
- closes #8501
# Which Problems Are Solved
Currently, the OIDC API of ZITADEL only prints parent errors to the
logs, where 4xx statuses are typically logged at warn level and 5xx at
error level. This makes it hard to debug certain errors for clients in
multi-instance environments like ZITADEL Cloud, where there is no direct
access to the logs. In case of support requests we often can't correlate
past log lines to the error that was reported.
This change adds the possibility to return the parent error in the
response to the OIDC client. For the moment this only applies to JSON
body responses, not error redirects to the RP.
# How the Problems Are Solved
- New instance-level feature flag: `debug_oidc_parent_error`
- Use the new `WithReturnParentToClient()` function from the oidc lib,
introduced in https://github.com/zitadel/oidc/pull/629, for all cases
where `WithParent` was already used and the request context is
available (see the sketch below).
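A hedged sketch of the usage (helper names from the oidc package as I
understand them; the boolean argument and feature lookup are
assumptions):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/zitadel/oidc/v3/pkg/oidc"
)

func main() {
	parent := errors.New("db: connection refused")
	// Stand-in for the instance-level debug_oidc_parent_error feature check.
	debugEnabled := true
	// WithReturnParentToClient (zitadel/oidc#629) opts the parent error into
	// the JSON error response instead of only logging it.
	err := oidc.ErrServerError().WithParent(parent).WithReturnParentToClient(debugEnabled)
	fmt.Println(err)
}
```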
# Additional Changes
none
# Additional Context
- Depends on: https://github.com/zitadel/oidc/pull/629
- Related to: https://github.com/zitadel/zitadel/issues/8362
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
Implement a new API service that allows management of OIDC signing web
keys.
This allows users to manage rotation of the instance-level keys, which
are currently rotated based on expiry.
The API accepts the generation of the following key types and
parameters:
- RSA keys with 2048, 3072 or 4096 bit in size and:
  - Signing with SHA-256 (RS256)
  - Signing with SHA-384 (RS384)
  - Signing with SHA-512 (RS512)
- ECDSA keys with:
  - P256 curve
  - P384 curve
  - P512 curve
- ED25519 keys
# How the Problems Are Solved
Keys are serialized for storage using the JSON web key format from the
`jose` library. This is the format that will be used by OIDC for
signing, verification and publication.
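As an illustration, a minimal sketch of producing such a JWK with
go-jose (the module path, key ID, and chosen algorithm are assumptions,
not ZITADEL's actual storage code):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"encoding/json"
	"fmt"

	jose "github.com/go-jose/go-jose/v4"
)

func main() {
	// Generate a P-256 signing key and serialize both halves as JWKs.
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	jwk := jose.JSONWebKey{
		Key:       priv,
		KeyID:     "example-key-id", // illustrative; real key IDs are assigned by the service
		Algorithm: string(jose.ES256),
		Use:       "sig",
	}
	privJSON, _ := json.Marshal(jwk)         // stored and used for signing
	pubJSON, _ := json.Marshal(jwk.Public()) // published on the keys endpoint
	fmt.Println(string(privJSON))
	fmt.Println(string(pubJSON))
}
```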
Each instance can have a number of key pairs. All existing public keys
are meant to be used for token verification and publication on the keys
endpoint. Keys can be activated, and the active private key is meant to
sign new tokens. There is always exactly one active signing key:
1. When the first key for an instance is generated, it is automatically
activated.
2. Activation of the next key automatically deactivates the previously
active key.
3. Keys cannot be manually deactivated from the API.
4. Active keys cannot be deleted.
# Additional Changes
- Query methods that will later be used by the OIDC package are already
implemented, in preparation for #8031
- Fix indentation in the French translation for an instance event
- Move user_schema translations to consistent positions in all
translation files
# Additional Context
- Closes #8030
- Part of #7809
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
If the feature is enabled, the new packages are used to query an org by
ID.
Part of: https://github.com/zitadel/zitadel/issues/7639
* fix: add action v2 execution to features
* fix: update internal/command/instance_features_model.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: merge back main
* fix: rename feature and service
* fix: review changes
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* add token exchange feature flag
* allow setting reason and actor to access tokens
* impersonation
* set token types and scopes in response
* upgrade oidc to working draft state
* fix tests
* audience and scope validation
* id token and JWT as input
* return id tokens
* add grant type token exchange to app config
* add integration tests
* check and deny actors in api calls
* fix instance setting tests by triggering projection on write and cleanup
* insert sleep statements again
* solve linting issues
* add translations
* pin oidc v3.15.0
* resolve comments, add event translation
* fix refresh token test
* use ValidateAuthReqScopes from oidc
* apparently the linter can't make up its mind
* persist actor through refresh tokens and check in tests
* remove unneeded triggers
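For reference, a hedged sketch of an RFC 8693 token exchange request
with an actor token for impersonation (endpoint URL, token values, and
the omitted client authentication are placeholders):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Exchange a subject token while acting as another identity.
	resp, err := http.PostForm("https://issuer.example.com/oauth/v2/token", url.Values{
		"grant_type":         {"urn:ietf:params:oauth:grant-type:token-exchange"},
		"subject_token":      {"<token of the user to impersonate>"},
		"subject_token_type": {"urn:ietf:params:oauth:token-type:access_token"},
		"actor_token":        {"<token of the acting user>"},
		"actor_token_type":   {"urn:ietf:params:oauth:token-type:access_token"},
		"scope":              {"openid"},
	})
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```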
This PR adds the functionality to manage user schemas through the new user schema service.
It includes the possibility to create a basic JSON schema and also provides a way of defining permissions (read, write) for the owner and self context with an annotation.
Further annotations for OIDC claims and SAML attribute mappings will follow.
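For illustration, such a schema with the permission annotation might look as follows (a sketch; treat the exact property and annotation details as assumptions, the started guide is authoritative):

```json
{
  "$schema": "urn:zitadel:schema:v1",
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "urn:zitadel:schema:permission": {
        "owner": "rw",
        "self": "r"
      }
    }
  }
}
```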
A guide on how to create a schema and assign permissions has been started. It will be extended throughout the process of implementing schemas and users based on them.
Note:
This feature is in an early stage and therefore not enabled by default. To test it out, please enable the UserSchema feature flag on your instance / system through the feature service.
* fix: assign instance ID to aggregate ID when converting from v1 to v2 feature
This change fixes a mismatch between v1 and v2 aggregate IDs for instance feature events.
The old v1 used a random aggregate ID, while v2 uses the instance ID as aggregate ID.
The adapter was not mapping this correctly, which resulted in the projections.instance_features table being filled with wrong instance IDs.
Closes #7501
* fix unit test
* feat(api): feature API proto definitions
* update proto based on discussion with @livio-a
* cleanup old feature flag stuff
* authz instance queries
* align defaults
* projection definitions
* define commands and event reducers
* implement system and instance setter APIs
* api getter implementation
* unit test repository package
* command unit tests
* unit test Get queries
* grpc converter unit tests
* migrate the V1 features
* migrate oidc to dynamic features
* projection unit test
* fix instance by host
* fix instance by id data type in sql
* fix linting errors
* add system projection test
* fix behavior inversion
* resolve proto file comments
* rename SystemDefaultLoginInstanceEventType to SystemLoginDefaultOrgEventType so it's consistent with the instance level event
* use write models and conditional set events
* system features integration tests
* instance features integration tests
* error on empty request
* documentation entry
* typo in feature.proto
* fix start unit tests
* solve linting error on key case switch
* remove system defaults after discussion with @eliobischof
* fix system feature projection
* resolve comments in defaults.yaml
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
Even though this is a feature, it's released as a fix so that we can backport it to earlier revisions.
As reported by multiple users, startup of ZITADEL after an update led to downtime and, in the worst case, rollbacks to the previously deployed version.
The problem arises when there are too many events to process after the start of ZITADEL. The root cause is changes on projections (database tables) which must be recomputed. This PR solves this problem by adding a new step to the setup phase which prefills the projections. The step can be enabled by adding the `--init-projections` flag to `setup`, `start-from-init` and `start-from-setup`. Setting this flag results in a potentially longer setup phase, but reduces the risk of the problems mentioned in the paragraph above.
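For example (a sketch; other required flags, such as the master key configuration, are omitted):

```bash
# Prefill the projections during the setup phase before starting ZITADEL
zitadel setup --init-projections

# Or combined with start:
zitadel start-from-setup --init-projections
```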
This implementation increases parallel write capabilities of the eventstore.
Please have a look at the technical advisories: [05](https://zitadel.com/docs/support/advisory/a10005) and [06](https://zitadel.com/docs/support/advisory/a10006).
The implementation of the eventstore push is rewritten, and stored events are migrated to a new table, `eventstore.events2`.
If you are using CockroachDB: make sure that the database user of ZITADEL has the `VIEWACTIVITY` grant. This is used to query events.
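A sketch of the corresponding statement, assuming the database user is named `zitadel`:

```sql
-- CockroachDB: grant the role option used for querying events
ALTER USER zitadel WITH VIEWACTIVITY;
```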
* start feature flags
* base feature events on domain const
* setup default features
* allow setting feature in system api
* allow setting feature in admin api
* set settings in login based on feature
* fix rebasing
* unit tests
* i18n
* update policy after domain discovery
* some changes from review
* check feature and value type