* fix: add action v2 execution to features
* fix: update internal/command/instance_features_model.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: merge back main
* fix: rename feature and service
* fix: review changes
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* add token exchange feature flag
* allow setting reason and actor to access tokens
* impersonation (see the token exchange sketch after this list)
* set token types and scopes in response
* upgrade oidc to working draft state
* fix tests
* audience and scope validation
* id token and jwt as input
* return id tokens
* add grant type token exchange to app config
* add integration tests
* check and deny actors in api calls
* fix instance setting tests by triggering projection on write and cleanup
* insert sleep statements again
* solve linting issues
* add translations
* pin oidc v3.15.0
* resolve comments, add event translation
* fix refreshtoken test
* use ValidateAuthReqScopes from oidc
* apparently the linter can't make up its mind
* persist actor through refresh tokens and check in tests
* remove unneeded triggers
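For illustration, a token exchange call with an actor token could look roughly like the sketch below. The endpoint path, client credentials and tokens are placeholders; the grant type and parameter names come from RFC 8693, not from this PR, so treat it as a hedged example rather than the exact API surface.

```go
// Minimal sketch of an RFC 8693 token exchange request with an actor token
// (impersonation). Endpoint, client credentials and tokens are placeholders.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	form := url.Values{
		"grant_type":         {"urn:ietf:params:oauth:grant-type:token-exchange"},
		"subject_token":      {"<subject access or ID token>"},
		"subject_token_type": {"urn:ietf:params:oauth:token-type:access_token"},
		// The actor token identifies the impersonating user; per RFC 8693 it
		// surfaces as the actor ("act" claim) of the issued token.
		"actor_token":          {"<actor access token>"},
		"actor_token_type":     {"urn:ietf:params:oauth:token-type:access_token"},
		"requested_token_type": {"urn:ietf:params:oauth:token-type:id_token"},
		"scope":                {"openid profile"},
	}

	req, err := http.NewRequest(http.MethodPost,
		"https://<your-instance>/oauth/v2/token", strings.NewReader(form.Encode()))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("<client_id>", "<client_secret>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```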
This PR adds the functionality to manage user schemas through the new user schema service.
It includes the possibility to create a basic JSON schema and also provides a way of defining permissions (read, write) for the owner and self context with an annotation.
Further annotations for OIDC claims and SAML attribute mappings will follow.
A guide on how to create a schema and assign permissions has been started. It will be extended throughout the process of implementing the schema and the users based on it.
Note:
This feature is in an early stage and therefore not enabled by default. To test it out, please enable the UserSchema feature flag on your instance / system through the feature service.
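For illustration, a basic schema with the permission annotation could look roughly like the following sketch. The annotation key and the permission values (`owner`/`self`, `rw`/`r`) are assumptions based on the guide mentioned above and may differ from the final implementation.

```go
// Illustrative only: a basic user schema with read/write permissions defined
// per property via an annotation. The annotation key and permission values
// are assumptions, not a confirmed part of the API.
package main

import "fmt"

const userSchema = `{
  "type": "object",
  "properties": {
    "name": {
      "type": "string",
      "urn:zitadel:schema:permission": {
        "owner": "rw",
        "self": "r"
      }
    },
    "description": {
      "type": "string",
      "urn:zitadel:schema:permission": {
        "owner": "rw"
      }
    }
  }
}`

func main() {
	// The schema is passed as JSON when creating a user schema through the
	// new user schema service.
	fmt.Println(userSchema)
}
```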
* fix: assign instance ID to aggregate ID when converting from v1 to v2 feature
This change fixes a mismatch between v1 and v2 aggregate IDs for instance feature events.
The old v1 used a random aggregate ID, while v2 uses the instance ID as aggregate ID.
The adapter was not mapping this correctly, which resulted in the projections.instance_features table being filled with wrong instance IDs (see the sketch after this block).
Closes #7501
* fix unit test
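Conceptually, the fix boils down to keying the v2 event by the instance ID instead of carrying over the random v1 aggregate ID. The sketch below is hypothetical; the type and field names are illustrative and not ZITADEL's actual ones.

```go
// Hypothetical sketch of the v1 -> v2 feature event conversion.
package feature

type v1FeatureSetEvent struct {
	AggregateID string // random ID in v1
	InstanceID  string
	Key         string
	Value       []byte
}

type v2FeatureSetEvent struct {
	AggregateID string // must equal the instance ID in v2
	InstanceID  string
	Key         string
	Value       []byte
}

func v1ToV2(e v1FeatureSetEvent) v2FeatureSetEvent {
	return v2FeatureSetEvent{
		// The bug: the random v1 aggregate ID was carried over, filling
		// projections.instance_features with wrong instance IDs.
		// The fix: use the instance ID as the aggregate ID.
		AggregateID: e.InstanceID,
		InstanceID:  e.InstanceID,
		Key:         e.Key,
		Value:       e.Value,
	}
}
```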
* feat(api): feature API proto definitions
* update proto based on discussion with @livio-a
* cleanup old feature flag stuff
* authz instance queries
* align defaults
* projection definitions
* define commands and event reducers
* implement system and instance setter APIs
* api getter implementation
* unit test repository package
* command unit tests
* unit test Get queries
* grpc converter unit tests
* migrate the V1 features
* migrate oidc to dynamic features
* projection unit test
* fix instance by host
* fix instance by id data type in sql
* fix linting errors
* add system projection test
* fix behavior inversion
* resolve proto file comments
* rename SystemDefaultLoginInstanceEventType to SystemLoginDefaultOrgEventType so it's consistent with the instance level event
* use write models and conditional set events (see the sketch after this list)
* system features integration tests
* instance features integration tests
* error on empty request
* documentation entry
* typo in feature.proto
* fix start unit tests
* solve linting error on key case switch
* remove system defaults after discussion with @eliobischof
* fix system feature projection
* resolve comments in defaults.yaml
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
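The "write models and conditional set events" item refers to a common event-sourcing pattern: rebuild the current value in a write model and only append a set event when the requested value actually differs. A generic sketch, not ZITADEL's actual types:

```go
// Generic sketch of a write model with a conditional set event; names are
// illustrative, not ZITADEL's actual implementation.
package feature

type SetEvent struct {
	Key   string
	Value bool
}

// WriteModel holds the current state of a single feature flag, rebuilt by
// reducing previously stored events.
type WriteModel struct {
	Key   string
	Value bool
	set   bool
}

func (wm *WriteModel) Reduce(events []SetEvent) {
	for _, e := range events {
		if e.Key == wm.Key {
			wm.Value = e.Value
			wm.set = true
		}
	}
}

// NewSetEvent returns a set event only if the value actually changes,
// avoiding no-op events in the eventstore.
func (wm *WriteModel) NewSetEvent(value bool) (SetEvent, bool) {
	if wm.set && wm.Value == value {
		return SetEvent{}, false
	}
	return SetEvent{Key: wm.Key, Value: value}, true
}
```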
Even though this is a feature, it is released as a fix so that we can backport it to earlier revisions.
As reported by multiple users, starting ZITADEL after an upgrade led to downtime and, in the worst case, rollbacks to the previously deployed version.
The problem arises when there are too many events to process after ZITADEL starts. The root cause is changes to projections (database tables) which must be recomputed. This PR solves the problem by adding a new step to the setup phase which prefills the projections. The step can be enabled by adding the `--init-projections` flag to `setup`, `start-from-init` and `start-from-setup`. Setting this flag can lengthen the setup phase, but it reduces the risk of the problems mentioned above.
This implementation increases parallel write capabilities of the eventstore.
Please have a look at the technical advisories: [05](https://zitadel.com/docs/support/advisory/a10005) and [06](https://zitadel.com/docs/support/advisory/a10006).
The implementation of eventstore.push is rewritten and stored events are migrated to a new table `eventstore.events2`.
If you are using CockroachDB, make sure that the database user of ZITADEL has the `VIEWACTIVITY` grant. It is used to query events.
* start feature flags
* base feature events on domain const
* setup default features
* allow setting feature in system api
* allow setting feature in admin api
* set settings in login based on feature
* fix rebasing
* unit tests
* i18n
* update policy after domain discovery
* some changes from review
* check feature and value type (see the feature check sketch below)
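The "check feature and value type" item boils down to validating that a stored feature value has the expected type before using it, falling back to the configured default otherwise. A generic sketch, not ZITADEL's actual implementation:

```go
// Generic sketch of a typed feature lookup with defaults; names are
// illustrative, not ZITADEL's actual implementation.
package feature

import "fmt"

// Key is a domain constant identifying a feature.
type Key string

const KeyLoginDefaultOrg Key = "login_default_org"

var defaults = map[Key]any{
	KeyLoginDefaultOrg: false,
}

// Bool returns the stored value for the key if it is a bool, the default
// otherwise, and an error if the stored value has the wrong type.
func Bool(stored map[Key]any, key Key) (bool, error) {
	v, ok := stored[key]
	if !ok {
		v = defaults[key]
	}
	b, ok := v.(bool)
	if !ok {
		return false, fmt.Errorf("feature %q: expected bool, got %T", key, v)
	}
	return b, nil
}
```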