# Which Problems Are Solved
Scheduled handlers use `eventstore.InstanceIDs` to get all active
instances within a given timeframe. This function scans through all
events written within that timeframe, which can cause heavy load on the
database.
# How the Problems Are Solved
A new query cache `activeInstances` is introduced, which caches the IDs
of all instances queried by ID or host within the configured timeframe.
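A minimal sketch of such a timeframe-bound ID cache, assuming hypothetical names (the actual implementation lives in the eventstore package and may differ):

```go
package eventstore

import (
	"sync"
	"time"
)

// activeInstances is a sketch of a cache holding the IDs of recently
// queried instances; entries older than the configured timeframe are
// ignored and the cache is capped at a maximum size.
type activeInstances struct {
	mu        sync.Mutex
	timeframe time.Duration
	maxSize   int
	lastSeen  map[string]time.Time
}

func (c *activeInstances) add(instanceID string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.lastSeen) >= c.maxSize {
		c.evictOldestLocked()
	}
	c.lastSeen[instanceID] = time.Now()
}

// ids returns all instance IDs seen within the configured timeframe,
// so scheduled handlers no longer need to scan the events table.
func (c *activeInstances) ids() []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	active := make([]string, 0, len(c.lastSeen))
	for id, seen := range c.lastSeen {
		if time.Since(seen) <= c.timeframe {
			active = append(active, id)
		}
	}
	return active
}

func (c *activeInstances) evictOldestLocked() {
	var oldestID string
	var oldest time.Time
	for id, seen := range c.lastSeen {
		if oldestID == "" || seen.Before(oldest) {
			oldestID, oldest = id, seen
		}
	}
	delete(c.lastSeen, oldestID)
}
```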
# Additional Changes
- Changed `default.yaml`
- Removed `HandleActiveInstances` from custom handler configs
- Added `MaxActiveInstances` to define the maximum number of cached
instance IDs
- Fixed `start-from-init` and `start-from-setup` so that the auth and admin
projections are no longer started twice
- Fixed org cache invalidation to use the correct index
# Additional Context
- part of #8999
# Which Problems Are Solved
There are some problems related to the use of CockroachDB with the new
notification handling (#8931).
See #9002 for details.
# How the Problems Are Solved
- Brought back the previous notification handler as legacy mode.
- Added a configuration to choose between legacy mode and new parallel
workers.
- Enabled legacy mode by default to prevent issues.
# Additional Changes
None
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/9002
- relates to #8931
# Which Problems Are Solved
While running the latest RC / main, we noticed some errors including
context timeouts and rollback issues.
# How the Problems Are Solved
- The transaction context is passed and used for any event being written
and for handling savepoints to be able to handle context timeouts.
- The user projection is not triggered anymore. This will reduce
unnecessary load and potential timeouts if a lot of workers are running.
In case a user is not projected yet, the request event will log an
error and then be skipped / retried on the next run.
- Additionally, the context is checked for cancellation after each event
is processed (see the sketch after this list).
- `latestRetries` now correctly only returns the latest retry events to
be processed
- Default values for notifications have been changed to run workers less
often, with a longer retry delay but a shorter transaction duration.
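A minimal sketch of the per-event context check, with hypothetical `Event` and `handleEvent` placeholders:

```go
package worker

import (
	"context"
	"database/sql"
)

// Event and handleEvent are placeholders for the notification events and
// their reducer.
type Event struct{}

func handleEvent(ctx context.Context, tx *sql.Tx, e Event) error { return nil }

// processEvents handles a batch inside one transaction and stops early
// once the transaction context is canceled or timed out, leaving the
// remaining events to be retried on the next run.
func processEvents(ctx context.Context, tx *sql.Tx, events []Event) error {
	for _, event := range events {
		if err := handleEvent(ctx, tx, event); err != nil {
			return err
		}
		if err := ctx.Err(); err != nil {
			return err
		}
	}
	return nil
}
```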
# Additional Changes
None
# Additional Context
relates to #8931
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
# Which Problems Are Solved
The action v2 messages didn't contain anything to secure the sent
content.
# How the Problems Are Solved
Each Target now has a SigningKey, which can also be newly generated
through the API and returned at creation and through the Get-Endpoints.
There is now an HTTP header "Zitadel-Signature", which is generated with
the SigningKey and payload, and also contains a timestamp to check, with
a tolerance, whether the message took too long to send.
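A sketch of how such a timestamped HMAC signature can be computed and verified; the header layout (`t=<unix>,v1=<hex>`) is an assumption, the authoritative implementation is the one in the pkg/actions package:

```go
package actions

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
	"time"
)

// ComputeSignature MACs "<unix-ts>.<payload>" with the target's SigningKey
// and renders it as "t=<unix-ts>,v1=<hex-mac>" (assumed layout).
func ComputeSignature(signingKey string, payload []byte, now time.Time) string {
	ts := strconv.FormatInt(now.Unix(), 10)
	mac := hmac.New(sha256.New, []byte(signingKey))
	mac.Write([]byte(ts))
	mac.Write([]byte("."))
	mac.Write(payload)
	return fmt.Sprintf("t=%s,v1=%s", ts, hex.EncodeToString(mac.Sum(nil)))
}

// VerifySignature recomputes the MAC and rejects messages whose timestamp
// is outside the allowed tolerance.
func VerifySignature(signingKey string, payload []byte, header string, tolerance time.Duration) error {
	parts := strings.SplitN(header, ",", 2)
	if len(parts) != 2 || !strings.HasPrefix(parts[0], "t=") {
		return fmt.Errorf("malformed signature header")
	}
	unix, err := strconv.ParseInt(strings.TrimPrefix(parts[0], "t="), 10, 64)
	if err != nil {
		return err
	}
	sent := time.Unix(unix, 0)
	if time.Since(sent) > tolerance {
		return fmt.Errorf("message took too long to send")
	}
	if !hmac.Equal([]byte(header), []byte(ComputeSignature(signingKey, payload, sent))) {
		return fmt.Errorf("signature mismatch")
	}
	return nil
}
```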
# Additional Changes
The functionality to create and check the signature is provided in the
pkg/actions package, and can be reused in the SDK.
# Additional Context
Closes #7924
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
The current handling of notifications follows the same pattern as all
other projections:
Created events are handled sequentially (based on "position") by a
handler. During the process, a lot of information is aggregated (user,
texts, templates, ...).
This leads to back pressure on the projection since the handling of
events might take longer than the time before a new event (to be
handled) is created.
# How the Problems Are Solved
- The current user notification handler creates separate notification
events based on the user / session events.
- These events contain all the present and required information
including the userID.
- These notification events get processed by notification workers, which
gather the necessary information (recipient address, texts, templates)
to send out these notifications.
- If a notification fails, a retry event is created based on the current
notification request including the current state of the user (this
prevents race conditions, where a user is changed in the meantime and
the notification already gets the new state).
- The retry event will be handled after a backoff delay. This delay
increases with every attempt.
- If the configured number of attempts is reached or the message expired
(based on config), a cancel event is created, letting the workers know
that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the
notification aggregate and the existing "sent" events for the user /
session object are stored.
- The following is added to the defaults.yaml to allow configuration of
the notification workers:
```yaml
Notifications:
# The amount of workers processing the notification request events.
# If set to 0, no notification request events will be handled. This can be useful when running in
# multi binary / pod setup and allowing only certain executables to process the events.
Workers: 1 # ZITADEL_NOTIFIACATIONS_WORKERS
# The amount of events a single worker will process in a run.
BulkLimit: 10 # ZITADEL_NOTIFIACATIONS_BULKLIMIT
# Time interval between scheduled notifications for request events
RequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_REQUEUEEVERY
# The amount of workers processing the notification retry events.
# If set to 0, no notification retry events will be handled. This can be useful when running in
# multi binary / pod setup and allowing only certain executables to process the events.
RetryWorkers: 1 # ZITADEL_NOTIFIACATIONS_RETRYWORKERS
# Time interval between scheduled notifications for retry events
RetryRequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_RETRYREQUEUEEVERY
# Only instances are projected, for which at least a projection-relevant event exists within the timeframe
# from HandleActiveInstances duration in the past until the projection's current time
# If set to 0 (default), every instance is always considered active
HandleActiveInstances: 0s # ZITADEL_NOTIFIACATIONS_HANDLEACTIVEINSTANCES
# The maximum duration a transaction remains open
# before it stops left folding additional events
# and updates the table.
TransactionDuration: 1m # ZITADEL_NOTIFIACATIONS_TRANSACTIONDURATION
# Automatically cancel the notification after the amount of failed attempts
MaxAttempts: 3 # ZITADEL_NOTIFIACATIONS_MAXATTEMPTS
# Automatically cancel the notification if it cannot be handled within a specific time
MaxTtl: 5m # ZITADEL_NOTIFIACATIONS_MAXTTL
# Failed attempts are retried after a configured delay (with exponential backoff).
# Set a minimum and maximum delay and a factor for the backoff
MinRetryDelay: 1s # ZITADEL_NOTIFIACATIONS_MINRETRYDELAY
MaxRetryDelay: 20s # ZITADEL_NOTIFIACATIONS_MAXRETRYDELAY
# Any factor below 1 will be set to 1
RetryDelayFactor: 1.5 # ZITADEL_NOTIFIACATIONS_RETRYDELAYFACTOR
```
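For illustration, a sketch of how the `MinRetryDelay`, `MaxRetryDelay` and `RetryDelayFactor` settings above could combine into the backoff (hypothetical helper, not the actual worker code):

```go
package notification

import (
	"math"
	"time"
)

// retryDelay returns the delay before the given retry attempt (0-based):
// min * factor^attempt, capped at max. With the defaults above
// (1s, 20s, 1.5) the delays are 1s, 1.5s, 2.25s, ... up to 20s.
func retryDelay(min, max time.Duration, factor float64, attempt int) time.Duration {
	if factor < 1 {
		factor = 1 // any factor below 1 is treated as 1
	}
	d := float64(min) * math.Pow(factor, float64(attempt))
	if d > float64(max) {
		return max
	}
	return time.Duration(d)
}
```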
# Additional Changes
None
# Additional Context
- closes #8931
# Which Problems Are Solved
Organizations are often searched for by ID or primary domain. This
results in many redundant queries and impacts performance.
# How the Problems Are Solved
Cache Organization objects by ID and primary domain.
# Additional Changes
- Adjust integration test config to use all types of cache.
- Adjust integration test lifetimes so the pruner has something to do
while the tests run.
# Additional Context
- Closes #8865
- After #8902
# Which Problems Are Solved
By having default entries in the `Username` and `ClientName` fields, it
was not possible to unset these parameters. Unsetting them is required
for GCP connections.
# How the Problems Are Solved
Set the fields to empty strings.
# Additional Changes
- none
# Additional Context
- none
# Which Problems Are Solved
If a Redis cache has connection issues or any other type of permanent
error, it tanks the responsiveness of ZITADEL.
We currently do not support things like Redis cluster or sentinel. So
adding a simple redis cache improves performance but introduces a single
point of failure.
# How the Problems Are Solved
Implement a [circuit
breaker](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/dn589784(v=pandp.10)?redirectedfrom=MSDN)
as
[`redis.Limiter`](https://pkg.go.dev/github.com/redis/go-redis/v9#Limiter)
by wrapping sony's [gobreaker](https://github.com/sony/gobreaker)
package. This package is picked as it seems well maintained and we
already use their `sonyflake` package.
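A minimal sketch of how gobreaker can be adapted to the `redis.Limiter` interface; the names and the channel-based pairing of `Allow`/`ReportResult` are assumptions, not the actual connector code:

```go
package connector

import (
	"errors"

	"github.com/redis/go-redis/v9"
	"github.com/sony/gobreaker"
)

// Limiter adapts a gobreaker two-step circuit breaker to the
// redis.Limiter interface.
type Limiter struct {
	cb   *gobreaker.TwoStepCircuitBreaker
	done chan func(success bool)
}

var _ redis.Limiter = (*Limiter)(nil)

func NewLimiter(settings gobreaker.Settings) *Limiter {
	return &Limiter{
		cb:   gobreaker.NewTwoStepCircuitBreaker(settings),
		done: make(chan func(success bool), 1024),
	}
}

// Allow is called by go-redis before each command and fails fast while
// the breaker is open.
func (l *Limiter) Allow() error {
	done, err := l.cb.Allow()
	if err != nil {
		return err
	}
	l.done <- done
	return nil
}

// ReportResult feeds the command result back into the breaker so it can
// trip after too many failures. A cache miss (redis.Nil) counts as success.
func (l *Limiter) ReportResult(result error) {
	select {
	case done := <-l.done:
		done(result == nil || errors.Is(result, redis.Nil))
	default:
	}
}
```

Such a limiter would be plugged in through the `Limiter` field of `redis.Options` when the client is constructed.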
# Additional Changes
- The unit tests constructed an unused `redis.Client` and didn't cleanup
the connector. This is now fixed.
# Additional Context
Closes #8864
# Which Problems Are Solved
Fixes small typo in email body during user creation & verification. The
change also includes the removal of some unnecessary white space in the
same yaml file.
# How the Problems Are Solved
Replaces din't with didn't.
![image](https://github.com/user-attachments/assets/48abf38b-4deb-42b7-a85b-91009e19f27f)
Co-authored-by: jtaylor@dingo.com <jtaylor@dingo.com>
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
Add a cache implementation using Redis single mode. This does not add
support for Redis Cluster or sentinel.
# How the Problems Are Solved
Added the `internal/cache/redis` package. All operations occur
atomically, including the setting of secondary indexes, using Lua scripts
where needed.
The [`miniredis`](https://github.com/alicebob/miniredis) package is used
to run unit tests.
# Additional Changes
- Move connector code to `internal/cache/connector/...` and remove
duplicate code from `query` and `command` packages.
- Fix a missed invalidation on the restrictions projection
# Additional Context
Closes #8130
# Which Problems Are Solved
Currently ZITADEL supports RP-initiated logout for clients. Back-channel
logout ensures that user sessions are terminated across all connected
applications, even if the user closes their browser or loses
connectivity, providing a more secure alternative for certain use cases.
# How the Problems Are Solved
If the feature is activated and the client used for the authentication
has a back_channel_logout_uri configured, a
`session_logout.back_channel` will be registered. Once a user terminates
their session, a (notification) handler will send a SET (as a form POST)
to the registered URI, containing a logout_token (with the user's ID and
session ID).
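A sketch of that form POST as defined by the OIDC Back-Channel Logout specification (the helper name is hypothetical; the actual sending happens in the notification handlers):

```go
package notification

import (
	"context"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// sendBackChannelLogout posts the logout_token (a signed JWT carrying the
// user ID and session ID) to the client's back_channel_logout_uri.
func sendBackChannelLogout(ctx context.Context, client *http.Client, logoutURI, logoutToken string) error {
	form := url.Values{"logout_token": {logoutToken}}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, logoutURI, strings.NewReader(form.Encode()))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("back-channel logout rejected: %s", resp.Status)
	}
	return nil
}
```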
- A new feature "back_channel_logout" is added on system and instance
level
- A `back_channel_logout_uri` can be managed on OIDC applications
- Added a `session_logout` aggregate to register and inform about sent
`back_channel` notifications
- Added a `SecurityEventToken` channel and `Form` message type in the
notification handlers
- Added `TriggeredAtOrigin` fields to `HumanSignedOut` and
`TerminateSession` events for notification handling
- Exported various functions and types in the `oidc` package to be able
to reuse them for token signing in the back_channel notifier.
- To prevent that current existing session termination events will be
handled, a setup step is added to set the `current_states` for the
`projections.notifications_back_channel_logout` to the current position
- [x] requires https://github.com/zitadel/oidc/pull/671
# Additional Changes
- Updated all OTEL dependencies to v1.29.0, since OIDC already updated
some of them to that version.
- Single Session Termination feature is correctly checked (fixed feature
mapping)
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/8467
- TODO:
- Documentation
- UI to be done: https://github.com/zitadel/zitadel/issues/8469
---------
Co-authored-by: Hidde Wieringa <hidde@hiddewieringa.nl>
# Which Problems Are Solved
System administrators can block hosts and IPs for HTTP calls in actions.
Blocked IPs could be bypassed by using DNS names that resolve to them.
# How the Problems Are Solved
- Hosts are resolved (DNS lookup) to check whether their corresponding
IP is blocked.
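Conceptually, the check looks roughly like this (hypothetical helper, not the actual actions code):

```go
package actions

import (
	"context"
	"net"
)

// hostBlocked resolves the host and reports whether any of the returned
// IPs falls into one of the denied networks.
func hostBlocked(ctx context.Context, host string, denied []*net.IPNet) (bool, error) {
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip", host)
	if err != nil {
		return false, err
	}
	for _, ip := range ips {
		for _, network := range denied {
			if network.Contains(ip) {
				return true, nil
			}
		}
	}
	return false, nil
}
```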
# Additional Changes
- Added the complete loopback IP address range and the "unspecified"
address to the default `DenyList`
# Which Problems Are Solved
The primary issue addressed in this PR is that the defaults.yaml file
contains escaped characters (like `&lt;` for `<` and `&gt;` for `>`) in
message texts, which prevents valid HTML rendering in certain parts of
the Zitadel platform.
These escaped characters are used in user-facing content (e.g., email
templates or notifications), resulting in improperly displayed text,
where the HTML elements like line breaks or bold text don't render
correctly.
# How the Problems Are Solved
The solution involves replacing the escaped characters with their
corresponding HTML tags in the defaults.yaml file, ensuring that the
HTML renders correctly in the emails or user interfaces where these
messages are displayed.
This update ensures that:
- The HTML in these message templates is rendered properly, improving
the user experience.
- The content looks professional and adheres to web standards for
displaying HTML content.
# Additional Changes
N/A
# Additional Context
N/A
- Closes #8531
Co-authored-by: Max Peintner <max@caos.ch>
# Which Problems Are Solved
We identified the need for caching.
Currently we have a number of places where we use different ways of
caching, like Go maps or LRU caches.
We might also want shared caches in the future, like Redis-based caches
or caches in special SQL tables.
# How the Problems Are Solved
Define a generic Cache interface which allows different implementations
(a simplified sketch follows below the list).
- A noop implementation is provided and enabled as the default.
- An implementation using go maps is provided
- disabled in defaults.yaml
- enabled in integration tests
- Authz middleware instance objects are cached using the interface.
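A simplified sketch of what such an interface can look like; the real interface in `internal/cache` is richer (secondary indexes, deletion, pruning):

```go
package cache

import "context"

// Cache is a simplified sketch of a generic cache interface that allows
// swapping implementations (noop, go map, Redis, SQL, ...).
type Cache[K comparable, V any] interface {
	Get(ctx context.Context, key K) (V, bool)
	Set(ctx context.Context, key K, value V)
	Invalidate(ctx context.Context, keys ...K) error
}

// noop satisfies Cache but never stores anything, so "no caching" is just
// another implementation selected by configuration.
type noop[K comparable, V any] struct{}

var _ Cache[string, any] = noop[string, any]{}

func (noop[K, V]) Get(context.Context, K) (V, bool)       { var zero V; return zero, false }
func (noop[K, V]) Set(context.Context, K, V)              {}
func (noop[K, V]) Invalidate(context.Context, ...K) error { return nil }
```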
# Additional Changes
- Enabled the race flag for the integration test command
- Fix a race condition in the limits integration test client
- Fix a number of flaky integration tests. (Because zitadel is super
fast now!) 🎸🚀
# Additional Context
Related to https://github.com/zitadel/zitadel/issues/8648
# Which Problems Are Solved
Reduce the chance of projection deadlocks. Increasing or disabling the
projection transaction duration solved deadlocks in all reported cases.
# How the Problems Are Solved
Increase the default transaction duration to 1 minute.
Due to the high value it is functionally similar to disabling it;
however, it still provides a safety net for transactions that do freeze,
perhaps due to connection issues with the database.
# Additional Changes
- Integration test uses default.
- Technical advisory
# Additional Context
- Related to https://github.com/zitadel/zitadel/issues/8517
---------
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
As an administrator I want to be able to invite users to my application
with the API V2. Some user data I will already prefill; the user should
add the authentication method themselves (password, passkey, SSO).
# How the Problems Are Solved
- A user can now be created with an email explicitly marked as not verified.
- If a user has no verified email and no authentication method, an
`InviteCode` can be created through the User V2 API.
- the code can be returned or sent through email
- additionally, a `URLTemplate` and an `ApplicationName` can be provided
for the email
- The code can be resent and verified through the User V2 API
- The V1 login allows users to verify and resend the code and set a
password (analogous to user initialization)
- The message text for the user invitation can be customized
# Additional Changes
- `verifyUserPasskeyCode` directly uses `crypto.VerifyCode` (instead of
`verifyEncryptedCode`)
- `verifyEncryptedCode` is removed (unnecessarily queried for the code
generator)
# Additional Context
- closes #8310
- TODO: login V2 will have to implement invite flow:
https://github.com/zitadel/typescript/issues/166
# Which Problems Are Solved
Add a debug API which allows pushing a set of events to be reduced in a
dedicated projection.
The events can carry a sleep duration which simulates a slow query
during projection handling.
# How the Problems Are Solved
- `CreateDebugEvents` allows pushing multiple events which simulate the
lifecycle of a resource. Each event has a `projectionSleep` field, which
issues a `pg_sleep()` statement in the projection handler (see the
sketch after this list):
- Add
- Change
- Remove
- `ListDebugEventsStates` lists the current state of the projection,
optionally with a Trigger
- `GetDebugEventsStateByID` gets the current state of the aggregate ID in
the projection, optionally with a Trigger
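A sketch of what the slow-query simulation in the projection handler amounts to (names are placeholders):

```go
package projection

import (
	"context"
	"database/sql"
	"time"
)

// reduceDebugEvent simulates a slow projection by sleeping inside the
// database before updating the projection row.
func reduceDebugEvent(ctx context.Context, tx *sql.Tx, sleep time.Duration) error {
	if sleep > 0 {
		if _, err := tx.ExecContext(ctx, "SELECT pg_sleep($1)", sleep.Seconds()); err != nil {
			return err
		}
	}
	// ... reduce the Add / Change / Remove event into the debug events projection here ...
	return nil
}
```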
# Additional Changes
- none
# Additional Context
- Allows reproduction of https://github.com/zitadel/zitadel/issues/8517
# Which Problems Are Solved
Use web keys, managed by the `resources/v3alpha/web_keys` API, for OIDC
token signing and verification,
as well as serving the public web keys on the jwks / keys endpoint.
The response header on the keys endpoint now allows caching of the
response. This is "safe" to do since keys can be created ahead of time
and caches have sufficient time to pick up the change before keys get
enabled.
# How the Problems Are Solved
- The web key format is used in the `getSignerOnce` function in the
`api/oidc` package.
- The public key cache is changed to get and store web keys.
- The jwks / keys endpoint returns the combined set of valid "legacy"
public keys and all available web keys.
- Cache-Control max-age default to 5 minutes and is configured in
`defaults.yaml`.
When the web keys feature is enabled, fallback mechanisms are in place
to obtain and convert "legacy" `query.PublicKey` as web keys when
needed. This allows transitioning to the feature without invalidating
existing tokens. A small performance overhead may be noticed on the keys
endpoint, because 2 queries need to be run sequentially. This will
disappear once the feature is stable and the legacy code gets cleaned
up.
# Additional Changes
- Extend legacy key lifetimes so that tests can be run on an existing
database with runs more than 6 hours apart.
- Discovery endpoint returns all supported algorithms when the Web Key
feature is enabled.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/8031
- Part of https://github.com/zitadel/zitadel/issues/7809
- After https://github.com/zitadel/oidc/pull/637
- After https://github.com/zitadel/oidc/pull/638
# Which Problems Are Solved
To have more insight on the performance, CPU and memory usage of
ZITADEL, we want to enable profiling.
# How the Problems Are Solved
- Allow profiling by configuration.
- Provide Google Cloud Profiler as first implementation
# Additional Changes
None.
# Additional Context
There were possible memory leaks reported:
https://discord.com/channels/927474939156643850/1273210227918897152
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
Implement a new API service that allows management of OIDC signing web
keys.
This allows users to manage rotation of the instance-level keys, which
are currently managed based on expiry.
The API accepts the generation of the following key types and
parameters:
- RSA keys with 2048, 3072 or 4096 bit in size and:
- Signing with SHA-256 (RS256)
- Signing with SHA-384 (RS384)
- Signing with SHA-512 (RS512)
- ECDSA keys with
- P256 curve
- P384 curve
- P512 curve
- ED25519 keys
# How the Problems Are Solved
Keys are serialized for storage using the JSON web key format from the
`jose` library. This is the format that will be used by OIDC for
signing, verification and publication.
Each instance can have a number of key pairs. All existing public keys
are meant to be used for token verification and publication on the keys
endpoint. Keys can be activated and the active private key is meant to
sign new tokens. There is always exactly 1 active signing key:
1. When the first key for an instance is generated, it is automatically
activated.
2. Activation of the next key automatically deactivates the previously
active key.
3. Keys cannot be manually deactivated from the API
4. Active keys cannot be deleted
# Additional Changes
- Query methods that later will be used by the OIDC package are already
implemented. Preparation for #8031
- Fix indentation in french translation for instance event
- Move user_schema translations to consistent positions in all
translation files
# Additional Context
- Closes #8030
- Part of #7809
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
# Which Problems Are Solved
The current v3alpha actions APIs don't exactly adhere to the [new
resources API
design](https://zitadel.com/docs/apis/v3#standard-resources).
# How the Problems Are Solved
- **Improved ID access**: The aggregate ID is added to the resource
details object, so accessing resource IDs and constructing proto
messages for resources is easier
- **Explicit Instances**: Optionally, the instance can be explicitly
given in each request
- **Pagination**: A default search limit and a max search limit are
added to the defaults.yaml. They apply to the new v3 APIs (currently
only actions). The search query defaults are changed to ascending by
creation date, because this makes the pagination results the most
deterministic. The creation date is also added to the object details.
The bug with updated creation dates is fixed for executions and targets.
- **Removed Sequences**: Removed Sequence from object details and
ProcessedSequence from search details
# Additional Changes
Object details IDs are checked in unit test only if an empty ID is
expected. Centralizing the details check also makes this internal object
more flexible for future evolutions.
# Additional Context
- Closes#8169
- Depends on https://github.com/zitadel/zitadel/pull/8225
---------
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
# Which Problems Are Solved
There was no default configuration for `DeviceAuth`, which made it
impossible to override it with environment variables.
Additionally, a custom `CharAmount` value would also overwrite the
`DashInterval`.
# How the Problems Are Solved
- added to defaults.yaml
- fixed customization
# Additional Changes
None.
# Additional Context
- noticed during a customer request
# Which Problems Are Solved
ZITADEL currently selects the instance context based on an HTTP header
(see https://github.com/zitadel/zitadel/issues/8279#issue-2399959845) and
checks it against the list of instance domains. Let's call it the
instance or API domain.
For any context-based URL (e.g. OAuth, OIDC, SAML endpoints, links in
emails, ...) the requested domain (instance domain) will be used. Let's
call it the public domain.
In cases of proxied setups, all exposed domains (public domains) require
the domain to be managed as instance domain.
This can either be done using the "ExternalDomain" in the runtime config
or via system API, which requires a validation through CustomerPortal on
zitadel.cloud.
# How the Problems Are Solved
- Two new headers / header list are added:
- `InstanceHostHeaders`: an ordered list (first sent wins), which will
be used to match the instance.
(For backward compatibility: the `HTTP1HostHeader`, `HTTP2HostHeader`
and `forwarded`, `x-forwarded-for`, `x-forwarded-host` are checked
afterwards as well)
- `PublicHostHeaders`: an ordered list (first sent wins), which will be
used as public host / domain. This will be checked against a list of
trusted domains on the instance.
- The middleware intercepts all requests to the API and passes a
`DomainCtx` object with the hosts and protocol into the context
(previously only a computed `origin` was passed)
- HTTP / gRPC servers no longer try to match the headers to instances
themselves, but use the passed `http.DomainContext` in their interceptors.
- The `RequestedHost` and `RequestedDomain` from authz.Instance are
removed in favor of the `http.DomainContext`
- When authenticating to or signing out from Console UI, the current
`http.DomainContext(ctx).Origin` (already checked by instance
interceptor for validity) is used to compute and dynamically add a
`redirect_uri` and `post_logout_redirect_uri`.
- Gateway passes all configured host headers (previously it only passed
`x-zitadel-*`)
- Admin API allows managing trusted domains
# Additional Changes
None
# Additional Context
- part of #8279
- open topics:
- "single-instance" mode
- Console UI
# Which Problems Are Solved
The current v3alpha actions APIs don't exactly adhere to the [new
resources API
design](https://zitadel.com/docs/apis/v3#standard-resources).
# How the Problems Are Solved
- **Breaking**: The current v3alpha actions APIs are removed. This is
breaking.
- **Resource Namespace**: New v3alpha actions APIs for targets and
executions are added under the namespace /resources.
- **Feature Flag**: New v3alpha actions APIs still have to be activated
using the actions feature flag
- **Reduced Executions Overhead**: Executions are managed similarly to
settings according to the new API design: an empty list of targets
basically makes an execution a Noop. So a single method, SetExecution, is
enough to cover all use cases. Noop executions are not returned in
future search requests.
- **Compatibility**: The executions created with previous v3alpha APIs
are still available to be managed with the new executions API.
# Additional Changes
- Removed integration tests which test executions but rely on readable
targets. They are added again with #8169
# Additional Context
Closes #8168
# Which Problems Are Solved
The default terms of service and privacy policy links are applied to all
new ZITADEL instances, also for self-hosters. However, the links'
contents don't apply to self-hosters.
# How the Problems Are Solved
The links are removed from the DefaultInstance section in the
*defaults.yaml* file.
By default, the links are not shown anymore in the hosted login pages.
They can still be configured using the privacy policy.
# Additional Context
- Found because of a support request
# Which Problems Are Solved
The Go standard-library connection pool uses a high number of database
connections.
# How the Problems Are Solved
The standard library connection pool was replaced by `pgxpool.Pool`.
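A minimal sketch of opening the pool with pgxpool (connection string and pool size are placeholders):

```go
package database

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

// newPool opens a pgxpool.Pool instead of a database/sql pool.
func newPool(ctx context.Context, connString string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(connString)
	if err != nil {
		return nil, err
	}
	cfg.MaxConns = 10 // bound the number of database connections explicitly
	return pgxpool.NewWithConfig(ctx, cfg)
}
```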
# Additional Changes
The `db.BeginTx` spans are removed because they cause too much noise in
the traces.
# Additional Context
- part of https://github.com/zitadel/zitadel/issues/7639
# Which Problems Are Solved
Bigger systems need to process many events during the initialisation
phase of the `eventstore.fields` table. During setup, these calls can
time out.
# How the Problems Are Solved
Changed the default behaviour of these projections to not time out and
increased the bulk limit.
# Which Problems Are Solved
Allow verification of imported passwords hashed with plain md5, without
salt. These are password digests typically created by one of:
- `printf "password" | md5sum` on most linux systems.
- PHP's `md5("password")`
- Python3's `hashlib.md5(b"password").hexdigest()`
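For reference, the accepted digest is simply the hex-encoded, unsalted MD5 of the raw password, e.g. in Go:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// plainMD5Digest produces the same value as `printf "password" | md5sum`.
func plainMD5Digest(password string) string {
	sum := md5.Sum([]byte(password))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(plainMD5Digest("password")) // 5f4dcc3b5aa765d61d8327deb882cf99
}
```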
# How the Problems Are Solved
- Upgrade passwap to
[v0.6.0](https://github.com/zitadel/passwap/releases/tag/v0.6.0)
- Add md5plain as a new verifier option in `defaults.yaml`
# Additional Changes
- Updated documentation to explain the difference between `md5` (crypt)
and `md5plain` verifiers.
# Additional Context
- Requested by customer for import case
# Which Problems Are Solved
The init job fails if no database called *postgres* (or *defaultdb* for
CockroachDB) exists.
# How the Problems Are Solved
The value is now configurable, for example by env variable
*ZITADEL_DATABASE_POSTGRES_ADMIN_EXISTINGDATABASE*
# Additional Context
- Closes #5810
This PR extends the user schema service (V3 API) with the possibility to ListUserSchemas and GetUserSchemaByID.
The previously started guide is extended to demonstrate how to retrieve the schema(s) and notes the generated revision property.
* feat: add projections and query side to executions and targets
* feat: add list and get endpoints for targets
* feat: add integration tests for query endpoints target and execution
* fix: linting
* fix: linting
* fix: review changes, renames and corrections
* fix: review changes, renames and corrections
* fix: review changes, renames and corrections
* fix: review changes, renames and corrections
* fix: review changes, renames and corrections
* fix: review changes, renames and corrections
* fix: remove position from list details
This PR adds the functionality to manage user schemas through the new user schema service.
It includes the possibility to create a basic JSON schema and also provides a way of defining permissions (read, write) for owner and self context with an annotation.
Further annotations for OIDC claims and SAML attribute mappings will follow.
A guide on how to create a schema and assign permissions has been started. It will be extended throughout the process of implementing the schema and users based on those.
Note:
This feature is in an early stage and therefore not enabled by default. To test it out, please enable the UserSchema feature flag on your instance / system through the feature service.
* docs: describe DefaultInstance vs FirstInstance
* link to docs
* add better searchable tip to the docs
* add better searchable tip to the docs
* add link
* partial work done
* test IAM membership roles
* org membership tests
* console :(, translations and docs
* fix integration test
* fix tests
* add EnableImpersonation to security policy API
* fix integration test timestamp checking
* add security policy tests and fix projections
* add impersonation setting in console
* add security settings to the settings v2 API
* fix typo
* move impersonation to instance
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* feat(api): feature API proto definitions
* update proto based on discussion with @livio-a
* cleanup old feature flag stuff
* authz instance queries
* align defaults
* projection definitions
* define commands and event reducers
* implement system and instance setter APIs
* api getter implementation
* unit test repository package
* command unit tests
* unit test Get queries
* grpc converter unit tests
* migrate the V1 features
* migrate oidc to dynamic features
* projection unit test
* fix instance by host
* fix instance by id data type in sql
* fix linting errors
* add system projection test
* fix behavior inversion
* resolve proto file comments
* rename SystemDefaultLoginInstanceEventType to SystemLoginDefaultOrgEventType so it's consistent with the instance level event
* use write models and conditional set events
* system features integration tests
* instance features integration tests
* error on empty request
* documentation entry
* typo in feature.proto
* fix start unit tests
* solve linting error on key case switch
* remove system defaults after discussion with @eliobischof
* fix system feature projection
* resolve comments in defaults.yaml
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* feat: add events for execution
* feat: add events for execution and command side
* feat: add events for execution and command side
* feat: add api endpoints for set and delete executions with integration tests
* feat: add integration and unit tests and more existence checks
* feat: add integration and unit tests and more existence checks
* feat: unit tests for includes in executions
* feat: integration tests for includes in executions
* fix: linting
* fix: update internal/api/api.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: update internal/command/command.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: apply suggestions from code review
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: change api return
* fix: change aggregateID with prefix of execution type and add to documentation
* fix: change body in proto for documentation and correct linting
* fix: changed existing check to single query in separate writemodel
* fix: linter changes and list endpoints for conditions in executions
* fix: remove writemodel query on execution set as state before is irrelevant
* fix: testing for exists write models and correction
* fix: translations for errors and event types
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>