# Which Problems Are Solved
The action v2 messages didn't contain anything providing security for the sent content.
# How the Problems Are Solved
Each Target now has a SigningKey, which can also be newly generated
through the API and returned at creation and through the Get-Endpoints.
There is now an HTTP header "Zitadel-Signature", which is generated with the SigningKey and payload, and also contains a timestamp to check, within a tolerance, whether the message took too long to send.
# Additional Changes
The functionality to create and check the signature is provided in the
pkg/actions package, and can be reused in the SDK.
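Below is a minimal sketch of how such a scheme could look. The HMAC-SHA256 construction and the `t=<unix>,v1=<hex>` header format are assumptions for illustration; the canonical implementation lives in the pkg/actions package.

```go
package actions

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
	"strings"
	"time"
)

// ComputeSignature builds a "Zitadel-Signature"-style header value by
// MACing "<timestamp>.<payload>" with the target's SigningKey.
func ComputeSignature(signingKey, payload []byte, now time.Time) string {
	ts := now.Unix()
	mac := hmac.New(sha256.New, signingKey)
	fmt.Fprintf(mac, "%d.%s", ts, payload)
	return fmt.Sprintf("t=%d,v1=%s", ts, hex.EncodeToString(mac.Sum(nil)))
}

// VerifySignature recomputes the MAC and rejects messages whose timestamp
// lies outside the allowed tolerance.
func VerifySignature(signingKey, payload []byte, header string, tolerance time.Duration) error {
	var ts int64
	var sig string
	for _, part := range strings.Split(header, ",") {
		if v, ok := strings.CutPrefix(part, "t="); ok {
			ts, _ = strconv.ParseInt(v, 10, 64)
		} else if v, ok := strings.CutPrefix(part, "v1="); ok {
			sig = v
		}
	}
	if d := time.Since(time.Unix(ts, 0)); d > tolerance || d < -tolerance {
		return fmt.Errorf("message timestamp outside tolerance")
	}
	mac := hmac.New(sha256.New, signingKey)
	fmt.Fprintf(mac, "%d.%s", ts, payload)
	if !hmac.Equal([]byte(hex.EncodeToString(mac.Sum(nil))), []byte(sig)) {
		return fmt.Errorf("signature mismatch")
	}
	return nil
}
```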
# Additional Context
Closes #7924
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
The current handling of notification follows the same pattern as all
other projections:
Created events are handled sequentially (based on "position") by a
handler. During the process, a lot of information is aggregated (user,
texts, templates, ...).
This leads to back pressure on the projection since the handling of
events might take longer than the time before a new event (to be
handled) is created.
# How the Problems Are Solved
- The current user notification handler creates separate notification
events based on the user / session events.
- These events contain all the present and required information
including the userID.
- These notification events get processed by notification workers, which
gather the necessary information (recipient address, texts, templates)
to send out these notifications.
- If a notification fails, a retry event is created based on the current
notification request including the current state of the user (this
prevents race conditions, where a user is changed in the meantime and
the notification already gets the new state).
- The retry event will be handled after a backoff delay. This delay
increases with every attempt.
- If the configured amount of attempts is reached or the message expired (based on config), a cancel event is created, letting the workers know that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the notification aggregate and the existing "sent" events for the user / session object are stored.
- The following is added to the defaults.yaml to allow configuration of
the notification workers:
```yaml
Notifications:
  # The amount of workers processing the notification request events.
  # If set to 0, no notification request events will be handled. This can be useful when running in
  # multi binary / pod setup and allowing only certain executables to process the events.
  Workers: 1 # ZITADEL_NOTIFIACATIONS_WORKERS
  # The amount of events a single worker will process in a run.
  BulkLimit: 10 # ZITADEL_NOTIFIACATIONS_BULKLIMIT
  # Time interval between scheduled notifications for request events
  RequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_REQUEUEEVERY
  # The amount of workers processing the notification retry events.
  # If set to 0, no notification retry events will be handled. This can be useful when running in
  # multi binary / pod setup and allowing only certain executables to process the events.
  RetryWorkers: 1 # ZITADEL_NOTIFIACATIONS_RETRYWORKERS
  # Time interval between scheduled notifications for retry events
  RetryRequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_RETRYREQUEUEEVERY
  # Only instances are projected, for which at least a projection-relevant event exists within the timeframe
  # from HandleActiveInstances duration in the past until the projection's current time
  # If set to 0 (default), every instance is always considered active
  HandleActiveInstances: 0s # ZITADEL_NOTIFIACATIONS_HANDLEACTIVEINSTANCES
  # The maximum duration a transaction remains open
  # before it stops left folding additional events
  # and updates the table.
  TransactionDuration: 1m # ZITADEL_NOTIFIACATIONS_TRANSACTIONDURATION
  # Automatically cancel the notification after the amount of failed attempts
  MaxAttempts: 3 # ZITADEL_NOTIFIACATIONS_MAXATTEMPTS
  # Automatically cancel the notification if it cannot be handled within a specific time
  MaxTtl: 5m # ZITADEL_NOTIFIACATIONS_MAXTTL
  # Failed attempts are retried after a configured delay (with exponential backoff).
  # Set a minimum and maximum delay and a factor for the backoff
  MinRetryDelay: 1s # ZITADEL_NOTIFIACATIONS_MINRETRYDELAY
  MaxRetryDelay: 20s # ZITADEL_NOTIFIACATIONS_MAXRETRYDELAY
  # Any factor below 1 will be set to 1
  RetryDelayFactor: 1.5 # ZITADEL_NOTIFIACATIONS_RETRYDELAYFACTOR
```
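As a rough illustration of how the retry settings interact, the delay before a given attempt could be computed like this (a hypothetical helper, not the actual worker code):

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// retryDelay returns the backoff before the given attempt (1-based),
// clamped between min and max. Factors below 1 are treated as 1,
// matching the RetryDelayFactor comment in the config above.
func retryDelay(min, max time.Duration, factor float64, attempt int) time.Duration {
	if factor < 1 {
		factor = 1
	}
	d := time.Duration(float64(min) * math.Pow(factor, float64(attempt-1)))
	if d > max {
		return max
	}
	return d
}

func main() {
	// With the defaults above (1s min, 20s max, factor 1.5):
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: wait %v\n", attempt, retryDelay(time.Second, 20*time.Second, 1.5, attempt))
	}
	// attempt 1: 1s, attempt 2: 1.5s, attempt 3: 2.25s, attempt 4: 3.375s
}
```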
# Additional Changes
None
# Additional Context
- closes #8931
# Which Problems Are Solved
Add a cache implementation using Redis single mode. This does not add support for Redis Cluster or Sentinel.
# How the Problems Are Solved
Added the `internal/cache/redis` package. All operations occur atomically, including the setting of secondary indexes, using Lua scripts where needed.
The [`miniredis`](https://github.com/alicebob/miniredis) package is used
to run unit tests.
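A minimal sketch of the atomic set-plus-index pattern, assuming the go-redis v9 client; the key names and index layout are illustrative, not the package's actual schema:

```go
package main

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// setWithIndex writes the object under its primary key and registers a
// secondary-index entry in the same atomic step, because Redis executes
// a Lua script as a single unit.
var setWithIndex = redis.NewScript(`
redis.call("SET", KEYS[1], ARGV[1])
redis.call("SADD", KEYS[2], KEYS[1])
return 1
`)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	err := setWithIndex.Run(ctx, rdb,
		[]string{"cache:instance:123", "cache:index:host:example.com"},
		`{"id":"123"}`).Err()
	if err != nil {
		panic(err)
	}
}
```

In tests, the `Addr` can point at a `miniredis` instance instead of a real server, which is how the package's unit tests run without external dependencies.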
# Additional Changes
- Move connector code to `internal/cache/connector/...` and remove
duplicate code from `query` and `command` packages.
- Fix a missed invalidation on the restrictions projection
# Additional Context
Closes #8130
# Which Problems Are Solved
Currently, ZITADEL supports RP-initiated logout for clients. Back-channel logout ensures that user sessions are terminated across all connected applications, even if the user closes their browser or loses connectivity, providing a more secure alternative for certain use cases.
# How the Problems Are Solved
If the feature is activated and the client used for the authentication has a back_channel_logout_uri configured, a `session_logout.back_channel` will be registered. Once a user terminates their session, a (notification) handler will send a SET (form POST) to the registered URI containing a logout_token (with the user's ID and session ID), as sketched after the list below.
- A new feature "back_channel_logout" is added on system and instance
level
- A `back_channel_logout_uri` can be managed on OIDC applications
- Added a `session_logout` aggregate to register and inform about sent
`back_channel` notifications
- Added a `SecurityEventToken` channel and `Form` message type in the notification handlers
- Added `TriggeredAtOrigin` fields to `HumanSignedOut` and
`TerminateSession` events for notification handling
- Exported various functions and types in the `oidc` package to be able
to reuse for token signing in the back_channel notifier.
- To prevent currently existing session termination events from being handled, a setup step is added to set the `current_states` for the `projections.notifications_back_channel_logout` to the current position
- [x] requires https://github.com/zitadel/oidc/pull/671
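The logout_token is a Security Event Token as specified by OIDC Back-Channel Logout. The sketch below only illustrates its shape, using the common github.com/golang-jwt/jwt/v5 package; ZITADEL itself signs the token via the exported `oidc` package functions mentioned above, and all names and claim values here are placeholders:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// buildLogoutToken assembles the claims of a back-channel logout token:
// issuer, audience (the client), the user ID as subject, the session ID,
// and the mandatory back-channel logout "events" claim.
func buildLogoutToken(key *rsa.PrivateKey, issuer, clientID, userID, sessionID, jti string) (string, error) {
	claims := jwt.MapClaims{
		"iss": issuer,
		"aud": clientID,
		"sub": userID,
		"sid": sessionID,
		"iat": time.Now().Unix(),
		"jti": jti, // must be unique per token
		"events": map[string]any{
			"http://schemas.openid.net/event/backchannel-logout": map[string]any{},
		},
	}
	return jwt.NewWithClaims(jwt.SigningMethodRS256, claims).SignedString(key)
}

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	token, err := buildLogoutToken(key, "https://issuer.example.com", "client-id", "user-1", "session-1", "token-1")
	if err != nil {
		panic(err)
	}
	// The handler then form-POSTs "logout_token=<token>" to the client's
	// registered back_channel_logout_uri.
	_ = token
}
```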
# Additional Changes
- Updated all OTEL dependencies to v1.29.0, since OIDC already updated
some of them to that version.
- Single Session Termination feature is correctly checked (fixed feature
mapping)
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/8467
- TODO:
- Documentation
- UI to be done: https://github.com/zitadel/zitadel/issues/8469
---------
Co-authored-by: Hidde Wieringa <hidde@hiddewieringa.nl>
# Which Problems Are Solved
Milestones used existing events from a number of aggregates. OIDC session is one of them. We noticed in load-tests that reducing the oidc_session.added event into the milestone projection is costly because of payload-based conditionals. A milestone is reached once, but even then we remain subscribed to the OIDC events. This requires the projections.current_states to be updated continuously.
# How the Problems Are Solved
The milestone creation is refactored to use dedicated events instead.
The command side decides when a milestone is reached and creates the
reached event once for each milestone when required.
# Additional Changes
In order to prevent reached milestones from being created twice, a migration script is provided. When the old `projections.milestones` table exists, the state is read from there and `v2` milestone aggregate events are created with the original reached and pushed dates.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/8800
# Which Problems Are Solved
System administrators can block hosts and IPs for HTTP calls in actions. By using a DNS name that resolves to a blocked IP, the block could be bypassed.
# How the Problems Are Solved
- Hosts are resolved (DNS lookup) to check whether their corresponding
IP is blocked.
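A minimal sketch of the resolve-then-check approach, using the standard library resolver and a prefix-based deny list (the entries below are examples; the actual defaults live in ZITADEL's configuration):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/netip"
)

// hostBlocked resolves the host and reports whether any resulting IP
// falls into a denied range, so DNS names cannot bypass IP blocks.
func hostBlocked(ctx context.Context, host string, denied []netip.Prefix) (bool, error) {
	addrs, err := net.DefaultResolver.LookupNetIP(ctx, "ip", host)
	if err != nil {
		return false, err
	}
	for _, addr := range addrs {
		for _, prefix := range denied {
			if prefix.Contains(addr.Unmap()) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	denied := []netip.Prefix{
		netip.MustParsePrefix(""), // loopback range
		netip.MustParsePrefix(""),     // unspecified address
	}
	blocked, err := hostBlocked(context.Background(), "localhost", denied)
	if err != nil {
		panic(err)
	}
	fmt.Println("blocked:", blocked) // typically true: localhost resolves to
}
```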
# Additional Changes
- Added the complete loopback IP address range and the "unspecified" address to the default `DenyList`
# Which Problems Are Solved
We identified the need for caching. Currently we have a number of places where we use different ways of caching, like Go maps or LRU. We might also want shared caches in the future, like Redis-based ones or special SQL tables.
# How the Problems Are Solved
Define a generic Cache interface which allows different implementations, as sketched after this list.
- A noop implementation is provided and enabled as default.
- An implementation using Go maps is provided
  - disabled in defaults.yaml
  - enabled in integration tests
- Authz middleware instance objects are cached using the interface.
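A sketch of what such a generic interface can look like, together with the noop variant; method names and signatures are assumptions, the canonical definition lives in `internal/cache`:

```go
package cache

import "context"

// Cache abstracts over different implementations (noop, Go maps, and
// potentially shared backends like Redis or SQL tables).
type Cache[K comparable, V any] interface {
	Get(ctx context.Context, key K) (V, bool)
	Set(ctx context.Context, key K, value V)
	Invalidate(ctx context.Context, keys ...K)
}

// noop satisfies Cache while doing nothing, mirroring the disabled default.
type noop[K comparable, V any] struct{}

func (noop[K, V]) Get(context.Context, K) (V, bool) { var zero V; return zero, false }
func (noop[K, V]) Set(context.Context, K, V)        {}
func (noop[K, V]) Invalidate(context.Context, ...K) {}
```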
# Additional Changes
- Enabled the race flag for the integration test command
- Fix a race condition in the limits integration test client
- Fix a number of flaky integration tests. (Because zitadel is super
fast now!) 🎸🚀
# Additional Context
Related to https://github.com/zitadel/zitadel/issues/8648
# Which Problems Are Solved
Endpoints to maintain email and phone contact on user v3 are not
implemented.
# How the Problems Are Solved
Add 3 endpoints: SetContactEmail, VerifyContactEmail and ResendContactEmailCode.
Add 3 endpoints: SetContactPhone, VerifyContactPhone and ResendContactPhoneCode.
Refactor the logic for how contact information is managed during user creation and update.
# Additional Changes
None
# Additional Context
- part of https://github.com/zitadel/zitadel/issues/6433
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
Add a debug API which allows pushing a set of events to be reduced in a
dedicated projection.
The events can carry a sleep duration which simulates a slow query
during projection handling.
# How the Problems Are Solved
- `CreateDebugEvents` allows pushing multiple events which simulate the lifecycle of a resource. Each event has a `projectionSleep` field, which issues a `pg_sleep()` statement query in the projection handler (see the sketch after this list):
  - Add
  - Change
  - Remove
- `ListDebugEventsStates` lists the current state of the projection, optionally with a Trigger
- `GetDebugEventsStateByID` gets the current state of the aggregate ID in the projection, optionally with a Trigger
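A sketch of how a projection handler can honor the `projectionSleep` field; handler and type names are illustrative:

```go
package projection

import (
	"context"
	"database/sql"
	"time"
)

type debugProjection struct{}

// reduceAdded turns the event's sleep duration into a pg_sleep call inside
// the projection's transaction, simulating a slow statement.
func (p *debugProjection) reduceAdded(ctx context.Context, tx *sql.Tx, sleep time.Duration) error {
	if sleep > 0 {
		if _, err := tx.ExecContext(ctx, "SELECT pg_sleep($1)", sleep.Seconds()); err != nil {
			return err
		}
	}
	// ...continue with the actual projection statements (insert/update/delete).
	return nil
}
```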
# Additional Changes
- none
# Additional Context
- Allows reproduction of https://github.com/zitadel/zitadel/issues/8517
# Which Problems Are Solved
Use a single server instance for API integration tests. This optimizes the time taken for the integration test pipeline, because it allows running tests on multiple packages in parallel. Also, it saves time by not starting and stopping a ZITADEL server for every package.
# How the Problems Are Solved
- Build a binary with `go build -race -cover ....`
- Integration tests only construct clients. The server remains running
in the background.
- The integration package and tested packages now fully utilize the API. No more direct database access through `query` and `command` packages.
- Use Makefile recipes to set up, start and stop the server in the background.
- The binary has the race detector enabled
- Init and setup jobs are configured to halt immediately on a race condition
- Because the server runs in the background, races are only logged. When
the server is stopped and race logs exist, the Makefile recipe will
throw an error and print the logs.
- Makefile recipes include logic to print logs and convert coverage
reports after the server is stopped.
- Some tests need a downstream HTTP server to make requests, like quota and milestones. A new `integration/sink` package creates an HTTP server and uses websockets to forward HTTP requests back to the test packages. The package API uses Go channels for abstraction and easy usage; a simplified sketch follows this list.
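A simplified sketch of that channel-based API: an in-process HTTP server delivers every received call on a channel the test can read from. The real `integration/sink` package forwards requests over websockets to the test packages; names here are illustrative:

```go
package sink

import (
	"io"
	"net/http"
	"net/http/httptest"
)

// Request carries a forwarded HTTP call back to the test.
type Request struct {
	Method string
	Body   []byte
}

// Start returns a test server plus a channel on which every request the
// server receives is delivered.
func Start() (*httptest.Server, <-chan Request) {
	ch := make(chan Request, 16)
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, _ := io.ReadAll(r.Body)
		ch <- Request{Method: r.Method, Body: body}
		w.WriteHeader(http.StatusOK)
	}))
	return srv, ch
}
```

A test can then point a quota or milestone webhook at `srv.URL` and read the forwarded request from the channel, typically inside a `select` with a timeout so a missing call fails fast.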
# Additional Changes
- Integration test files already used the `//go:build integration` directive. In order to properly split integration from unit tests, integration test files need to be in an `integration_test` subdirectory of their package.
- `UseIsolatedInstance` used to overwrite the `Tester.Client` for each instance. Now an `Instance` object is returned with a gRPC client that is connected to the isolated instance's hostname.
- The `Tester` type is now `Instance`. The object is created for the first instance and used by default in any test. Isolated instances are also `Instance` objects and therefore benefit from the same methods and values. The first instance and any other are capable of creating an isolated instance over the system API.
- All test packages run in an isolated instance by calling `NewInstance()`
- Individual tests that use an isolated instance use `t.Parallel()`
# Additional Context
- Closes #6684
- https://go.dev/doc/articles/race_detector
- https://go.dev/doc/build-cover
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
# Which Problems Are Solved
Added functionality so that users based on a user schema can be created and removed.
# How the Problems Are Solved
Added logic and moved APIs so that everything conforms to API v3.
# Additional Changes
- moved the user and userschema APIs to the resources folder
- changed testing and parameters
- some renaming
# Additional Context
closes #7308
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
# Which Problems Are Solved
To have more insight into the performance, CPU and memory usage of ZITADEL, we want to enable profiling.
# How the Problems Are Solved
- Allow profiling by configuration.
- Provide Google Cloud Profiler as first implementation
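For reference, starting the Google Cloud Profiler comes down to a single call; in ZITADEL this would only run when profiling is enabled in the configuration, and the service name and version below are placeholders:

```go
package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	// Continuously samples CPU and heap profiles and uploads them to
	// Google Cloud; it runs in the background for the process lifetime.
	if err := profiler.Start(profiler.Config{
		Service:        "zitadel",
		ServiceVersion: "v2.x", // placeholder
	}); err != nil {
		log.Fatalf("start profiler: %v", err)
	}
	// ...start the regular ZITADEL server.
}
```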
# Additional Changes
None.
# Additional Context
There were possible memory leaks reported:
https://discord.com/channels/927474939156643850/1273210227918897152
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
GetIDPByID is needed as an endpoint in the v2 API so that it can be available for the new login.
# How the Problems Are Solved
Create the GetIDPByID endpoint in the IDP v2 API, through the GetProviderByID implementation from the admin and management APIs.
# Additional Changes
- Remove the OwnerType attribute from the response, as the information
is available through the resourceOwner.
- correct refs to messages in proto which are used for doc generation
- renaming of elements for API v3
# Additional Context
Closes #8337
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
Implement a new API service that allows management of OIDC signing web keys.
This allows users to manage rotation of the instance-level keys, which are currently managed based on expiry.
The API accepts the generation of the following key types and parameters:
- RSA keys with 2048, 3072 or 4096 bit in size and:
  - Signing with SHA-256 (RS256)
  - Signing with SHA-384 (RS384)
  - Signing with SHA-512 (RS512)
- ECDSA keys with
  - P256 curve
  - P384 curve
  - P512 curve
- ED25519 keys
# How the Problems Are Solved
Keys are serialized for storage using the JSON web key format from the
`jose` library. This is the format that will be used by OIDC for
signing, verification and publication.
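A sketch of that serialization, assuming the go-jose library; key ID and algorithm are placeholders:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"encoding/json"
	"fmt"

	jose "github.com/go-jose/go-jose/v4"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Wrap the generated key pair as a JSON web key for storage.
	jwk := jose.JSONWebKey{
		Key:       priv,
		KeyID:     "key-1",
		Algorithm: string(jose.RS256),
		Use:       "sig",
	}
	stored, err := json.Marshal(jwk)
	if err != nil {
		panic(err)
	}
	// The public part is what the OIDC keys endpoint would publish.
	pub, err := json.Marshal(jwk.Public())
	if err != nil {
		panic(err)
	}
	fmt.Println(len(stored), string(pub))
}
```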
Each instance can have a number of key pairs. All existing public keys are meant to be used for token verification and publication on the keys endpoint. Keys can be activated and the active private key is meant to sign new tokens. There is always exactly 1 active signing key:
1. When the first key for an instance is generated, it is automatically
activated.
2. Activation of the next key automatically deactivates the previously
active key.
3. Keys cannot be manually deactivated from the API.
4. Active keys cannot be deleted.
# Additional Changes
- Query methods that later will be used by the OIDC package are already
implemented. Preparation for #8031
- Fix indentation in the French translation for an instance event
- Move user_schema translations to consistent positions in all
translation files
# Additional Context
- Closes #8030
- Part of #7809
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
# Which Problems Are Solved
The current v3alpha actions APIs don't exactly adhere to the [new
resources API
design](https://zitadel.com/docs/apis/v3#standard-resources).
# How the Problems Are Solved
- **Improved ID access**: The aggregate ID is added to the resource
details object, so accessing resource IDs and constructing proto
messages for resources is easier
- **Explicit Instances**: Optionally, the instance can be explicitly
given in each request
- **Pagination**: A default search limit and a max search limit are
added to the defaults.yaml. They apply to the new v3 APIs (currently
only actions). The search query defaults are changed to ascending by
creation date, because this makes the pagination results the most
deterministic. The creation date is also added to the object details.
The bug with updated creation dates is fixed for executions and targets.
- **Removed Sequences**: Removed Sequence from object details and
ProcessedSequence from search details
# Additional Changes
Object details IDs are checked in unit tests only if an empty ID is expected. Centralizing the details check also makes this internal object more flexible for future evolutions.
# Additional Context
- Closes #8169
- Depends on https://github.com/zitadel/zitadel/pull/8225
---------
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
# Which Problems Are Solved
ZITADEL currently selects the instance context based on an HTTP header (see https://github.com/zitadel/zitadel/issues/8279#issue-2399959845) and checks it against the list of instance domains. Let's call it the instance or API domain.
For any context-based URL (e.g. OAuth, OIDC, SAML endpoints, links in emails, ...) the requested domain (instance domain) will be used. Let's call it the public domain.
In proxied setups, every exposed (public) domain currently needs to be managed as an instance domain.
This can either be done using the "ExternalDomain" in the runtime config or via the system API, which requires a validation through the CustomerPortal on zitadel.cloud.
# How the Problems Are Solved
- Two new headers / header lists are added:
  - `InstanceHostHeaders`: an ordered list (first sent wins), which will be used to match the instance.
    (For backward compatibility, the `HTTP1HostHeader`, `HTTP2HostHeader` and `forwarded`, `x-forwarded-for`, `x-forwarded-host` are checked afterwards as well)
  - `PublicHostHeaders`: an ordered list (first sent wins), which will be used as the public host / domain. This will be checked against a list of trusted domains on the instance.
- The middleware intercepts all requests to the API and passes a `DomainCtx` object with the hosts and protocol into the context (previously only a computed `origin` was passed); see the sketch after this list.
- The HTTP / gRPC servers no longer try to match the headers to instances themselves, but use the passed `http.DomainContext` in their interceptors.
- The `RequestedHost` and `RequestedDomain` from authz.Instance are
removed in favor of the `http.DomainContext`
- When authenticating to or signing out from Console UI, the current
`http.DomainContext(ctx).Origin` (already checked by instance
interceptor for validity) is used to compute and dynamically add a
`redirect_uri` and `post_logout_redirect_uri`.
- Gateway passes all configured host headers (previously only `x-zitadel-*` were passed)
- The Admin API allows managing trusted domains
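A sketch of such a middleware, resolving both hosts from the configured ordered header lists; type and field names are assumptions:

```go
package middleware

import (
	"context"
	"net/http"
)

// DomainCtx mirrors the object described above.
type DomainCtx struct {
	InstanceHost string
	PublicHost   string
	Protocol     string
}

type ctxKey struct{}

// WithDomainCtx picks the instance and public host from ordered header
// lists (first sent wins) and stores the result in the request context.
func WithDomainCtx(instanceHeaders, publicHeaders []string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		dc := &DomainCtx{Protocol: "https"}
		if r.TLS == nil {
			dc.Protocol = "http"
		}
		dc.InstanceHost = firstHeader(r, instanceHeaders, r.Host)
		dc.PublicHost = firstHeader(r, publicHeaders, dc.InstanceHost)
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), ctxKey{}, dc)))
	})
}

func firstHeader(r *http.Request, names []string, fallback string) string {
	for _, name := range names {
		if v := r.Header.Get(name); v != "" {
			return v
		}
	}
	return fallback
}
```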
# Additional Changes
None
# Additional Context
- part of #8279
- open topics:
  - "single-instance" mode
  - Console UI
# Which Problems Are Solved
The current v3alpha actions APIs don't exactly adhere to the [new
resources API
design](https://zitadel.com/docs/apis/v3#standard-resources).
# How the Problems Are Solved
- **Breaking**: The current v3alpha actions APIs are removed.
- **Resource Namespace**: New v3alpha actions APIs for targets and
executions are added under the namespace /resources.
- **Feature Flag**: New v3alpha actions APIs still have to be activated
using the actions feature flag
- **Reduced Executions Overhead**: Executions are managed similarly to settings according to the new API design: an empty list of targets basically makes an execution a noop. So a single method, SetExecution, is enough to cover all use cases. Noop executions are not returned in future search requests.
- **Compatibility**: The executions created with previous v3alpha APIs
are still available to be managed with the new executions API.
# Additional Changes
- Removed integration tests which test executions but rely on readable
targets. They are added again with #8169
# Additional Context
Closes #8168
# Which Problems Are Solved
The v2beta services are stable but not GA.
# How the Problems Are Solved
The v2beta services are copied to v2. The corresponding v1 and v2beta
services are deprecated.
# Additional Context
Closes #7236
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
This reverts commit e126ccc9aa.
# Which Problems Are Solved
#8295 introduced the possibility to handle IdPs on a single callback, but broke current setups.
# How the Problems Are Solved
- Revert the change until a proper solution is found. Revert is needed
as docs were also changed.
# Additional Changes
None.
# Additional Context
- relates to #8295
# Which Problems Are Solved
Both the login UI and the IdP intent flow have their own IdP callback endpoints.
This makes configuration hard to impossible for customers (e.g. GitHub only allows one endpoint).
# How the Problems Are Solved
- The login UI prefixes the `state` parameter when creating an auth /
SAML request.
- All requests now use the `/idp/callback` or the corresponding
variation (e.g. SAML)
- On callback, the state, or rather its prefix, is checked. In case of the login UI prefix, the request will be forwarded to the existing login UI handler with the prefix stripped from the state, as sketched after this list. Existing setups will therefore not be affected, and requests started before this release can also be handled without any impact.
- Console only lists the "new" endpoint(s). Any
`/login/externalidp/callback` is removed.
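A sketch of the prefix-based routing on the shared callback; the prefix value and handler names are assumptions:

```go
package idp

import (
	"net/http"
	"strings"
)

const loginStatePrefix = "login:" // assumed prefix value

// handleIDPCallback dispatches between the login UI flow and the intent
// flow based on the state prefix, as described above.
func handleIDPCallback(w http.ResponseWriter, r *http.Request) {
	state := r.URL.Query().Get("state")
	if rest, ok := strings.CutPrefix(state, loginStatePrefix); ok {
		// Requests started by the login UI are forwarded to its existing
		// handler, with the prefix stripped from the state.
		forwardToLoginUI(w, r, rest)
		return
	}
	handleIntentCallback(w, r, state)
}

// Stubs standing in for the existing handlers.
func forwardToLoginUI(http.ResponseWriter, *http.Request, string)     {}
func handleIntentCallback(http.ResponseWriter, *http.Request, string) {}
```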
# Additional Changes
- Cleaned up some images from the IdP documentation.
- fix the error handling in `handleExternalNotFoundOptionCheck`
# Additional Context
- closes #8236
# Which Problems Are Solved
Logs the type of sonyflake strategy used for generating unique machine
IDs
# How the Problems Are Solved
- Created a function to log the machine ID strategy in the startup logs
# Additional Changes
- Added a public function for retrieving the current strategy set by configuration
# Additional Context
- Closes #7750
# Which Problems Are Solved
To improve performance, a new table and method are implemented in the eventstore. The goal of this table is to index searchable fields on the command side and use them on both the command and query side.
The table allows storing one primitive value (numeric, text) per row.
The eventstore framework is extended by the `Search`-method, which allows searching for objects.
The `Command`-interface is extended by the `SearchOperations()`-method, which manipulates the `search`-table.
# How the Problems Are Solved
This PR adds the capability of improving performance on the command and query side by using the `Search`-method of the eventstore instead of one of the `Filter`-methods.
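For illustration, one row of such a search table could look like the struct below; column and field names are assumptions, the actual schema is defined by the eventstore framework:

```go
package eventstore

// searchField models one row: a single primitive value, addressable by
// instance, aggregate and field name, written by SearchOperations() on
// the command side and queried by Search().
type searchField struct {
	InstanceID    string
	AggregateType string
	AggregateID   string
	FieldName     string
	NumericValue  *int64  // exactly one of the value columns is set
	TextValue     *string
}
```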
# Open Tasks
- [x] Add feature flag
- [x] Unit tests
- [ ] ~~Benchmarks if needed~~
- [x] Ensure no behavior change
- [x] Add setup step to fill table with current data
- [x] Add projection which ensures data added between setup and start of
the new version are also added to the table
# Additional Changes
The `Search`-method is currently used by the `ProjectGrant` command side.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/8094
# Which Problems Are Solved
Adds the possibility to mirror an existing database to a new one.
For that, a new command `zitadel mirror` was added, including subcommands for a more fine-grained mirror of the data.
Sub commands:
* `zitadel mirror eventstore`: copies only events and their unique
constraints
* `zitadel mirror system`: mirrors the data of the `system`-schema
* `zitadel mirror projections`: runs all projections
* `zitadel mirror auth`: copies auth requests
* `zitadel mirror verify`: counts the number of rows in the source and destination database and prints the diff.
The command requires one of the following flags:
* `--system`: copies all instances of the system
* `--instance <instance-id>`, `--instance <comma separated list of
instance ids>`: copies only the defined instances
The command is safe to execute multiple times by adding the `--replace`-flag. This replaces currently existing data except for the `events`-table.
# Additional Changes
A `--for-mirror`-flag was added to `zitadel setup` to prepare the new database. The flag skips the creation of the first instance and the initial run of projections.
It is now possible to skip the creation of the first instance during
setup by setting `FirstInstance.Skip` to true in the steps
configuration.
# Additional info
It is currently not possible to merge multiple databases. See
https://github.com/zitadel/zitadel/issues/7964 for more details.
It is currently not possible to use files. See
https://github.com/zitadel/zitadel/issues/7966 for more information.
closes https://github.com/zitadel/zitadel/issues/7586
closes https://github.com/zitadel/zitadel/issues/7486
### Definition of Ready
- [x] I am happy with the code
- [x] Short description of the feature/issue is added in the pr
description
- [x] PR is linked to the corresponding user story
- [x] Acceptance criteria are met
- [x] All open todos and follow ups are defined in a new ticket and
justified
- [x] Deviations from the acceptance criteria and design are agreed with
the PO and documented.
- [x] No debug or dead code
- [x] My code has no repetitions
- [x] Critical parts are tested automatically
- [ ] Where possible E2E tests are implemented
- [x] Documentation/examples are up-to-date
- [x] All non-functional requirements are met
- [x] Functionality of the acceptance criteria is checked manually on
the dev system.
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* fix: add action v2 execution to features
* fix: add action v2 execution to features
* fix: add action v2 execution to features
* fix: update internal/command/instance_features_model.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: merge back main
* fix: merge back main
* fix: rename feature and service
* fix: rename feature and service
* fix: review changes
* fix: review changes
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
chore(fmt): run gci on complete project
Fix global import formatting in Go code by running the `gci` command. This allows us to just use the command directly, instead of fixing the import order manually for the linter on each PR.
Co-authored-by: Elio Bischof <elio@zitadel.com>
This PR adds the functionality to manage user schemas through the new user schema service.
It includes the possibility to create a basic JSON schema and also provides a way of defining permissions (read, write) for the owner and self context with an annotation.
Further annotations for OIDC claims and SAML attribute mappings will follow.
A guide on how to create a schema and assign permissions has been started. It will be extended throughout the process of implementing the schema and users based on it.
Note:
This feature is in an early stage and therefore not enabled by default. To test it out, please enable the UserSchema feature flag on your instance / system through the feature service.
* feat: improve instance not found error
* unit tests
* check if is templatable
* lint
* assert
* compile tests
* remove error templates
* link to instance not found page
* fmt
* cleanup
* lint
* feat(api): feature API proto definitions
* update proto based on discussion with @livio-a
* cleanup old feature flag stuff
* authz instance queries
* align defaults
* projection definitions
* define commands and event reducers
* implement system and instance setter APIs
* api getter implementation
* unit test repository package
* command unit tests
* unit test Get queries
* grpc converter unit tests
* migrate the V1 features
* migrate oidc to dynamic features
* projection unit test
* fix instance by host
* fix instance by id data type in sql
* fix linting errors
* add system projection test
* fix behavior inversion
* resolve proto file comments
* rename SystemDefaultLoginInstanceEventType to SystemLoginDefaultOrgEventType so it's consistent with the instance level event
* use write models and conditional set events
* system features integration tests
* instance features integration tests
* error on empty request
* documentation entry
* typo in feature.proto
* fix start unit tests
* solve linting error on key case switch
* remove system defaults after discussion with @eliobischof
* fix system feature projection
* resolve comments in defaults.yaml
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* feat: add events for execution
* feat: add events for execution and command side
* feat: add events for execution and command side
* feat: add api endpoints for set and delete executions with integration tests
* feat: add integration and unit tests and more existence checks
* feat: add integration and unit tests and more existence checks
* feat: unit tests for includes in executions
* feat: integration tests for includes in executions
* fix: linting
* fix: update internal/api/api.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: update internal/command/command.go
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: apply suggestions from code review
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: change api return
* fix: change aggregateID with prefix of execution type and add to documentation
* fix: change body in proto for documentation and correct linting
* fix: changed existing check to single query in separate writemodel
* fix: linter changes and list endpoints for conditions in executions
* fix: remove writemodel query on execution set as state before is irrelevant
* fix: testing for exists write models and correction
* fix: translations for errors and event types
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix(backend): respect start flags in all commands
Currently, flags like --externalDomain only work in the last registered command, which currently is start-from-setup. This creates the flags globally in the init function and uses them for all start commands.
* fix(backend): remove viper defaults in start flags
At this point viper is not yet initialized, so these defaults would have no effect either.
* Remove flag name variables and run go mod tidy
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Even though this is a feature, it's released as a fix so that we can backport it to earlier revisions.
As reported by multiple users, starting ZITADEL after an update led to downtime and, in the worst case, rollbacks to the previously deployed version.
The problem starts rising when there are too many events to process after the start of ZITADEL. The root cause is changes on projections (database tables) which must be recomputed. This PR solves this problem by adding a new step to the setup phase which prefills the projections. The step can be enabled by adding the `--init-projections`-flag to `setup`, `start-from-init` and `start-from-setup`. Setting this flag results in a potentially longer setup phase but reduces the risk of the problems mentioned in the paragraph above.
* feat: add query endpoints for user v2 api
* fix: correct integration tests
* fix: correct linting
* fix: correct linting
* fix: comment out permission check on user get and list
* fix: permission check on user v2 query
* fix: merge back origin/main
* fix: add search query in user emails
* fix: reset count for SearchUser if users are removed due to permissions
* fix: reset count for SearchUser if users are removed due to permissions
---------
Co-authored-by: Elio Bischof <elio@zitadel.com>
* fix(db): add additional connection pool for projection spooling
* use correct connection pool for projections
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* feat: return 404 or 409 if org reg disallowed
* fix: system limit permissions
* feat: add iam limits api
* feat: disallow public org registrations on default instance
* add integration test
* test: integration
* fix test
* docs: describe public org registrations
* avoid updating docs deps
* fix system limits integration test
* silence integration tests
* fix linting
* ignore strange linter complaints
* review
* improve reset properties naming
* redefine the api
* use restrictions aggregate
* test query
* simplify and test projection
* test commands
* fix unit tests
* move integration test
* support restrictions on default instance
* also test GetRestrictions
* self review
* lint
* abstract away resource owner
* fix tests
* configure supported languages
* fix allowed languages
* fix tests
* default lang must not be restricted
* preferred language must be allowed
* change preferred languages
* check languages everywhere
* lint
* test command side
* lint
* add integration test
* add integration test
* restrict supported ui locales
* lint
* lint
* cleanup
* lint
* allow undefined preferred language
* fix integration tests
* update main
* fix env var
* ignore linter
* ignore linter
* improve integration test config
* reduce cognitive complexity
* compile
* check for duplicates
* remove useless restriction checks
* review
* revert restriction renaming
* fix language restrictions
* lint
* generate
* allow custom texts for supported langs for now
* fix tests
* cleanup
* cleanup
* cleanup
* lint
* unsupported preferred lang is allowed
* fix integration test
* finish reverting to old property name
* finish reverting to old property name
* load languages
* refactor(i18n): centralize translators and fs
* lint
* amplify no validations on preferred languages
* fix integration test
* lint
* fix resetting allowed languages
* test unchanged restrictions
* fix: add https status to activity log
* create prerelease
* create RC
* pass info from gateway to grpc server
* fix: update releaserc to create RC version
* cleanup
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* feat(oidc): use the new oidc server interface
* rename from provider to server
* pin logging and oidc packages
* use oidc introspection fix branch
* add overloaded methods with tracing
* cleanup unused code
* include latest oidc fixes
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* define roles and permissions
* support system user memberships
* don't limit system users
* cleanup permissions
* restrict memberships to aggregates
* default to SYSTEM_OWNER
* update unit tests
* test: system user token test (#6778)
* update unit tests
* refactor: make authz testable
* move session constants
* cleanup
* comment
* comment
* decode member type string to enum (#6780)
* decode member type string to enum
* handle all membership types
* decode enums where necessary
* decode member type in steps config
* update system api docs
* add technical advisory
* tweak docs a bit
* comment in comment
* lint
* extract token from Bearer header prefix
* review changes
* fix tests
* fix: add fix for activityhandler
* add isSystemUser
* remove IsSystemUser from activity info
* fix: add fix for activityhandler
---------
Co-authored-by: Stefan Benz <stefan@caos.ch>
* feat: add activity logs on user actions with authentication, resourceAPI and sessionAPI
* feat: add activity logs on user actions with authentication, resourceAPI and sessionAPI
* feat: add activity logs on user actions with authentication, resourceAPI and sessionAPI
* feat: add activity logs on user actions with authentication, resourceAPI and sessionAPI
* feat: add activity logs on user actions with authentication, resourceAPI and sessionAPI
* fix: add unit tests to info package for context changes
* fix: add activity_interceptor.go suggestion
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix: refactoring and fixes through PR review
* fix: add auth service to lists of resourceAPIs
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Fabi <fabienne@zitadel.com>