161 Commits

Author SHA1 Message Date
Silvan
8ac4b61ee6 perf(query): reduce user query duration (#10037)
# Which Problems Are Solved

Querying user(s) on the database used a lot of database resources and
could therefore impact performance.

# How the Problems Are Solved

Database queries involving the users and loginnames tables were improved,
and an index was added for the user-by-email query.
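
For illustration, such a lookup index could be created by a migration along these lines; every name below is hypothetical, since the PR does not spell out the exact statement:

```go
package setup

import (
	"context"
	"database/sql"
)

// addUserByEmailIndex is a hypothetical example of the kind of lookup
// index this change adds; schema, table, column, and index names are
// illustrative, not the ones used in the actual migration.
const addUserByEmailIndex = `
CREATE INDEX IF NOT EXISTS users_email_lookup_idx
    ON projections.users (instance_id, lower(email));`

func execIndexMigration(ctx context.Context, db *sql.DB) error {
	_, err := db.ExecContext(ctx, addUserByEmailIndex)
	return err
}
```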

# Additional Changes

- spellchecks
- updated APIs in the load tests

# Additional Info

needs cherry pick to v3

(cherry picked from commit 4df138286b)
2025-06-12 07:06:45 +02:00
Livio Spring
3c99cf82f8 feat: federated logout for SAML IdPs (#9931)
# Which Problems Are Solved

Currently, if a user signs in using an IdP, once they sign out of
Zitadel, the corresponding IdP session is not terminated. This can be
the desired behavior. In some cases, e.g. when using a shared computer,
it results in a potential security risk, since a following user might be
able to sign in as the previous one using the still open IdP session.

# How the Problems Are Solved

- Admins can enable a federated logout option on SAML IdPs through the
Admin and Management APIs.
- During the termination of a login V1 session using the OIDC end_session
endpoint, Zitadel will check if an IdP was used to authenticate that
session.
- In case there was a SAML IdP used with Federated Logout enabled, it
will intercept the logout process, store the information into the shared
cache and redirect to the federated logout endpoint in the V1 login.
- The V1 login federated logout endpoint checks every request for an
existing cache entry. On success, it will create a SAML logout request
for the used IdP and either redirect or POST to the configured SLO
endpoint. The cache entry is updated with a `redirected` state.
- An SLO endpoint is added to the `/idp` handlers, which will handle the
SAML logout responses. At the moment it will check again for an existing
federated logout entry (with state `redirected`) in the cache. On
success, the user is redirected to the initially provided
`post_logout_redirect_uri` from the end_session request.

# Additional Changes

None

# Additional Context

- This PR merges the https://github.com/zitadel/zitadel/pull/9841 and
https://github.com/zitadel/zitadel/pull/9854 to main, additionally
updating the docs on Entra ID SAML.
- closes #9228
- backport to 3.x

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Zach Hirschtritt <zachary.hirschtritt@klaviyo.com>
(cherry picked from commit 2cf3ef4de4)
2025-05-23 14:59:34 +02:00
Silvan
d4498ad136 fix(setup): reenable index creation (#9868)
# Which Problems Are Solved

We saw high CPU usage if many events were created on the database. This
was caused by the new actions which query for all event types and
aggregate types.

# How the Problems Are Solved

- the handler of action execution does not filter for aggregate and
event types.
- the index for `instance_id` and `position` is reenabled.

# Additional Changes

none

# Additional Context

none

(cherry picked from commit 60ce32ca4f)
2025-05-09 11:41:06 +02:00
Livio Spring
b5d14bafce fix: remove index es_instance_position (#9862)
# Which Problems Are Solved

#9837 added a new index `es_instance_position` on the events table with
the idea to improve performance for some projections. Unfortunately, it
makes it worse for almost all projections and would only improve the
situation for the events handler of the actions V2 subscriptions.

# How the Problems Are Solved

Remove the index again.

# Additional Changes

None

# Additional Context

relates to #9837
relates to #9863

(cherry picked from commit d71795c433)
2025-05-08 08:36:28 +02:00
Stefan Benz
c877add363 fix: add current state for execution handler into setup (#9863)
# Which Problems Are Solved

The execution handler projection handles all events to check whether an
execution has to be provided to the worker for execution.
With this logic, all events would be processed from the beginning, which
is not necessary.

# How the Problems Are Solved

Add the current state to the execution handler projection, to avoid
processing all existing events.

# Additional Changes

Add a custom configuration to the defaults, so that transactions are
limited to a certain number of events.

# Additional Context

None

(cherry picked from commit 21167a4bba)
2025-05-07 16:30:09 +02:00
Silvan
d4ec519af6 fix(setup): execute s54 (#9849)
# Which Problems Are Solved

Step 54 was not executed during setup.

# How the Problems Are Solved

Added the step to setup jobs

# Additional Changes

none

# Additional Context

- the step was added in https://github.com/zitadel/zitadel/pull/9837
- thanks to @zhirschtritt for raising this.

(cherry picked from commit a626678004)
2025-05-06 08:25:18 +02:00
Tim Möhlmann
5e48ee2c15 perf(eventstore): add instance position index (#9837)
# Which Problems Are Solved

Some projection queries took a long time to run. It seems that one or
more queries couldn't make proper use of the `es_projection` index. This
might be because of the complexity of specific aggregate_type and
event_type arguments, making the index unfeasible for Postgres.

# How the Problems Are Solved

Following the index recommendation, add an index that covers just
instance_id and position.

# Additional Changes

- none

# Additional Context

- Related to https://github.com/zitadel/zitadel/issues/9832

(cherry picked from commit bb56b362a7)
2025-05-02 13:51:59 +02:00
Zach Hirschtritt
7fceb5eaf8 fix: Auto cleanup failed Setup steps if process is killed (#9736)
# Which Problems Are Solved

When running a long-running Zitadel Setup, Kubernetes might decide to
move a pod to a new node automatically. Currently, this puts any
migrations into a broken state that an operator needs to manually run
the "cleanup" command on - assuming they catch the error.

The only super long running commands are typically projection pre-fill
operations, which depending on the size of the event table for that
projection, can take many hours - plenty of time for Kubernetes to make
unexpected decisions, especially in a busy cluster.

# How the Problems Are Solved

This change listens on `os.Interrupt` and `syscall.SIGTERM`, cancels the
current Setup context, and runs the `Cleanup` command. The logs then
look something like this:
```shell
...
INFO[0000] verify migration                              caller="/Users/zach/src/zitadel/internal/migration/migration.go:43" name=repeatable_delete_stale_org_fields
INFO[0000] starting migration                            caller="/Users/zach/src/zitadel/internal/migration/migration.go:66" name=repeatable_delete_stale_org_fields
INFO[0000] execute delete query                          caller="/Users/zach/src/zitadel/cmd/setup/39.go:37" instance_id=281297936179003398 migration=repeatable_delete_stale_org_fields progress=1/1
INFO[0000] verify migration                              caller="/Users/zach/src/zitadel/internal/migration/migration.go:43" name=repeatable_fill_fields_for_instance_domains
INFO[0000] starting migration                            caller="/Users/zach/src/zitadel/internal/migration/migration.go:66" name=repeatable_fill_fields_for_instance_domains
----- SIGTERM signal issued -----
INFO[0000] received interrupt signal, shutting down: interrupt  caller="/Users/zach/src/zitadel/cmd/setup/setup.go:121"
INFO[0000] query failed                                  caller="/Users/zach/src/zitadel/internal/eventstore/repository/sql/query.go:135" error="timeout: context already done: context canceled"
DEBU[0000] filter eventstore failed                      caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:155" error="ID=SQL-KyeAx Message=unable to filter events Parent=(timeout: context already done: context canceled)" projection=instance_domain_fields
DEBU[0000] unable to rollback tx                         caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:110" error="sql: transaction has already been committed or rolled back" projection=instance_domain_fields
INFO[0000] process events failed                         caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:72" error="ID=SQL-KyeAx Message=unable to filter events Parent=(timeout: context already done: context canceled)" projection=instance_domain_fields
DEBU[0000] trigger iteration                             caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:73" iteration=0 projection=instance_domain_fields
ERRO[0000] migration failed                              caller="/Users/zach/src/zitadel/internal/migration/migration.go:68" error="ID=SQL-KyeAx Message=unable to filter events Parent=(timeout: context already done: context canceled)" name=repeatable_fill_fields_for_instance_domains
ERRO[0000] migration finish failed                       caller="/Users/zach/src/zitadel/internal/migration/migration.go:71" error="context canceled" name=repeatable_fill_fields_for_instance_domains
----- Cleanup before exiting -----
INFO[0000] cleanup started                               caller="/Users/zach/src/zitadel/cmd/setup/cleanup.go:30"
INFO[0000] cleanup migration                             caller="/Users/zach/src/zitadel/cmd/setup/cleanup.go:47" name=repeatable_fill_fields_for_instance_domains
```
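
A minimal sketch of the shutdown pattern described above, assuming placeholder setup and cleanup functions rather than the actual Zitadel commands:

```go
package setup

import (
	"context"
	"os"
	"os/signal"
	"syscall"
)

// runWithCleanup cancels the setup context on os.Interrupt / SIGTERM and
// runs the cleanup step before the process exits. Both funcs are
// placeholders for the real setup and cleanup commands.
func runWithCleanup(setup, cleanup func(context.Context) error) error {
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	err := setup(ctx)
	if ctx.Err() != nil {
		// Interrupted: run cleanup with a fresh context so it can finish.
		if cleanupErr := cleanup(context.Background()); cleanupErr != nil {
			return cleanupErr
		}
	}
	return err
}
```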

# Additional Changes

* `mustExecuteMigration` -> `executeMigration`: **must**Execute previously
logged a Fatal error, which calls os.Exit, so no cleanup was possible.
Instead, this PR returns an error and assigns it to a shared error in
the Setup closure that defer can check.
* `initProjections` now returns an error instead of exiting

# Additional Context

This behavior might be unwelcome or at least unexpected in some cases.
Putting it behind a feature flag or config setting is likely a good
followup.

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit aa9ef8b49e)
2025-04-29 13:04:53 +02:00
Fabienne Bühler
07ce3b6905 chore!: Introduce ZITADEL v3 (#9645)
This PR summarizes multiple changes specifically only available with
ZITADEL v3:

- feat: Web Keys management
(https://github.com/zitadel/zitadel/pull/9526)
- fix(cmd): ensure proper working of mirror
(https://github.com/zitadel/zitadel/pull/9509)
- feat(Authz): system user support for permission check v2
(https://github.com/zitadel/zitadel/pull/9640)
- chore(license): change from Apache to AGPL
(https://github.com/zitadel/zitadel/pull/9597)
- feat(console): list v2 sessions
(https://github.com/zitadel/zitadel/pull/9539)
- fix(console): add loginV2 feature flag
(https://github.com/zitadel/zitadel/pull/9682)
- fix(feature flags): allow reading "own" flags
(https://github.com/zitadel/zitadel/pull/9649)
- feat(console): add Actions V2 UI
(https://github.com/zitadel/zitadel/pull/9591)

BREAKING CHANGE
- feat(webkey): migrate to v2beta API
(https://github.com/zitadel/zitadel/pull/9445)
- chore!: remove CockroachDB Support
(https://github.com/zitadel/zitadel/pull/9444)
- feat(actions): migrate to v2beta API
(https://github.com/zitadel/zitadel/pull/9489)

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Ramon <mail@conblem.me>
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Kenta Yamaguchi <56732734+KEY60228@users.noreply.github.com>
Co-authored-by: Harsha Reddy <harsha.reddy@klaviyo.com>
Co-authored-by: Livio Spring <livio@zitadel.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Iraq <66622793+kkrime@users.noreply.github.com>
Co-authored-by: Florian Forster <florian@zitadel.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Max Peintner <peintnerm@gmail.com>
2025-04-02 16:53:06 +02:00
Stefan Benz
2eb187f141 fix(migration): check if ldap2 already exists (#9674)
# Which Problems Are Solved

With v2.71.0 the `idp_templates6_ldap3` projection was created but never
filled, as it was a subtable. To fix this we renamed
`idp_templates6_ldap3` to `idp_templates6_ldap2` with v2.71.5.
This was unfortunately done without a check that `idp_templates6_ldap2`
was already existing, which resulted in an error in the setup step.

# How the Problems Are Solved

Add a check whether `idp_templates6_ldap2` already exists before renaming
`idp_templates6_ldap3` -> `idp_templates6_ldap2`.
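
A guarded rename along these lines would implement that check; the schema name and exact SQL are illustrative, not the actual migration file:

```go
package setup

// renameLdap3IfTargetMissing only renames ldap3 -> ldap2 when the target
// table does not exist yet; an illustrative sketch, not the real migration.
const renameLdap3IfTargetMissing = `
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT FROM information_schema.tables
        WHERE table_schema = 'projections'
          AND table_name = 'idp_templates6_ldap2'
    ) THEN
        ALTER TABLE projections.idp_templates6_ldap3
            RENAME TO idp_templates6_ldap2;
    END IF;
END $$;`
```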

# Additional Changes

None

# Additional Context

Closes #9669
2025-03-31 10:06:40 +00:00
Zach Hirschtritt
c1535b7b49 feat: add prometheus metrics on projection handlers (#9561)
# Which Problems Are Solved

With current provided telemetry it's difficult to predict when a
projection handler is under increased load until it's too late and
causes downstream issues. Importantly, projection updating is in the
critical path for many login flows and increased latency there can
result in system downtime for users.

# How the Problems Are Solved

This PR adds three new prometheus-style metrics:
1. **projection_events_processed** (_labels: projection, success_) -
This metric gives us a counter of the number of events processed per
projection update run and whether they were processed without error. A
high number of events being processed can let us know how busy a
particular projection handler is.

2. **projection_handle_timer** _(labels: projection)_ - This is the time
it takes to process a projection update given a batch of events - the
time to take the current_states lock, query for new events, reduce,
update the projection, and update current_states.

3. **projection_state_latency** _(labels: projection)_ - This is the
time since the last event processed in the current_states table for a
given projection. It tells us how old the last processed event was, i.e.
how far behind this projection is running. Higher latencies
could mean high load or stalled projection handling.
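
As a sketch of how such instruments can be registered and recorded with the OpenTelemetry metric API; the metric names follow the PR, while the function and its signature are illustrative (in real code the instruments would be created once, not per call):

```go
package metrics

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// recordProjectionRun records one projection update run: how many events
// were processed (and whether successfully) and how long the run took.
func recordProjectionRun(ctx context.Context, projection string, events int64, start time.Time, err error) {
	meter := otel.Meter("projection")
	processed, _ := meter.Int64Counter("projection_events_processed")
	handleTimer, _ := meter.Float64Histogram("projection_handle_timer", metric.WithUnit("s"))

	processed.Add(ctx, events, metric.WithAttributes(
		attribute.String("projection", projection),
		attribute.Bool("success", err == nil),
	))
	handleTimer.Record(ctx, time.Since(start).Seconds(),
		metric.WithAttributes(attribute.String("projection", projection)))
}
```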

# Additional Changes

I also had to initialize the global otel metrics provider (`metrics.M`)
in the `setup` step in addition to `start`, since projection handlers
are initialized at setup. The initialization checks if a metrics
provider is already set (in case of `start-from-setup` or
`start-from-init`) to prevent overwriting it, which would cause the otel
metrics provider to stop working.

# Additional Context

## Example Dashboards


![image](https://github.com/user-attachments/assets/94ba5c2b-9c62-44cd-83ee-4db4a8859073)

![image](https://github.com/user-attachments/assets/60a1b406-a8c6-48dc-a925-575359f97e1e)

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-03-27 07:40:27 +01:00
Stefan Benz
6b23c33cb6 fix: rename idp_templates6_ldap3 to ldap2 if necessary (#9565)
# Which Problems Are Solved

Zitadel setup with v2.71.0 could result in errors regarding the
idp_templates6_ldap3 subtable.

# How the Problems Are Solved

Rename the subtable idp_templates6_ldap3 to idp_templates6_ldap2 if no
idp_templates6_ldap2 exists, and rename column `rootCA` to
`root_ca`.

# Additional Changes

None

# Additional Context

Related PR #9292

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-03-26 19:26:16 +00:00
Iraq
11c9be3b8d chore: updating projections.idp_templates6 to projections.idp_templates7 (#9517)
# Which Problems Are Solved

This was left out as part of
https://github.com/zitadel/zitadel/pull/9292

- Closes https://github.com/zitadel/zitadel/issues/9514

---------

Co-authored-by: Iraq Jaber <IraqJaber@gmail.com>
2025-03-18 16:23:12 +01:00
Silvan
92f0cf018f fix(cmd): clarify notification config handling (#9459)
# Which Problems Are Solved

If the configuration `notifications.LegacyEnabled` is set to false when
using cockroachdb as a database, Zitadel does not start and prints
the following error: `level=fatal msg="unable to start zitadel"
caller="github.com/zitadel/zitadel/cmd/start/start_from_init.go:44"
error="can't scan into dest[0]: cannot scan NULL into *string"`

# How the Problems Are Solved

The combination of the setting and cockroachdb is checked and a better
error is provided to the user.

# Additional Context

- introduced with https://github.com/zitadel/zitadel/pull/9321
2025-03-06 06:26:33 +00:00
Iraq
3c57e325f7 fix(permission): sql error in cmd/setup/49/01-permitted_orgs_function.sql (#9465)
# Which Problems Are Solved

SQL error in `cmd/setup/49/01-permitted_orgs_function.sql`

# How the Problems Are Solved

Updating `cmd/setup/49/01-permitted_orgs_function.sql`

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9461

Co-authored-by: Iraq Jaber <IraqJaber@gmail.com>
2025-03-05 21:48:20 +00:00
Silvan
444f682e25 refactor(notification): use new queue package (#9360)
# Which Problems Are Solved

The recently introduced notification queue has potential race conditions.

# How the Problems Are Solved

Current code is refactored to use the queue package, which is safe with
regard to concurrency.

# Additional Changes

- the queue is included in startup
- improved code quality of queue

# Additional Context

- closes https://github.com/zitadel/zitadel/issues/9278
2025-02-27 11:49:12 +01:00
Tim Möhlmann
e670b9126c fix(permissions): chunked synchronization of role permission events (#9403)
# Which Problems Are Solved

Setup fails to push all role permission events when running Zitadel with
CockroachDB. `TransactionRetryError`s were visible in the logs, and the
setup job finally timed out with `timeout: context deadline exceeded`.

# How the Problems Are Solved

As suggested in the Cockroach documentation, _"break down larger
transactions"_: the commands to be pushed for the role permissions are
chunked into 50 events per push. This chunking is only done with
CockroachDB.
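
The chunking itself reduces to a small generic helper; a sketch with a hypothetical name, using the batch size of 50 mentioned above:

```go
package setup

// chunk splits the commands into batches of at most size elements so that
// each batch can be pushed in its own small transaction.
func chunk[T any](items []T, size int) [][]T {
	if len(items) == 0 {
		return nil
	}
	var batches [][]T
	for size < len(items) {
		batches = append(batches, items[:size])
		items = items[size:]
	}
	return append(batches, items)
}
```

Pushing then loops over `chunk(cmds, 50)` and pushes each batch in its own transaction.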

# Additional Changes

- gci run fixed some unrelated imports
- access to `command.Commands` for the setup job, so we can reuse the
sync logic.

# Additional Context

Closes #9293

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-02-26 16:06:50 +00:00
Livio Spring
8f88c4cf5b feat: add PKCE option to generic OAuth2 / OIDC identity providers (#9373)
# Which Problems Are Solved

Some OAuth2 and OIDC providers require the use of PKCE for all their
clients. While ZITADEL already recommended the same for its clients, it
did not yet support the option on the IdP configuration.

# How the Problems Are Solved

- A new boolean `use_pkce` is added to the add/update generic OAuth/OIDC
endpoints.
- A new checkbox is added to the generic OAuth and OIDC provider
templates.
- The `rp.WithPKCE` option is added to the provider if the use of PKCE
has been set.
- The `rp.WithCodeChallenge` and `rp.WithCodeVerifier` options are added
to the OIDC/Auth BeginAuth and CodeExchange functions.
- Store verifier or any other persistent argument in the intent or auth
request.
- Create corresponding session object before creating the intent, to be
able to store the information.
- (refactored session structs to use a constructor for unified creation
and better overview of actual usage)

Here's a screenshot showing the URI including the PKCE params:


![use_pkce_in_url](https://github.com/zitadel/zitadel/assets/30386061/eaeab123-a5da-4826-b001-2ae9efa35169)
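
For reference, the PKCE S256 mechanics behind these options boil down to the following; a generic RFC 7636 sketch, not Zitadel's actual helper:

```go
package idp

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// newPKCE creates a random code_verifier and its S256 code_challenge as
// defined in RFC 7636. The verifier is persisted (e.g. in the intent or
// auth request), while the challenge travels in the authorization URL.
func newPKCE() (verifier, challenge string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf)
	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:])
	return verifier, challenge, nil
}
```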

# Additional Changes

None.

# Additional Context

- Closes #6449
- This PR replaces the existing PR (#8228) of @doncicuto. The base he
did was cherry picked. Thank you very much for that!

---------

Co-authored-by: Miguel Cabrerizo <doncicuto@gmail.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
2025-02-26 12:20:47 +00:00
Iraq
0cb0380826 feat: updating eventstore.permitted_orgs sql function (#9309)
# Which Problems Are Solved

Performance issue for GRPC call `zitadel.user.v2.UserService.ListUsers`
due to lack of org filtering on `ListUsers`

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9191

---------

Co-authored-by: Iraq Jaber <IraqJaber@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
2025-02-17 11:55:28 +02:00
Stefan Benz
49de5c61b2 feat: saml application configuration for login version (#9351)
# Which Problems Are Solved

OIDC applications can configure the used login version, which is
currently not possible for SAML applications.

# How the Problems Are Solved

Add the same functionality dependent on the feature-flag for SAML
applications.

# Additional Changes

None

# Additional Context

Closes #9267
Follow up issue for frontend changes #9354

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-02-13 16:03:05 +00:00
Silvan
415bc32ed6 feat: add task queue (#9321)
# Which Problems Are Solved

To integrate river as a task queue we need to ensure the migrations of
river are executed.

# How the Problems Are Solved

- A new schema was added to the Zitadel database called "queue"
- Added a repeatable setup step to Zitadel which executes the
[migrations of
river](https://riverqueue.com/docs/migrations#go-migration-api).

# Additional Changes

- Added more hooks to the databases to properly set the schema for the
task queue

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9280
2025-02-12 14:51:55 +00:00
Tim Möhlmann
bcc6a689fa fix(setup): use template for in_tx_order type (#9346)
# Which Problems Are Solved

Systems running with PostgreSQL before Zitadel v2.39 are likely to have
a wrong type for the `in_tx_order` column in the `eventstore.event2`
table. The migration at the time used the `event_sequence` as default
value without typecast, which results in a `bigint` type for that
column. However, when creating the table from scratch, we explicitly
specify the type to be `integer`.

Starting from Zitadel v2.67 we use a Pl/PgSQL function to push events.
The function requires the types from `eventstore.events2` to be the same
as the `select` destinations used in the function. In the function,
`in_tx_order` is also expected to be of `integer` type.

CockroachDB systems are not affected because `bigint` is an alias of the
`int` type. In other words, CockroachDB uses `int8` when specifying type
`int`. Therefore the types already match.

# How the Problems Are Solved

Retrieve the actual column type currently in use. A template is used to
assign the type to the `ordinality` column returned as `in_tx_order`.

# Additional Changes

- Detailed logging on migration failure

# Additional Context

- Closes #9180

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-02-12 11:06:34 +00:00
Livio Spring
b63c5fdb17 fix(login): fix migration to allow login by email again (#9315)
# Which Problems Are Solved

The login by email was not possible anymore. This was due to a newly
generated user projection because of #9255.
Internal logs showed that the computed lower case column for verified
email was missing.

# How the Problems Are Solved

Update name of setup step 25 to rerun the step, since the underlying sql
changed.

# Additional Changes

None

# Additional Context

- relates to #9255
2025-02-06 09:45:22 +00:00
Emilien GUILMINEAU
857812bb9e fix(setup): Fix query alias on 46-06 (#9298)
# Which Problems Are Solved

After updating to version 2.69.0, my zitadel instance refused to start
with this error log:
```
time="2025-02-03T19:46:47Z" level=info msg="starting migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:66" name=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=info msg="execute statement" caller="/home/runner/work/zitadel/zitadel/cmd/setup/46.go:29" file=01-role_permissions_view.sql migration=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=info msg="execute statement" caller="/home/runner/work/zitadel/zitadel/cmd/setup/46.go:29" file=02-instance_orgs_view.sql migration=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=info msg="execute statement" caller="/home/runner/work/zitadel/zitadel/cmd/setup/46.go:29" file=03-instance_members_view.sql migration=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=info msg="execute statement" caller="/home/runner/work/zitadel/zitadel/cmd/setup/46.go:29" file=04-org_members_view.sql migration=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=info msg="execute statement" caller="/home/runner/work/zitadel/zitadel/cmd/setup/46.go:29" file=05-project_members_view.sql migration=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=info msg="execute statement" caller="/home/runner/work/zitadel/zitadel/cmd/setup/46.go:29" file=06-permitted_orgs_function.sql migration=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:68" error="46_init_permission_functions 06-permitted_orgs_function.sql: ERROR: subquery in FROM must have an alias (SQLSTATE 42601)" name=46_init_permission_functions
time="2025-02-03T19:46:47Z" level=fatal msg="migration failed" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:274" error="46_init_permission_functions 06-permitted_orgs_function.sql: ERROR: subquery in FROM must have an alias (SQLSTATE 42601)" name=46_init_permission_functions
```

# How the Problems Are Solved

I used the original sql script on my database, which gave me the same
error. So I added an alias for the subquery and the error was gone.
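
For context, the PostgreSQL rule behind the error reduced to a minimal pair; illustrative SQL, not the actual function body:

```go
package setup

// Minimal illustration of SQLSTATE 42601: a subquery in FROM must carry
// an alias. Table and column names are made up for the example.
const (
	// Fails with "subquery in FROM must have an alias".
	brokenQuery = `SELECT id FROM (SELECT id FROM orgs)`
	// Works once the subquery is aliased.
	fixedQuery = `SELECT sub.id FROM (SELECT id FROM orgs) AS sub`
)
```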

# Additional Context

I was migrating from version 2.58.3

Closes https://github.com/zitadel/zitadel/issues/9300

Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
2025-02-04 10:47:22 +00:00
Tim Möhlmann
b6841251b1 feat(users/v2): return prompt information (#9255)
# Which Problems Are Solved

Add the ability to update the timestamp when MFA initialization was last
skipped.
Get User By ID now also returns the timestamp when MFA setup was last
skipped.

# How the Problems Are Solved

- Add a `HumanMFAInitSkipped` method to the `users/v2` API.
- MFA skipped was already projected in the `auth.users3` table. In this
PR the same column is added to the users projection. Event handling is
kept the same as in the `UserView`:

<details>


62804ca45f/internal/user/repository/view/model/user.go (L243-L377)

</details>

# Additional Changes

- none

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9197
2025-01-29 15:12:31 +00:00
Stefan Benz
a59c6b9f84 fix: change usage from filepath to path (#9260)
# Which Problems Are Solved

Paths for setup steps are joined with "\" when the binary is started
under Windows, which results in wrongly joined paths.

# How the Problems Are Solved

Replace the usage of the "filepath" package with "path", which only
joins with "/" and does nothing OS-specific.
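
The difference in one line: `filepath.Join` uses the OS-specific separator, while `path.Join` always uses "/". A sketch of the idea (function name hypothetical):

```go
package setup

import "path"

// stepFile joins the location of an embedded migration file. Embedded
// files are always addressed with "/", so "path" is the right package:
// on Windows, filepath.Join("cmd", "setup") yields `cmd\setup`, while
// path.Join("cmd", "setup") always yields "cmd/setup".
func stepFile(dir, file string) string {
	return path.Join(dir, file)
}
```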

# Additional Changes

None

# Additional Context

Closes #9227
2025-01-29 09:53:27 +01:00
Tim Möhlmann
ec5f18c168 fix(setup): split membership fields migration (#9230)
# Which Problems Are Solved

The membership fields migration timed out in certain cases. It also
tried to migrate instances which were already removed.

# How the Problems Are Solved

Revert the previous fix that combined the repeatable step for multiple
fill triggers. The membership migration is now single-run, as it might
take a lot of time. It is not worth making it repeatable. Instance IDs
of removed instances are skipped.

# Additional Changes

None

# Additional Context

Introduced in https://github.com/zitadel/zitadel/pull/9199
2025-01-24 11:24:35 +01:00
Tim Möhlmann
94cbf97534 fix(permissions_v2): add membership fields migration (#9199)
# Which Problems Are Solved

Memberships did not have a fields table fill migration.

# How the Problems Are Solved

Add filling of membership fields to the repeatable steps.

# Additional Changes

- Use the same repeatable step for multiple fill fields handlers.
- Fix an error for PostgreSQL 15 where a subquery in a `FROM` clause
needs an alias ing the `permitted_orgs` function.

# Additional Context

- Part of https://github.com/zitadel/zitadel/issues/9188
- Introduced in https://github.com/zitadel/zitadel/pull/9152
2025-01-17 16:16:26 +01:00
Silvan
9532c9bea5 fix(eventstore): correct sql push function (#9201)
# Which Problems Are Solved

https://github.com/zitadel/zitadel/pull/9186 introduced the new `push`
sql function for cockroachdb. The function used the wrong database
function to generate the position of the event and would therefore
insert events at a position before events created with an old Zitadel
version.

# How the Problems Are Solved

Instead of `EXTRACT(EPOCH FROM NOW())`, `cluster_logical_timestamp()` is
used to calculate the position of an event.

# Additional Context

- Introduced in https://github.com/zitadel/zitadel/pull/9186
- Affected versions:
https://github.com/zitadel/zitadel/releases/tag/v2.67.3
2025-01-17 15:32:05 +01:00
Silvan
4645045987 refactor: consolidate database pools (#9105)
# Which Problems Are Solved

Zitadel currently uses 3 database pools: 1 for queries, 1 for pushing
events and 1 for scheduled projection updates. This defeats the purpose
of a connection pool, which already handles multiple connections.

During load tests we found that the current structure of connection
pools consumes a lot of database resources. The resource usage dropped
after we reduced the number of database pools to 1, because existing
connections can be used more efficiently.

# How the Problems Are Solved

Removed logic to handle multiple connection pools and use a single one.

# Additional Changes

none

# Additional Context

part of https://github.com/zitadel/zitadel/issues/8352
2025-01-16 11:07:18 +00:00
Tim Möhlmann
3f6ea78c87 perf: role permissions in database (#9152)
# Which Problems Are Solved

Currently ZITADEL defines organization and instance member roles and
permissions in defaults.yaml. The permission check is done on API call
level. For example: "is this user allowed to make this call on this
org". This makes sense on the V1 API where the API is permission-level
shaped. For example, a search for users always happens in the context of
the organization (either the organization the calling user belongs to,
or through membership and the x-zitadel-orgid header).

However, for resource based APIs we must be able to resolve permissions
by object. For example, an IAM_OWNER listing users should be able to get
all users in an instance based on the query filters. Alternatively a
user may have user.read permissions on one or more orgs. They should be
able to read just those users.

# How the Problems Are Solved

## Role permission mapping

The role permission mappings defined from `defaults.yaml` or local
config override are synchronized to the database on every run of
`zitadel setup`:

- A single query per **aggregate** builds a list of `add` and `remove`
actions needed to reach the desired state of role permission mappings
from the config.
- The required events based on the actions are pushed to the event
store.
- Events define search fields so that permission checking can use the
indices and is strongly consistent for both query and command sides.

The migration is split in the following aggregates:

- System aggregate for roles prefixed with `SYSTEM`
- Each instance for roles not prefixed with `SYSTEM`. This is in
anticipation of instance level management over the API.

## Membership

Current instance / org / project membership events now have field table
definitions. Like the role permissions this ensures strong consistency
while still being able to use the indices of the fields table. A
migration is provided to fill the membership fields.

## Permission check

I aimed to keep the mental overhead for the developer to a minimum. The
provided implementation only provides a permission check for list
queries on org-level resources, for example users. In the `query`
package there is a simple helper function `wherePermittedOrgs` (sketched
below) which makes sure the underlying database function is called as
part of the `SELECT` query and the permitted organizations are part of
the `WHERE` clause. This makes sure results from non-permitted
organizations are omitted. Under the hood:

- A Pg/PlSQL function searches for the list of organization IDs for
which the passed user has the passed permission.
- When the user has the permission on instance level, it returns early
with all organizations.
- The function uses a number of views. The views help mapping the
fields entries into relational data and simplify the code of the
function. The views provide some pre-filters which allow proper index
usage once the final `WHERE` clauses are set by the function.
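
A sketch of what such a helper can look like; the argument list of the database function is simplified to the user ID and permission described above, and all names are illustrative:

```go
package query

import "fmt"

// wherePermittedOrgsSketch appends the permission filter to a list query:
// the database function computes the permitted org IDs and the WHERE
// clause keeps only rows belonging to them. A simplified illustration,
// not the actual helper.
func wherePermittedOrgsSketch(baseQuery, orgIDColumn, userID, permission string) (string, []any) {
	q := fmt.Sprintf(
		"%s WHERE %s = ANY(eventstore.permitted_orgs($1, $2))",
		baseQuery, orgIDColumn,
	)
	return q, []any{userID, permission}
}
```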

# Additional Changes



# Additional Context

Closes #9032
Closes https://github.com/zitadel/zitadel/issues/9014

https://github.com/zitadel/zitadel/issues/9188 defines follow-ups for
the new permission framework based on this concept.
2025-01-16 10:09:15 +00:00
Silvan
690147b30e perf(eventstore): fast push on crdb (#9186)
# Which Problems Are Solved

The performance of the initial push function can be further increased.

# How the Problems Are Solved

`eventstore.push`- and `eventstore.commands_to_events`-functions were
rewritten

# Additional Changes

none

# Additional Context

same optimizations as for postgres:
https://github.com/zitadel/zitadel/pull/9092
2025-01-15 14:55:48 +00:00
Silvan
1949d1546a fix: set correct owner on project grants (#9089)
# Which Problems Are Solved

In versions previous to v2.66 it was possible to set a different
resource owner on project grants. This was introduced with the new
resource based API. It was possible to overwrite the resource owner
using the x-zitadel-org header.

Because of this issue project grants got the wrong resource owner:
instead of the owner of the project, they got the granted org, which is
wrong because the resource owner of an aggregate is not allowed to change.

# How the Problems Are Solved

- The wrong owners of the events are set to the original owner of the
project.
- A new event is pushed to these aggregates `project.owner.corrected` 
- The projection updates the owners of the user grants if that event was
written

# Additional Changes

The eventstore push function (replaced in version 2.66) writes the
correct resource owner.

# Additional Context

closes https://github.com/zitadel/zitadel/issues/9072
2025-01-15 11:22:16 +01:00
Silvan
829f4543da perf(eventstore): redefine current sequences index (#9142)
# Which Problems Are Solved

On Zitadel cloud we found changing the order of columns in the
`eventstore.events2_current_sequence` index improved CPU usage for the
`SELECT ... FOR UPDATE` query the pusher executes.

# How the Problems Are Solved

`eventstore.events2_current_sequence`-index got replaced

# Additional Context

closes https://github.com/zitadel/zitadel/issues/9082
2025-01-08 16:54:17 +00:00
Tim Möhlmann
df2c6f1d4c perf(eventstore): optimize commands to events function (#9092)
# Which Problems Are Solved

We were seeing high query costs in the lateral join executed in the
commands_to_events procedural function in the database. The high cost
resulted in increasing CPU usage as a load test continued and fewer
req/sec handled, starting at 836 and ending at 130 req/sec.

# How the Problems Are Solved

1. Set `PARALLEL SAFE`. I noticed that this option defaults to `UNSAFE`,
but it's actually safe if the function doesn't `INSERT`.
2. Set the returned `ROWS 10` parameter.
3. The function is rewritten in Pl/PgSQL so that we eliminate expensive
joins.
4. Introduced an intermediate state that does `SELECT DISTINCT` for the
aggregate so that we don't have to do an expensive lateral join.
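
For reference, the first two options are declared in the function header; a skeleton with an assumed argument list and the body elided:

```go
package setup

// Skeleton showing where PARALLEL SAFE and ROWS 10 live in a PostgreSQL
// function definition; the argument list is assumed and the body elided.
const commandsToEventsHeader = `
CREATE OR REPLACE FUNCTION eventstore.commands_to_events(commands JSONB)
RETURNS SETOF eventstore.events2
LANGUAGE plpgsql
PARALLEL SAFE
ROWS 10
AS $$ /* body elided */ $$;`
```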

# Additional Changes

Use a `COALESCE` to get the owner from the last event, instead of a
`CASE` switch.

# Additional Context

- Function was introduced in
https://github.com/zitadel/zitadel/pull/8816
- Closes https://github.com/zitadel/zitadel/issues/8352

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-01-08 11:59:44 +00:00
Silvan
f320d18b1a perf(fields): create index for instance domain query (#9146)
# Which Problems Are Solved

Get instance by domain cannot provide an instance id because it is not
known at that time. This causes a full table scan on the fields table
because the current indexes always include the `instance_id` column.

# How the Problems Are Solved

Added a specific index for this query.

# Additional Context

If a system has many fields and there is no cache hit for the given
domain, this query can heavily influence database CPU usage; the newly
added index resolves this problem.
2025-01-07 16:06:33 +00:00
Livio Spring
50d2b26a28 feat: specify login UI version on instance and apps (#9071)
# Which Problems Are Solved

To be able to migrate or test the new login UI, admins might want to
(temporarily) switch individual apps.
At a later point admins might want to make sure all applications use the
new login UI.

# How the Problems Are Solved

- Added a feature flag on instance level to require all apps to use
the new login and provide an optional base url.
- if the flag is enabled, all (OIDC) applications will automatically use
the v2 login.
  - if disabled, applications can decide based on their configuration
- Added an option on OIDC apps to use the new login UI and an optional
base url.
- Removed the requirement to use `x-zitadel-login-client` to be
redirected to the login V2, and to retrieve created auth requests and
link them to SSO sessions.
- Added a new "IAM_LOGIN_CLIENT" role to allow management of users,
sessions, grants and more without `x-zitadel-login-client`.

# Additional Changes

None

# Additional Context

closes https://github.com/zitadel/zitadel/issues/8702
2024-12-19 10:37:46 +01:00
Tim Möhlmann
da706a8b30 fix(setup): make step 39 repeatable (#9085)
# Which Problems Are Solved

When downgrading zitadel and upgrading it again, it might be that orgs
deleted in this period still have stale entries in the fields table.

# How the Problems Are Solved

- Make the cleanup repeatable
- Scope the query by instance so that an index is used.
2024-12-18 16:48:22 +01:00
Silvan
b89e8a6037 fix(setup): make step 41 repeatable (#9084)
# Which Problems Are Solved

Setup step 41 cannot handle downgrades at the moment. This step writes
the instance domain to the fields table. If new instances are created
while the downgraded version is running, their domains would be missing
in the fields table afterwards.

# How the Problems Are Solved

Make step 41 repeatable for each version
2024-12-18 15:28:29 +00:00
Tim Möhlmann
6f6e2234eb fix(migrations): clean stale org fields using events (#9051)
# Which Problems Are Solved

Migration step 39 is supposed to clean up stale organization entries in
the eventstore.fields table. In order to do this it used the projection
to check which orgs still exist.

During initial setup of ZITADEL the first instance with the organization
is created. However, the projections are filled after all migrations are
done. With the organization projection empty, the fields of the first
org would be deleted.

This was discovered during development of a new field type. The
associated events did not yet have any projection-based fill assigned.
It seems fields with a pre-fill projection are somehow restored.
Therefore a restoration migration isn't required IMO.

# How the Problems Are Solved

Query the event store for `org.removed` events instead. This has the
drawback of using a sequential scan on the eventstore, making the
migration more expensive.

# Additional Changes

- none

# Additional Context

- Introduced in https://github.com/zitadel/zitadel/pull/8946
2024-12-12 18:37:18 +02:00
Silvan
77cd430b3a refactor(handler): cache active instances (#9008)
# Which Problems Are Solved

Scheduled handlers use `eventstore.InstanceIDs` to get all active
instances within a given timeframe. This function scrapes through all
events written within that time frame, which can cause heavy load on the
database.

# How the Problems Are Solved

A new query cache `activeInstances` is introduced which caches the ids
of all instances queried by id or host within the configured timeframe.

# Additional Changes

- Changed `default.yaml`
  - Removed `HandleActiveInstances` from custom handler configs
- Added `MaxActiveInstances` to define the maximal amount of cached
instance ids
- fixed start-from-init and start-from-setup so that auth and admin
projections are not started twice
- fixed org cache invalidation to use correct index

# Additional Context

- part of #8999
2024-12-06 11:32:53 +00:00
Silvan
6614aacf78 feat(fields): add instance domain (#9000)
# Which Problems Are Solved

Instance domains are only computed on the read side. This can cause
missing domains if calls are executed shortly after an instance domain
(or instance) was added.

# How the Problems Are Solved

The instance domain is added to the fields table which is filled on
command side.

# Additional Changes

- added setup step to compute instance domains
- instance by host uses fields table instead of instance_domains table

# Additional Context

- part of https://github.com/zitadel/zitadel/issues/8999
2024-12-04 18:10:10 +00:00
Silvan
dab5d9e756 refactor(eventstore): move push logic to sql (#8816)
# Which Problems Are Solved

If many events are written to the same aggregate id it can happen that
zitadel [starts to retry the push
transaction](48ffc902cc/internal/eventstore/eventstore.go (L101))
because [the locking
behaviour](48ffc902cc/internal/eventstore/v3/sequence.go (L25))
during push computes the wrong sequence, because newly committed
events are not visible to the transaction. These events impact the
current sequence.

In cases with high command traffic on a single aggregate id this can
have a severe impact on the general performance of zitadel, because many
connections of the `eventstore pusher` database pool are blocked by each
other.

# How the Problems Are Solved

To improve the performance this locking mechanism was removed and the
business logic of push is moved to sql functions which reduce network
traffic and can be analyzed by the database before the actual push. For
clients of the eventstore framework nothing changed.

# Additional Changes

- after a connection is established, the newly added database types are
prefetched
- `eventstore.BaseEvent` now returns the correct revision of the event

# Additional Context

- part of https://github.com/zitadel/zitadel/issues/8931

---------

Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Miguel Cabrerizo <30386061+doncicuto@users.noreply.github.com>
Co-authored-by: Joakim Lodén <Loddan@users.noreply.github.com>
Co-authored-by: Yxnt <Yxnt@users.noreply.github.com>
Co-authored-by: Stefan Benz <stefan@caos.ch>
Co-authored-by: Harsha Reddy <harsha.reddy@klaviyo.com>
Co-authored-by: Zach H <zhirschtritt@gmail.com>
2024-12-04 13:51:40 +00:00
Stefan Benz
7caa43ab23 feat: action v2 signing (#8779)
# Which Problems Are Solved

The action v2 messages didn't contain anything providing security
for the sent content.

# How the Problems Are Solved

Each Target now has a SigningKey, which can also be newly generated
through the API and is returned at creation and through the Get-Endpoints.
There is now an HTTP header "Zitadel-Signature", which is generated with
the SigningKey and payload, and also contains a timestamp to check with
a tolerance whether the message took too long to send.

# Additional Changes

The functionality to create and check the signature is provided in the
pkg/actions package, and can be reused in the SDK.

# Additional Context

Closes #7924

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2024-11-28 10:06:52 +00:00
Livio Spring
8537805ea5 feat(notification): use event worker pool (#8962)
# Which Problems Are Solved

The current handling of notifications follows the same pattern as all
other projections:
Created events are handled sequentially (based on "position") by a
handler. During the process, a lot of information is aggregated (user,
texts, templates, ...).
This leads to back pressure on the projection since the handling of
events might take longer than the time before a new event (to be
handled) is created.

# How the Problems Are Solved

- The current user notification handler creates separate notification
events based on the user / session events.
- These events contain all the present and required information
including the userID.
- These notification events get processed by notification workers, which
gather the necessary information (recipient address, texts, templates)
to send out these notifications.
- If a notification fails, a retry event is created based on the current
notification request including the current state of the user (this
prevents race conditions, where a user is changed in the meantime and
the notification already gets the new state).
- The retry event will be handled after a backoff delay. This delay
increases with every attempt.
- If the configured amount of attempts is reached or the message expired
(based on config), a cancel event is created, letting the workers know
that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the
notification aggregate and the existing "sent" events for the user /
session object are stored.
- The following is added to the defaults.yaml to allow configuration of
the notification workers:
```yaml

Notifications:
  # The amount of workers processing the notification request events.
  # If set to 0, no notification request events will be handled. This can be useful when running in
  # multi binary / pod setup and allowing only certain executables to process the events.
  Workers: 1 # ZITADEL_NOTIFIACATIONS_WORKERS
  # The amount of events a single worker will process in a run.
  BulkLimit: 10 # ZITADEL_NOTIFIACATIONS_BULKLIMIT
  # Time interval between scheduled notifications for request events
  RequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_REQUEUEEVERY
  # The amount of workers processing the notification retry events.
  # If set to 0, no notification retry events will be handled. This can be useful when running in
  # multi binary / pod setup and allowing only certain executables to process the events.
  RetryWorkers: 1 # ZITADEL_NOTIFIACATIONS_RETRYWORKERS
  # Time interval between scheduled notifications for retry events
  RetryRequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_RETRYREQUEUEEVERY
  # Only instances are projected, for which at least a projection-relevant event exists within the timeframe
  # from HandleActiveInstances duration in the past until the projection's current time
  # If set to 0 (default), every instance is always considered active
  HandleActiveInstances: 0s # ZITADEL_NOTIFIACATIONS_HANDLEACTIVEINSTANCES
  # The maximum duration a transaction remains open
  # before it spots left folding additional events
  # and updates the table.
  TransactionDuration: 1m # ZITADEL_NOTIFIACATIONS_TRANSACTIONDURATION
  # Automatically cancel the notification after the amount of failed attempts
  MaxAttempts: 3 # ZITADEL_NOTIFIACATIONS_MAXATTEMPTS
  # Automatically cancel the notification if it cannot be handled within a specific time
  MaxTtl: 5m  # ZITADEL_NOTIFIACATIONS_MAXTTL
  # Failed attempts are retried after a configured delay (with exponential backoff).
  # Set a minimum and maximum delay and a factor for the backoff
  MinRetryDelay: 1s  # ZITADEL_NOTIFIACATIONS_MINRETRYDELAY
  MaxRetryDelay: 20s # ZITADEL_NOTIFIACATIONS_MAXRETRYDELAY
  # Any factor below 1 will be set to 1
  RetryDelayFactor: 1.5 # ZITADEL_NOTIFIACATIONS_RETRYDELAYFACTOR
```
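
The resulting backoff can be summarized in a few lines; a sketch of the semantics described by the config above, with a hypothetical function name:

```go
package notification

import (
	"math"
	"time"
)

// retryDelay grows the delay by factor per attempt, clamped between min
// and max; factors below 1 are raised to 1, matching the config comment.
func retryDelay(min, max time.Duration, factor float64, attempt int) time.Duration {
	if factor < 1 {
		factor = 1
	}
	delay := time.Duration(float64(min) * math.Pow(factor, float64(attempt)))
	if delay > max {
		return max
	}
	return delay
}
```

With the defaults above, attempts would be retried after 1s, 1.5s, 2.25s, and so on, capped at 20s.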


# Additional Changes

None

# Additional Context

- closes #8931
2024-11-27 15:01:17 +00:00
Tim Möhlmann
ccef67cefa fix(eventstore): cleanup org fields on remove (#8946)
# Which Problems Are Solved

When an org is removed, the corresponding fields are not deleted. This
creates issues, such as recreating a new org with the same verified
domain.

# How the Problems Are Solved

Remove the search fields by the org aggregate, instead of just setting
the removed state.

# Additional Changes

- Cleanup migration script that removed current stale fields.

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/8943
- Related to https://github.com/zitadel/zitadel/pull/8790

---------

Co-authored-by: Silvan <silvan.reusser@gmail.com>
2024-11-26 15:26:41 +00:00
Livio Spring
fb6579e456 fix(milestones): use previous spelling for milestone types (#8886)
# Which Problems Are Solved

https://github.com/zitadel/zitadel/pull/8788 accidentally changed the
spelling of milestone types from PascalCase to snake_case. This breaks
systems where `milestone.pushed` events already exist.

# How the Problems Are Solved

- Use PascalCase again
- Prefix event types with v2. (The previously pushed event type was
ignored anyway.)
- Create `milestones3` projection

# Additional Changes

None

# Additional Context

relates to #8788
2024-11-11 11:28:27 +00:00
Tim Möhlmann
250f2344c8 feat(cache): redis cache (#8822)
# Which Problems Are Solved

Add a cache implementation using Redis single mode. This does not add
support for Redis Cluster or sentinel.

# How the Problems Are Solved

Added the `internal/cache/redis` package. All operations occur
atomically, including setting of secondary indexes, using LUA scripts
where needed.

The [`miniredis`](https://github.com/alicebob/miniredis) package is used
to run unit tests.
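
As a sketch of the atomic write-plus-secondary-index pattern with a Lua script, using go-redis; the key layout and names are illustrative, not the actual cache schema:

```go
package redis

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// setWithIndex writes the object key and a secondary-index key pointing
// to it in one atomic script evaluation.
var setWithIndex = redis.NewScript(`
redis.call("SET", KEYS[1], ARGV[1])
redis.call("SET", KEYS[2], KEYS[1])
return 1
`)

func set(ctx context.Context, rdb *redis.Client, key, indexKey, value string) error {
	return setWithIndex.Run(ctx, rdb, []string{key, indexKey}, value).Err()
}
```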

# Additional Changes

- Move connector code to `internal/cache/connector/...` and remove
duplicate code from `query` and `command` packages.
- Fix a missed invalidation on the restrictions projection

# Additional Context

Closes #8130
2024-11-04 10:44:51 +00:00
Livio Spring
041af26917 feat(OIDC): add back channel logout (#8837)
# Which Problems Are Solved

Currently ZITADEL supports RP-initiated logout for clients. Back-channel
logout ensures that user sessions are terminated across all connected
applications, even if the user closes their browser or loses
connectivity, providing a more secure alternative for certain use cases.

# How the Problems Are Solved

If the feature is activated and the client used for the authentication
has a back_channel_logout_uri configured, a
`session_logout.back_channel` will be registered. Once a user terminates
their session, a (notification) handler will send a SET (form POST) to
the registered uri containing a logout_token (with the user's ID and
session ID).
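
The notification itself is a plain form POST of the logout_token, as defined by the OIDC Back-Channel Logout spec; a simplified sketch that omits response-status handling:

```go
package notification

import (
	"context"
	"net/http"
	"net/url"
	"strings"
)

// sendBackChannelLogout POSTs the logout_token to the client's
// back_channel_logout_uri; token creation is out of scope here.
func sendBackChannelLogout(ctx context.Context, uri, logoutToken string) error {
	form := url.Values{"logout_token": {logoutToken}}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, uri, strings.NewReader(form.Encode()))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}
```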

- A new feature "back_channel_logout" is added on system and instance
level
- A `back_channel_logout_uri` can be managed on OIDC applications
- Added a `session_logout` aggregate to register and inform about sent
`back_channel` notifications
- Added a `SecurityEventToken` channel and a `Form` message type in the
notification handlers
- Added `TriggeredAtOrigin` fields to `HumanSignedOut` and
`TerminateSession` events for notification handling
- Exported various functions and types in the `oidc` package to be able
to reuse them for token signing in the back_channel notifier.
- To prevent currently existing session termination events from being
handled, a setup step is added to set the `current_states` for the
`projections.notifications_back_channel_logout` to the current position

- [x] requires https://github.com/zitadel/oidc/pull/671

# Additional Changes

- Updated all OTEL dependencies to v1.29.0, since OIDC already updated
some of them to that version.
- Single Session Termination feature is correctly checked (fixed feature
mapping)

# Additional Context

- closes https://github.com/zitadel/zitadel/issues/8467
- TODO:
  - Documentation
  - UI to be done: https://github.com/zitadel/zitadel/issues/8469

---------

Co-authored-by: Hidde Wieringa <hidde@hiddewieringa.nl>
2024-10-31 15:57:17 +01:00
Tim Möhlmann
32bad3feb3 perf(milestones): refactor (#8788)
# Which Problems Are Solved

Milestones used existing events from a number of aggregates. OIDC
session is one of them. We noticed in load-tests that the reduction of
the oidc_session.added event into the milestone projection is a costly
business with payload-based conditionals. A milestone is reached once,
but even then we remain subscribed to the OIDC events. This requires the
projections.current_states to be updated continuously.


# How the Problems Are Solved

The milestone creation is refactored to use dedicated events instead.
The command side decides when a milestone is reached and creates the
reached event once for each milestone when required.

# Additional Changes

In order to prevent reached milestones being created twice, a migration
script is provided. When the old `projections.milestones` table exists,
the state is read from there and `v2` milestone aggregate events are
created, with the original reached and pushed dates.

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/8800
2024-10-28 08:29:34 +00:00