Commit Graph

90 Commits

Author SHA1 Message Date
Iraq
9595a1bcca feat(db): adding relational instance table (#10007)

# Which Problems Are Solved

Implement the instance table in the new relational database schema.

# How the Problems Are Solved


The following fields must be managed in this table:

- `id`
- `name`
- `default_org_id`
- `zitadel_project_id`
- `console_client_id`
- `console_app_id`
- `default_language`
- `created_at`
- `updated_at`
- `deleted_at`

The repository must provide the following functions:

Manipulations:
- create
  - `name`
  - `default_org_id`
  - `zitadel_project_id`
  - `console_client_id`
  - `console_app_id`
  - `default_language`
- update
  - `name`
  - `default_language`
- delete

Queries:
- get returns a single instance matching the criteria and pagination;
it should return an error if multiple instances are found
- list returns a list of instances matching the criteria and pagination

Criteria are the following:
- by id

pagination:
- by created_at
- by updated_at
- by name
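
For illustration, a minimal Go sketch of such a repository interface (names and signatures are assumptions, not the final domain interface):

```go
package domain

import (
	"context"
	"time"
)

// Instance mirrors the columns of the relational instance table.
type Instance struct {
	ID               string
	Name             string
	DefaultOrgID     string
	ZitadelProjectID string
	ConsoleClientID  string
	ConsoleAppID     string
	DefaultLanguage  string
	CreatedAt        time.Time
	UpdatedAt        time.Time
	DeletedAt        *time.Time // nil while the instance is not soft-deleted
}

// InstanceRepository describes the manipulations and queries listed above.
type InstanceRepository interface {
	Create(ctx context.Context, instance *Instance) error
	Update(ctx context.Context, id, name, defaultLanguage string) error
	Delete(ctx context.Context, id string) error
	// Get returns exactly one instance; an error is returned
	// if more than one instance matches the criteria.
	Get(ctx context.Context, criteria Criteria, pagination Pagination) (*Instance, error)
	List(ctx context.Context, criteria Criteria, pagination Pagination) ([]*Instance, error)
}

// Criteria currently only supports filtering by id.
type Criteria struct {
	ID string
}

// Pagination orders by created_at, updated_at or name.
type Pagination struct {
	OrderBy string // "created_at", "updated_at" or "name"
	Limit   uint32
	Offset  uint32
}
```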

### instance events

The following events must be applied on the table using a projection
(`internal/query/projection`)

- `instance.added` results in create
- `instance.changed` changes the `name` field
- `instance.removed` sets the `deleted_at` field
- `instance.default.org.set` sets the `default_org_id` field
- `instance.iam.project.set` sets the `zitadel_project_id` field
- `instance.iam.console.set` sets the `console_client_id` and
`console_app_id` fields
- `instance.default.language.set` sets the `default_language` field
- if the answer to the discussion is yes: `instance.domain.primary.set` sets
the `primary_domain` field
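
A hedged sketch of how such a projection reduce step could map events to columns (payload keys are illustrative, not the actual `internal/query/projection` code):

```go
package projection

import "time"

// reduce maps an instance event onto the table column it changes.
// instance.added (row creation) and instance.iam.console.set (two columns)
// are omitted for brevity; payload keys are illustrative.
func reduce(eventType string, payload map[string]any) (column string, value any) {
	switch eventType {
	case "instance.changed":
		return "name", payload["name"]
	case "instance.removed":
		return "deleted_at", time.Now() // the real projection uses the event's creation date
	case "instance.default.org.set":
		return "default_org_id", payload["orgId"]
	case "instance.iam.project.set":
		return "zitadel_project_id", payload["iamProjectId"]
	case "instance.default.language.set":
		return "default_language", payload["language"]
	}
	return "", nil
}
```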

### acceptance criteria

- [x] migration is implemented and gets executed
- [x] domain interfaces are implemented and documented for service layer
- [x] repository is implemented and implements domain interface
- [x] testing
  - [x] the repository methods
  - [x] events get reduced correctly
  - [x] unique constraints

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9935
2025-06-17 09:46:01 +02:00
Iraq Jaber
d857c12b0f Merge branch 'main' into clean-transactional-propsal 2025-06-13 15:05:33 +02:00
Silvan
4df138286b perf(query): reduce user query duration (#10037)
# Which Problems Are Solved

The resource usage to query user(s) on the database was high and could
therefore impact performance.

# How the Problems Are Solved

Database queries involving the users and loginnames tables were improved,
and an index was added for the user-by-email query.

# Additional Changes

- spellchecks
- updated apis on load tests

# additional info

needs cherry pick to v3
2025-06-06 08:48:29 +00:00
Tim Möhlmann
b9c1cdf4ad feat(projections): resource counters (#9979)
# Which Problems Are Solved

Add the ability to keep track of the current counts of projection
resources. We want to prevent calling `SELECT COUNT(*)` on tables, as
that forces a full scan and causes sudden spikes in DB resource usage.

# How the Problems Are Solved

- A resource_counts table is added
- Triggers that increment and decrement the counted values on inserts
and deletes
- Triggers that delete all counts of a table when the source table is
TRUNCATEd. This is not in the business logic, but prevents wrong counts
in case someone wants to force a re-projection.
- Triggers that delete all counts if the parent resource is deleted
- Script to pre-populate the resource_counts table when a new source
table is added.

The triggers are reusable for any type of resource, in case we choose to
add more in the future.
Counts are aggregated by a given parent. Currently only `instance` and
`organization` are defined as possible parent. This can later be
extended to other types, such as `project`, should the need arise.

I deliberately chose to use `parent_id` to distinguish it from the
de-facto `resource_owner`, which is usually an organization ID. For
example:

- For users the parent is an organization and the `parent_id` matches
`resource_owner`.
- For organizations the parent is an instance, but the `resource_owner`
is the `org_id`. In this case the `parent_id` is the `instance_id`.
- Applications would have a similar problem, where the parent is a
project, but the `resource_owner` is the `org_id`.
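
A hedged sketch of what one such counting trigger could look like, as PL/pgSQL embedded in a Go migration constant (table, column and constraint names are assumptions, not the shipped migration):

```go
package migrations

// countTriggerSQL is an illustrative increment/decrement trigger; it assumes
// a unique constraint on (parent_type, parent_id, table_name) and that the
// counted table exposes a parent_id column. The shipped migration also
// handles TRUNCATE and deletion of the parent resource.
const countTriggerSQL = `
CREATE OR REPLACE FUNCTION count_resource() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO resource_counts (parent_type, parent_id, table_name, amount)
        VALUES (TG_ARGV[0], NEW.parent_id, TG_TABLE_NAME, 1)
        ON CONFLICT (parent_type, parent_id, table_name)
        DO UPDATE SET amount = resource_counts.amount + 1;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE resource_counts SET amount = amount - 1
        WHERE parent_type = TG_ARGV[0]
          AND parent_id   = OLD.parent_id
          AND table_name  = TG_TABLE_NAME;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$;`
```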


# Additional Context

Closes https://github.com/zitadel/zitadel/issues/9957
2025-06-03 14:15:30 +00:00
Silvan
362420f62b feat: init migrations for transactional tables (#9946)
# Which Problems Are Solved

To start with transactional tables, we need to set up the new `zitadel`
schema in a way that does not rely on the event store later.

# How the Problems Are Solved

A setup step was added which calls the function that executes the migrations.

# Additional Changes

none

# Additional Context

- closes #9933
2025-05-26 08:20:14 +02:00
Livio Spring
2cf3ef4de4 feat: federated logout for SAML IdPs (#9931)
# Which Problems Are Solved

Currently, if a user signs in using an IdP, once they sign out of
Zitadel, the corresponding IdP session is not terminated. This can be
the desired behavior. In some cases, e.g. when using a shared computer,
it results in a potential security risk, since a following user might be
able to sign in as the previous one using the still-open IdP session.

# How the Problems Are Solved

- Admins can enable a federated logout option on SAML IdPs through the
Admin and Management APIs.
- During the termination of a login V1 session using OIDC end_session
endpoint, Zitadel will check if an IdP was used to authenticate that
session.
- In case there was a SAML IdP used with Federated Logout enabled, it
will intercept the logout process, store the information into the shared
cache and redirect to the federated logout endpoint in the V1 login.
- The V1 login federated logout endpoint checks every request for an
existing cache entry. On success it will create a SAML logout request
for the used IdP and either redirect or POST to the configured SLO
endpoint. The cache entry is updated with a `redirected` state.
- An SLO endpoint is added to the `/idp` handlers, which will handle the
SAML logout responses. At the moment it will check again for an existing
federated logout entry (with state `redirected`) in the cache. On
success, the user is redirected to the initially provided
`post_logout_redirect_uri` from the end_session request.

# Additional Changes

None

# Additional Context

- This PR merges the https://github.com/zitadel/zitadel/pull/9841 and
https://github.com/zitadel/zitadel/pull/9854 to main, additionally
updating the docs on Entra ID SAML.
- closes #9228 
- backport to 3.x

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Zach Hirschtritt <zachary.hirschtritt@klaviyo.com>
2025-05-23 13:52:25 +02:00
Stefan Benz
21167a4bba fix: add current state for execution handler into setup (#9863)
# Which Problems Are Solved

The execution handler projection handles all events to check whether an
execution has to be provided to the worker to execute.
In this logic, all events would be processed from the beginning, which is
not necessary.

# How the Problems Are Solved

Add the current state to the execution handler projection, to avoid
processing all existing events.

# Additional Changes

Added custom configuration to the defaults, so that the transactions are
limited to a certain number of events.

# Additional Context

None
2025-05-07 14:26:53 +00:00
Silvan
a626678004 fix(setup): execute s54 (#9849)
# Which Problems Are Solved

Step 54 was not executed during setup.

# How the Problems Are Solved

Added the step to setup jobs

# Additional Changes

none

# Additional Context

- the step was added in https://github.com/zitadel/zitadel/pull/9837
- thanks to @zhirschtritt for raising this.
2025-05-06 06:15:45 +00:00
Zach Hirschtritt
aa9ef8b49e fix: Auto cleanup failed Setup steps if process is killed (#9736)
# Which Problems Are Solved

When running a long-running Zitadel Setup, Kubernetes might decide to
move a pod to a new node automatically. Currently, this puts any
migrations into a broken state that an operator needs to manually run
the "cleanup" command on - assuming they catch the error.

The only super long running commands are typically projection pre-fill
operations, which depending on the size of the event table for that
projection, can take many hours - plenty of time for Kubernetes to make
unexpected decisions, especially in a busy cluster.

# How the Problems Are Solved

This change listens on `os.Interrupt` and `syscall.SIGTERM`, cancels the
current Setup context, and runs the `Cleanup` command. The logs then
look something like this:
```shell
...
INFO[0000] verify migration                              caller="/Users/zach/src/zitadel/internal/migration/migration.go:43" name=repeatable_delete_stale_org_fields
INFO[0000] starting migration                            caller="/Users/zach/src/zitadel/internal/migration/migration.go:66" name=repeatable_delete_stale_org_fields
INFO[0000] execute delete query                          caller="/Users/zach/src/zitadel/cmd/setup/39.go:37" instance_id=281297936179003398 migration=repeatable_delete_stale_org_fields progress=1/1
INFO[0000] verify migration                              caller="/Users/zach/src/zitadel/internal/migration/migration.go:43" name=repeatable_fill_fields_for_instance_domains
INFO[0000] starting migration                            caller="/Users/zach/src/zitadel/internal/migration/migration.go:66" name=repeatable_fill_fields_for_instance_domains
----- SIGTERM signal issued -----
INFO[0000] received interrupt signal, shutting down: interrupt  caller="/Users/zach/src/zitadel/cmd/setup/setup.go:121"
INFO[0000] query failed                                  caller="/Users/zach/src/zitadel/internal/eventstore/repository/sql/query.go:135" error="timeout: context already done: context canceled"
DEBU[0000] filter eventstore failed                      caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:155" error="ID=SQL-KyeAx Message=unable to filter events Parent=(timeout: context already done: context canceled)" projection=instance_domain_fields
DEBU[0000] unable to rollback tx                         caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:110" error="sql: transaction has already been committed or rolled back" projection=instance_domain_fields
INFO[0000] process events failed                         caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:72" error="ID=SQL-KyeAx Message=unable to filter events Parent=(timeout: context already done: context canceled)" projection=instance_domain_fields
DEBU[0000] trigger iteration                             caller="/Users/zach/src/zitadel/internal/eventstore/handler/v2/field_handler.go:73" iteration=0 projection=instance_domain_fields
ERRO[0000] migration failed                              caller="/Users/zach/src/zitadel/internal/migration/migration.go:68" error="ID=SQL-KyeAx Message=unable to filter events Parent=(timeout: context already done: context canceled)" name=repeatable_fill_fields_for_instance_domains
ERRO[0000] migration finish failed                       caller="/Users/zach/src/zitadel/internal/migration/migration.go:71" error="context canceled" name=repeatable_fill_fields_for_instance_domains
----- Cleanup before exiting -----
INFO[0000] cleanup started                               caller="/Users/zach/src/zitadel/cmd/setup/cleanup.go:30"
INFO[0000] cleanup migration                             caller="/Users/zach/src/zitadel/cmd/setup/cleanup.go:47" name=repeatable_fill_fields_for_instance_domains
```
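
A minimal, self-contained sketch of the pattern (not the actual Zitadel setup code):

```go
package main

import (
	"context"
	"errors"
	"log"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Cancel the setup context on os.Interrupt or SIGTERM.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	var err error
	// The deferred cleanup can run because executeMigration returns an
	// error instead of calling os.Exit (as log.Fatal would).
	defer func() {
		if err != nil {
			log.Println("cleanup started")
			cleanup(context.Background()) // fresh context: ctx is already cancelled
		}
	}()

	if err = executeMigration(ctx); err != nil {
		log.Println("migration failed:", err)
	}
}

// executeMigration stands in for a long-running projection pre-fill.
func executeMigration(ctx context.Context) error {
	select {
	case <-time.After(10 * time.Second):
		return nil
	case <-ctx.Done():
		return errors.New("received interrupt signal, shutting down: " + ctx.Err().Error())
	}
}

func cleanup(ctx context.Context) {
	log.Println("cleanup migration done")
}
```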

# Additional Changes

* `mustExecuteMigration` -> `executeMigration`: **must**Execute previously
logged a Fatal error, which calls os.Exit, so no cleanup was possible.
Instead, this PR returns an error and assigns it to a shared error in
the Setup closure that defer can check.
* `initProjections` now returns an error instead of exiting

# Additional Context

This behavior might be unwelcome or at least unexpected in some cases.
Putting it behind a feature flag or config setting is likely a good
followup.

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-04-22 09:34:02 +00:00
Fabienne Bühler
07ce3b6905 chore!: Introduce ZITADEL v3 (#9645)
This PR summarizes multiple changes specifically only available with
ZITADEL v3:

- feat: Web Keys management
(https://github.com/zitadel/zitadel/pull/9526)
- fix(cmd): ensure proper working of mirror
(https://github.com/zitadel/zitadel/pull/9509)
- feat(Authz): system user support for permission check v2
(https://github.com/zitadel/zitadel/pull/9640)
- chore(license): change from Apache to AGPL
(https://github.com/zitadel/zitadel/pull/9597)
- feat(console): list v2 sessions
(https://github.com/zitadel/zitadel/pull/9539)
- fix(console): add loginV2 feature flag
(https://github.com/zitadel/zitadel/pull/9682)
- fix(feature flags): allow reading "own" flags
(https://github.com/zitadel/zitadel/pull/9649)
- feat(console): add Actions V2 UI
(https://github.com/zitadel/zitadel/pull/9591)

BREAKING CHANGE
- feat(webkey): migrate to v2beta API
(https://github.com/zitadel/zitadel/pull/9445)
- chore!: remove CockroachDB Support
(https://github.com/zitadel/zitadel/pull/9444)
- feat(actions): migrate to v2beta API
(https://github.com/zitadel/zitadel/pull/9489)

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Ramon <mail@conblem.me>
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Kenta Yamaguchi <56732734+KEY60228@users.noreply.github.com>
Co-authored-by: Harsha Reddy <harsha.reddy@klaviyo.com>
Co-authored-by: Livio Spring <livio@zitadel.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Iraq <66622793+kkrime@users.noreply.github.com>
Co-authored-by: Florian Forster <florian@zitadel.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Max Peintner <peintnerm@gmail.com>
2025-04-02 16:53:06 +02:00
Stefan Benz
6b23c33cb6 fix: rename idp_templates6_ldap3 to ldap2 if necessary (#9565)
# Which Problems Are Solved

Zitadel setup with v2.71.0 could result in errors regarding the
idp_templates6_ldap3 subtable.

# How the Problems Are Solved

Rename the subtable idp_templates6_ldap3 to idp_templates6_ldap2 if no
idp_templates6_ldap2 exists, and rename the column `rootCA` to
`root_ca`.

# Additional Changes

None

# Additional Context

Related PR #9292

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-03-26 19:26:16 +00:00
Iraq
11c9be3b8d chore: updating projections.idp_templates6 to projections.idp_templates7 (#9517)
# Which Problems Are Solved

This was left out as part of
https://github.com/zitadel/zitadel/pull/9292

- Closes https://github.com/zitadel/zitadel/issues/9514

---------

Co-authored-by: Iraq Jaber <IraqJaber@gmail.com>
2025-03-18 16:23:12 +01:00
Silvan
92f0cf018f fix(cmd): clarify notification config handling (#9459)
# Which Problems Are Solved

If the configuration `notifications.LegacyEnabled` is set to false when
using CockroachDB as a database, Zitadel does not start and prints
the following error: `level=fatal msg="unable to start zitadel"
caller="github.com/zitadel/zitadel/cmd/start/start_from_init.go:44"
error="can't scan into dest[0]: cannot scan NULL into *string"`

# How the Problems Are Solved

The combination of the setting and CockroachDB is checked, and a better
error is provided to the user.

# Additional Context

- introduced with https://github.com/zitadel/zitadel/pull/9321
2025-03-06 06:26:33 +00:00
Silvan
444f682e25 refactor(notification): use new queue package (#9360)
# Which Problems Are Solved

The recently introduced notification queue has potential race conditions.

# How the Problems Are Solved

The current code is refactored to use the queue package, which is safe
with regard to concurrency.

# Additional Changes

- the queue is included in startup
- improved code quality of queue

# Additional Context

- closes https://github.com/zitadel/zitadel/issues/9278
2025-02-27 11:49:12 +01:00
Tim Möhlmann
e670b9126c fix(permissions): chunked synchronization of role permission events (#9403)
# Which Problems Are Solved

Setup fails to push all role permission events when running Zitadel with
CockroachDB. `TransactionRetryError`s were visible in the logs, and the
setup job finally timed out with `timeout: context deadline exceeded`.

# How the Problems Are Solved

As suggested in the Cockroach documentation, _"break down larger
transactions"_. The commands to be pushed for the role permissions are
chunked into batches of 50 events per push. This chunking is only done
with CockroachDB.
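
The chunking itself is straightforward; an illustrative sketch, where `push` stands in for the eventstore push:

```go
package setup

import (
	"context"
	"fmt"
)

const chunkSize = 50

// pushChunked pushes commands in batches of at most chunkSize events,
// keeping each CockroachDB transaction small enough to avoid
// TransactionRetryErrors.
func pushChunked(ctx context.Context, cmds []string, push func(context.Context, []string) error) error {
	for start := 0; start < len(cmds); start += chunkSize {
		end := start + chunkSize
		if end > len(cmds) {
			end = len(cmds)
		}
		if err := push(ctx, cmds[start:end]); err != nil {
			return fmt.Errorf("push chunk %d-%d: %w", start, end, err)
		}
	}
	return nil
}
```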

# Additional Changes

- gci run fixed some unrelated imports
- access to `command.Commands` for the setup job, so we can reuse the
sync logic.

# Additional Context

Closes #9293

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-02-26 16:06:50 +00:00
Livio Spring
8f88c4cf5b feat: add PKCE option to generic OAuth2 / OIDC identity providers (#9373)
# Which Problems Are Solved

Some OAuth2 and OIDC providers require the use of PKCE for all their
clients. While ZITADEL already recommended the same for its clients, it
did not yet support the option on the IdP configuration.

# How the Problems Are Solved

- A new boolean `use_pkce` is added to the add/update generic OAuth/OIDC
endpoints.
- A new checkbox is added to the generic OAuth and OIDC provider
templates.
- The `rp.WithPKCE` option is added to the provider if the use of PKCE
has been set.
- The `rp.WithCodeChallenge` and `rp.WithCodeVerifier` options are added
to the OIDC/Auth BeginAuth and CodeExchange functions.
- Store verifier or any other persistent argument in the intent or auth
request.
- Create corresponding session object before creating the intent, to be
able to store the information.
- (refactored session structs to use a constructor for unified creation
and better overview of actual usage)
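
For context, PKCE (RFC 7636) pairs a random code verifier with its S256 challenge; a minimal sketch of generating such a pair:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// newPKCEPair returns a random code_verifier and its S256 code_challenge
// as defined in RFC 7636.
func newPKCEPair() (verifier, challenge string, err error) {
	buf := make([]byte, 32)
	if _, err = rand.Read(buf); err != nil {
		return "", "", err
	}
	verifier = base64.RawURLEncoding.EncodeToString(buf)
	sum := sha256.Sum256([]byte(verifier))
	challenge = base64.RawURLEncoding.EncodeToString(sum[:])
	return verifier, challenge, nil
}

func main() {
	v, c, _ := newPKCEPair()
	fmt.Println("code_verifier:", v)
	fmt.Println("code_challenge:", c, "(method S256)")
}
```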

Here's a screenshot showing the URI including the PKCE params:


![use_pkce_in_url](https://github.com/zitadel/zitadel/assets/30386061/eaeab123-a5da-4826-b001-2ae9efa35169)

# Additional Changes

None.

# Additional Context

- Closes #6449
- This PR replaces the existing PR (#8228) of @doncicuto. The base he
did was cherry picked. Thank you very much for that!

---------

Co-authored-by: Miguel Cabrerizo <doncicuto@gmail.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
2025-02-26 12:20:47 +00:00
Iraq
0cb0380826 feat: updating eventstore.permitted_orgs sql function (#9309)
# Which Problems Are Solved

Performance issue for GRPC call `zitadel.user.v2.UserService.ListUsers`
due to lack of org filtering on `ListUsers`

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9191

---------

Co-authored-by: Iraq Jaber <IraqJaber@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
2025-02-17 11:55:28 +02:00
Stefan Benz
49de5c61b2 feat: saml application configuration for login version (#9351)
# Which Problems Are Solved

OIDC applications can configure the used login version, which is
currently not possible for SAML applications.

# How the Problems Are Solved

Add the same functionality dependent on the feature-flag for SAML
applications.

# Additional Changes

None

# Additional Context

Closes #9267
Follow up issue for frontend changes #9354

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2025-02-13 16:03:05 +00:00
Silvan
415bc32ed6 feat: add task queue (#9321)
# Which Problems Are Solved

To integrate river as a task queue we need to ensure the migrations of
river are executed.

# How the Problems Are Solved

- A new schema was added to the Zitadel database called "queue"
- Added a repeatable setup step to Zitadel which executes the
[migrations of
river](https://riverqueue.com/docs/migrations#go-migration-api).

# Additional Changes

- Added more hooks to the databases to properly set the schema for the
task queue

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/9280
2025-02-12 14:51:55 +00:00
Tim Möhlmann
bcc6a689fa fix(setup): use template for in_tx_order type (#9346)
# Which Problems Are Solved

Systems running with PostgreSQL before Zitadel v2.39 are likely to have
a wrong type for the `in_tx_order` column in the `eventstore.event2`
table. The migration at the time used the `event_sequence` as default
value without typecast, which results in a `bigint` type for that
column. However, when creating the table from scratch, we explicitly
specify the type to be `integer`.

Starting from Zitadel v2.67 we use a PL/pgSQL function to push events.
The function requires the types from `eventstore.events2` to be the same
as the `select` destinations used in the function. In the function,
`in_tx_order` is also expected to be of `integer` type.

CockroachDB systems are not affected because `bigint` is an alias of the
`int` type. In other words, CockroachDB uses `int8` when specifying type
`int`. Therefore the types already match.

# How the Problems Are Solved

Retrieve the actual column type currently in use. A template is used to
assign the type to the `ordinality` column returned as `in_tx_order`.
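
An illustrative sketch of the approach (query and template are assumptions, not the shipped migration code):

```go
package migration

import (
	"context"
	"database/sql"
	"io"
	"text/template"
)

// migrationTmpl assigns whatever type is currently in use to the
// in_tx_order column (illustrative; not the shipped migration file).
const migrationTmpl = `ALTER TABLE eventstore.events2_new
    ADD COLUMN in_tx_order {{.OrderType}} NOT NULL DEFAULT 0;`

// columnType reads the type PostgreSQL currently uses for in_tx_order:
// bigint on systems migrated before v2.39, integer otherwise.
func columnType(ctx context.Context, db *sql.DB) (string, error) {
	var typ string
	err := db.QueryRowContext(ctx, `
		SELECT data_type FROM information_schema.columns
		WHERE table_schema = 'eventstore'
		  AND table_name   = 'events2'
		  AND column_name  = 'in_tx_order'`).Scan(&typ)
	return typ, err
}

func render(w io.Writer, orderType string) error {
	tmpl := template.Must(template.New("migration").Parse(migrationTmpl))
	return tmpl.Execute(w, struct{ OrderType string }{OrderType: orderType})
}
```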

# Additional Changes

- Detailed logging on migration failure

# Additional Context

- Closes #9180

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-02-12 11:06:34 +00:00
Stefan Benz
a59c6b9f84 fix: change usage from filepath to path (#9260)
# Which Problems Are Solved

Paths for setup steps are joined with "\" when the binary is started under
Windows, which results in wrongly joined paths.

# How the Problems Are Solved

Replace the usage of the "filepath" package with "path", which only joins
with "/" and does nothing OS-specific.

# Additional Changes

None

# Additional Context

Closes #9227
2025-01-29 09:53:27 +01:00
Tim Möhlmann
ec5f18c168 fix(setup): split membership fields migration (#9230)
# Which Problems Are Solved

The membership fields migration timed out in certain cases. It also
tried to migrate instances which were already removed.

# How the Problems Are Solved

Revert the previous fix that combined the repeatable step for multiple
fill triggers. The membership migration is now single-run, as it might
take a lot of time; it is not worth making it repeatable. Instance IDs
of removed instances are skipped.

# Additional Changes

None

# Additional Context

Introduced in https://github.com/zitadel/zitadel/pull/9199
2025-01-24 11:24:35 +01:00
Tim Möhlmann
94cbf97534 fix(permissions_v2): add membership fields migration (#9199)
# Which Problems Are Solved

Memberships did not have a fields table fill migration.

# How the Problems Are Solved

Add filling of membership fields to the repeatable steps.

# Additional Changes

- Use the same repeatable step for multiple fill fields handlers.
- Fix an error for PostgreSQL 15 where a subquery in a `FROM` clause
needs an alias in the `permitted_orgs` function.

# Additional Context

- Part of https://github.com/zitadel/zitadel/issues/9188
- Introduced in https://github.com/zitadel/zitadel/pull/9152
2025-01-17 16:16:26 +01:00
Silvan
4645045987 refactor: consolidate database pools (#9105)
# Which Problems Are Solved

Zitadel currently uses 3 database pools: 1 for queries, 1 for pushing
events and 1 for scheduled projection updates. This defeats the purpose
of a connection pool, which already handles multiple connections.

During load tests we found that the current structure of connection
pools consumes a lot of database resources. The resource usage dropped
after we reduced the amount of database pools to 1 because existing
connections can be used more efficiently.

# How the Problems Are Solved

Removed logic to handle multiple connection pools and use a single one.

# Additional Changes

none

# Additional Context

part of https://github.com/zitadel/zitadel/issues/8352
2025-01-16 11:07:18 +00:00
Tim Möhlmann
3f6ea78c87 perf: role permissions in database (#9152)
# Which Problems Are Solved

Currently ZITADEL defines organization and instance member roles and
permissions in defaults.yaml. The permission check is done on API call
level. For example: "is this user allowed to make this call on this
org". This makes sense on the V1 API where the API is permission-level
shaped. For example, a search for users always happens in the context of
the organization. (Either the organization the calling user belongs to,
or through member ship and the x-zitadel-orgid header.

However, for resource based APIs we must be able to resolve permissions
by object. For example, an IAM_OWNER listing users should be able to get
all users in an instance based on the query filters. Alternatively a
user may have user.read permissions on one or more orgs. They should be
able to read just those users.

# How the Problems Are Solved

## Role permission mapping

The role permission mappings defined from `defaults.yaml` or local
config override are synchronized to the database on every run of
`zitadel setup`:

- A single query per **aggregate** builds a list of `add` and `remove`
actions needed to reach the desired state or role permission mappings
from the config.
- The required events based on the actions are pushed to the event
store.
- Events define search fields so that permission checking can use the
indices and is strongly consistent for both query and command sides.

The migration is split in the following aggregates:

- System aggregate for roles prefixed with `SYSTEM`
- Each instance for roles not prefixed with `SYSTEM`. This is in
anticipation of instance level management over the API.
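
Conceptually, the synchronization is a set diff between the configured and the stored mappings; a hedged sketch:

```go
package setup

// diffRolePermissions computes which (role, permission) pairs must be added
// or removed so the stored state matches the configured state.
func diffRolePermissions(configured, stored map[string]map[string]bool) (add, remove [][2]string) {
	for role, perms := range configured {
		for perm := range perms {
			if !stored[role][perm] {
				add = append(add, [2]string{role, perm})
			}
		}
	}
	for role, perms := range stored {
		for perm := range perms {
			if !configured[role][perm] {
				remove = append(remove, [2]string{role, perm})
			}
		}
	}
	return add, remove
}
```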

## Membership

Current instance / org / project membership events now have field table
definitions. Like the role permissions this ensures strong consistency
while still being able to use the indices of the fields table. A
migration is provided to fill the membership fields.

## Permission check

I aimed at keeping the mental overhead for the developer to a minimum.
The provided implementation only provides a permission check for list
queries on org-level resources, for example users. In the `query`
package there is a simple helper function `wherePermittedOrgs` which
makes sure the underlying database function is called as part of the
`SELECT` query and the permitted organizations are part of the `WHERE`
clause. This makes sure results from non-permitted organizations are
omitted. Under the hood:

- A PL/pgSQL function searches for the list of organization IDs on which
the passed user has the passed permission.
- When the user has the permission on instance level, it returns early
with all organizations.
- The function uses a number of views. The views help map the fields
entries into relational data and simplify the code used by the
function. The views provide some pre-filters which allow proper index
usage once the final `WHERE` clauses are set by the function.
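
Schematically, the generated SQL restricts results to the organizations returned by the database function, along the lines of the sketch below (the projection table name, argument order and result shape of `permitted_orgs` are assumptions):

```go
package query

// permittedOrgsClause shows the shape of the generated SQL: results are
// restricted to the organizations returned by the permission function.
const permittedOrgsClause = `
SELECT u.* FROM projections.users u
WHERE u.resource_owner = ANY (
    SELECT org_id FROM eventstore.permitted_orgs($1, $2) -- user ID, permission
);`
```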

# Additional Changes



# Additional Context

Closes #9032
Closes https://github.com/zitadel/zitadel/issues/9014

https://github.com/zitadel/zitadel/issues/9188 defines follow-ups for
the new permission framework based on this concept.
2025-01-16 10:09:15 +00:00
Silvan
1949d1546a fix: set correct owner on project grants (#9089)
# Which Problems Are Solved

In versions previous to v2.66 it was possible to set a different
resource owner on project grants. This was introduced with the new
resource based API: the resource owner could be overwritten using
the x-zitadel-org header.

Because of this issue project grants got the wrong resource owner:
instead of the owner of the project, they got the granted org. This is
wrong because the resource owner of an aggregate is not allowed to change.

# How the Problems Are Solved

- The wrong owners of the events are set to the original owner of the
project.
- A new event, `project.owner.corrected`, is pushed to these aggregates.
- The projection updates the owners of the user grants if that event was
written.

# Additional Changes

The eventstore push function (replaced in version 2.66) now writes the
correct resource owner.

# Additional Context

closes https://github.com/zitadel/zitadel/issues/9072
2025-01-15 11:22:16 +01:00
Silvan
829f4543da perf(eventstore): redefine current sequences index (#9142)
# Which Problems Are Solved

On Zitadel cloud we found changing the order of columns in the
`eventstore.events2_current_sequence` index improved CPU usage for the
`SELECT ... FOR UPDATE` query the pusher executes.

# How the Problems Are Solved

The `eventstore.events2_current_sequence` index got replaced.

# Additional Context

closes https://github.com/zitadel/zitadel/issues/9082
2025-01-08 16:54:17 +00:00
Silvan
f320d18b1a perf(fields): create index for instance domain query (#9146)
# Which Problems Are Solved

Get instance by domain cannot provide an instance ID because it is not
known at that time. This causes a full table scan on the fields table,
because the current indexes always include the `instance_id` column.

# How the Problems Are Solved

Added a specific index for this query.

# Additional Context

If a system has many fields and there is no cache hit for the given
domain, this query can heavily influence database CPU usage; the newly
added index resolves this problem.
2025-01-07 16:06:33 +00:00
Livio Spring
50d2b26a28 feat: specify login UI version on instance and apps (#9071)
# Which Problems Are Solved

To be able to migrate or test the new login UI, admins might want to
(temporarily) switch individual apps.
At a later point admin might want to make sure all applications use the
new login UI.

# How the Problems Are Solved

- Added a feature flag on instance level to require all apps to use
the new login and provide an optional base url.
- if the flag is enabled, all (OIDC) applications will automatically use
the v2 login.
  - if disabled, applications can decide based on their configuration
- Added an option on OIDC apps to use the new login UI and an optional
base url.
- Removed the requirement to use `x-zitadel-login-client` to be
redirected to the login V2 and retrieve created authrequest and link
them to SSO sessions.
- Added a new "IAM_LOGIN_CLIENT" role to allow management of users,
sessions, grants and more without `x-zitadel-login-client`.

# Additional Changes

None

# Additional Context

closes https://github.com/zitadel/zitadel/issues/8702
2024-12-19 10:37:46 +01:00
Tim Möhlmann
da706a8b30 fix(setup): make step 39 repeatable (#9085)
# Which Problems Are Solved

When downgrading zitadel and upgrading it again, it might be that orgs
deleted in this period still have stale entries in the fields table.

# How the Problems Are Solved

- Make the cleanup repeatable
- Scope the query by instance so that an index is used.
2024-12-18 16:48:22 +01:00
Silvan
b89e8a6037 fix(setup): make step 41 repeatable (#9084)
# Which Problems Are Solved

Setup step 41 cannot handle downgrades at the moment. This step writes
the instance domain to the fields table. If new instances are created
while the downgraded version is running, their domains would be missing
in the fields afterwards.

# How the Problems Are Solved

Make step 41 repeatable for each version
2024-12-18 15:28:29 +00:00
Silvan
6614aacf78 feat(fields): add instance domain (#9000)
# Which Problems Are Solved

Instance domains are only computed on the read side. This can cause
missing domains if calls are executed shortly after an instance domain
(or instance) was added.

# How the Problems Are Solved

The instance domain is added to the fields table which is filled on
command side.

# Additional Changes

- added setup step to compute instance domains
- instance by host uses fields table instead of instance_domains table

# Additional Context

- part of https://github.com/zitadel/zitadel/issues/8999
2024-12-04 18:10:10 +00:00
Silvan
dab5d9e756 refactor(eventstore): move push logic to sql (#8816)
# Which Problems Are Solved

If many events are written to the same aggregate id it can happen that
zitadel [starts to retry the push
transaction](48ffc902cc/internal/eventstore/eventstore.go (L101))
because [the locking
behaviour](48ffc902cc/internal/eventstore/v3/sequence.go (L25))
during push computes the wrong sequence, as newly committed events are
not visible to the transaction. These events impact the current sequence.

In cases with high command traffic on a single aggregate id this can
have a severe impact on the general performance of zitadel, because many
connections of the `eventstore pusher` database pool are blocked by each
other.

# How the Problems Are Solved

To improve the performance this locking mechanism was removed and the
business logic of push is moved to sql functions which reduce network
traffic and can be analyzed by the database before the actual push. For
clients of the eventstore framework nothing changed.

# Additional Changes

- after a connection is established, the newly added database types are
prefetched
- `eventstore.BaseEvent` now returns the correct revision of the event

# Additional Context

- part of https://github.com/zitadel/zitadel/issues/8931

---------

Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Miguel Cabrerizo <30386061+doncicuto@users.noreply.github.com>
Co-authored-by: Joakim Lodén <Loddan@users.noreply.github.com>
Co-authored-by: Yxnt <Yxnt@users.noreply.github.com>
Co-authored-by: Stefan Benz <stefan@caos.ch>
Co-authored-by: Harsha Reddy <harsha.reddy@klaviyo.com>
Co-authored-by: Zach H <zhirschtritt@gmail.com>
2024-12-04 13:51:40 +00:00
Stefan Benz
7caa43ab23 feat: action v2 signing (#8779)
# Which Problems Are Solved

The action v2 messages didn't contain anything providing security
for the sent content.

# How the Problems Are Solved

Each Target now has a SigningKey, which can also be newly generated
through the API and is returned at creation and through the Get endpoints.
There is now an HTTP header "Zitadel-Signature", which is generated from
the SigningKey and payload, and also contains a timestamp to check, with
a tolerance, whether the message took too long to send.
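
A hedged sketch of such timestamped HMAC signing (the exact format of the `Zitadel-Signature` header value is illustrative):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// sign computes an HMAC-SHA256 over the timestamp and payload, so the
// receiver can verify both integrity and freshness (within a tolerance).
func sign(signingKey, payload []byte, now time.Time) string {
	ts := fmt.Sprintf("%d", now.Unix())
	mac := hmac.New(sha256.New, signingKey)
	mac.Write([]byte(ts + "."))
	mac.Write(payload)
	return fmt.Sprintf("t=%s,v1=%s", ts, hex.EncodeToString(mac.Sum(nil)))
}

func main() {
	fmt.Println(sign([]byte("secret"), []byte(`{"event":"user.added"}`), time.Now()))
}
```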

# Additional Changes

The functionality to create and check the signature is provided in the
pkg/actions package, and can be reused in the SDK.

# Additional Context

Closes #7924

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2024-11-28 10:06:52 +00:00
Livio Spring
8537805ea5 feat(notification): use event worker pool (#8962)
# Which Problems Are Solved

The current handling of notifications follows the same pattern as all
other projections:
Created events are handled sequentially (based on "position") by a
handler. During the process, a lot of information is aggregated (user,
texts, templates, ...).
This leads to back pressure on the projection since the handling of
events might take longer than the time before a new event (to be
handled) is created.

# How the Problems Are Solved

- The current user notification handler creates separate notification
events based on the user / session events.
- These events contain all the present and required information
including the userID.
- These notification events get processed by notification workers, which
gather the necessary information (recipient address, texts, templates)
to send out these notifications.
- If a notification fails, a retry event is created based on the current
notification request including the current state of the user (this
prevents race conditions, where a user is changed in the meantime and
the notification already gets the new state).
- The retry event will be handled after a backoff delay. This delay
increases with every attempt.
- If the configured amount of attempts is reached or the message expired
(based on config), a cancel event is created, letting the workers know
that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the
notification aggregate and the existing "sent" events for the user /
session object are stored.
- The following is added to the defaults.yaml to allow configuration of
the notification workers:
```yaml

Notifications:
  # The amount of workers processing the notification request events.
  # If set to 0, no notification request events will be handled. This can be useful when running in
  # multi binary / pod setup and allowing only certain executables to process the events.
  Workers: 1 # ZITADEL_NOTIFIACATIONS_WORKERS
  # The amount of events a single worker will process in a run.
  BulkLimit: 10 # ZITADEL_NOTIFIACATIONS_BULKLIMIT
  # Time interval between scheduled notifications for request events
  RequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_REQUEUEEVERY
  # The amount of workers processing the notification retry events.
  # If set to 0, no notification retry events will be handled. This can be useful when running in
  # multi binary / pod setup and allowing only certain executables to process the events.
  RetryWorkers: 1 # ZITADEL_NOTIFIACATIONS_RETRYWORKERS
  # Time interval between scheduled notifications for retry events
  RetryRequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_RETRYREQUEUEEVERY
  # Only instances are projected, for which at least a projection-relevant event exists within the timeframe
  # from HandleActiveInstances duration in the past until the projection's current time
  # If set to 0 (default), every instance is always considered active
  HandleActiveInstances: 0s # ZITADEL_NOTIFIACATIONS_HANDLEACTIVEINSTANCES
  # The maximum duration a transaction remains open
  # before it stops folding additional events
  # and updates the table.
  TransactionDuration: 1m # ZITADEL_NOTIFIACATIONS_TRANSACTIONDURATION
  # Automatically cancel the notification after the amount of failed attempts
  MaxAttempts: 3 # ZITADEL_NOTIFIACATIONS_MAXATTEMPTS
  # Automatically cancel the notification if it cannot be handled within a specific time
  MaxTtl: 5m  # ZITADEL_NOTIFIACATIONS_MAXTTL
  # Failed attempts are retried after a configured delay (with exponential backoff).
  # Set a minimum and maximum delay and a factor for the backoff
  MinRetryDelay: 1s  # ZITADEL_NOTIFIACATIONS_MINRETRYDELAY
  MaxRetryDelay: 20s # ZITADEL_NOTIFIACATIONS_MAXRETRYDELAY
  # Any factor below 1 will be set to 1
  RetryDelayFactor: 1.5 # ZITADEL_NOTIFIACATIONS_RETRYDELAYFACTOR
```
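
The retry delay derived from these settings follows a standard capped exponential backoff; an illustrative sketch:

```go
package main

import (
	"fmt"
	"time"
)

// retryDelay computes the delay before the given attempt (1-based),
// clamping the result to max; factors below 1 are treated as 1,
// matching the config comment above.
func retryDelay(min, max time.Duration, factor float64, attempt int) time.Duration {
	if factor < 1 {
		factor = 1
	}
	d := min
	for i := 1; i < attempt; i++ {
		d = time.Duration(float64(d) * factor)
	}
	if d > max {
		d = max
	}
	return d
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		fmt.Println(attempt, retryDelay(time.Second, 20*time.Second, 1.5, attempt))
	}
}
```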


# Additional Changes

None

# Additional Context

- closes #8931
2024-11-27 15:01:17 +00:00
Tim Möhlmann
ccef67cefa fix(eventstore): cleanup org fields on remove (#8946)
# Which Problems Are Solved

When an org is removed, the corresponding fields are not deleted. This
creates issues, for example when recreating a new org with the same
verified domain.

# How the Problems Are Solved

Remove the search fields by the org aggregate, instead of just setting
the removed state.

# Additional Changes

- Cleanup migration script that removes current stale fields.

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/8943
- Related to https://github.com/zitadel/zitadel/pull/8790

---------

Co-authored-by: Silvan <silvan.reusser@gmail.com>
2024-11-26 15:26:41 +00:00
Livio Spring
fb6579e456 fix(milestones): use previous spelling for milestone types (#8886)
# Which Problems Are Solved

https://github.com/zitadel/zitadel/pull/8788 accidentally changed the
spelling of milestone types from PascalCase to snake_case. This breaks
systems where `milestone.pushed` events already exist.

# How the Problems Are Solved

- Use PascalCase again
- Prefix event types with v2. (The previously pushed event type was
ignored anyway.)
- Create the `milestones3` projection

# Additional Changes

None

# Additional Context

relates to #8788
2024-11-11 11:28:27 +00:00
Tim Möhlmann
250f2344c8 feat(cache): redis cache (#8822)
# Which Problems Are Solved

Add a cache implementation using Redis single mode. This does not add
support for Redis Cluster or Sentinel.

# How the Problems Are Solved

Added the `internal/cache/redis` package. All operations occur
atomically, including setting of secondary indexes, using LUA scripts
where needed.

The [`miniredis`](https://github.com/alicebob/miniredis) package is used
to run unit tests.
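
As an illustration of the atomic pattern (not the actual scripts), a value and a secondary-index pointer can be written in one atomic round trip with go-redis `Eval`; key names are hypothetical:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// setWithIndex stores the object under its primary key and registers a
// secondary-index pointer within the same atomic Lua invocation.
const setWithIndex = `
redis.call("SET", KEYS[1], ARGV[1])
redis.call("SET", KEYS[2], KEYS[1])
return 1`

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
	err := rdb.Eval(ctx, setWithIndex,
		[]string{"cache:obj:42", "cache:idx:name:hello"},
		`{"id":42,"name":"hello"}`).Err()
	fmt.Println(err)
}
```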

# Additional Changes

- Move connector code to `internal/cache/connector/...` and remove
duplicate code from `query` and `command` packages.
- Fix a missed invalidation on the restrictions projection

# Additional Context

Closes #8130
2024-11-04 10:44:51 +00:00
Livio Spring
041af26917 feat(OIDC): add back channel logout (#8837)
# Which Problems Are Solved

Currently ZITADEL supports RP-initiated logout for clients. Back-channel
logout ensures that user sessions are terminated across all connected
applications, even if the user closes their browser or loses
connectivity, providing a more secure alternative for certain use cases.

# How the Problems Are Solved

If the feature is activated and the client used for the authentication
has a back_channel_logout_uri configured, a
`session_logout.back_channel` will be registered. Once a user terminates
their session, a (notification) handler will send a SET (form POST) to
the registered uri containing a logout_token (with the user's ID and
session ID).

- A new feature "back_channel_logout" is added on system and instance
level
- A `back_channel_logout_uri` can be managed on OIDC applications
- Added a `session_logout` aggregate to register and inform about sent
`back_channel` notifications
- Added a `SecurityEventToken` channel and `Form` message type in the
notification handlers
- Added `TriggeredAtOrigin` fields to `HumanSignedOut` and
`TerminateSession` events for notification handling
- Exported various functions and types in the `oidc` package to be able
to reuse for token signing in the back_channel notifier.
- To prevent that current existing session termination events will be
handled, a setup step is added to set the `current_states` for the
`projections.notifications_back_channel_logout` to the current position

- [x] requires https://github.com/zitadel/oidc/pull/671
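
Per the OIDC Back-Channel Logout 1.0 spec, the logout_token sent in the POST is a signed JWT; a sketch of its claims (Zitadel fills `sub` with the user ID and `sid` with the session ID):

```go
package oidc

// logoutTokenClaims models the body of a back-channel logout token as
// defined by OIDC Back-Channel Logout 1.0.
type logoutTokenClaims struct {
	Issuer    string `json:"iss"`
	Subject   string `json:"sub"` // user ID
	Audience  string `json:"aud"` // client ID (may also be an array)
	IssuedAt  int64  `json:"iat"`
	JWTID     string `json:"jti"`
	SessionID string `json:"sid"`
	// The events claim marks this JWT as a logout token; its single
	// member is "http://schemas.openid.net/event/backchannel-logout".
	Events map[string]struct{} `json:"events"`
}
```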

# Additional Changes

- Updated all OTEL dependencies to v1.29.0, since OIDC already updated
some of them to that version.
- Single Session Termination feature is correctly checked (fixed feature
mapping)

# Additional Context

- closes https://github.com/zitadel/zitadel/issues/8467
- TODO:
  - Documentation
  - UI to be done: https://github.com/zitadel/zitadel/issues/8469

---------

Co-authored-by: Hidde Wieringa <hidde@hiddewieringa.nl>
2024-10-31 15:57:17 +01:00
Tim Möhlmann
32bad3feb3 perf(milestones): refactor (#8788)
Some checks are pending
ZITADEL CI/CD / core (push) Waiting to run
ZITADEL CI/CD / console (push) Waiting to run
ZITADEL CI/CD / version (push) Waiting to run
ZITADEL CI/CD / compile (push) Blocked by required conditions
ZITADEL CI/CD / core-unit-test (push) Blocked by required conditions
ZITADEL CI/CD / core-integration-test (push) Blocked by required conditions
ZITADEL CI/CD / lint (push) Blocked by required conditions
ZITADEL CI/CD / container (push) Blocked by required conditions
ZITADEL CI/CD / e2e (push) Blocked by required conditions
ZITADEL CI/CD / release (push) Blocked by required conditions
Code Scanning / CodeQL-Build (go) (push) Waiting to run
Code Scanning / CodeQL-Build (javascript) (push) Waiting to run
# Which Problems Are Solved

Milestones used existing events from a number of aggregates. OIDC
session is one of them. We noticed in load-tests that the reduction of
the oidc_session.added event into the milestone projection is costly,
due to payload-based conditionals. A milestone is reached once, but even
then we remain subscribed to the OIDC events. This requires the
projections.current_states to be updated continuously.


# How the Problems Are Solved

The milestone creation is refactored to use dedicated events instead.
The command side decides when a milestone is reached and creates the
reached event once for each milestone when required.

# Additional Changes

In order to prevent reached milestones being created twice, a migration
script is provided. When the old `projections.milestones` table exist,
the state is read from there and `v2` milestone aggregate events are
created, with the original reached and pushed dates.

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/8800
2024-10-28 08:29:34 +00:00
Tim Möhlmann
a84b259e8c perf(oidc): nest position clause for session terminated query (#8738)
# Which Problems Are Solved

Optimize the query that checks for terminated sessions in the access
token verifier. The verifier is used in auth middleware, userinfo and
introspection.


# How the Problems Are Solved

The previous implementation built a query for certain events and then
appended a single `PositionAfter` clause. This caused the postgreSQL
planner to use indexes only for the instance ID, aggregate IDs,
aggregate types and event types. Followed by an expensive sequential
scan for the position. This resulting in internal over-fetching of rows
before the final filter was applied.


![Screenshot_20241007_105803](https://github.com/user-attachments/assets/f2d91976-be87-428b-b604-a211399b821c)

Furthermore, the query was searching for events which are not always
applicable. For example, there was always a session ID search, and if
there was a user ID, we would also search for a browser fingerprint in
the event payload (expensive), even if those argument strings were empty.

This PR changes:

1. Nest the position query, so that a full `instance_id, aggregate_id,
aggregate_type, event_type, "position"` index can be matched.
2. Redefine the `es_wm` index to include the `position` column.
3. Only search for events for the IDs that actually have a value. Do not
search (noop) if none of session ID, user ID or fingerprint ID are set.
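
Schematically, the nesting moves the position condition into each event filter so the full index applies; the sketch below shows the shape only (column lists and aggregate types are illustrative):

```go
package query

// Before: one trailing "position" filter followed the OR-ed event
// filters, so index usage stopped at event_type. Nesting the position
// condition into each branch lets the planner match the full
// (instance_id, aggregate_type, aggregate_id, event_type, "position") index.
const terminatedSessionQuery = `
SELECT e.* FROM eventstore.events2 e
WHERE (
       (e.instance_id = $1 AND e.aggregate_type = 'session'
        AND e.aggregate_id = $2 AND e."position" > $4)
    OR (e.instance_id = $1 AND e.aggregate_type = 'user'
        AND e.aggregate_id = $3 AND e."position" > $4)
);`
```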

New query plan:


![Screenshot_20241007_110648](https://github.com/user-attachments/assets/c3234c33-1b76-4b33-a4a9-796f69f3d775)


# Additional Changes

- cleanup how we load multi-statement migrations and make that a bit
more reusable.

# Additional Context

- Related to https://github.com/zitadel/zitadel/issues/7639
2024-10-07 12:49:55 +00:00
Tim Möhlmann
25dc7bfe72 perf(cache): pgx pool connector (#8703)
# Which Problems Are Solved

Cache implementation using a PGX connection pool.

# How the Problems Are Solved

Defines a new schema `cache` in the zitadel database.
A table for string keys and a table for objects is defined.
For PostgreSQL, tables are unlogged and partitioned by cache name for
performance.

Cockroach does not have unlogged tables and partitioning is an
enterprise feature that uses alternative syntax combined with sharding.
Regular tables are used here.

# Additional Changes

- `postgres.Config` can return a pgx pool. See the following discussion

# Additional Context

- Part of https://github.com/zitadel/zitadel/issues/8648
- Closes https://github.com/zitadel/zitadel/issues/8647

---------

Co-authored-by: Silvan <silvan.reusser@gmail.com>
2024-10-04 13:15:41 +00:00
Livio Spring
14e2aba1bc feat: Add Twilio Verification Service (#8678)
# Which Problems Are Solved
Twilio provides a robust, multi-channel verification service that
notably supports the multi-region SMS sender numbers required for our use
case. Currently, Zitadel does much of the work of Twilio Verify itself
(e.g. localization, code generation, messaging) but doesn't support the
pool of sender numbers that Twilio Verify does.

# How the Problems Are Solved
To support this API, we need to be able to store the Twilio Service ID
and send that in a verification request where appropriate: phone number
verification and SMS 2FA code paths.

This PR does the following: 
- Adds the ability to use Twilio Verify or standard messaging through
Twilio
- Adds support for international numbers and more reliable verification
messages sent from multiple numbers
- Adds a new Twilio configuration option to support Twilio Verify in the
admin console
- Sends verification SMS messages through Twilio Verify
- Implements Twilio Verification Checks for codes generated through the
same

# Additional Changes

# Additional Context
- base was implemented by @zhirschtritt in
https://github.com/zitadel/zitadel/pull/8268 ❤️
- closes https://github.com/zitadel/zitadel/issues/8581

---------

Co-authored-by: Zachary Hirschtritt <zachary.hirschtritt@klaviyo.com>
Co-authored-by: Joey Biscoglia <joey.biscoglia@klaviyo.com>
2024-09-26 09:14:33 +02:00
Tim Möhlmann
4eaa3163b6 feat(storage): generic cache interface (#8628)
# Which Problems Are Solved

We identified the need for caching.
Currently we have a number of places where we use different ways of
caching, like Go maps or LRU.
We might also want shared caches in the future, like Redis-based ones or
caches in special SQL tables.

# How the Problems Are Solved

Define a generic Cache interface which allows different implementations.

- A noop implementation is provided and enabled as the default.
- An implementation using go maps is provided
  - disabled in defaults.yaml
  - enabled in integration tests
- Authz middleware instance objects are cached using the interface.
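
A hedged sketch of what such a generic interface can look like (the method set is illustrative, not the actual `internal/cache` API):

```go
package cache

import "context"

// Cache is a generic key-value cache. Implementations can be a noop,
// in-memory Go maps, Redis, or special SQL tables.
type Cache[K comparable, V any] interface {
	Get(ctx context.Context, key K) (V, bool)
	Set(ctx context.Context, key K, value V)
	Invalidate(ctx context.Context, keys ...K) error
}
```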

# Additional Changes

- Enabled the integration test command's race flag
- Fix a race condition in the limits integration test client
- Fix a number of flaky integration tests. (Because zitadel is super
fast now!) 🎸 🚀

# Additional Context

Related to https://github.com/zitadel/zitadel/issues/8648
2024-09-25 21:40:21 +02:00
Livio Spring
9ec9ad4314 feat(oidc): sid claim for id_tokens issued through login V1 (#8525)
# Which Problems Are Solved

id_tokens issued for auth requests created through the login UI
currently do not provide a sid claim.
This is due to the fact that (SSO) sessions for the login UI do not have
one and are only computed by the userAgent(ID), the user(ID) and the
authentication checks of the latter.

This prevents clients from tracking sessions and terminating specific
sessions on the end_session_endpoint.

# How the Problems Are Solved

- An `id` column is added to the `auth.user_sessions` table.
- The `id` (prefixed with `V1_`) is set whenever a session is added or
updated to active (from terminated)
- The id is passed to the `oidc session` (as v2 sessionIDs), to expose
it as `sid` claim

# Additional Changes

- refactored `getUpdateCols` to handle different column value types and
add arguments for query

# Additional Context

- closes #8499 
- relates to #8501
2024-09-03 13:19:00 +00:00
Silvan
23bebc7e30 fix(fields): add index to improve search by aggregate (#8267)
# Which Problems Are Solved

During performance testing of the `eventstore.fields` table we found
some long running queries which searched for the aggregate id.

# How the Problems Are Solved

A new index was added to the `eventstore.fields`-table called
`f_aggregate_object_type_idx`.

# Additional Changes

None

# Additional Context

- Table was added in https://github.com/zitadel/zitadel/pull/8191
- Part of https://github.com/zitadel/zitadel/issues/7639
2024-07-08 15:54:19 +00:00
Tim Möhlmann
7967e6f98b perf(import): optimize search for domains claimed by other organizations (#8200)
# Which Problems Are Solved

Improve the performance of human imports by optimizing the query that
finds domains claimed by other organizations.

# How the Problems Are Solved

Use the fields search table introduced in
https://github.com/zitadel/zitadel/pull/8191 by storing each
organization domain as Object ID and the verified status as field value.

# Additional Changes

- Feature flag for this optimization

# Additional Context

- Performance improvements for import are evaluated and acted upon
internally at the moment

---------

Co-authored-by: adlerhurst <silvan.reusser@gmail.com>
2024-07-05 09:36:00 +02:00
Silvan
1d84635836 feat(eventstore): add search table (#8191)
# Which Problems Are Solved

To improve performance a new table and method is implemented on
eventstore. The goal of this table is to index searchable fields on
command side to use it on command and query side.

The table allows storing one primitive value (numeric or text) per row.

The eventstore framework is extended by the `Search`-method, which allows
searching for objects.
The `Command`-interface is extended by the `SearchOperations()`-method,
which manipulates the `search`-table.

# How the Problems Are Solved

This PR adds the capability of improving performance for command and
query side by using the `Search`-method of the eventstore instead of
using one of the `Filter`-methods.

# Open Tasks

- [x] Add feature flag
- [x] Unit tests
- [ ] ~~Benchmarks if needed~~
- [x] Ensure no behavior change
- [x] Add setup step to fill table with current data
- [x] Add projection which ensures data added between setup and start of
the new version are also added to the table

# Additional Changes

The `Search`-method is currently used by `ProjectGrant`-command side.

# Additional Context

- Closes https://github.com/zitadel/zitadel/issues/8094
2024-07-03 15:00:56 +00:00
Silvan
2243306ef6 feat(cmd): mirror (#7004)
# Which Problems Are Solved

Adds the possibility to mirror an existing database to a new one. 

For that, a new command was added: `zitadel mirror`, including its
subcommands for a more fine-grained mirror of the data.

Sub commands:

* `zitadel mirror eventstore`: copies only events and their unique
constraints
* `zitadel mirror system`: mirrors the data of the `system`-schema
*  `zitadel mirror projections`: runs all projections
*  `zitadel mirror auth`: copies auth requests
* `zitadel mirror verify`: counts the amount of rows in the source and
destination database and prints the diff.

The command requires one of the following flags:
* `--system`: copies all instances of the system
* `--instance <instance-id>`, `--instance <comma separated list of
instance ids>`: copies only the defined instances

The command is safe to execute multiple times by adding the
`--replace`-flag. This replaces currently existing data except for the
`events`-table.

# Additional Changes

A `--for-mirror`-flag was added to `zitadel setup` to prepare the new
database. The flag skips the creation of the first instance and the
initial run of projections.

It is now possible to skip the creation of the first instance during
setup by setting `FirstInstance.Skip` to true in the steps
configuration.

# Additional info

It is currently not possible to merge multiple databases. See
https://github.com/zitadel/zitadel/issues/7964 for more details.

It is currently not possible to use files. See
https://github.com/zitadel/zitadel/issues/7966 for more information.

closes https://github.com/zitadel/zitadel/issues/7586
closes https://github.com/zitadel/zitadel/issues/7486

### Definition of Ready

- [x] I am happy with the code
- [x] Short description of the feature/issue is added in the pr
description
- [x] PR is linked to the corresponding user story
- [x] Acceptance criteria are met
- [x] All open todos and follow ups are defined in a new ticket and
justified
- [x] Deviations from the acceptance criteria and design are agreed with
the PO and documented.
- [x] No debug or dead code
- [x] My code has no repetitions
- [x] Critical parts are tested automatically
- [ ] Where possible E2E tests are implemented
- [x] Documentation/examples are up-to-date
- [x] All non-functional requirements are met
- [x] Functionality of the acceptance criteria is checked manually on
the dev system.

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
2024-05-30 09:35:30 +00:00
Livio Spring
e57a9b57c8 feat(saml): allow setting nameid-format and alternative mapping for transient format (#7979)
# Which Problems Are Solved

ZITADEL currently always uses
`urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` in SAML requests,
relying on the IdP to respect that flag and always return a persistent
nameid in order to be able to map the external user to an existing
user (idp link) in ZITADEL.
In case the IdP however returns a transient
(`urn:oasis:names:tc:SAML:2.0:nameid-format:transient`) nameid, the
attribute will differ between each request and it will not be possible
to match existing users.

# How the Problems Are Solved

This PR adds the following two options on SAML IdP:
- **nameIDFormat**: allows to set the nameid-format used in the SAML
Request
- **transientMappingAttributeName**: allows to set an attribute name,
which will be used instead of the nameid itself in case the returned
nameid-format is transient
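
A hedged sketch of how the mapping could be applied when handling the SAML response (function and parameter names are hypothetical):

```go
package idp

const transientFormat = "urn:oasis:names:tc:SAML:2.0:nameid-format:transient"

// externalUserID picks the identifier used to link the external user:
// the configured mapping attribute replaces a transient nameid, which
// would otherwise change on every request.
func externalUserID(nameIDFormat, nameID, mappingAttr string, attributes map[string][]string) string {
	if nameIDFormat == transientFormat && mappingAttr != "" {
		if values := attributes[mappingAttr]; len(values) > 0 {
			return values[0]
		}
	}
	return nameID
}
```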

# Additional Changes

To reduce impact on current installations, the `idp_templates6_saml`
table is altered with the two added columns by a setup job. New
installations will automatically get the table with the two columns
directly.
All idp unit tests are updated to use `expectEventstore` instead of the
deprecated `eventstoreExpect`.

# Additional Context

Closes #7483
Closes #7743

---------

Co-authored-by: peintnermax <max@caos.ch>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
2024-05-23 05:04:07 +00:00