3678 Commits

Livio Spring
1ac894414a chore: use crdb 24.3 (#9558)
# Which Problems Are Solved

E2E tests in pipelines started to fail randomly. While debugging it, I
noticed that we use the `latest` tag of CockroachDB's Docker image, and
25.1 was tagged as `latest` yesterday.

# How the Problems Are Solved

Since we drop support for CRDB with version 3 anyway, and there are
multiple issues with various versions, I pinned the Docker image tag to
`latest-v24.3`.

# Additional Changes

None

# Additional Context

relates to https://github.com/zitadel/zitadel/actions/runs/13917603587
and https://github.com/zitadel/zitadel/actions/runs/13904928050

(cherry picked from commit f1f500d0e7)
2025-03-18 16:45:50 +01:00
Harsha Reddy
81b6824de0 fix: reduce cardinality in metrics and tracing for unknown paths (#9523)
# Which Problems Are Solved
Zitadel should not record 404 response counts of unknown paths (check
`/debug/metrics`).
This can lead to high cardinality on the metrics endpoint and in traces.

```
GOOD http_server_return_code_counter_total{method="GET",otel_scope_name="",otel_scope_version="",return_code="200",uri="/.well-known/openid-configuration"} 2
GOOD http_server_return_code_counter_total{method="GET",otel_scope_name="",otel_scope_version="",return_code="200",uri="/oauth/v2/keys"} 2
BAD http_server_return_code_counter_total{method="GET",otel_scope_name="",otel_scope_version="",return_code="404",uri="/junk"} 2000
```

After:
```
GOOD http_server_return_code_counter_total{method="GET",otel_scope_name="",otel_scope_version="",return_code="200",uri="/.well-known/openid-configuration"} 2
GOOD http_server_return_code_counter_total{method="GET",otel_scope_name="",otel_scope_version="",return_code="200",uri="/oauth/v2/keys"} 2
```

# How the Problems Are Solved

This PR makes sure that any unknown path is recorded as `UNKNOWN_PATH`
instead of the actual path.
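
A minimal sketch of the idea, assuming a gorilla/mux-style router where unmatched requests can be detected; the package, function and label names are illustrative, not Zitadel's actual code:

```go
package httpmetrics

import (
	"net/http"

	"github.com/gorilla/mux"
)

const unknownPath = "UNKNOWN_PATH"

// metricsPath returns the value to record as the "uri" label: the matched
// route template for known routes and a constant for everything else, so
// arbitrary junk paths cannot blow up metric cardinality.
func metricsPath(r *http.Request) string {
	route := mux.CurrentRoute(r)
	if route == nil {
		// request did not match any registered route (e.g. handled by the
		// NotFoundHandler), so never record the raw path
		return unknownPath
	}
	tpl, err := route.GetPathTemplate()
	if err != nil {
		return unknownPath
	}
	return tpl
}
```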

# Additional Changes

N/A

# Additional Context

On our production instance, when a penetration test was run, it caused
our metric count to blow up to many thousands due to Zitadel recording
404 response counts.

As a next, nice-to-have step: remove the 404 timer recordings, which serve no
purpose.

---------

Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Livio Spring <livio@zitadel.com>
(cherry picked from commit 599850e7e8)
2025-03-18 16:45:43 +01:00
Silvan
0b74b6ccd2 fix(perf): simplify eventstore queries by removing or in projection handlers (#9530)
# Which Problems Are Solved

[A recent performance
enhancement](https://github.com/zitadel/zitadel/pull/9497) aimed at
optimizing event store queries, specifically those involving multiple
aggregate type filters, has successfully improved index utilization.
While the query planner now correctly selects relevant indexes, it
employs [bitmap index
scans](https://www.postgresql.org/docs/current/indexes-bitmap-scans.html)
to retrieve data.

This approach, while beneficial in many scenarios, introduces a
potential I/O bottleneck. The bitmap index scan first identifies the
required database blocks and then utilizes a bitmap to access the
corresponding rows from the table's heap. This subsequent "bitmap heap
scan" can result in significant I/O overhead, particularly when queries
return a substantial number of rows across numerous data pages.

## Impact:

Under heavy load or with queries filtering for a wide range of events
across multiple aggregate types, this increased I/O activity may lead
to:

- Increased query latency.
- Elevated disk utilization.
- Potential performance degradation of the event store and dependent
systems.

# How the Problems Are Solved

To address this I/O bottleneck and further optimize query performance,
the projection handler has been modified. Instead of employing multiple
OR clauses for each aggregate type, the aggregate and event type filters
are now combined using IN ARRAY filters.

Technical Details:

This change allows the PostgreSQL query planner to leverage [index-only
scans](https://www.postgresql.org/docs/current/indexes-index-only-scans.html).
By utilizing IN ARRAY filters, the database can efficiently retrieve the
necessary data directly from the index, eliminating the need to access
the table's heap. This results in:

* Reduced I/O: Index-only scans significantly minimize disk I/O
operations, as the database avoids reading data pages from the main
table.
* Improved Query Performance: By reducing I/O, query execution times are
substantially improved, leading to lower latency.

# Additional Changes

- rollback of https://github.com/zitadel/zitadel/pull/9497

# Additional Information

## Query Plan of previous query

```sql
SELECT
    created_at, event_type, "sequence", "position", payload, creator, "owner", instance_id, aggregate_type, aggregate_id, revision
FROM
    eventstore.events2
WHERE
    instance_id = '<INSTANCE_ID>'
    AND (
        (
            instance_id = '<INSTANCE_ID>'
            AND "position" > <POSITION>
            AND aggregate_type = 'project'
            AND event_type = ANY(ARRAY[
                                'project.application.added'
                                ,'project.application.changed'
                                ,'project.application.deactivated'
                                ,'project.application.reactivated'
                                ,'project.application.removed'
                                ,'project.removed'
                                ,'project.application.config.api.added'
                                ,'project.application.config.api.changed'
                                ,'project.application.config.api.secret.changed'
                                ,'project.application.config.api.secret.updated'
                                ,'project.application.config.oidc.added'
                                ,'project.application.config.oidc.changed'
                                ,'project.application.config.oidc.secret.changed'
                                ,'project.application.config.oidc.secret.updated'
                                ,'project.application.config.saml.added'
                                ,'project.application.config.saml.changed'
                        ])
        ) OR (
            instance_id = '<INSTANCE_ID>'
            AND "position" > <POSITION>
            AND aggregate_type = 'org'
            AND event_type = 'org.removed'
        ) OR (
            instance_id = '<INSTANCE_ID>'
            AND "position" > <POSITION>
            AND aggregate_type = 'instance'
            AND event_type = 'instance.removed'
        )
    )
    AND "position" > 1741600905.3495
    AND "position" < (
        SELECT
            COALESCE(EXTRACT(EPOCH FROM min(xact_start)), EXTRACT(EPOCH FROM now()))
        FROM
            pg_stat_activity
        WHERE
            datname = current_database()
            AND application_name = ANY(ARRAY['zitadel_es_pusher_', 'zitadel_es_pusher', 'zitadel_es_pusher_<INSTANCE_ID>'])
            AND state <> 'idle'
    )
ORDER BY "position", in_tx_order LIMIT 200 OFFSET 1;
```

```
Limit  (cost=120.08..120.09 rows=7 width=361) (actual time=2.167..2.172 rows=0 loops=1)
   Output: events2.created_at, events2.event_type, events2.sequence, events2."position", events2.payload, events2.creator, events2.owner, events2.instance_id, events2.aggregate_type, events2.aggregate_id, events2.revision, events2.in_tx_order
   InitPlan 1
     ->  Aggregate  (cost=2.74..2.76 rows=1 width=32) (actual time=1.813..1.815 rows=1 loops=1)
           Output: COALESCE(EXTRACT(epoch FROM min(s.xact_start)), EXTRACT(epoch FROM now()))
           ->  Nested Loop  (cost=0.00..2.74 rows=1 width=8) (actual time=1.803..1.805 rows=0 loops=1)
                 Output: s.xact_start
                 Join Filter: (d.oid = s.datid)
                 ->  Seq Scan on pg_catalog.pg_database d  (cost=0.00..1.07 rows=1 width=4) (actual time=0.016..0.021 rows=1 loops=1)
                       Output: d.oid, d.datname, d.datdba, d.encoding, d.datlocprovider, d.datistemplate, d.datallowconn, d.dathasloginevt, d.datconnlimit, d.datfrozenxid, d.datminmxid, d.dattablespace, d.datcollate, d.datctype, d.datlocale, d.daticurules, d.datcollversion, d.datacl
                       Filter: (d.datname = current_database())
                       Rows Removed by Filter: 4
                 ->  Function Scan on pg_catalog.pg_stat_get_activity s  (cost=0.00..1.63 rows=3 width=16) (actual time=1.781..1.781 rows=0 loops=1)
                       Output: s.datid, s.pid, s.usesysid, s.application_name, s.state, s.query, s.wait_event_type, s.wait_event, s.xact_start, s.query_start, s.backend_start, s.state_change, s.client_addr, s.client_hostname, s.client_port, s.backend_xid, s.backend_xmin, s.backend_type, s.ssl, s.sslversion, s.sslcipher, s.sslbits, s.ssl_client_dn, s.ssl_client_serial, s.ssl_issuer_dn, s.gss_auth, s.gss_princ, s.gss_enc, s.gss_delegation, s.leader_pid, s.query_id
                       Function Call: pg_stat_get_activity(NULL::integer)
                       Filter: ((s.state <> 'idle'::text) AND (s.application_name = ANY ('{zitadel_es_pusher_,zitadel_es_pusher,zitadel_es_pusher_<INSTANCE_ID>}'::text[])))
                       Rows Removed by Filter: 49
   ->  Sort  (cost=117.31..117.33 rows=8 width=361) (actual time=2.167..2.168 rows=0 loops=1)
         Output: events2.created_at, events2.event_type, events2.sequence, events2."position", events2.payload, events2.creator, events2.owner, events2.instance_id, events2.aggregate_type, events2.aggregate_id, events2.revision, events2.in_tx_order
         Sort Key: events2."position", events2.in_tx_order
         Sort Method: quicksort  Memory: 25kB
         ->  Bitmap Heap Scan on eventstore.events2  (cost=84.92..117.19 rows=8 width=361) (actual time=2.088..2.089 rows=0 loops=1)
               Output: events2.created_at, events2.event_type, events2.sequence, events2."position", events2.payload, events2.creator, events2.owner, events2.instance_id, events2.aggregate_type, events2.aggregate_id, events2.revision, events2.in_tx_order
               Recheck Cond: (((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = 'project'::text) AND (events2.event_type = ANY ('{project.application.added,project.application.changed,project.application.deactivated,project.application.reactivated,project.application.removed,project.removed,project.application.config.api.added,project.application.config.api.changed,project.application.config.api.secret.changed,project.application.config.api.secret.updated,project.application.config.oidc.added,project.application.config.oidc.changed,project.application.config.oidc.secret.changed,project.application.config.oidc.secret.updated,project.application.config.saml.added,project.application.config.saml.changed}'::text[])) AND (events2."position" > <POSITION>) AND (events2."position" > 1741600905.3495) AND (events2."position" < (InitPlan 1).col1)) OR ((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = 'org'::text) AND (events2.event_type = 'org.removed'::text) AND (events2."position" > <POSITION>) AND (events2."position" > 1741600905.3495) AND (events2."position" < (InitPlan 1).col1)) OR ((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = 'instance'::text) AND (events2.event_type = 'instance.removed'::text) AND (events2."position" > <POSITION>) AND (events2."position" > 1741600905.3495) AND (events2."position" < (InitPlan 1).col1)))
               ->  BitmapOr  (cost=84.88..84.88 rows=8 width=0) (actual time=2.080..2.081 rows=0 loops=1)
                     ->  Bitmap Index Scan on es_projection  (cost=0.00..75.44 rows=8 width=0) (actual time=2.016..2.017 rows=0 loops=1)
                           Index Cond: ((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = 'project'::text) AND (events2.event_type = ANY ('{project.application.added,project.application.changed,project.application.deactivated,project.application.reactivated,project.application.removed,project.removed,project.application.config.api.added,project.application.config.api.changed,project.application.config.api.secret.changed,project.application.config.api.secret.updated,project.application.config.oidc.added,project.application.config.oidc.changed,project.application.config.oidc.secret.changed,project.application.config.oidc.secret.updated,project.application.config.saml.added,project.application.config.saml.changed}'::text[])) AND (events2."position" > <POSITION>) AND (events2."position" > 1741600905.3495) AND (events2."position" < (InitPlan 1).col1))
                     ->  Bitmap Index Scan on es_projection  (cost=0.00..4.71 rows=1 width=0) (actual time=0.016..0.016 rows=0 loops=1)
                           Index Cond: ((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = 'org'::text) AND (events2.event_type = 'org.removed'::text) AND (events2."position" > <POSITION>) AND (events2."position" > 1741600905.3495) AND (events2."position" < (InitPlan 1).col1))
                     ->  Bitmap Index Scan on es_projection  (cost=0.00..4.71 rows=1 width=0) (actual time=0.045..0.045 rows=0 loops=1)
                           Index Cond: ((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = 'instance'::text) AND (events2.event_type = 'instance.removed'::text) AND (events2."position" > <POSITION>) AND (events2."position" > 1741600905.3495) AND (events2."position" < (InitPlan 1).col1))
 Query Identifier: 3194938266011254479
 Planning Time: 1.295 ms
 Execution Time: 2.832 ms
```

## Query Plan of new query

```sql
SELECT
    created_at, event_type, "sequence", "position", payload, creator, "owner", instance_id, aggregate_type, aggregate_id, revision
FROM
    eventstore.events2
WHERE
    instance_id = '<INSTANCE_ID>'
    AND "position" > <POSITION>
    AND aggregate_type = ANY(ARRAY['project', 'instance', 'org'])
    AND event_type = ANY(ARRAY[
        'project.application.added'
        ,'project.application.changed'
        ,'project.application.deactivated'
        ,'project.application.reactivated'
        ,'project.application.removed'
        ,'project.removed'
        ,'project.application.config.api.added'
        ,'project.application.config.api.changed'
        ,'project.application.config.api.secret.changed'
        ,'project.application.config.api.secret.updated'
        ,'project.application.config.oidc.added'
        ,'project.application.config.oidc.changed'
        ,'project.application.config.oidc.secret.changed'
        ,'project.application.config.oidc.secret.updated'
        ,'project.application.config.saml.added'
        ,'project.application.config.saml.changed'
        ,'org.removed'
        ,'instance.removed'
    ])
    AND "position" < (
        SELECT
            COALESCE(EXTRACT(EPOCH FROM min(xact_start)), EXTRACT(EPOCH FROM now()))
        FROM
            pg_stat_activity
        WHERE
            datname = current_database()
            AND application_name = ANY(ARRAY['zitadel_es_pusher_', 'zitadel_es_pusher', 'zitadel_es_pusher_<INSTANCE_ID>'])
            AND state <> 'idle'
    )
ORDER BY "position", in_tx_order LIMIT 200 OFFSET 1;
```

```
Limit  (cost=293.34..293.36 rows=8 width=361) (actual time=4.686..4.689 rows=0 loops=1)
   Output: events2.created_at, events2.event_type, events2.sequence, events2."position", events2.payload, events2.creator, events2.owner, events2.instance_id, events2.aggregate_type, events2.aggregate_id, events2.revision, events2.in_tx_order
   InitPlan 1
     ->  Aggregate  (cost=2.74..2.76 rows=1 width=32) (actual time=1.717..1.719 rows=1 loops=1)
           Output: COALESCE(EXTRACT(epoch FROM min(s.xact_start)), EXTRACT(epoch FROM now()))
           ->  Nested Loop  (cost=0.00..2.74 rows=1 width=8) (actual time=1.658..1.659 rows=0 loops=1)
                 Output: s.xact_start
                 Join Filter: (d.oid = s.datid)
                 ->  Seq Scan on pg_catalog.pg_database d  (cost=0.00..1.07 rows=1 width=4) (actual time=0.026..0.028 rows=1 loops=1)
                       Output: d.oid, d.datname, d.datdba, d.encoding, d.datlocprovider, d.datistemplate, d.datallowconn, d.dathasloginevt, d.datconnlimit, d.datfrozenxid, d.datminmxid, d.dattablespace, d.datcollate, d.datctype, d.datlocale, d.daticurules, d.datcollversion, d.datacl
                       Filter: (d.datname = current_database())
                       Rows Removed by Filter: 4
                 ->  Function Scan on pg_catalog.pg_stat_get_activity s  (cost=0.00..1.63 rows=3 width=16) (actual time=1.628..1.628 rows=0 loops=1)
                       Output: s.datid, s.pid, s.usesysid, s.application_name, s.state, s.query, s.wait_event_type, s.wait_event, s.xact_start, s.query_start, s.backend_start, s.state_change, s.client_addr, s.client_hostname, s.client_port, s.backend_xid, s.backend_xmin, s.backend_type, s.ssl, s.sslversion, s.sslcipher, s.sslbits, s.ssl_client_dn, s.ssl_client_serial, s.ssl_issuer_dn, s.gss_auth, s.gss_princ, s.gss_enc, s.gss_delegation, s.leader_pid, s.query_id
                       Function Call: pg_stat_get_activity(NULL::integer)
                       Filter: ((s.state <> 'idle'::text) AND (s.application_name = ANY ('{zitadel_es_pusher_,zitadel_es_pusher,zitadel_es_pusher_<INSTANCE_ID>}'::text[])))
                       Rows Removed by Filter: 42
   ->  Sort  (cost=290.58..290.60 rows=9 width=361) (actual time=4.685..4.685 rows=0 loops=1)
         Output: events2.created_at, events2.event_type, events2.sequence, events2."position", events2.payload, events2.creator, events2.owner, events2.instance_id, events2.aggregate_type, events2.aggregate_id, events2.revision, events2.in_tx_order
         Sort Key: events2."position", events2.in_tx_order
         Sort Method: quicksort  Memory: 25kB
         ->  Index Scan using es_projection on eventstore.events2  (cost=0.70..290.43 rows=9 width=361) (actual time=4.616..4.617 rows=0 loops=1)
               Output: events2.created_at, events2.event_type, events2.sequence, events2."position", events2.payload, events2.creator, events2.owner, events2.instance_id, events2.aggregate_type, events2.aggregate_id, events2.revision, events2.in_tx_order
               Index Cond: ((events2.instance_id = '<INSTANCE_ID>'::text) AND (events2.aggregate_type = ANY ('{project,instance,org}'::text[])) AND (events2.event_type = ANY ('{project.application.added,project.application.changed,project.application.deactivated,project.application.reactivated,project.application.removed,project.removed,project.application.config.api.added,project.application.config.api.changed,project.application.config.api.secret.changed,project.application.config.api.secret.updated,project.application.config.oidc.added,project.application.config.oidc.changed,project.application.config.oidc.secret.changed,project.application.config.oidc.secret.updated,project.application.config.saml.added,project.application.config.saml.changed,org.removed,instance.removed}'::text[])) AND (events2."position" > <POSITION>) AND (events2."position" < (InitPlan 1).col1))
 Query Identifier: -8254550537132386499
 Planning Time: 2.864 ms
 Execution Time: 5.414 ms
```

(cherry picked from commit e36f402e09)
2025-03-13 17:06:20 +01:00
Silvan
93f0067081 fix(eventstore): optimise query hints for event filters (#9497)
(cherry picked from commit b578137139)
2025-03-12 14:49:57 +01:00
Livio Spring
319efc6391 fix(OIDC): back channel logout work for custom UI (#9487)
# Which Problems Are Solved

When using a custom / new login UI and an OIDC application with a
registered BackChannelLogoutURI, no logout requests were sent to that URI
when the user signed out.
Additionally, as described in #9427, an error was logged:
`level=error msg="event of type *session.TerminateEvent doesn't
implement OriginEvent"
caller="/home/runner/work/zitadel/zitadel/internal/notification/handlers/origin.go:24"`

# How the Problems Are Solved

- Properly pass `TriggerOrigin` information to the `session.TerminateEvent`
creation and implement the `OriginEvent` interface (a sketch follows below).
- Implemented `RegisterLogout` in `CreateOIDCSessionFromAuthRequest` and
`CreateOIDCSessionFromDeviceAuth`, both used when interacting with the
OIDC v2 API.
- Both functions now receive the `BackChannelLogoutURI` of the client
from the OIDC layer.
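
A minimal sketch of what implementing that interface could look like; the interface and field names are assumptions based on the quoted error message, not the exact Zitadel definitions:

```go
package session

// OriginEvent is the interface the notification handler checks to learn which
// origin triggered an event, e.g. to build the back-channel logout request.
type OriginEvent interface {
	TriggerOrigin() string
}

// TerminateEvent sketch: the event now carries the origin of the request that
// created it instead of leaving the handler without origin information.
type TerminateEvent struct {
	// other event fields omitted
	TriggeredAtOrigin string `json:"triggerOrigin,omitempty"`
}

// TriggerOrigin implements OriginEvent.
func (e *TerminateEvent) TriggerOrigin() string {
	return e.TriggeredAtOrigin
}
```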

# Additional Changes

None

# Additional Context

- closes #9427

(cherry picked from commit ed697bbd69)
2025-03-12 14:49:53 +01:00
Livio Spring
dfb339c4a4 fix(token exchange): properly return an error if membership is missing (#9468)
# Which Problems Are Solved

When requesting a JWT (`urn:ietf:params:oauth:token-type:jwt`) to be
returned in a Token Exchange request, ZITADEL would panic if the `actor`
was not granted the necessary permission.

# How the Problems Are Solved

Properly check the error and return it.
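
A self-contained illustration of the pattern with hypothetical types (not ZITADEL's actual query API): the membership lookup's error is returned instead of being ignored and the nil result dereferenced.

```go
package tokenexchange

import "context"

// Hypothetical types to illustrate the pattern.
type Membership struct{ Roles []string }

type Queries interface {
	ActorMembership(ctx context.Context, actorID string) (*Membership, error)
}

// actorRoles previously ignored the error, so a missing membership left the
// result nil and the following dereference panicked. Returning the error lets
// the token exchange request fail with a proper error instead.
func actorRoles(ctx context.Context, q Queries, actorID string) ([]string, error) {
	membership, err := q.ActorMembership(ctx, actorID)
	if err != nil {
		return nil, err // e.g. membership not found because permission is missing
	}
	return membership.Roles, nil
}
```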

# Additional Changes

None

# Additional Context

- closes #9436

(cherry picked from commit e6ce1af003)
2025-03-12 14:49:22 +01:00
Livio Spring
929d9d9c35 fix: correct required permissions on admin APIs
# Which Problems Are Solved

ZITADEL's Admin API, intended for managing ZITADEL instances, contains 12 HTTP endpoints that are unexpectedly accessible to authenticated ZITADEL users who are not ZITADEL managers. The most critical vulnerable endpoints relate to LDAP configuration:
- /idps/ldap
- /idps/ldap/{id}

By accessing these endpoints, unauthorized users could:
- Modify ZITADEL's instance LDAP settings, redirecting all LDAP login attempts to a malicious server, effectively taking over user accounts.
- Expose the original LDAP server's password, potentially compromising all user accounts.

The following endpoints are also affected by IDOR vulnerabilities, potentially allowing unauthorized modification of instance settings such as languages, labels, and templates:
- /idps/templates/_search
- /idps/templates/{id}
- /policies/label/_activate
- /policies/label/logo
- /policies/label/logo_dark
- /policies/label/icon
- /policies/label/icon_dark
- /policies/label/font
- /text/message/passwordless_registration/{language}
- /text/login/{language}

Please checkout https://github.com/zitadel/zitadel/security/advisories/GHSA-f3gh-529w-v32x for more information.

# How the Problems Are Solved

- Required permissions have been fixed (only instance level allowed)

# Additional Changes

None

# Additional Context

- resolves https://github.com/zitadel/zitadel/security/advisories/GHSA-f3gh-529w-v32x

(cherry picked from commit d9d8339813)
2025-03-04 08:58:01 +01:00
Tim Möhlmann
d7b03c6394 fix(setup): use template for in_tx_order type (#9346)
# Which Problems Are Solved

Systems running with PostgreSQL before Zitadel v2.39 are likely to have
a wrong type for the `in_tx_order` column in the `eventstore.events2`
table. The migration at the time used `event_sequence` as the default
value without a typecast, which results in a `bigint` type for that
column. However, when creating the table from scratch, we explicitly
specify the type to be `integer`.

Starting from Zitadel v2.67 we use a PL/pgSQL function to push events.
The function requires the types from `eventstore.events2` to be the same as
the `select` destinations used in the function. In the function,
`in_tx_order` is also expected to be of type `integer`.

CockroachDB systems are not affected because `bigint` is an alias for the
`int` type. In other words, CockroachDB uses `int8` when specifying type
`int`. Therefore the types already match.

# How the Problems Are Solved

Retrieve the actual column type currently in use. A template is used to
assign that type to the `ordinality` column returned as `in_tx_order` (see the
sketch below).
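
A sketch of the approach, assuming a lookup in `information_schema` and the standard library `text/template`; the SQL fragment and names are illustrative, the actual migration statement in Zitadel differs:

```go
package setup

import (
	"context"
	"database/sql"
	"strings"
	"text/template"
)

// orderType returns the column type currently used for
// eventstore.events2.in_tx_order ("bigint" on old PostgreSQL setups,
// "integer" on fresh ones).
func orderType(ctx context.Context, db *sql.DB) (string, error) {
	var typ string
	err := db.QueryRowContext(ctx, `
		SELECT data_type
		FROM information_schema.columns
		WHERE table_schema = 'eventstore'
		  AND table_name = 'events2'
		  AND column_name = 'in_tx_order'`).Scan(&typ)
	return typ, err
}

// pushFnTmpl is a hypothetical fragment of the push-function statement: the
// ordinality column returned as in_tx_order is cast to whatever type the
// existing table uses, so the function's row type matches the table.
var pushFnTmpl = template.Must(template.New("push").Parse(
	`SELECT e.payload, e.ordinality::{{.OrderType}} AS in_tx_order FROM events_with_ordinality e`))

func renderPushFn(orderType string) (string, error) {
	var b strings.Builder
	err := pushFnTmpl.Execute(&b, struct{ OrderType string }{orderType})
	return b.String(), err
}
```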

# Additional Changes

- Detailed logging on migration failure

# Additional Context

- Closes #9180

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit bcc6a689fa)
2025-02-12 13:07:24 +01:00
Silvan
52a6a7f4f4 fix(eventstore): correct sql push function (#9201)
# Which Problems Are Solved

https://github.com/zitadel/zitadel/pull/9186 introduced the new `push`
sql function for cockroachdb. The function used the wrong database
function to generate the position of the event and would therefore
insert events at a position before events created with an old Zitadel
version.

# How the Problems Are Solved

Instead of `EXTRACT(EPOCH FROM NOW())`, `cluster_logical_timestamp()` is
used to calculate the position of an event.

# Additional Context

- Introduced in https://github.com/zitadel/zitadel/pull/9186
- Affected versions:
https://github.com/zitadel/zitadel/releases/tag/v2.67.3

(cherry picked from commit 9532c9bea5)
2025-02-12 13:06:56 +01:00
Silvan
a7fe14124d perf(eventstore): fast push on crdb (#9186)
# Which Problems Are Solved

The performance of the initial push function can be further increased.

# How the Problems Are Solved

The `eventstore.push` and `eventstore.commands_to_events` functions were
rewritten.

# Additional Changes

none

# Additional Context

same optimizations as for postgres:
https://github.com/zitadel/zitadel/pull/9092

(cherry picked from commit 690147b30e)
2025-02-12 13:06:38 +01:00
Livio Spring
1cc05d0be2 fix(OTEL): reduce high cardinality in traces and metrics (#9286)
# Which Problems Are Solved

There were multiple issues in the OpenTelemetry (OTEL) implementation
and usage for tracing and metrics, which led to high cardinality and
potential memory leaks:
- wrongly initiated tracing interceptors
- high cardinality in traces:
  - HTTP/1.1 endpoints containing host names
- HTTP/1.1 endpoints containing object IDs like userID (e.g.
`/management/v1/users/2352839823/`)
- high amount of traces from internal processes (spooler)
- high cardinality in metrics endpoint:
  - GRPC entries containing host names
  - notification metrics containing instanceIDs and error messages

# How the Problems Are Solved

- Properly initialize the interceptors once and update them to use the
grpc stats handler (unary interceptors were deprecated).
- Remove host names from HTTP/1.1 span names and use path as default.
- Set / overwrite the uri for spans on the grpc-gateway with the uri
pattern (`/management/v1/users/{user_id}`). This is used for spans in
traces and metric entries.
- Created a new sampler which will only sample spans in the following
cases (see the sketch below):
  - the remote span was already sampled
  - the remote span was not sampled, the root span is of kind `Server` and the
    fraction set in the runtime configuration decides to sample it
- This will prevent having a lot of spans from the spooler background
jobs if they were not started by a client call querying an object (e.g.
UserByID).
- Filter out host names and alike from OTEL generated metrics (using a
`view`).
- Removed instance and error messages from notification metrics.
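
A minimal sketch of such a sampler using the OpenTelemetry Go SDK; the actual Zitadel sampler and its name may differ in detail:

```go
package tracing

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	"go.opentelemetry.io/otel/trace"
)

// spanKindSampler only samples a span if the remote parent was sampled, or if
// there is no parent at all, the span is a root span of kind Server and the
// configured fraction decides to sample it. Spooler / background jobs
// therefore only produce spans inside already sampled traces.
type spanKindSampler struct {
	fraction sdktrace.Sampler
}

func NewServerRootSampler(fraction float64) sdktrace.Sampler {
	return spanKindSampler{fraction: sdktrace.TraceIDRatioBased(fraction)}
}

func (s spanKindSampler) ShouldSample(p sdktrace.SamplingParameters) sdktrace.SamplingResult {
	psc := trace.SpanContextFromContext(p.ParentContext)
	if psc.IsSampled() {
		return sdktrace.SamplingResult{Decision: sdktrace.RecordAndSample, Tracestate: psc.TraceState()}
	}
	if !psc.IsValid() && p.Kind == trace.SpanKindServer {
		// root span started by a client call: sample based on the fraction
		return s.fraction.ShouldSample(p)
	}
	return sdktrace.SamplingResult{Decision: sdktrace.Drop, Tracestate: psc.TraceState()}
}

func (s spanKindSampler) Description() string {
	return "server-root-fraction-or-parent-sampled"
}
```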

# Additional Changes

Fixed the middleware handling for serving Console. Telemetry and
instance selection are only used for the environment.json, but not for
statically served files.

# Additional Context

- closes #8096
- relates to #9074
- backports to at least 2.66.x, 2.67.x and 2.68.x

(cherry picked from commit 990e1982c7)
2025-02-04 10:02:07 +01:00
Livio Spring
8e42fb80b1 fix cherry pick 2025-01-28 08:52:24 +01:00
Livio Spring
0b9f41c03d fix(notifications): cancel on missing channels and Twilio 4xx errors (#9254)
# Which Problems Are Solved

#9185 changed the behavior so that if a notification channel is not present,
notification workers no longer retry to send the notification and also
cancel in case Twilio returns a 4xx error.
However, this did not affect the "legacy" mode.

# How the Problems Are Solved

- Handle `CancelError` in the legacy notifier as not failed (event).

# Additional Changes

None

# Additional Context

- relates to #9185
- requires back port to 2.66.x and 2.67.x

(cherry picked from commit 3fc68e5d60)
2025-01-28 07:34:48 +01:00
Zach Hirschtritt
ce6df3c5c3 fix: add aggregate type to subquery to utilize indexes (#9226)
# Which Problems Are Solved

The subquery for the notification requested and retry requested events is
missing the `aggregate_type` filter that would allow it to utilize the
`es_projection` or `active_instances_events` indexes on the
`eventstore.events2` table.

# How the Problems Are Solved

Add an additional filter to the subquery. Final query:
```sql
SELECT <all the fields omitted> FROM eventstore.events2
WHERE
    instance_id = $1
    AND aggregate_type = $2
    AND event_type = $3
    AND created_at > $4
    AND aggregate_id NOT IN (
        SELECT aggregate_id
        FROM eventstore.events2
        WHERE
            aggregate_type = $5 <-- NB: previously missing
            AND event_type = ANY ($6)
            AND instance_id = $7
            AND created_at > $8
    )
ORDER BY "position", in_tx_order
LIMIT $9
FOR UPDATE SKIP LOCKED
```

# Additional Changes

# Additional Context

Co-authored-by: Livio Spring <livio.a@gmail.com>
(cherry picked from commit e4bbfcccc8)
2025-01-22 17:08:30 +01:00
Livio Spring
99d9aa935c fix: cancel notifications on missing channels and configurable (twilio) error codes (#9185)
# Which Problems Are Solved

If a notification channel was not present, notification workers would
retry up to the maximum number of attempts. This leads to unnecessary load.
Additionally, a client noticed bad actors trying to abuse SMS MFA.

# How the Problems Are Solved

- Directly cancel the notification on:
  - a missing channel and stop retries.
  - any `4xx` errors from Twilio Verify (a sketch follows below)
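
A hedged sketch of the cancellation decision: `TwilioRestError` comes from the twilio-go SDK, while the `CancelError` type and the wiring around it are illustrative rather than Zitadel's actual implementation.

```go
package notification

import (
	"errors"

	twilio "github.com/twilio/twilio-go/client"
)

// CancelError marks a notification as permanently failed so workers stop
// retrying it. (Illustrative type.)
type CancelError struct{ Err error }

func (e *CancelError) Error() string { return "notification canceled: " + e.Err.Error() }
func (e *CancelError) Unwrap() error { return e.Err }

// asCancelable turns Twilio Verify 4xx responses into a CancelError so the
// notification is dropped instead of retried; all other errors stay retryable.
func asCancelable(err error) error {
	var restErr *twilio.TwilioRestError
	if errors.As(err, &restErr) && restErr.Status >= 400 && restErr.Status < 500 {
		return &CancelError{Err: err}
	}
	return err
}
```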

# Additional Changes

None

# Additional Context

reported by customer

(cherry picked from commit 60857c8d3e)
2025-01-17 09:20:56 +01:00
Tim Möhlmann
12cd61c337 perf(eventstore): optimize commands to events function (#9092)
# Which Problems Are Solved

We were seeing high query costs in the lateral join executed in the
commands_to_events procedural function in the database. The high cost
resulted in incremental CPU usage as a load test continued and fewer
req/sec handled, starting at 836 and ending at 130 req/sec.

# How the Problems Are Solved

1. Set `PARALLEL SAFE`. I noticed that this option defaults to `UNSAFE`,
but it's actually safe if the function doesn't `INSERT`.
2. Set the returned `ROWS 10` parameter.
3. The function is rewritten in PL/pgSQL so that we eliminate expensive
joins.
4. Introduced an intermediate state that does `SELECT DISTINCT` for the
aggregate so that we don't have to do an expensive lateral join.

# Additional Changes

Use a `COALESCE` to get the owner from the last event, instead of a
`CASE` switch.

# Additional Context

- Function was introduced in
https://github.com/zitadel/zitadel/pull/8816
- Closes https://github.com/zitadel/zitadel/issues/8352

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
2025-01-13 17:52:06 +01:00
Silvan
1665d7c89d perf(eventstore): redefine current sequences index (#9142)
# Which Problems Are Solved

On Zitadel cloud we found that changing the order of columns in the
`eventstore.events2_current_sequence` index improved CPU usage for the
`SELECT ... FOR UPDATE` query the pusher executes.

# How the Problems Are Solved

The `eventstore.events2_current_sequence` index was replaced.

# Additional Context

closes https://github.com/zitadel/zitadel/issues/9082
2025-01-13 17:50:28 +01:00
Silvan
c36be9b207 perf(fields): create index for instance domain query (#9146)
# Which Problems Are Solved

Get instance by domain cannot provide an instance ID because it is not
known at that time. This causes a full table scan on the fields table
because the current indexes always include the `instance_id` column.

# How the Problems Are Solved

Added a specific index for this query.

# Additional Context

If a system has many fields and there is no cache hit for the given
domain, this query can heavily influence database CPU usage; the newly
added index resolves this problem.

(cherry picked from commit f320d18b1a)
2025-01-07 17:11:47 +01:00
Tim Möhlmann
ed96035a14 fix(cache): convert expiry to number (#9143)
# Which Problems Are Solved

When `LastUseAge` was configured properly, the Redis Lua script uses
manual cleanup for `MaxAge`-based expiry. The expiry obtained from Redis
appears to be a string and was compared to an int, resulting in a script
error.

# How the Problems Are Solved

Convert expiry to number.

# Additional Changes

- none

# Additional Context

- Introduced in #8822
- LastUseAge was fixed in #9097
- closes https://github.com/zitadel/zitadel/issues/9140

(cherry picked from commit 56427cca50)
2025-01-07 17:10:29 +01:00
Livio Spring
6aa54b2a97 create maintenance branch 2025-01-07 17:10:13 +01:00
Livio Spring
ebc13e5133 fix(idp): correctly get data from cache before parsing (#9134)
# Which Problems Are Solved

IdPs using form callback were not always correctly handled with the
newly introduced cache mechanism
(https://github.com/zitadel/zitadel/pull/9097).

# How the Problems Are Solved

Get the data from cache before parsing it.

# Additional Changes

None

# Additional Context

Relates to https://github.com/zitadel/zitadel/pull/9097

(cherry picked from commit 8d7a1efd4a)
2025-01-06 14:49:03 +01:00
Livio Spring
b58956ba8a fix(idp): prevent server errors for idps using form post for callbacks (#9097)
# Which Problems Are Solved

Some IdPs use an HTTP form POST to return their data on callbacks.
For handling CSRF in the login after such calls, a 302 Found to the
corresponding non-form callback (in ZITADEL) is sent. Depending on the
size of the initial form body, this could lead to ZITADEL terminating
the connection, resulting in the user not getting a response or an
intermediate proxy returning them an HTTP 502.

# How the Problems Are Solved

- the form body is parsed and stored in the ZITADEL cache (using the
configured database by default); a rough sketch of this flow follows below
- the redirect (302 Found) is performed with the request id
- the callback retrieves the data from the cache instead of the query
parameters (falling back to the latter to handle open, uncached requests)
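
A rough sketch of the flow under assumed names; the cache interface, key format, callback path and parameter names are illustrative, not the actual ZITADEL handlers:

```go
package idp

import (
	"encoding/json"
	"net/http"
	"net/url"
)

// Cache is a minimal stand-in for the configured ZITADEL cache
// (database-backed by default).
type Cache interface {
	Set(key string, value []byte) error
	Get(key string) ([]byte, bool)
}

// handleFormCallback stores the (potentially large) form body in the cache and
// redirects to the regular callback with only the request id in the URL, so
// the 302 response stays small enough for browsers and intermediate proxies.
func handleFormCallback(cache Cache, w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	id := r.PostForm.Get("state") // assumption: the request id travels in the state parameter
	data, _ := json.Marshal(r.PostForm)
	if err := cache.Set("idp-form:"+id, data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	http.Redirect(w, r, "/idps/callback?id="+url.QueryEscape(id), http.StatusFound)
}

// handleCallback first looks for cached form data and only falls back to the
// query parameters for requests started before the cache was introduced.
func handleCallback(cache Cache, w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	if data, ok := cache.Get("idp-form:" + id); ok {
		var form url.Values
		_ = json.Unmarshal(data, &form)
		_ = form // ... continue the IdP flow with the cached form values
		return
	}
	// fallback: data is still in the query parameters (uncached requests)
}
```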

# Additional Changes

- fixed a typo in the default (cache) configuration: `LastUsage` ->
`LastUseAge`

# Additional Context

- reported by a customer
- needs to be backported to current cloud version (2.66.x)

---------

Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit fa5e590aab)
2025-01-06 10:48:16 +01:00
Livio Spring
f9eb3414f5 fix(saml): parse xsd:duration format correctly (#9098)
# Which Problems Are Solved

SAML IdPs exposing an `EntitiesDescriptor` using an `xsd:duration` time
format for the `cacheDuration` property (e.g. `PT5H`) failed parsing.

# How the Problems Are Solved

Handle the unmarshalling for `EntitiesDescriptor` specifically.
[crewjam/saml](bbccb7933d/metadata.go (L88-L103))
already did this for `EntityDescriptor` the same way.

# Additional Changes

None

# Additional Context

- reported by a customer
- needs to be backported to current cloud version (2.66.x)

(cherry picked from commit bcf416d4cf)
2025-01-06 10:47:03 +01:00
Elio Bischof
74479bd085 fix(login): avoid disallowed languages with custom texts (#9094)
# Which Problems Are Solved

If a browser's default language is not allowed by the instance restrictions,
the login still renders it if it finds any custom texts for this
language. In that case, the login tries to render all texts on all
screens in this language using custom texts, even for texts that are not
customized.
![image](https://github.com/user-attachments/assets/1038ecac-90c9-4352-b75d-e7466a639711)

![image](https://github.com/user-attachments/assets/e4cbd0fb-a60e-41c5-a404-23e6d144de6c)

![image](https://github.com/user-attachments/assets/98d8b0b9-e082-48ae-9540-66792341fe1c)

# How the Problems Are Solved

If a custom message's language is not allowed, it is not added to the
i18n library's translations bundle. The library correctly falls back to
the instance's default language.

![image](https://github.com/user-attachments/assets/fadac92e-bdea-4f8c-b6c2-2aa6476b89b3)

This library method only receives messages for allowed languages

![image](https://github.com/user-attachments/assets/33081929-d3a5-4b0f-b838-7b69f88c13bc)

# Additional Context

Reported via support request

(cherry picked from commit ab6c4331df)
2025-01-06 10:46:58 +01:00
Tim Möhlmann
47268c738a fix(setup): make step 39 repeatable (#9085)
# Which Problems Are Solved

When downgrading Zitadel and upgrading it again, it might be that orgs
deleted in this period still have stale entries in the fields table.

# How the Problems Are Solved

- Make the cleanup repeatable
- Scope the query by instance so that an index is used.

(cherry picked from commit da706a8b30)
v2.66.1
2024-12-18 16:54:01 +01:00
Silvan
ded9491326 fix(setup): make step 41 repeatable (#9084)
# Which Problems Are Solved

Setup step 41 cannot handle downgrades at the moment. This step writes
the instance domain to the fields table. If new instances are created
while the downgraded version is running, their domains would be missing
from the fields afterwards.

# How the Problems Are Solved

Make step 41 repeatable for each version

(cherry picked from commit b89e8a6037)
2024-12-18 16:54:00 +01:00
Livio Spring
1e8756b139 Merge branch 'main' into next
# Conflicts:
#	internal/eventstore/repository/sql/query_test.go
#	internal/eventstore/v3/push.go
v2.66.0
2024-12-13 14:13:09 +01:00
Livio Spring
f20539ef8f fix(login): make sure first email verification is done before MFA check (#9039)
# Which Problems Are Solved

During authentication in the login UI, there is a check whether the user's
MFA is already checked or needs to be set up.
In cases where the user was just set up or, especially, if the user was
just federated without a verified email address, this can lead to the
problem where OTP Email cannot be set up, as there's no verified email
address.

# How the Problems Are Solved

- Added a check whether there's no verified email address on the user and, in
that case, require an email verification before checking for MFA.
Note: if the user had a verified email address, but changed it and
has not verified the new one, they will still be prompted with an MFA check
before the email verification. This makes sure we don't break the
existing behavior and that the user's authentication is properly checked.

# Additional Changes

None

# Additional Context

- closes https://github.com/zitadel/zitadel/issues/9035
2024-12-13 11:37:20 +00:00
Livio Spring
fd70a7de5f fix(api): map REST request body in user invite requests (#9054)
# Which Problems Are Solved

The `CreateInviteCode` and `VerifyInviteCode` methods missed the body
mapping.

# How the Problems Are Solved

Added the mapping.

# Additional Changes

None

# Additional Context

Noticed during internal login UI tests using REST
2024-12-13 10:24:14 +00:00
Livio Spring
40fedace3c docs(oidc): add back-channel logout (#9034)
# Which Problems Are Solved

OIDC Back-Channel Logout, released with
[V2.65.0](https://github.com/zitadel/zitadel/releases/tag/v2.65.0), was
not yet documented.

# How the Problems Are Solved

- Added small guide and description
- Updated claims (added `sid` and `events`)

# Additional Changes

None

# Additional Context

relates to https://github.com/zitadel/zitadel/issues/8467
2024-12-13 09:47:04 +00:00
Stefan Benz
0a859fe416 docs: correct list users endpoint description (#9050)
# Which Problems Are Solved

There is a wrong description on the ListUsers endpoint on the users v2
API.

# How the Problems Are Solved

Rewrote it correctly, mentioning the instance instead of the organization.

# Additional Changes

None

# Additional Context

Closes #8961
2024-12-13 09:33:20 +00:00
Stefan Benz
e90e1d00b7 fix: project existing check removed from project grant remove (#9004)
# Which Problems Are Solved

Wrongly created project grants with an unexpected resource owner can't be
removed, as there is a check whether the project exists; the project is
never found because the wrong resource owner is used.

# How the Problems Are Solved

There is already a fix related to the resource owner of the project
grant, which should prevent this situation from happening anymore. This
PR removes the check for the project's existence, as when the project
grant exists and the project has not already been removed, this check is
no longer needed.

# Additional Changes

None

# Additional Context

Closes #8900

(cherry picked from commit 14db628856)
v2.65.4
2024-12-13 08:19:07 +01:00
Stephan Besser
a077771bff docs (adopters): add OpenAIP (#9045)
2024-12-12 19:21:48 +00:00
Tim Möhlmann
6f6e2234eb fix(migrations): clean stale org fields using events (#9051)
# Which Problems Are Solved

Migration step 39 is supposed to clean up stale organization entries in
the eventstore.fields table. In order to do this it used the projection
to check which orgs still exist.

During the initial setup of ZITADEL the first instance with its organization
is created. However, the projections are only filled after all migrations are
done. With the organization projection empty, the fields of the first
org would be deleted.

This was discovered during development of a new field type. The
associated events did not yet have any projection-based fill assigned.
It seems fields with a pre-fill projection are somehow restored.
Therefore a restoration migration isn't required IMO.

# How the Problems Are Solved

Query the event store for `org.removed` events instead. This has the
drawback of using a sequential scan on the eventstore, making the
migration more expensive.
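
A sketch of the event-store side of this lookup, using only column names that appear in the queries earlier in this log; the actual migration and the cleanup of the fields table differ:

```go
package migration

import (
	"context"
	"database/sql"
)

// removedOrgIDs returns the aggregate IDs of all organizations for which an
// org.removed event exists, scoped to one instance. The stale rows in
// eventstore.fields would then be deleted for these IDs (the fields-table
// columns are omitted here).
func removedOrgIDs(ctx context.Context, db *sql.DB, instanceID string) ([]string, error) {
	rows, err := db.QueryContext(ctx, `
		SELECT DISTINCT aggregate_id
		FROM eventstore.events2
		WHERE instance_id = $1
		  AND aggregate_type = 'org'
		  AND event_type = 'org.removed'`, instanceID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var ids []string
	for rows.Next() {
		var id string
		if err := rows.Scan(&id); err != nil {
			return nil, err
		}
		ids = append(ids, id)
	}
	return ids, rows.Err()
}
```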

# Additional Changes

- none

# Additional Context

- Introduced in https://github.com/zitadel/zitadel/pull/8946
2024-12-12 18:37:18 +02:00
Lucas Verdiell
25b013bf14 docs(adopters): add smat.io (#9010)
# Which Problems Are Solved
Letting the 🌏  know we use Zitadel at [smat.io](https://smat.io)

# Additional Changes
- Updated `ADOPTERS.md`.

Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
2024-12-11 18:16:22 +00:00
Silvan
e130467b7a docs(benchmarks): add v2.66.0 (#9038)
# Which Problems Are Solved

Add benchmarks for v2.66.0
2024-12-11 12:14:33 +02:00
Tim Möhlmann
83bdaf43c3 docs(events-api): user auth example using OIDC session events (#9020)
# Which Problems Are Solved

Integration guide with event API examples used outdated
`user.token.added` events which are no longer produced by ZITADEL.

# How the Problems Are Solved

Modify the example to use events from the `oidc_session` aggregate.

# Additional Changes

- Add a TODO for related SAML events.

# Additional Context

- Related to https://github.com/zitadel/zitadel/issues/8983
2024-12-10 10:54:07 +00:00
Tim Möhlmann
ee7beca61f fix(cache): ignore NOSCRIPT errors in redis circuit breaker (#9022)
# Which Problems Are Solved

When Zitadel starts for the first time with a configured Redis cache, the
circuit breaker would open on the first requests, with no explanatory
error and only log lines explaining the state of the circuit breaker.

Using a debugger, `NOSCRIPT No matching script. Please use EVAL.` was
found to be passed to `Limiter.ReportResult`. This error is actually
retried by go-redis after a
[`Script.Run`](https://pkg.go.dev/github.com/redis/go-redis/v9@v9.7.0#Script.Run):

> Run optimistically uses EVALSHA to run the script. If script does not
exist it is retried using EVAL.

# How the Problems Are Solved

Add the `NOSCRIPT` error prefix to the whitelist.
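
A small sketch of how such a whitelist can look in a go-redis `Limiter` wrapper; the package name, the wrapper and the other whitelist entry are illustrative, only the `NOSCRIPT` prefix is from this change:

```go
package rediscache

import (
	"strings"

	"github.com/redis/go-redis/v9"
)

// errorWhitelist lists error prefixes that must not count as failures in the
// circuit breaker; NOSCRIPT is resolved by go-redis itself by retrying with EVAL.
var errorWhitelist = []string{
	"redis: nil", // cache miss, not a failure
	"NOSCRIPT ",
}

// reportable returns nil for whitelisted errors so only real failures reach
// the circuit breaker.
func reportable(err error) error {
	if err == nil {
		return nil
	}
	for _, prefix := range errorWhitelist {
		if strings.HasPrefix(err.Error(), prefix) {
			return nil
		}
	}
	return err
}

// cbLimiter wraps the actual circuit-breaker limiter and filters what gets
// reported to it.
type cbLimiter struct{ inner redis.Limiter }

func (l cbLimiter) Allow() error              { return l.inner.Allow() }
func (l cbLimiter) ReportResult(result error) { l.inner.ReportResult(reportable(result)) }
```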

# Additional Changes

- none

# Additional Context

- Introduced in: https://github.com/zitadel/zitadel/pull/8890
- Workaround for: https://github.com/redis/go-redis/issues/3203
2024-12-09 08:20:21 +00:00
Livio Spring
5c3e917248 chore: remove stable release tag (#8885)
# Which Problems Are Solved

The current "stable" release tag was no longer maintained.

# How the Problems Are Solved

Remove the tag from the docs.

# Additional Changes

Update the docs to reflect that tests run with Ubuntu 22.04 instead of
20.04.

# Additional Context

- relates to https://github.com/zitadel/zitadel/issues/8884
2024-12-09 08:29:13 +01:00
Silvan
77cd430b3a refactor(handler): cache active instances (#9008)
# Which Problems Are Solved

Scheduled handlers use `eventstore.InstanceIDs` to get all active
instances within a given timeframe. This function scrapes through all
events written within that timeframe, which can cause heavy load on the
database.

# How the Problems Are Solved

A new query cache, `activeInstances`, is introduced, which caches the IDs
of all instances queried by ID or host within the configured timeframe.

# Additional Changes

- Changed `default.yaml`:
  - Removed `HandleActiveInstances` from custom handler configs
  - Added `MaxActiveInstances` to define the maximum number of cached
    instance IDs
- fixed start-from-init and start-from-setup, which started the auth and
admin projections twice
- fixed org cache invalidation to use the correct index

# Additional Context

- part of #8999
2024-12-06 11:32:53 +00:00
Tim Möhlmann
a81d42a61a fix(eventstore): set created filters to exclusion sub-query (#9019)
# Which Problems Are Solved

In eventstore queries with aggregate ID exclusion filters, filters on
event creation date were not passed to the sub-query. This results in
a high number of returned rows from the sub-query and high overall query
cost.

# How the Problems Are Solved

When CreatedAfter and CreatedBefore are used on the global search query,
copy those filters to the sub-query. We already did this for the
position column filter.

# Additional Changes

- none

# Additional Context

- Introduced in https://github.com/zitadel/zitadel/pull/8940

Co-authored-by: Livio Spring <livio.a@gmail.com>
2024-12-06 11:20:10 +01:00
Livio Spring
7a3ae8f499 fix(notifications): bring back legacy notification handling (#9015)
# Which Problems Are Solved

There are some problems related to the use of CockroachDB with the new
notification handling (#8931).
See #9002 for details.

# How the Problems Are Solved

- Brought back the previous notification handler as legacy mode.
- Added a configuration to choose between legacy mode and new parallel
workers.
  - Enabled legacy mode by default to prevent issues.

# Additional Changes

None

# Additional Context

- closes https://github.com/zitadel/zitadel/issues/9002
- relates to #8931
2024-12-06 10:56:19 +01:00
mffap
71d381b5e7 docs(legal): link subprocessors to trust center (#9013)
Link list of subprocessors to our trust center
2024-12-05 13:04:41 +00:00
Silvan
8f97e8a3de fix(eventstore): set application name during push to instance id (#8918)
# Which Problems Are Solved

Noisy neighbours can introduce projection latencies because the
projections only query events older than the start timestamp of the
oldest push transaction.

# How the Problems Are Solved

During push we set the application name to
`zitadel_es_pusher_<instance_id>` instead of `zitadel_es_pusher`, which
is used by projections when querying events (see the sketch below).
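
A sketch of setting a per-transaction application name using `set_config` (which, unlike a plain `SET`, accepts a bind parameter); where exactly Zitadel applies this may differ:

```go
package eventstore

import (
	"context"
	"database/sql"
)

// beginPush opens the push transaction and tags it with an instance-specific
// application_name, so projections can exclude only push transactions of
// other instances instead of waiting on the oldest push of any instance.
func beginPush(ctx context.Context, db *sql.DB, instanceID string) (*sql.Tx, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return nil, err
	}
	// set_config(..., true) limits the setting to the current transaction.
	if _, err := tx.ExecContext(ctx,
		`SELECT set_config('application_name', $1, true)`,
		"zitadel_es_pusher_"+instanceID,
	); err != nil {
		tx.Rollback()
		return nil, err
	}
	return tx, nil
}
```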

(cherry picked from commit 522c82876f)
v2.65.3
2024-12-05 08:08:36 +01:00
Livio Spring
0017e4daa6 docs: remove autoplay from videos (#9005)
# Which Problems Are Solved

Some videos in the guides start playing automatically. This prevents a
great user / developer experience.

# How the Problems Are Solved

Stop autoplay.

# Additional Changes

None

# Additional Context

Discussed internally
2024-12-05 06:23:59 +00:00
Roman Kolokhanin
d0c23546ec fix(oidc): prompts slice conversion function returns slice which contains unexpected empty strings (#8997)
# Which Problems Are Solved

The slice was initialized with a fixed length instead of a capacity, which
leads to unexpected results when calling the append function.

# How the Problems Are Solved

Fixed the slice initialization: the slice is now initialized with zero length
and with the capacity of the function's argument (see the sketch below).
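
The classic length-versus-capacity mistake, illustrated with a hypothetical function and type name:

```go
package oidc

// Prompt is a stand-in for the OIDC prompt type.
type Prompt string

// Buggy: make with a length pre-fills the slice with len(prompts) empty
// strings, and append then adds the real values after them,
// e.g. ["", "", "login", "consent"].
func promptsToStringsBuggy(prompts []Prompt) []string {
	out := make([]string, len(prompts))
	for _, p := range prompts {
		out = append(out, string(p))
	}
	return out
}

// Fixed: zero length with reserved capacity, so append fills from index 0.
func promptsToStrings(prompts []Prompt) []string {
	out := make([]string, 0, len(prompts))
	for _, p := range prompts {
		out = append(out, string(p))
	}
	return out
}
```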

# Additional Changes

test case added

# Additional Context
none

Co-authored-by: Kolokhanin Roman <zuzmic@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
2024-12-04 20:56:36 +00:00
Livio Spring
7f0378636b fix(notifications): improve error handling (#8994)
# Which Problems Are Solved

While running the latest RC / main, we noticed some errors including
context timeouts and rollback issues.

# How the Problems Are Solved

- The transaction context is passed and used for any event being written
and for handling savepoints to be able to handle context timeouts.
- The user projection is not triggered anymore. This will reduce
unnecessary load and potential timeouts if a lot of workers are running.
In case a user is not projected yet, the request event will log an
error and then be skipped / retried on the next run.
- Additionally, the context is checked if being closed after each event
process.
- `latestRetries` now correctly only returns the latest retry events to
be processed
- Default values for notifications have been changed to run workers less
often, with more retry delay, but a shorter transaction duration.

# Additional Changes

None

# Additional Context

relates to #8931

---------

Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
2024-12-04 20:17:49 +00:00
Silvan
6614aacf78 feat(fields): add instance domain (#9000)
# Which Problems Are Solved

Instance domains are only computed on the read side. This can cause missing
domains if calls are executed shortly after an instance domain (or
instance) was added.

# How the Problems Are Solved

The instance domain is added to the fields table, which is filled on the
command side.

# Additional Changes

- added setup step to compute instance domains
- instance by host uses fields table instead of instance_domains table

# Additional Context

- part of https://github.com/zitadel/zitadel/issues/8999
2024-12-04 18:10:10 +00:00
zitadelraccine
e6fae1b352 docs: add office hours #7 (#8947)
# Which Problems Are Solved

- Ensuring that the community meeting schedule is updated with the
latest upcoming event.

# How the Problems Are Solved

- By adding a new entry to the meeting schedule

# Additional Changes

N/A

# Additional Context

N/A

Co-authored-by: Silvan <silvan.reusser@gmail.com>
2024-12-04 15:09:41 +00:00
Silvan
dab5d9e756 refactor(eventstore): move push logic to sql (#8816)
# Which Problems Are Solved

If many events are written to the same aggregate id it can happen that
Zitadel [starts to retry the push
transaction](48ffc902cc/internal/eventstore/eventstore.go (L101))
because [the locking
behaviour](48ffc902cc/internal/eventstore/v3/sequence.go (L25))
during push computes the wrong sequence, because newly committed
events are not visible to the transaction. These events impact the
current sequence.

In cases with high command traffic on a single aggregate id this can
have a severe impact on the general performance of Zitadel, because many
connections of the `eventstore pusher` database pool are blocked by each
other.

# How the Problems Are Solved

To improve the performance, this locking mechanism was removed and the
business logic of push was moved to SQL functions, which reduce network
traffic and can be analyzed by the database before the actual push. For
clients of the eventstore framework nothing changed.

# Additional Changes

- after a connection is established, the newly added database types are
prefetched
- `eventstore.BaseEvent` now returns the correct revision of the event

# Additional Context

- part of https://github.com/zitadel/zitadel/issues/8931

---------

Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Miguel Cabrerizo <30386061+doncicuto@users.noreply.github.com>
Co-authored-by: Joakim Lodén <Loddan@users.noreply.github.com>
Co-authored-by: Yxnt <Yxnt@users.noreply.github.com>
Co-authored-by: Stefan Benz <stefan@caos.ch>
Co-authored-by: Harsha Reddy <harsha.reddy@klaviyo.com>
Co-authored-by: Zach H <zhirschtritt@gmail.com>
2024-12-04 13:51:40 +00:00