Fixed an issue in `isSessionValid()` where users with multiple
configured MFA methods (e.g., TOTP and U2F) would have their sessions
incorrectly invalidated. The function previously used exclusive if-else
logic that only checked the first matching method, causing validation to
fail even when other configured methods were successfully verified.
Closes #10529
# Which Problems Are Solved
[#10529](https://github.com/zitadel/zitadel/issues/10529)
# How the Problems Are Solved
- Replaced exclusive if-else if chain with inclusive validation logic
- Session is now considered valid if ANY configured MFA method has been
verified
- Improved error logging to show all configured methods and their
verification status
Example: A user with both TOTP and U2F configured can now successfully
authenticate using either method, whereas previously the session would
be invalid if they used U2F but TOTP was checked first.
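As a rough illustration (names and types here are hypothetical, not the actual `isSessionValid()` code), the check now succeeds as soon as any configured factor is verified:
```go
type mfaFactor struct {
	configured bool
	verified   bool
}

// anyConfiguredFactorVerified mirrors the inclusive logic: a single
// configured and verified method is enough; other configured but
// unverified methods no longer invalidate the session.
func anyConfiguredFactorVerified(factors []mfaFactor) bool {
	for _, f := range factors {
		if f.configured && f.verified {
			return true
		}
	}
	return false
}
```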
(cherry picked from commit b6cbb68428)
This PR completely removes Next.js image optimization from the login app
by replacing all next/image components with standard HTML <img> tags and
removing the image optimization configuration.
Closes https://github.com/zitadel/zitadel-charts/issues/381
# Which Problems Are Solved
Users were encountering issues when loading images in dedicated
environments. These happened because Next.js image optimization
creates different paths for images.
# How the Problems Are Solved
- Removed Next.js Image Optimization Config
- Removed images: { unoptimized: true } configuration from `next.config.mjs`.
This config was redundant since we no longer use next/image components
- Replaced next/image with standard <img> tags
(cherry picked from commit 0a31f4ba2b)
# Which Problems Are Solved
The /userinfo endpoint only returns roles for the current project, even
if the access token includes multiple project aud scopes.
This prevents clients from retrieving all user roles across multiple
projects, making multi-project access control ineffective.
# How the Problems Are Solved
Modified the /userinfo handler logic to resolve roles across all valid
project audience scopes provided in the token, not just the current
project.
Ensured that if the **urn:zitadel:iam:org:projects:roles** scope is present,
roles from all declared project audiences are collected and included in
the response in the **urn:zitadel:iam:org:projects:roles** claim.
# Additional Changes
# Additional Context
This change enables service-to-service authorization workflows and SPA
role resolution across multiple project contexts with a single token.
- Closes #9831
---------
Co-authored-by: Masum Patel <patelmasum98@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
(cherry picked from commit 295584648d)
# Which Problems Are Solved
The event `LabelPolicyAssetsRemovedEventType` is never actually created
anywhere; it is dead code, which this PR removes.
(cherry picked from commit e5575d6042)
# Which Problems Are Solved
The Project Grant ID would have needed to be unique to be handled properly
in the projections, but was defined as the ID of the organization the
project was granted to, and could therefore be non-unique.
# How the Problems Are Solved
Generate the Project Grant ID even in the v2 APIs, which also includes
fixes in the integration tests.
In addition, the logic for some functionality had to be extended, as the
Project Grant ID is no longer provided in the API and therefore has to be
queried before creating events for Project Grants.
# Additional Changes
Included a fix for authorizations that were intended to be created for a
project without providing any organization information, which also
surfaced some faulty integration tests.
# Additional Context
Partially closes #10745
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit b6ff4ff16c)
# Which Problems Are Solved
This PR fixes the self-management of users for metadata and own removal
and improves the corresponding permission checks.
While looking into the problems, I also noticed that there's a bug in
the metadata mapping when using `api.metadata.push` in actions v1 and
that re-adding a previously existing key after its removal was not
possible.
# How the Problems Are Solved
- Added a parameter `allowSelfManagement` to checkPermissionOnUser so that
no permission is required if a user is changing their own data (see the
sketch after this list).
- Updated use of `NewPermissionCheckUserWrite` including prevention of
self-management for metadata.
- Pass permission check to the command side (for metadata functions) to
allow it implicitly for login v1 and actions v1.
- Use of json.Marshal for the metadata mapping (as with
`AppendMetadata`)
- Check the metadata state when comparing the value.
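A minimal sketch of the self-management check, with hypothetical names (the real `checkPermissionOnUser` signature differs):
```go
type permissionCheck func(permission, userID string) error

// checkPermissionOnUser skips the explicit permission requirement when the
// calling user manages their own data and self-management is allowed.
func checkPermissionOnUser(ctxUserID, targetUserID string, allowSelfManagement bool, check permissionCheck) error {
	if allowSelfManagement && ctxUserID == targetUserID {
		return nil
	}
	return check("user.write", targetUserID)
}
```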
# Additional Changes
- Added a variadic `roles` parameter to the `CreateOrgMembership`
integration test helper function to allow defining specific roles.
# Additional Context
- noted internally while testing v4.1.x
- requires backport to v4.x
- closes https://github.com/zitadel/zitadel/issues/10470
- relates to https://github.com/zitadel/zitadel/pull/10426
(cherry picked from commit 5329d50509)
…of project grant
# Which Problems Are Solved
On the Management API the fields for `GrantedOrgId`, `GrantedOrgName` and
`GrantedOrgDomain` were only filled if it was a usergrant for a granted
project.
# How the Problems Are Solved
Correctly query the Organization of the User in addition to the
Organization the Project is granted to.
Then fill the information about the Organization of the User into the
fields `GrantedOrgId`, `GrantedOrgName` and `GrantedOrgDomain`.
# Additional Changes
Additionally query the information about the Organization the Project is
granted to, to have it available for the Authorization v2beta API.
# Additional Context
Closes #10723
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
(cherry picked from commit edb227f066)
# Which Problems Are Solved
#10415 added the possibility to filter users based on metadata. To
prevent duplicate results, a SQL `DISTINCT` was added. This resulted in
issues if the list was sorted on string columns like `username` or
`displayname`, since they are sorted using `lower`. Using `DISTINCT`
requires the `ORDER BY` column to be part of the `SELECT` statement.
# How the Problems Are Solved
Added the `ORDER BY` column to the `SELECT` statement.
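For illustration only (table and column names are made up), the fixed query shape adds the lowered column to the select list so `DISTINCT` and `ORDER BY` agree:
```go
// PostgreSQL rejects `SELECT DISTINCT ... ORDER BY lower(username)` unless
// the ordered expression is part of the select list.
const usersOrderedByUsername = `
SELECT DISTINCT u.id, u.username, LOWER(u.username) AS username_lower
FROM users u
LEFT JOIN user_metadata m ON m.user_id = u.id
ORDER BY username_lower;`
```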
# Additional Changes
None
# Additional Context
- relates to #10415
- backport to v4.x
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit 2c0ee0008f)
# Which Problems Are Solved
Depending on the metadata values (already existing), the newly created
index (#10415) cannot be created or error in the future.
# How the Problems Are Solved
- Create the index using `sha256` and change the query to use sha256 as
well when comparing byte values such as user_metadata (see the sketch
after this list).
- Added a setup step to clean up the potentially created index on
`projections.user_metadata5`.
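A sketch of the idea, assuming PostgreSQL's `sha256()` function; the actual migration and projection columns differ:
```go
const (
	// Index the hash of the (potentially large) bytea value instead of the
	// value itself, keeping index entries small.
	createMetadataValueIndex = `
CREATE INDEX IF NOT EXISTS user_metadata_value_sha256_idx
ON projections.user_metadata5 (instance_id, key, sha256(value));`

	// The query must compare using the same expression so the planner can
	// use the index.
	findUsersByMetadataValue = `
SELECT user_id FROM projections.user_metadata5
WHERE instance_id = $1 AND key = $2 AND sha256(value) = sha256($3);`
)
```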
# Additional Changes
None
# Additional Context
- relates to #10415
- requires backport to v4.x
(cherry picked from commit 57e8033b6e)
# Which Problems Are Solved
Eventual consistency issues.
# How the Problems Are Solved
Correctly handle timeouts and change queries to domains instead of using
the organization name.
# Additional Changes
None
# Additional Context
None
Co-authored-by: Livio Spring <livio.a@gmail.com>
(cherry picked from commit b0642a5898)
# Which Problems Are Solved
The current service ping reports can run into body size limit errors and
there's no way of knowing how big the current size is.
# How the Problems Are Solved
Log the current size to have at least some insights and possibly change
bulk size.
# Additional Changes
None
# Additional Context
- noticed internally
- backport to v4.x
(cherry picked from commit bc471b4f78)
# Which Problems Are Solved
I noticed that a failure in the projection handlers' `reduce` function
(e.g. creating the statement or checking preconditions for the
statement) would not update the `failed_events2` table.
This was due to wrong error handling: as long as the `maxFailureCount`
was not reached, the error was returned after updating the
`failed_events2` table, which caused the transaction to be rolled back
and thus lost the update.
# How the Problems Are Solved
Wrap the error into an `executionError`, so the transaction is not
rolled back.
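Roughly, assuming a sentinel wrapper type (the real handler code differs in detail):
```go
type executionError struct{ parent error }

func (e *executionError) Error() string { return "execution failed: " + e.parent.Error() }
func (e *executionError) Unwrap() error { return e.parent }

// reduceAndRecord persists the failure and signals it to the caller without
// forcing the surrounding transaction to roll back.
func reduceAndRecord(reduce func() error, recordFailure func(error) error) error {
	err := reduce()
	if err == nil {
		return nil
	}
	// Persist the failure inside the same transaction ...
	if recordErr := recordFailure(err); recordErr != nil {
		return recordErr
	}
	// ... and wrap the original error so the caller treats it as handled
	// and keeps the failed_events2 update instead of rolling it back.
	return &executionError{parent: err}
}
```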
# Additional Changes
none
# Additional Context
- noticed internally
- requires backport to v3.x and v4.x
(cherry picked from commit ee92560f32)
# Which Problems Are Solved
Cached objects may have a different schema between Zitadel versions.
# How the Problems Are Solved
Use the current build version in the DB-based cache connectors PostgreSQL
and Redis.
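Conceptually (key layout and names here are assumptions, not the actual connector code), the build version becomes part of every cache key:
```go
// versionedCacheKey namespaces entries by build version, so an upgraded
// binary never deserializes entries written with an older object schema.
func versionedCacheKey(buildVersion, purpose, id string) string {
	return "zitadel:" + buildVersion + ":" + purpose + ":" + id
}
```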
# Additional Changes
- Clean up the ZitadelVersion field from the authz Instance
- Solve a potential race condition on global variables in the build package.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10648
- Obsoletes https://github.com/zitadel/zitadel/pull/10646
- Needs to be backported to v4 via
https://github.com/zitadel/zitadel/pull/10645
(cherry picked from commit f6f37d3a31)
This PR fixes a bug where projections could skip events if they were
written within the same microsecond, which can occur during high load on
different transactions.
## Problem
The event query ordering was not fully deterministic. Events created at
the exact same time (same `position`) and in the same transaction
(`in_tx_order`) were not guaranteed to be returned in the same order on
subsequent queries. This could lead to some events being skipped by the
projection logic.
## Solution
To solve this, the `ORDER BY` clause for event queries has been extended
to include `instance_id`, `aggregate_type`, and `aggregate_id`. This
ensures a stable and deterministic ordering for all events, even if they
share the same timestamp.
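An illustrative query (the real eventstore query differs) showing the extended, now total, ordering:
```go
const queryEvents = `
SELECT instance_id, aggregate_type, aggregate_id, sequence, payload
FROM eventstore.events2
WHERE position <= $1
-- position and in_tx_order alone can tie; the added columns make the
-- ordering deterministic across repeated queries.
ORDER BY position, in_tx_order, instance_id, aggregate_type, aggregate_id;`
```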
## Additional changes:
* Replaced a manual slice search with the more idiomatic
`slices.Contains` to skip already projected instances.
* Changed the handling of already locked projections to log a debug
message and skip execution instead of returning an error.
* Ensures the database transaction is explicitly committed.
(cherry picked from commit 25ab6b2397)
A timing issue (a race condition) was identified in our event processing
system. Under specific circumstances, it was possible for the system to
skip processing certain events, leading to potential data
inconsistencies.
## Which problems are solved
The system tracks its progress through the event log using timestamps.
The issue occurred because we were using the timestamp from the start of
a database transaction. If a query to read new events began after the
transaction started but before the new event was committed, the query
would not see the new event and would fail to process it.
## How the problems are solved
The fix is to change which timestamp is used for tracking. We now use
the precise timestamp of when the event is actually written to the
database. This ensures that the event's timestamp is always correctly
ordered, closing the timing gap and preventing the race condition.
This change enhances the reliability and integrity of our event
processing pipeline. It guarantees that all events are processed in the
correct order and eliminates the risk of skipped events, ensuring data
is always consistent across the system.
## Additional information
original fix: https://github.com/zitadel/zitadel/pull/10560
(cherry picked from commit 136363deda)
This PR overhauls our event projection system to make it more robust and
prevent skipped events under high load. The core change replaces our
custom, transaction-based locking with standard PostgreSQL advisory
locks. We also introduce a worker pool to manage concurrency and prevent
database connection exhaustion.
### Key Changes
* **Advisory Locks for Projections:** Replaces exclusive row locks and
inspection of `pg_stat_activity` with PostgreSQL advisory locks for
managing projection state. This is a more reliable and standard approach
to distributed locking.
* **Simplified Await Logic:** Removes the complex logic for awaiting
open transactions, simplifying it to a more straightforward time-based
filtering of events.
* **Projection Worker Pool:** Implements a worker pool to limit
concurrent projection triggers, preventing connection exhaustion and
improving stability under load. A new `MaxParallelTriggers`
configuration option is introduced.
### Problem Solved
Under high throughput, a race condition could cause projections to miss
events from the eventstore. This led to inconsistent data in projection
tables (e.g., a user grant might be missing). This PR fixes the
underlying locking and concurrency issues to ensure all events are
processed reliably.
### How it Works
1. **Event Writing:** When writing events, a *shared* advisory lock is
taken. This signals that a write is in progress.
2. **Event Handling (Projections):**
* A projection worker attempts to acquire an *exclusive* advisory lock
for that specific projection. If the lock is already held, it means
another worker is on the job, so the current one backs off.
* Once the lock is acquired, the worker briefly acquires and releases
the same *shared* lock used by event writers. This acts as a barrier,
ensuring it waits for any in-flight writes to complete.
* Finally, it processes all events that occurred before its transaction
began.
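A minimal sketch of the sequence above using PostgreSQL advisory locks; the lock keys and exact lock modes are assumptions, not the actual implementation:
```go
const (
	// 1. Event writers hold a shared advisory lock for the duration of
	//    their transaction.
	writerSharedLock = `SELECT pg_advisory_xact_lock_shared($1);`

	// 2a. A projection worker backs off if another worker already holds
	//     the exclusive lock for this projection.
	tryProjectionLock = `SELECT pg_try_advisory_xact_lock($1);`

	// 2b. Barrier: acquiring the writers' lock key exclusively (and
	//     releasing it again) blocks until in-flight writes have committed.
	awaitOpenWrites = `SELECT pg_advisory_lock($1);
SELECT pg_advisory_unlock($1);`
)
```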
### Additional Information
* ZITADEL no longer modifies the `application_name` PostgreSQL variable
during event writes.
* The lock on the `current_states` table is now `FOR NO KEY UPDATE`.
* Fixes https://github.com/zitadel/zitadel/issues/8509
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
(cherry picked from commit 0575f67e94)
# Which Problems Are Solved
During the implementation of #10687, it was noticed that the import
endpoint might provide unnecessary error details.
# How the Problems Are Solved
Remove the underlying (parent) error from the error message.
# Additional Changes
none
# Additional Context
relates to #10687
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
(cherry picked from commit 25d921b20c)
Safari was not creating session cookies during local development,
causing authentication failures. This was due to Next.js's default
setting of the SameSite cookie property.
We now explicitly set "strict" for session cookies.
Closes #10473
# Which Problems Are Solved
Authentication Issues with Safari in local development
# How the Problems Are Solved
- Cleaner API: Replaced confusing sameSite boolean/string parameters
with an `iFrameEnabled` boolean
- Better logic flow:
  - `iFrameEnabled: true` → `sameSite: "none"` (for iframe embedding)
  - Production → `sameSite: "strict"` (maximum security)
(cherry picked from commit a9cd3ff9c0)
This PR removes the Vercel Analytics integration from the login
application to reduce external dependencies and improve privacy.
# Which Problems Are Solved
A cleaner Content Security Policy (CSP).
# How the Problems Are Solved
- Removed dependency: Uninstalled @vercel/analytics package from
package.json
- Updated layout component: Removed Analytics import and component usage
from layout.tsx
- Updated Content Security Policy: Removed Vercel domains
(https://va.vercel-scripts.com and https://vercel.com) from CSP
configuration in csp.js
(cherry picked from commit 1a42e99329)
# Which Problems Are Solved
Actions V2 Method names got cut off in the creation dropdown
<img width="668" height="717" alt="old modal"
src="https://github.com/user-attachments/assets/e3dda16d-5326-464e-abc7-67a8b146037c"
/>
# How the Problems Are Solved
The modal now first requires a Service to be set and only afterwards are
users allowed to set Methods. This way we can cut the Service names out
of the Method names, leading to cleaner and shorter names.
<img width="796" height="988" alt="new modal"
src="https://github.com/user-attachments/assets/5002afdf-b639-44ef-954a-5482cca12f96"
/>
# Additional Changes
Changed the Modal dataloading to use Tanstack Query
# Additional Context
- Closes #10596
(cherry picked from commit b694b25cdf)
# Which Problems Are Solved
When using Login V2 the callback URL for an Identity Provider is
different. When following the guidance in Console and using Login V2,
users will use the wrong callback URL.
<img width="1234" height="323" alt="grafik"
src="https://github.com/user-attachments/assets/8632ecf2-d9e4-4e3b-8940-2bf80baab8df"
/>
# How the Problems Are Solved
I have added the correct Login V2 URL to the identity providers and
updated our docs.
<img width="628" height="388" alt="grafik"
src="https://github.com/user-attachments/assets/2dd4f4f9-d68f-4605-a52e-2e51069da10e"
/>
# Additional Changes
Small refactorings and porting some components over to ChangeDetection
OnPush
# Additional Context
- Closes #10461
---------
Co-authored-by: Max Peintner <max@caos.ch>
(cherry picked from commit 5cde52148f)
Closes #10498
The registration form's legal checkboxes had incorrect validation logic
that prevented users from completing registration when only one legal
document (ToS or Privacy Policy) was configured, or when no legal
documents were required.
Additionally removes a duplicate description for "or use Identity
Provider".
# Which Problems Are Solved
Having only partial legal documents was blocking users from registering.
The logic now conditionally renders checkboxes and checks whether all
provided documents are accepted.
# How the Problems Are Solved
- Fixed checkbox validation: Now properly validates based on which legal
documents are actually available
- Acceptance logic: Only requires acceptance of checkboxes that are
shown
- No legal docs support: Users can proceed when no legal documents are
configured
- Proper state management: Fixed checkbox state tracking and mixed-up
test IDs
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
(cherry picked from commit b9b9baf67f)
# Which Problems Are Solved
Comparing the v3 and v4 deployments, we noticed an increase in memory
usage. A first analysis revealed that it might be (partially) related to
the (multiple) initialization of the `i18n.Translator`.
# How the Problems Are Solved
Initialize the translator once (apart from the translator interceptor,
which uses context / request specific information) and pass it to all
necessary middleware.
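Schematically (names are hypothetical, not the actual wiring), the translator is built once and shared:
```go
type translator struct{ defaultLang string }

func newTranslator(lang string) *translator { return &translator{defaultLang: lang} }

// Each middleware closes over the single shared translator instead of
// initializing its own copy per request or per handler.
func errorMiddleware(t *translator) func(msg string) string {
	return func(msg string) string { return t.defaultLang + ": " + msg }
}

func setup() {
	t := newTranslator("en") // constructed exactly once at startup
	_ = errorMiddleware(t)
}
```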
# Additional Changes
Removed unnecessary error return parameter from the translator
initialization.
# Additional Context
- noticed internally
- backport to v4.x
(cherry picked from commit a0c3ccecf7)
# Which Problems Are Solved
Using the service ping, we want to gain some additional insights into how
Zitadel is configured. The current resource count report already contains
counts of some configured policies, such as the login_policy, but we do
not know, for example, whether MFA is enforced.
# How the Problems Are Solved
- Added the following counts to the report:
  - service users per organization
  - MFA enforcements (through the login policy)
  - Notification policies with the password change option enabled
  - SCIM provisioned users (using user metadata)
- Since all of the above are conditional based on at least one column
inside a projection, a new `migration.CountTriggerConditional` has been
added, where a condition (column values) and an option to track updates
on that column can be considered for the count.
- For this to be possible, the following changes had to be made to the
existing SQL resources:
  - the `resource_name` has been added to the unique constraint on the
`projection.resource_counts` table
  - triggers have been added / changed to individually track `INSERT`,
`UPDATE`(s) and `DELETE` and be able to handle conditions
  - an optional argument has been added to the
`projections.count_resource()` function to allow specifying whether to
count the resource `UP` or `DOWN` on an update.
# Additional Changes
None
# Additional Context
- partially solves #10244 (reporting audit log retention limit will be
handled in #10245 directly)
- backport to v4.x
(cherry picked from commit 2dbe21fb30)
# Which Problems Are Solved
HTTP requests to HTTP providers for Email or SMS are not signed.
# How the Problems Are Solved
Add a Signing Key to the HTTP Provider resources, which is then used to
generate a header to sign the payload.
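A minimal sketch of such payload signing, assuming an HMAC-SHA256 scheme and a hypothetical header name (the actual signing format may differ):
```go
package notification

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"net/http"
)

// signRequest computes an HMAC over the raw payload with the provider's
// signing key and attaches it as a header; the receiver recomputes and
// compares the value to verify authenticity and integrity.
func signRequest(req *http.Request, signingKey, payload []byte) {
	mac := hmac.New(sha256.New, signingKey)
	mac.Write(payload)
	req.Header.Set("X-Signature", hex.EncodeToString(mac.Sum(nil)))
}
```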
# Additional Changes
Additional tests for the query side of the SMTP provider.
# Additional Context
Closes #10067
---------
Co-authored-by: Marco A. <marco@zitadel.com>
(cherry picked from commit 8909b9a2a6)
# Which Problems Are Solved
Update the Go toolchain to the latest bugfix release, so we include
latest security fixes in the standard library.
# How the Problems Are Solved
Set the toolchain directive to 1.24.7
# Additional Changes
- go mod tidy
# Additional Context
- https://go.dev/doc/devel/release#go1.24.0
Co-authored-by: Marco A. <marco@zitadel.com>
(cherry picked from commit 4440579f0a)
# Which Problems Are Solved
This PR adds functionality to propagate request headers in actions v2.
# How the Problems Are Solved
The new functionality is added to the `ExecutionHandler` interceptors,
where the incoming request headers (from a list of allowed headers to be
forwarded) are set in the payload of the request before calling the
target.
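Roughly (the allow-list source and payload field name are assumptions, not the exact `ExecutionHandler` code):
```go
package execution

import "net/http"

var forwardedHeaders = []string{"x-request-id", "accept-language"}

// addRequestHeaders copies only allow-listed incoming headers into the
// payload sent to the action target.
func addRequestHeaders(payload map[string]any, incoming http.Header) {
	headers := make(map[string][]string, len(forwardedHeaders))
	for _, name := range forwardedHeaders {
		if values := incoming.Values(name); len(values) > 0 {
			headers[name] = values
		}
	}
	payload["request_headers"] = headers
}
```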
# Additional Changes
This PR also contains minor fixes to the Actions V2 example docs.
# Additional Context
- Closes #9941
---------
Co-authored-by: Marco A. <marco@zitadel.com>
(cherry picked from commit 51e12e224d)
# Which Problems Are Solved
Some users have reported the need to retrieve users by a metadata key, a
metadata value, or both. This change introduces a metadata search filter
on the `ListUsers()` endpoint to allow Zitadel users to search for user
records by metadata.
The changes affect only v2 APIs.
# How the Problems Are Solved
- Add new search filter to `ListUserRequest`: `MetaKey` and `MetaValue`
- Add SQL indices on metadata key and metadata value
- Update query to left join `user_metadata` table
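For illustration (projection table names and exact predicates are made up), the query shape looks roughly like this:
```go
const listUsersByMetadata = `
SELECT DISTINCT u.id, u.username
FROM projections.users u
LEFT JOIN projections.user_metadata m ON m.user_id = u.id
-- key and/or value predicates come from the new MetaKey / MetaValue filters
WHERE m.key = $1 AND m.value = $2;`
```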
# Additional Context
- Closes #9053
- Depends on https://github.com/zitadel/zitadel/pull/10567
---------
Co-authored-by: Silvan <27845747+adlerhurst@users.noreply.github.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
(cherry picked from commit 8df402fb4f)
# Which Problems Are Solved
When a project is created by a user with only the `PROJECT_CREATOR`
role, they can no longer view/manage the created project. Although the
project is created, the user sees the following error: `No matching
permissions found (AUTH-3jknH)`. This is due to the
[removal](https://github.com/zitadel/zitadel/pull/9317) of
auto-assignment of the `PROJECT_OWNER` role when a project is newly
created.
# How the Problems Are Solved
By introducing optional fields in the CreateProject API to include a
list of users and a list of project member roles to be assigned to the
users. When there are no roles mentioned, the `PROJECT_OWNER` role is
assigned by default to all the users mentioned in the list.
# Additional Changes
N/A
# Additional Context
- Closes #10561
- Closes #10592
- Should be backported as this issue is not specific to v4
---------
Co-authored-by: conblem <mail@conblem.me>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
Flakiness and conflicts in values from gofakeit.
# How the Problems Are Solved
Move Gofakeit calls to the integration package, to guarantee proper
usage and values for integration testing.
# Additional Changes
None
# Additional Context
None
(cherry picked from commit 492f1826ee)
# Which Problems Are Solved
There was a left-behind index introduced to optimize the old and removed
event execution handler. The index confuses Postgres, and it sometimes
picks this index in favor of the projection-specific index. This
sometimes leads to bad query performance in the projection handlers.
# How the Problems Are Solved
Drop the index
# Additional Changes
- none
# Additional Context
- Forgotten in https://github.com/zitadel/zitadel/pull/10564
(cherry picked from commit 54554b8fb9)
# Which Problems Are Solved
Invalid id_tokens used as `id_token_hint` on the authorization endpoints
currently return an error, or the error gets displayed on the endpoint
itself.
# How the Problems Are Solved
Ignore invalid id_token_hint errors and just log them.
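A small sketch of the changed behavior, with hypothetical helper names:
```go
package authorize

import (
	"context"
	"log/slog"
)

type hintVerifier func(ctx context.Context, idTokenHint string) (subject string, err error)

// resolveHint returns the hinted subject; invalid hints are logged and
// ignored instead of aborting the authorization request.
func resolveHint(ctx context.Context, hint string, verify hintVerifier) string {
	if hint == "" {
		return ""
	}
	subject, err := verify(ctx, hint)
	if err != nil {
		slog.InfoContext(ctx, "ignoring invalid id_token_hint", "err", err)
		return ""
	}
	return subject
}
```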
# Additional Changes
None
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/10673
- backport to v4.x
(cherry picked from commit e158f9447e)
# Which Problems Are Solved
Flakiness in integration tests for organization v2beta service.
# How the Problems Are Solved
Fix eventual consistent handling of integration tests.
# Additional Changes
None
# Additional Context
None
---------
Co-authored-by: Marco A. <marco@zitadel.com>
(cherry picked from commit 75774eb64c)
# Which Problems Are Solved
It was noticed that on actions v2 when subscribing to events, the
webhook would always receive an empty `event_payload`:
```
{
  "aggregateID": "336494809936035843",
  "aggregateType": "user",
  "resourceOwner": "336392597046099971",
  "instanceID": "336392597046034435",
  "version": "v2",
  "sequence": 1,
  "event_type": "user.human.added",
  "created_at": "2025-09-05T08:55:36.156333Z",
  "userID": "336392597046755331",
  "event_payload": {}
}
```
The problem was due to using `json.Marshal` on the `Event` interface,
where the underlying `BaseEvent` prevents the data from being marshalled:
131f70db34/internal/eventstore/event_base.go (L38)
# How the Problems Are Solved
The `Event`'s `Unmarshal` function is used with a `json.RawMessage`.
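Schematically (the interface shown is simplified, not the exact eventstore types):
```go
package actions

import "encoding/json"

type Event interface {
	Unmarshal(ptr any) error
}

// eventPayload decodes the stored event data into a json.RawMessage.
// json.Marshal(e) would serialize the wrapper struct instead and produced
// the empty "event_payload": {}.
func eventPayload(e Event) (json.RawMessage, error) {
	var payload json.RawMessage
	if err := e.Unmarshal(&payload); err != nil {
		return nil, err
	}
	return payload, nil
}
```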
# Additional Changes
none
# Additional Context
- backport for v4.x
- relates to https://github.com/zitadel/zitadel/pull/10651
- relates to https://github.com/zitadel/zitadel/pull/10564
# Which Problems Are Solved
On the user detail page the MFA names and types were not displayed
correctly.
# How the Problems Are Solved
Switched to our internal TypeSafeCellDef and correctly parse the
@zitadel/proto types.
# Additional Context
- Closes #10493
Co-authored-by: Max Peintner <max@caos.ch>
(cherry picked from commit d9554d483c)
# Which Problems Are Solved
Starting with Zitadel v4, the new login UI is enabled by default (for
new instances) through the corresponding feature flag.
There's an additional flag to use the V2 API in console, which is mostly
required to use the login V2 without problems, but was not yet activated
by default (for new instances).
# How the Problems Are Solved
- Enabled the `ConsoleUseV2UserApi` feature flag on the
`defaultInstance`
# Additional Changes
- Cleaned up removed flags on the `defaultInstance`
# Additional Context
- noticed internally
- backport to v4.x
(cherry picked from commit 98bf8359c5)
# Which Problems Are Solved
We noticed a rapid growth of Redis memory usage. The instance-by-host
lookup did not set the Zitadel version, so instance entries got set again
on every request.
# How the Problems Are Solved
Set the version
# Additional Changes
- none
# Additional Context
- internal incident
# Which Problems Are Solved
An unnecessary default permission check when creating an authorization
fails even if the functionality was called internally.
# How the Problems Are Solved
Move the permission check to the proper implementation, so that the
necessary permission checks are provided by the responsible API.
# Additional Changes
None
# Additional Context
Closes #10624
Co-authored-by: Livio Spring <livio.a@gmail.com>
(cherry picked from commit bdefd9147f)
# Which Problems Are Solved
I noticed some outdated / misleading logs when starting Zitadel:
- The `init-projections` have not been in beta for a long time.
- The LRU auth request cache is disabled by default, which results in
the following message and has caused confusion among customers:
```level=info msg="auth request cache disabled" error="must provide a positive size"```
# How the Problems Are Solved
- Removed the beta info
- Disable cache initialization if possible
# Additional Changes
None
# Additional Context
- noticed internally
- backport to v4.x
(cherry picked from commit a1ad87387d)
# Which Problems Are Solved
Integration tests were failing with Minified React error 419 caused by
React 19 Suspense boundary issues during server-side rendering (SSR) to
client-side rendering (CSR) transitions.
# How the Problems Are Solved
The fix handles infrastructure-level SSR errors gracefully while
maintaining proper error detection for actual application issues.
- Added Cypress error handling for React 19 SSR hydration errors that
don't affect functionality
# Additional Changes
Enhanced Next.js configuration with React 19 compatibility
optimizations:
- `optimizePackageImports`: @radix-ui/react-tooltip and @heroicons/react
can have large bundle sizes if not optimized. Such packages are
suggested to be optimized in
https://nextjs.org/docs/app/api-reference/config/next-config-js/optimizePackageImports
- `poweredByHeader`: Not that important; the benefits are smaller HTTP
headers, tiny bandwidth savings, and a more professional appearance due
to cleaner response headers. Added as a "security best practice".
# Additional Context
- Replaces #10611
(cherry picked from commit adaa6a8de6)
# Which Problems Are Solved
The
[otelriver](https://github.com/riverqueue/rivercontrib/tree/master/otelriver)
package uses default otel histogram buckets that are designed for
millisecond measurements. OTEL docs also suggest standardizing on using
seconds as the measurement unit. However, the default buckets from
opentelemetry-go are more or less useless when used with seconds, as the
smallest measurement is 5 seconds and the largest is nearly 3 hours.
Example:
```
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="ok",tag="[]",le="0"} 0
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="ok",tag="[]",le="5"} 1144
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="ok",tag="[]",le="10"} 1144
<...more buckets here...>
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="ok",tag="[]",le="7500"} 1144
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="ok",tag="[]",le="10000"} 1144
river_work_duration_histogram_seconds_bucket{attempt="1",kind="notification_request",otel_scope_name="github.com/riverqueue/rivercontrib/otelriver",otel_scope_version="",priority="1",queue="notification",status="ok",tag="[]",le="+Inf"} 1144
```
# How the Problems Are Solved
Change the default unit to "ms" from "s" as supported by the middleware
API:
https://riverqueue.com/docs/open-telemetry#list-of-middleware-options
# Additional Changes
None
# Additional Context
None
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
(cherry picked from commit fcdc598320)
# Which Problems Are Solved
The TestServer_Limits_AuditLogRetention test is too reliant on time
constraints when checking that a limit is correctly applied. In case it
takes too long to do all the preparation, there won't be any events to
read and the test will fail.
# How the Problems Are Solved
Don't require any events to be returned.
# Additional Changes
None
# Additional Context
- Noted a lot of pipelines failing on this step.
- requires backport to at least v4.x
(cherry picked from commit 8574d6fbab)
# Which Problems Are Solved
Correctly display timestamps even if the seconds or nanos property is 0.
# How the Problems Are Solved
Instead of relying on JavaScript type coercion, explicitly check for
undefined.
# Additional Changes
Use TypeSafeCellDefModule in personal-access-tokens component.
# Additional Context
- Closes #10032
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Livio Spring <livio.a@gmail.com>
(cherry picked from commit 93fd27aebe)