# Which Problems Are Solved
The current "stable" release tag was no longer maintained.
# How the Problems Are Solved
Remove the tag from the docs.
# Additional Changes
Update the docs to reflect that tests run with Ubuntu 22.04 instead of
20.04.
# Additional Context
- relates to https://github.com/zitadel/zitadel/issues/8884
# Which Problems Are Solved
Scheduled handlers use `eventstore.InstanceIDs` to get all active
instances within a given timeframe. This function scrapes through all
events written within that time frame, which can cause heavy load on the
database.
# How the Problems Are Solved
A new query cache `activeInstances` is introduced which caches the ids
of all instances queried by id or host within the configured timeframe.
# Additional Changes
- Changed `default.yaml`
- Removed `HandleActiveInstances` from custom handler configs
- Added `MaxActiveInstances` to define the maximum number of cached
instance ids
- fixed start-from-init and start-from-setup to no longer start the auth
and admin projections twice
- fixed org cache invalidation to use correct index
# Additional Context
- part of #8999
# Which Problems Are Solved
In eventstore queries with aggregate ID exclusion filters, filters on
the events' creation date were not passed to the sub-query. This
resulted in a high number of rows returned from the sub-query and a high
overall query cost.
# How the Problems Are Solved
When CreatedAfter and CreatedBefore are used on the global search query,
copy those filters to the sub-query. We already did this for the
position column filter.
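A simplified sketch of the resulting query shape (illustrative, not the
generated SQL verbatim):
```sql
SELECT *
FROM eventstore.events2
WHERE instance_id = $1
  AND created_at > $2 AND created_at < $3
  AND aggregate_id NOT IN (
    SELECT aggregate_id
    FROM eventstore.events2
    WHERE instance_id = $1
      AND event_type = ANY($4)
      -- the CreatedAfter / CreatedBefore filters are now copied here,
      -- so the sub-query returns far fewer rows
      AND created_at > $2 AND created_at < $3
  )
ORDER BY "position" DESC;
```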
# Additional Changes
- none
# Additional Context
- Introduced in https://github.com/zitadel/zitadel/pull/8940
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
There are some problems related to the use of CockroachDB with the new
notification handling (#8931).
See #9002 for details.
# How the Problems Are Solved
- Brought back the previous notification handler as legacy mode.
- Added a configuration to choose between legacy mode and new parallel
workers.
- Enabled legacy mode by default to prevent issues.
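Assuming the switch sits in the `Notifications` section of
`defaults.yaml`, it could look roughly like this (the key name is an
assumption based on this description; check `defaults.yaml` for the
authoritative name):
```yaml
Notifications:
  # If true, the previous (legacy) notification handler is used
  # instead of the new parallel workers.
  LegacyEnabled: true
```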
# Additional Changes
None
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/9002
- relates to #8931
# Which Problems Are Solved
Some videos in the guides start playing automatically. This makes for a
poor user / developer experience.
# How the Problems Are Solved
Stop autoplay.
# Additional Changes
None
# Additional Context
Discussed internally
# Which Problems Are Solved
A slice was initialized with a fixed length instead of a capacity, which
leads to unexpected results when calling the append function.
# How the Problems Are Solved
Fixed the slice initialization: the slice is now initialized with zero
length and a capacity of the function's argument.
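A minimal Go illustration of the difference (not the code from this
change):
```go
package main

import "fmt"

func main() {
	n := 3
	// Bug: make with length n creates n zero-valued elements,
	// so append adds after them.
	wrong := make([]int, n)
	wrong = append(wrong, 1, 2, 3)
	fmt.Println(wrong) // [0 0 0 1 2 3]

	// Fix: zero length, capacity n; append fills from the start
	// without reallocating.
	right := make([]int, 0, n)
	right = append(right, 1, 2, 3)
	fmt.Println(right) // [1 2 3]
}
```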
# Additional Changes
Added a test case.
# Additional Context
none
Co-authored-by: Kolokhanin Roman <zuzmic@gmail.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
# Which Problems Are Solved
While running the latest RC / main, we noticed some errors including
context timeouts and rollback issues.
# How the Problems Are Solved
- The transaction context is passed and used for any event being written
and for handling savepoints to be able to handle context timeouts.
- The user projection is not triggered anymore. This will reduce
unnecessary load and potential timeouts if a lot of workers are running.
In case a user is not projected yet, the request event will log an
error and then be skipped / retried on the next run.
- Additionally, the context is checked if being closed after each event
process.
- `latestRetries` now correctly only returns the latest retry events to
be processed
- Default values for notifications have been changed to run workers less
often, with a longer retry delay, but a shorter transaction duration.
# Additional Changes
None
# Additional Context
relates to #8931
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
# Which Problems Are Solved
Instance domains are only computed on the read side. This can cause
missing domains if calls are executed shortly after an instance domain
(or instance) was added.
# How the Problems Are Solved
The instance domain is added to the fields table, which is filled on the
command side.
# Additional Changes
- added setup step to compute instance domains
- instance by host uses fields table instead of instance_domains table
# Additional Context
- part of https://github.com/zitadel/zitadel/issues/8999
# Which Problems Are Solved
- Ensuring that the community meeting schedule is updated with the
latest upcoming event.
# How the Problems Are Solved
- By adding a new entry to the meeting schedule
# Additional Changes
N/A
# Additional Context
N/A
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
If many events are written to the same aggregate id, it can happen that
zitadel [starts to retry the push
transaction](48ffc902cc/internal/eventstore/eventstore.go (L101))
because [the locking
behaviour](48ffc902cc/internal/eventstore/v3/sequence.go (L25))
during push computes the wrong sequence: newly committed events are not
visible to the transaction, yet they impact the current sequence.
In cases with high command traffic on a single aggregate id, this can
have a severe impact on the general performance of zitadel, because many
connections of the `eventstore pusher` database pool are blocked by each
other.
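A simplified sketch of the shape of such a locking lookup (illustrative,
not the exact query from `sequence.go`):
```sql
-- Inside the push transaction: lock the latest event of the aggregate
-- and derive the next sequence from it.
SELECT "sequence"
FROM eventstore.events2
WHERE instance_id = $1
  AND aggregate_type = $2
  AND aggregate_id = $3
ORDER BY "sequence" DESC
LIMIT 1
FOR UPDATE;
```
Because the push transaction only sees its own snapshot, rows committed
by concurrent pushes are invisible here; the derived next sequence then
collides on insert and the whole push has to be retried.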
# How the Problems Are Solved
To improve performance, this locking mechanism was removed and the
business logic of push was moved to SQL functions, which reduces network
traffic and can be analyzed by the database before the actual push. For
clients of the eventstore framework nothing changes.
# Additional Changes
- after a connection is established, the newly added database types are
prefetched
- `eventstore.BaseEvent` now returns the correct revision of the event
# Additional Context
- part of https://github.com/zitadel/zitadel/issues/8931
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Elio Bischof <elio@zitadel.com>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Miguel Cabrerizo <30386061+doncicuto@users.noreply.github.com>
Co-authored-by: Joakim Lodén <Loddan@users.noreply.github.com>
Co-authored-by: Yxnt <Yxnt@users.noreply.github.com>
Co-authored-by: Stefan Benz <stefan@caos.ch>
Co-authored-by: Harsha Reddy <harsha.reddy@klaviyo.com>
Co-authored-by: Zach H <zhirschtritt@gmail.com>
# Which Problems Are Solved
Wrongly created project grants with an unexpected resourceowner can't be
removed, because there is a check whether the project exists; with the
wrong resourceowner, the project is never found.
# How the Problems Are Solved
There is already a fix related to the resourceowner of the project
grant, which should prevent this situation from happening again. This PR
removes the check for the project's existence: if the project grant
exists and the project has not already been removed, this check is not
needed anymore.
# Additional Changes
None
# Additional Context
Closes #8900
# Which Problems Are Solved
There are multiple issues with the metadata and error handling of SAML:
- When providing SAML metadata for an IdP which cannot be processed, the
error will only be noticed once a user tries to use the IdP.
- Parsing of metadata with any other encoding than UTF-8 fails.
- Metadata containing an enclosing EntitiesDescriptor around
EntityDescriptor cannot be parsed.
- Metadata's `validUntil` value is always set to 48 hours, which causes
issues on external providers if processed from a manual down-/upload.
- If a SAML response cannot be parsed, only a generic "Authentication
failed" error is returned; the cause is hidden from the user and also
from actions.
# How the Problems Are Solved
- Return parsing errors after create / update and retrieval of an IdP in
the API.
- Prevent the creation and update of an IdP in case of a parsing
failure.
- Added decoders for encodings other than UTF-8 (including ASCII,
windows and ISO, [currently
supported](efd25daf28/encoding/ianaindex/ianaindex.go (L156))).
- Updated parsing to handle both `EntitiesDescriptor` and
`EntityDescriptor` as the root element.
- `validUntil` will automatically be set to the certificate's expiration
time.
- Unwrapped the hidden error to be returned. The Login UI will still
only provide a mostly generic error, but actions can now access the
underlying error.
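A minimal sketch of the charset-aware decoding mentioned above, using
the linked `ianaindex` package (zitadel's actual parsing code will
differ):
```go
package main

import (
	"fmt"
	"io"
	"strings"

	"golang.org/x/text/encoding/ianaindex"
	"golang.org/x/text/transform"
)

func main() {
	// Look up a decoder by the charset declared in the XML header,
	// e.g. <?xml version="1.0" encoding="windows-1252"?>.
	enc, err := ianaindex.IANA.Encoding("windows-1252")
	if err != nil || enc == nil {
		panic("unsupported charset")
	}
	// Transform the raw metadata bytes to UTF-8 before XML parsing.
	r := transform.NewReader(strings.NewReader("caf\xe9"), enc.NewDecoder())
	utf8Bytes, err := io.ReadAll(r)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(utf8Bytes)) // café
}
```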
# Additional Changes
None
# Additional Context
reported by a customer
# Which Problems Are Solved
Some user v2 API calls checked for permission only on the user itself.
# How the Problems Are Solved
Consistent permission checks on the user v2 API.
# Additional Changes
None
# Additional Context
Closes #7944
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
In the login settings we have the settings "Force MFA" and "Force MFA
for local authenticated users". This gives the impression that one can
enable both and then all users will be forced to use MFA.
But when both settings are enabled, only local users are forced to add
MFA.
# How the Problems Are Solved
The label was wrong; the second one should be "Force MFA for local
authenticated users only". I changed both labels to make them easier to
understand.
Hello everyone,
To support Korean-speaking users who may experience challenges in using
this excellent tool due to language barriers, I have added Korean
language support with the help of ChatGPT.
I hope that this contribution allows ZITADEL to be more useful and
accessible to Korean-speaking users.
Thank you.
Co-authored-by: Max Peintner <max@caos.ch>
# Which Problems Are Solved
- The quality of the Russian locale in the auth module is currently low,
likely due to automatic translation.
# How the Problems Are Solved
- Corrected grammatical errors and awkward phrasing from
auto-translation (e.g., "footer" → ~"нижний колонтитул"~ "примечание").
- Enhanced alignment with the English (reference) locale, including
improvements to casing and semantics.
- Ensured consistency in terminology (e.g., the "next"/"cancel" buttons
are now consistently translated as "продолжить"/"отмена").
- Improved clarity and readability (e.g., "подтверждение пароля" →
"повторите пароль").
# Additional Changes
N/A
# Additional Context
- Follow-up for PR #6864
Co-authored-by: Fabi <fabienne@zitadel.com>
# Which Problems Are Solved
Domains are still processed as verified in the domain-verified
write model even if the org is removed.
# How the Problems Are Solved
Handle the org removed event in the writemodel.
# Additional Changes
None
# Additional Context
Closes #8514
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
The action v2 messages didn't contain anything providing security for
the sent content.
# How the Problems Are Solved
Each Target now has a SigningKey, which can also be newly generated
through the API and is returned at creation and through the Get
endpoints. There is now an HTTP header "Zitadel-Signature", which is
generated with the SigningKey and payload, and also contains a timestamp
to check, within a tolerance, whether the message took too long to send.
# Additional Changes
The functionality to create and check the signature is provided in the
pkg/actions package, and can be reused in the SDK.
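For illustration, a timestamped HMAC-SHA256 signature of this kind could
be computed and verified as follows; the exact header format and the
function names in `pkg/actions` are assumptions here:
```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// computeSignature signs "<unix-ts>.<payload>" with HMAC-SHA256 and returns
// a header value of the form "t=<unix-ts>,v1=<hex>" (format is an assumption).
func computeSignature(signingKey string, payload []byte, now time.Time) string {
	ts := now.Unix()
	mac := hmac.New(sha256.New, []byte(signingKey))
	fmt.Fprintf(mac, "%d.%s", ts, payload)
	return fmt.Sprintf("t=%d,v1=%s", ts, hex.EncodeToString(mac.Sum(nil)))
}

// verifySignature recomputes the MAC and rejects messages outside the
// allowed timestamp tolerance.
func verifySignature(signingKey string, payload []byte, ts int64, sigHex string, tolerance time.Duration) bool {
	if age := time.Since(time.Unix(ts, 0)); age < 0 || age > tolerance {
		return false // message took too long (or timestamp is in the future)
	}
	mac := hmac.New(sha256.New, []byte(signingKey))
	fmt.Fprintf(mac, "%d.%s", ts, payload)
	want, err := hex.DecodeString(sigHex)
	if err != nil {
		return false
	}
	return hmac.Equal(mac.Sum(nil), want)
}

func main() {
	sig := computeSignature("my-signing-key", []byte(`{"event":"user.created"}`), time.Now())
	fmt.Println(sig)
}
```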
# Additional Context
Closes #7924
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
The current handling of notifications follows the same pattern as all
other projections:
Created events are handled sequentially (based on "position") by a
handler. During the process, a lot of information is aggregated (user,
texts, templates, ...).
This leads to back pressure on the projection, since the handling of
events might take longer than the time before a new event (to be
handled) is created.
# How the Problems Are Solved
- The current user notification handler creates separate notification
events based on the user / session events.
- These events contain all the present and required information
including the userID.
- These notification events get processed by notification workers, which
gather the necessary information (recipient address, texts, templates)
to send out these notifications.
- If a notification fails, a retry event is created based on the current
notification request, including the current state of the user (this
prevents race conditions where a user is changed in the meantime and
the notification would already get the new state).
- The retry event will be handled after a backoff delay. This delay
increases with every attempt.
- If the configured number of attempts is reached or the message expired
(based on config), a cancel event is created, letting the workers know
that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the
notification aggregate and the existing "sent" events for the user /
session object are stored.
- The following is added to the defaults.yaml to allow configuration of
the notification workers:
```yaml
Notifications:
# The amount of workers processing the notification request events.
# If set to 0, no notification request events will be handled. This can be useful when running in
# multi binary / pod setup and allowing only certain executables to process the events.
Workers: 1 # ZITADEL_NOTIFIACATIONS_WORKERS
# The amount of events a single worker will process in a run.
BulkLimit: 10 # ZITADEL_NOTIFIACATIONS_BULKLIMIT
# Time interval between scheduled notifications for request events
RequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_REQUEUEEVERY
# The amount of workers processing the notification retry events.
# If set to 0, no notification retry events will be handled. This can be useful when running in
# multi binary / pod setup and allowing only certain executables to process the events.
RetryWorkers: 1 # ZITADEL_NOTIFIACATIONS_RETRYWORKERS
# Time interval between scheduled notifications for retry events
RetryRequeueEvery: 2s # ZITADEL_NOTIFIACATIONS_RETRYREQUEUEEVERY
# Only instances are projected, for which at least a projection-relevant event exists within the timeframe
# from HandleActiveInstances duration in the past until the projection's current time
# If set to 0 (default), every instance is always considered active
HandleActiveInstances: 0s # ZITADEL_NOTIFIACATIONS_HANDLEACTIVEINSTANCES
# The maximum duration a transaction remains open
# before it stops folding additional events
# and updates the table.
TransactionDuration: 1m # ZITADEL_NOTIFIACATIONS_TRANSACTIONDURATION
# Automatically cancel the notification after the amount of failed attempts
MaxAttempts: 3 # ZITADEL_NOTIFIACATIONS_MAXATTEMPTS
# Automatically cancel the notification if it cannot be handled within a specific time
MaxTtl: 5m # ZITADEL_NOTIFIACATIONS_MAXTTL
# Failed attempts are retried after a configured delay (with exponential backoff).
# Set a minimum and maximum delay and a factor for the backoff
MinRetryDelay: 1s # ZITADEL_NOTIFIACATIONS_MINRETRYDELAY
MaxRetryDelay: 20s # ZITADEL_NOTIFIACATIONS_MAXRETRYDELAY
# Any factor below 1 will be set to 1
RetryDelayFactor: 1.5 # ZITADEL_NOTIFIACATIONS_RETRYDELAYFACTOR
```
# Additional Changes
None
# Additional Context
- closes #8931
# Which Problems Are Solved
Integration tests are flaky due to eventual consistency.
# How the Problems Are Solved
Remove t.Parallel so that fewer concurrent requests hit multiple
instances. This allows the projections to catch up more easily.
# Additional Changes
- none
# Additional Context
- none
# Which Problems Are Solved
When an org is removed, the corresponding fields are not deleted. This
creates issues, such as recreating a new org with the same verified
domain.
# How the Problems Are Solved
Remove the search fields by the org aggregate, instead of just setting
the removed state.
# Additional Changes
- Cleanup migration script that removes currently stale fields.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/8943
- Related to https://github.com/zitadel/zitadel/pull/8790
---------
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
This improves the `ADOPTERS.md` file to make its purpose easier to
understand.
# How the Problems Are Solved
Adding additional instructions to the `ADOPTERS.md` file
# Which Problems Are Solved
A number of small problems are fixed relating to the project roles
listed in various places in the UI:
- Fixes issue #8460
- Fixes an issue where the "Master checkbox" that's supposed to check
and uncheck all list items breaks when there's multiple pages of
results. Demonstration images are attached at the end of the PR.
- Fixes an issue where the "Edit Role" dialog opened by clicking on a
role in the list will not save any changes if the role's group is empty
even though empty groups are allowed during creation.
- Fixes issues where the list does not properly update after the user
modifies or deletes some of its entries.
- Fixes an issue for all paginated lists where the page number
information (like "0-25" specifying that items 0 through 25 are shown on
screen) was inaccurate, as described in #8460.
# How the Problems Are Solved
- Fixes buggy handling of pre-selected roles while editing a grant so
that all selected roles are saved instead of only the ones on the
current page.
- Triggers the entire page to be reloaded when a user modifies or
deletes a role to easily ensure the information on the screen is
accurate.
- Revises checkbox logic so that the "Master checkbox" will apply only
to rows on the current page. I think this is the correct behavior but
tell me if it should be changed.
- Other fixes to faulty logic.
# Additional Changes
- I made clicking on a group name toggle all the rows in that group on
the screen, instead of just turning them on. Tell me if this should be
changed back to what it was before.
# Additional Context
- Closes #8460
## An example of the broken checkboxes:
![2024-11-20_03-11-1732091377](https://github.com/user-attachments/assets/9f01f529-aac9-4669-92df-2abbe67e4983)
![2024-11-20_03-11-1732091365](https://github.com/user-attachments/assets/e7b8bed6-5cef-4c9f-9ecf-45ed41640dc6)
![2024-11-20_03-11-1732091357](https://github.com/user-attachments/assets/d404bc78-68fd-472d-b450-6578658f48ab)
![2024-11-20_03-11-1732091348](https://github.com/user-attachments/assets/a5976816-802b-4eab-bc61-58babc0b68f7)
---------
Co-authored-by: Max Peintner <max@caos.ch>
# Which Problems Are Solved
For a truly event-based notification handler, we need to be able to
filter out events of aggregates which are already handled. For example,
when an event like `notify.success` or `notify.failed` was created on an
aggregate, we no longer require events from that aggregate ID.
# How the Problems Are Solved
Extend the query builder to use a `NOT IN` clause which excludes
aggregate IDs when they have certain events for a certain aggregate
type. For optimization and proper index usage, certain filters are
inherited from the parent query, such as:
- Instance ID
- Instance IDs
- Position offset
This is a prettified query as used by the unit tests:
```sql
SELECT created_at, event_type, "sequence", "position", payload, creator, "owner", instance_id, aggregate_type, aggregate_id, revision
FROM eventstore.events2
WHERE instance_id = $1
AND aggregate_type = $2
AND event_type = $3
AND "position" > $4
AND aggregate_id NOT IN (
SELECT aggregate_id
FROM eventstore.events2
WHERE aggregate_type = $5
AND event_type = ANY($6)
AND instance_id = $7
AND "position" > $8
)
ORDER BY "position" DESC, in_tx_order DESC
LIMIT $9
```
I ran this query against the `oidc_session` aggregate, looking for added
events, excluding aggregates where a token was revoked, against a recent
position. It fully used index scans:
<details>
```json
[
{
"Plan": {
"Node Type": "Index Scan",
"Parallel Aware": false,
"Async Capable": false,
"Scan Direction": "Forward",
"Index Name": "es_projection",
"Relation Name": "events2",
"Alias": "events2",
"Actual Rows": 2,
"Actual Loops": 1,
"Index Cond": "((instance_id = '286399006995644420'::text) AND (aggregate_type = 'oidc_session'::text) AND (event_type = 'oidc_session.added'::text) AND (\"position\" > 1731582100.784168))",
"Rows Removed by Index Recheck": 0,
"Filter": "(NOT (hashed SubPlan 1))",
"Rows Removed by Filter": 1,
"Plans": [
{
"Node Type": "Index Scan",
"Parent Relationship": "SubPlan",
"Subplan Name": "SubPlan 1",
"Parallel Aware": false,
"Async Capable": false,
"Scan Direction": "Forward",
"Index Name": "es_projection",
"Relation Name": "events2",
"Alias": "events2_1",
"Actual Rows": 1,
"Actual Loops": 1,
"Index Cond": "((instance_id = '286399006995644420'::text) AND (aggregate_type = 'oidc_session'::text) AND (event_type = 'oidc_session.access_token.revoked'::text) AND (\"position\" > 1731582100.784168))",
"Rows Removed by Index Recheck": 0
}
]
},
"Triggers": [
]
}
]
```
</details>
# Additional Changes
- None
# Additional Context
- Related to https://github.com/zitadel/zitadel/issues/8931
---------
Co-authored-by: adlerhurst <silvan.reusser@gmail.com>
Bumps [cross-spawn](https://github.com/moxystudio/node-cross-spawn) from
7.0.3 to 7.0.6.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/moxystudio/node-cross-spawn/blob/master/CHANGELOG.md">cross-spawn's
changelog</a>.</em></p>
<blockquote>
<h3><a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.5...v7.0.6">7.0.6</a>
(2024-11-18)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>update cross-spawn version to 7.0.5 in package-lock.json (<a
href="f700743918">f700743</a>)</li>
</ul>
<h3><a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.4...v7.0.5">7.0.5</a>
(2024-11-07)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>fix escaping bug introduced by backtracking (<a
href="640d391fde">640d391</a>)</li>
</ul>
<h3><a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.4">7.0.4</a>
(2024-11-07)</h3>
<h3>Bug Fixes</h3>
<ul>
<li>disable regexp backtracking (<a
href="https://redirect.github.com/moxystudio/node-cross-spawn/issues/160">#160</a>)
(<a
href="5ff3a07d9a">5ff3a07</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="77cd97f3ca"><code>77cd97f</code></a>
chore(release): 7.0.6</li>
<li><a
href="6717de49ff"><code>6717de4</code></a>
chore: upgrade standard-version</li>
<li><a
href="f700743918"><code>f700743</code></a>
fix: update cross-spawn version to 7.0.5 in package-lock.json</li>
<li><a
href="9a7e3b2165"><code>9a7e3b2</code></a>
chore: fix build status badge</li>
<li><a
href="085268352d"><code>0852683</code></a>
chore(release): 7.0.5</li>
<li><a
href="640d391fde"><code>640d391</code></a>
fix: fix escaping bug introduced by backtracking</li>
<li><a
href="bff0c87c8b"><code>bff0c87</code></a>
chore: remove codecov</li>
<li><a
href="a7c6abc6fe"><code>a7c6abc</code></a>
chore: replace travis with github workflows</li>
<li><a
href="9b9246e096"><code>9b9246e</code></a>
chore(release): 7.0.4</li>
<li><a
href="5ff3a07d9a"><code>5ff3a07</code></a>
fix: disable regexp backtracking (<a
href="https://redirect.github.com/moxystudio/node-cross-spawn/issues/160">#160</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/moxystudio/node-cross-spawn/compare/v7.0.3...v7.0.6">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>
# Which Problems Are Solved
`eventstore.PushWithClient` required the wrong type for the client
parameter.
# How the Problems Are Solved
Changed the type of the client from `database.Client` to
`database.QueryExecutor`.
# Which Problems Are Solved
Push is not capable of using external transactions.
# How the Problems Are Solved
A new function `PushWithClient` is added to the eventstore framework,
which allows passing a client, which can either be a `*sql.Client` or
`*sql.Tx`, and is used during push.
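A usage sketch under assumed names and signatures (the real API lives in
the eventstore and database packages):
```go
package example

import (
	"context"
	"database/sql"

	// import path assumed for illustration
	"github.com/zitadel/zitadel/internal/eventstore"
)

// createWithAudit is a hypothetical sketch: the PushWithClient signature
// is an assumption. It only illustrates that events and other writes on
// the same transaction commit or roll back together.
func createWithAudit(ctx context.Context, db *sql.DB, es *eventstore.Eventstore, cmds ...eventstore.Command) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op after a successful commit

	// ... other statements on tx that must be atomic with the events ...

	if _, err := es.PushWithClient(ctx, tx, cmds...); err != nil {
		return err // rolls back the events and the other writes together
	}
	return tx.Commit()
}
```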
# Additional Changes
Added interfaces to database package.
# Additional Context
- part of https://github.com/zitadel/zitadel/issues/8931
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
# Which Problems Are Solved
We need a reliable way to lock events that are being processed as part
of a job queue. For example in the notification handlers.
# How the Problems Are Solved
Allow setting `FOR UPDATE [ NOWAIT | SKIP LOCKED ]` on the eventstore
query builder using an open transaction.
- NOWAIT returns an error if the lock cannot be obtained
- SKIP LOCKED only returns rows which are not locked.
- The default is to wait for the lock to be released.
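As an illustration, the job-queue pattern this enables could look like
the following sketch (simplified; column selection is illustrative):
```sql
BEGIN;
-- Claim a batch of events; rows locked by other workers are skipped,
-- so concurrent workers never process the same event twice.
SELECT aggregate_id, "sequence", event_type, payload
FROM eventstore.events2
WHERE instance_id = $1
  AND aggregate_type = $2
ORDER BY "position"
LIMIT 10
FOR UPDATE SKIP LOCKED; -- alternatives: FOR UPDATE NOWAIT, or plain FOR UPDATE (wait)

-- ... process the claimed events within this transaction ...
COMMIT;
```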
# Additional Changes
- none
# Additional Context
- [Locking
docs](https://www.postgresql.org/docs/17/sql-select.html#SQL-FOR-UPDATE-SHARE)
- Related to https://github.com/zitadel/zitadel/issues/8931
# Which Problems Are Solved
Some links are pointing to the deprecated API v1
# How the Problems Are Solved
Change the links to the API v2
# Additional Changes
For the moment, I don't have the time to add more links to the API v1
pages.
Maybe later, when I have time, I will add more links.
---------
Co-authored-by: Fabi <fabienne@zitadel.com>
# Which Problems Are Solved
Organizations are often searched for by ID or primary domain. This
results in many redundant queries and impacts performance.
# How the Problems Are Solved
Cache Organization objects by ID and primary domain.
# Additional Changes
- Adjust integration test config to use all types of cache.
- Adjust integration test lifetimes so the pruner has something to do
while the tests run.
# Additional Context
- Closes #8865
- After #8902
# Which Problems Are Solved
Updating the meeting schedule with the latest community event.
# How the Problems Are Solved
A new event invite with associated details is added to direct community
members on GitHub to register for our Discord event.
# Additional Changes
N/A
# Additional Context
N/A
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
Explain the usage of the new cache mechanisms.
# How the Problems Are Solved
Provide a dedicated page on caches with reference to `defaults.yaml`.
# Additional Changes
- Fix a broken link tag in token exchange docs.
# Additional Context
- Closes #8855
# Which Problems Are Solved
Load-tests require a single endpoint to be used for each test type.
# How the Problems Are Solved
Remove userinfo call from machine tests.
# Additional Changes
- Add load-test/.env to gitignore.
# Additional Context
- Related to #4424
# Which Problems Are Solved
Noisy neighbours can introduce projection latencies because the
projections only query events older than the start timestamp of the
oldest push transaction.
# How the Problems Are Solved
During push we set the application name to
`zitadel_es_pusher_<instance_id>` instead of `zitadel_es_pusher` which
is used to query events by projections.
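A sketch of the idea (not zitadel's exact query): the projection's upper
bound is derived from the oldest open push transaction in
`pg_stat_activity`, which is now scoped to the instance:
```sql
SELECT *
FROM eventstore.events2
WHERE instance_id = $1
  AND created_at < COALESCE((
    SELECT MIN(xact_start)
    FROM pg_stat_activity
    -- previously: application_name = 'zitadel_es_pusher', so a busy
    -- neighbour instance held back every projection
    WHERE application_name = 'zitadel_es_pusher_' || $1
  ), now());
```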
# Which Problems Are Solved
We want to give adopters a platform to show that they are using ZITADEL
# How the Problems Are Solved
Adding an `ADOPTERS.md` file
# Additional Changes
none
# Additional Context
none
# Which Problems Are Solved
- ImportHuman was not checking for a `UserStateDeleted` state on import,
resulting in "already existing" errors when attempting to delete and
re-import a user with the same id
# How the Problems Are Solved
Use the `Exists` helper method to check for both `UserStateUnspecified`
and `UserStateDeleted` states on import
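A minimal sketch of the check's intent (the real `Exists` helper lives
on zitadel's user state type; names here are illustrative):
```go
package example

// UserState mirrors the relevant states for illustration.
type UserState int

const (
	UserStateUnspecified UserState = iota
	UserStateActive
	UserStateDeleted
)

// Exists reports whether a user is present: a user that was never
// created (unspecified) or was deleted can safely be (re-)imported.
func (s UserState) Exists() bool {
	return s != UserStateUnspecified && s != UserStateDeleted
}
```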
# Additional Changes
N/A
# Additional Context
N/A
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
Some SAML IdPs, including Google, only allow configuring a single
AssertionConsumerService URL.
Since the current metadata provides multiple URLs and the hosted login
UI is published neither as the first one nor with `isDefault=true`,
those IdPs pick another URL and then return an error on sign-in.
# How the Problems Are Solved
Allow reordering the ACS URLs using a query parameter
(`internalUI=true`) when retrieving the metadata endpoint.
This will list the `ui/login/login/externalidp/saml/acs` URL first and
also set `isDefault=true`.
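Illustratively, the reordered metadata would look roughly like this
(hosts and paths shortened, values illustrative):
```xml
<md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <!-- hosted login UI listed first and marked as default -->
  <md:AssertionConsumerService index="0" isDefault="true"
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
      Location="https://example.com/ui/login/login/externalidp/saml/acs"/>
  <md:AssertionConsumerService index="1"
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
      Location="https://example.com/other/acs"/>
</md:SPSSODescriptor>
```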
# Additional Changes
None
# Additional Context
Reported by a customer
# Which Problems Are Solved
The order of actions on a trigger was not respected during execution and
not correctly returned when retrieving the flow, for example in Console.
An attempted correction of the order (e.g. in the UI) would then return
a "no changes" error, since the stored order was already as desired.
# How the Problems Are Solved
- Correctly order the actions of a trigger based on their configuration
(`trigger_sequence`).
# Additional Changes
- replaced a `reflect.DeepEqual` with `slices.Equal` for checking the
action list
# Additional Context
- reported by a customer
- requires backports
# Which Problems Are Solved
By having default entries in the `Username` and `ClientName` fields, it
was not possible to unset these parameters. Unsetting them is required
for GCP connections.
# How the Problems Are Solved
Set the fields to empty strings.
# Additional Changes
- none
# Additional Context
- none
# Which Problems Are Solved
If a redis cache has connection issues or any other type of permanent
error, it tanks the responsiveness of ZITADEL.
We currently do not support things like Redis cluster or sentinel. So
adding a simple redis cache improves performance but introduces a single
point of failure.
# How the Problems Are Solved
Implement a [circuit
breaker](https://learn.microsoft.com/en-us/previous-versions/msp-n-p/dn589784(v=pandp.10)?redirectedfrom=MSDN)
as
[`redis.Limiter`](https://pkg.go.dev/github.com/redis/go-redis/v9#Limiter)
by wrapping sony's [gobreaker](https://github.com/sony/gobreaker)
package. This package was picked as it seems well maintained, and we
already use their `sonyflake` package.
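A compact sketch of how such an adapter can look (simplified; the
adapter name is mine, and a production version must handle concurrent
commands and treat `redis.Nil` as success): `gobreaker.TwoStepCircuitBreaker`
hands out a `done` callback from `Allow`, which maps onto
`redis.Limiter`'s `Allow` / `ReportResult` pair.
```go
package example

import (
	"github.com/sony/gobreaker"
)

// cbLimiter adapts a TwoStepCircuitBreaker to go-redis' Limiter interface:
//
//	Allow() error
//	ReportResult(result error)
type cbLimiter struct {
	cb   *gobreaker.TwoStepCircuitBreaker
	done chan func(success bool)
}

func newCBLimiter(name string) *cbLimiter {
	return &cbLimiter{
		cb:   gobreaker.NewTwoStepCircuitBreaker(gobreaker.Settings{Name: name}),
		done: make(chan func(bool), 128), // buffered queue of pending callbacks
	}
}

// Allow fails fast while the breaker is open, so a broken Redis
// no longer stalls every request.
func (l *cbLimiter) Allow() error {
	done, err := l.cb.Allow()
	if err != nil {
		return err
	}
	l.done <- done
	return nil
}

// ReportResult feeds the command outcome back into the breaker.
// A real implementation should count redis.Nil as a success.
func (l *cbLimiter) ReportResult(result error) {
	(<-l.done)(result == nil)
}
```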
# Additional Changes
- The unit tests constructed an unused `redis.Client` and didn't clean
up the connector. This is now fixed.
# Additional Context
Closes #8864
# Which Problems Are Solved
The setup filter for previous steps kept getting slower. This is due to
the filter not providing any instanceID, thus resulting in a full table
scan.
# How the Problems Are Solved
- Added an empty instanceID filter (since it's on system level)
# Additional Changes
None
# Additional Context
Noticed internally and during migrations on some regions