# Which Problems Are Solved
Add a cache implementation using Redis in single (standalone) mode. This does
not add support for Redis Cluster or Sentinel.
# How the Problems Are Solved
Added the `internal/cache/redis` package. All operations happen
atomically, including the setting of secondary indexes, using Lua scripts
where needed.
The [`miniredis`](https://github.com/alicebob/miniredis) package is used
to run unit tests.
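For illustration, a minimal sketch of the pattern (not the actual package code), assuming the `github.com/redis/go-redis/v9` client and hypothetical key names: the Lua script writes the object and its secondary index in one atomic step, and `miniredis` stands in for a real server in the test.

```go
package rediscache_test

import (
	"context"
	"testing"

	"github.com/alicebob/miniredis/v2"
	"github.com/redis/go-redis/v9"
)

// setWithIndex stores an object and its secondary index in a single Lua
// script, so Redis executes both writes atomically.
var setWithIndex = redis.NewScript(`
redis.call("SET", KEYS[1], ARGV[1])
redis.call("SET", KEYS[2], KEYS[1])
return 1
`)

func set(ctx context.Context, rdb *redis.Client, objectKey, indexKey, payload string) error {
	return setWithIndex.Run(ctx, rdb, []string{objectKey, indexKey}, payload).Err()
}

func TestSet(t *testing.T) {
	mr := miniredis.RunT(t) // in-memory Redis, torn down with the test
	rdb := redis.NewClient(&redis.Options{Addr: mr.Addr()})

	if err := set(context.Background(), rdb, "object:1", "index:email:gigi@example.com", `{"id":"1"}`); err != nil {
		t.Fatal(err)
	}
}
```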
# Additional Changes
- Move connector code to `internal/cache/connector/...` and remove
duplicate code from `query` and `command` packages.
- Fix a missed invalidation on the restrictions projection
# Additional Context
Closes #8130
# Which Problems Are Solved
Currently, ZITADEL supports RP-initiated logout for clients. Back-channel
logout ensures that user sessions are terminated across all connected
applications even if the user closes their browser or loses
connectivity, providing a more secure alternative for certain use cases.
# How the Problems Are Solved
If the feature is activated and the client used for the authentication
has a `back_channel_logout_uri` configured, a
`session_logout.back_channel` will be registered. Once a user terminates
their session, a (notification) handler sends a SET (Security Event
Token, as a form POST) to the registered URI, containing a logout_token
with the user's ID and session ID (a sketch of the token follows the list below).
- A new feature "back_channel_logout" is added on system and instance
level
- A `back_channel_logout_uri` can be managed on OIDC applications
- Added a `session_logout` aggregate to register and inform about sent
`back_channel` notifications
- Added a `SecurityEventToken` channel and a `Form` message type in the
notification handlers
- Added `TriggeredAtOrigin` fields to `HumanSignedOut` and
`TerminateSession` events for notification handling
- Exported various functions and types in the `oidc` package to be able
to reuse them for token signing in the back_channel notifier.
- To prevent already existing session termination events from being
handled, a setup step is added that sets the `current_states` for the
`projections.notifications_back_channel_logout` projection to the current position
- [x] requires https://github.com/zitadel/oidc/pull/671
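For reference, the logout token in the form POST follows the OIDC Back-Channel Logout specification. A minimal sketch of its shape (ZITADEL signs it through its own `oidc` package; `github.com/golang-jwt/jwt/v5` is used here purely for illustration):

```go
package backchannel

import (
	"crypto/rsa"
	"time"

	"github.com/golang-jwt/jwt/v5"
)

// logoutToken builds a back-channel logout token carrying the user's ID (sub)
// and session ID (sid). The jti would be a random ID in practice.
func logoutToken(key *rsa.PrivateKey, issuer, clientID, userID, sessionID string) (string, error) {
	claims := jwt.MapClaims{
		"iss": issuer,
		"aud": clientID,
		"iat": time.Now().Unix(),
		"jti": "random-token-id",
		"sub": userID,
		"sid": sessionID,
		// The events claim marks this JWT as a logout token (SET).
		"events": map[string]any{
			"http://schemas.openid.net/event/backchannel-logout": struct{}{},
		},
	}
	return jwt.NewWithClaims(jwt.SigningMethodRS256, claims).SignedString(key)
}
```

The notification handler then POSTs it as `logout_token=<JWT>` (content type `application/x-www-form-urlencoded`) to the registered `back_channel_logout_uri`.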
# Additional Changes
- Updated all OTEL dependencies to v1.29.0, since OIDC already updated
some of them to that version.
- The Single Session Termination feature is now correctly checked (fixed feature
mapping)
# Additional Context
- closes https://github.com/zitadel/zitadel/issues/8467
- TODO:
- Documentation
- UI to be done: https://github.com/zitadel/zitadel/issues/8469
---------
Co-authored-by: Hidde Wieringa <hidde@hiddewieringa.nl>
# Which Problems Are Solved
A cache implementation using a pgx connection pool.
# How the Problems Are Solved
Defines a new schema `cache` in the ZITADEL database.
A table for string keys and a table for objects are defined.
For PostgreSQL, the tables are unlogged and partitioned by cache name for
performance.
CockroachDB does not have unlogged tables, and partitioning is an
enterprise feature that uses alternative syntax combined with sharding,
so regular tables are used there.
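A minimal sketch of the resulting access pattern over the connection pool (table and column names are illustrative, not the actual schema):

```go
package pgcache

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

// Set upserts an object into the cache table, scoped by cache name.
func Set(ctx context.Context, pool *pgxpool.Pool, cacheName, key string, payload []byte) error {
	_, err := pool.Exec(ctx,
		`INSERT INTO cache.objects (cache_name, key, payload)
		 VALUES ($1, $2, $3)
		 ON CONFLICT (cache_name, key) DO UPDATE SET payload = EXCLUDED.payload`,
		cacheName, key, payload)
	return err
}

// Get returns the cached payload, or pgx.ErrNoRows on a cache miss.
func Get(ctx context.Context, pool *pgxpool.Pool, cacheName, key string) ([]byte, error) {
	var payload []byte
	err := pool.QueryRow(ctx,
		`SELECT payload FROM cache.objects WHERE cache_name = $1 AND key = $2`,
		cacheName, key).Scan(&payload)
	return payload, err
}
```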
# Additional Changes
- `postgres.Config` can return a pgx pool. See the following discussion
# Additional Context
- Part of https://github.com/zitadel/zitadel/issues/8648
- Closes https://github.com/zitadel/zitadel/issues/8647
---------
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
When using slog (e.g. in OIDC), the log field names cannot be
overwritten.
This is necessary, for example, to rename the log level field to severity.
# How the Problems Are Solved
- Update logging library
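The updated library makes the field names configurable; underneath, this builds on the standard `log/slog` mechanism, roughly as in this sketch that renames the level key to `severity`:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	handler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
			if a.Key == slog.LevelKey {
				a.Key = "severity" // rename the default "level" field
			}
			return a
		},
	})
	slog.New(handler).Info("request handled")
	// {"time":"...","severity":"INFO","msg":"request handled"}
}
```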
# Additional Changes
None
# Additional Context
None
# Which Problems Are Solved
Twilio offers a robust, multi-channel verification service that notably
supports the multi-region SMS sender numbers required for our use case.
Currently, ZITADEL does much of the work of Twilio Verify itself (e.g.
localization, code generation, messaging) but doesn't support the pool
of sender numbers that Twilio Verify does.
# How the Problems Are Solved
To support this API, we need to be able to store the Twilio Service ID
and send it in a verification request where appropriate: the phone number
verification and SMS 2FA code paths.
This PR does the following:
- Adds the ability to use Twilio Verify instead of standard messaging through
Twilio
- Adds support for international numbers and more reliable verification
messages sent from multiple numbers
- Adds a new Twilio configuration option to support Twilio Verify in the
admin console
- Sends verification SMS messages through Twilio Verify
- Implements Twilio Verification Checks for codes generated through the
same service (see the sketch below)
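A rough sketch of the two Verify calls involved, using `github.com/twilio/twilio-go`; the actual integration in ZITADEL differs, and the Verify service SID comes from the new configuration option:

```go
package twilioverify

import (
	"fmt"

	"github.com/twilio/twilio-go"
	verify "github.com/twilio/twilio-go/rest/verify/v2"
)

// startVerification lets Twilio Verify pick the sender number, generate the
// code and localize the message.
func startVerification(client *twilio.RestClient, serviceSID, phone string) error {
	params := &verify.CreateVerificationParams{}
	params.SetTo(phone)
	params.SetChannel("sms")
	_, err := client.VerifyV2.CreateVerification(serviceSID, params)
	return err
}

// checkVerification validates the code the user entered against the same service.
func checkVerification(client *twilio.RestClient, serviceSID, phone, code string) error {
	params := &verify.CreateVerificationCheckParams{}
	params.SetTo(phone)
	params.SetCode(code)
	resp, err := client.VerifyV2.CreateVerificationCheck(serviceSID, params)
	if err != nil {
		return err
	}
	if resp.Status == nil || *resp.Status != "approved" {
		return fmt.Errorf("verification not approved")
	}
	return nil
}
```

With Verify, code generation, localization and delivery are handled on Twilio's side, which is what enables the pool of sender numbers.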
# Additional Changes
# Additional Context
- base was implemented by @zhirschtritt in
https://github.com/zitadel/zitadel/pull/8268 ❤️
- closes https://github.com/zitadel/zitadel/issues/8581
---------
Co-authored-by: Zachary Hirschtritt <zachary.hirschtritt@klaviyo.com>
Co-authored-by: Joey Biscoglia <joey.biscoglia@klaviyo.com>
# Which Problems Are Solved
Use a single server instance for API integration tests. This optimizes
the time taken for the integration test pipeline,
because it allows running the tests of multiple packages in parallel. It
also saves time by not starting and stopping a ZITADEL server for every
package.
# How the Problems Are Solved
- Build a binary with `go build -race -cover ....`
- Integration tests only construct clients. The server remains running
in the background.
- The integration package and the tested packages now fully utilize the API.
No more direct database access through the `query` and `command` packages.
- Use Makefile recipes to set up, start and stop the server in the
background.
- The binary has the race detector enabled
- Init and setup jobs are configured to halt immediately on a race
condition
- Because the server runs in the background, races are only logged. When
the server is stopped and race logs exist, the Makefile recipe will
throw an error and print the logs.
- Makefile recipes include logic to print logs and convert coverage
reports after the server is stopped.
- Some tests, like quota and milestones, need a downstream HTTP server
to receive requests. A new `integration/sink` package creates such an HTTP server
and uses websockets to forward the HTTP requests back to the test packages.
The package API uses Go channels for abstraction and easy usage.
# Additional Changes
- Integration test files already used the `//go:build integration`
directive. In order to properly split integration tests from unit tests,
integration test files now need to be in an `integration_test` subdirectory
of their package.
- `UseIsolatedInstance` used to overwrite the `Tester.Client` for each
instance. Now an `Instance` object is returned with a gRPC client that is
connected to the isolated instance's hostname.
- The `Tester` type is now `Instance`. The object is created for the
first instance and used by default in any test. Isolated instances are also
`Instance` objects and therefore benefit from the same methods and
values. The first instance, and any other, is capable of creating an
isolated instance over the system API.
- All test packages run against an isolated instance by calling
`NewInstance()` (see the sketch below)
- Individual tests that use an isolated instance use `t.Parallel()`
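A typical integration test file then looks roughly like the following sketch; the exact `integration` package signatures (`NewInstance` and how the clients are exposed) are paraphrased from the description above:

```go
//go:build integration

package myfeature_test

import (
	"context"
	"testing"

	"github.com/zitadel/zitadel/internal/integration"
)

func TestMyFeature(t *testing.T) {
	t.Parallel() // safe: each test package gets its own isolated instance

	// NewInstance creates an isolated instance over the system API of the
	// server that is already running in the background; nothing is started
	// or stopped here.
	instance := integration.NewInstance(context.Background())
	_ = instance // use the instance's gRPC clients to call the API under test
}
```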
# Additional Context
- Closes #6684
- https://go.dev/doc/articles/race_detector
- https://go.dev/doc/build-cover
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
# Which Problems Are Solved
Float64, which was used for the `event.Position` field, is [not precise in
Go and gets rounded](https://github.com/golang/go/issues/47300). This
can lead to imprecise position tracking of events and therefore of
projections, especially on CockroachDB, where the position is a big
number.
Example of an imprecise position:
exact: 1725257931223002628
float64: 1725257931223002624.000000
# How the Problems Are Solved
The float64 was replaced by
[github.com/jackc/pgx-shopspring-decimal](https://github.com/jackc/pgx-shopspring-decimal).
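A small example of the difference, using `github.com/shopspring/decimal`, which the pgx integration above wires up as the codec for numeric values:

```go
package main

import (
	"fmt"

	"github.com/shopspring/decimal"
)

func main() {
	// float64 has a 53-bit mantissa, so the position gets rounded:
	pos := float64(1725257931223002628)
	fmt.Printf("%f\n", pos) // 1725257931223002624.000000

	// decimal.Decimal keeps the exact value:
	d, _ := decimal.NewFromString("1725257931223002628")
	fmt.Println(d.String()) // 1725257931223002628
}
```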
# Additional Changes
Correct the behaviour of the Makefile for load tests.
Rename the `latestSequence` queries to `latestPosition`.
# Which Problems Are Solved
When using a SAML provider which listed the `X509Certificate` as base64-encoded
PEM, the SAMLResponse validation would fail with the error
`malformed certificate`.
zitadel/saml has been fixed to handle this case as well.
# How the Problems Are Solved
Update to zitadel/saml v0.2.0.
# Additional Changes
None
# Additional Context
closes https://github.com/zitadel/zitadel/issues/8039
# Which Problems Are Solved
Use web keys, managed by the `resources/v3alpha/web_keys` API, for OIDC
token signing and verification,
as well as for serving the public web keys on the jwks / keys endpoint.
The response header on the keys endpoint now allows caching of the response.
This is now "safe" to do, since keys can be created ahead of time and
caches have sufficient time to pick up the change before keys get
enabled.
# How the Problems Are Solved
- The web key format is used in the `getSignerOnce` function in the
`api/oidc` package.
- The public key cache is changed to get and store web keys.
- The jwks / keys endpoint returns the combined set of valid "legacy"
public keys and all available web keys.
- Cache-Control max-age defaults to 5 minutes and is configured in
`defaults.yaml`.
When the web keys feature is enabled, fallback mechanisms are in place
to obtain and convert "legacy" `query.PublicKey` keys to web keys when
needed. This allows transitioning to the feature without invalidating
existing tokens. A small performance overhead may be noticed on the keys
endpoint, because two queries need to be run sequentially. This will
disappear once the feature is stable and the legacy code is cleaned
up.
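Conceptually, the keys endpoint now behaves roughly like the following sketch (using `github.com/go-jose/go-jose/v4` for the key-set type; the concrete handler and header values live in the `api/oidc` package and `defaults.yaml`):

```go
package oidckeys

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"

	jose "github.com/go-jose/go-jose/v4"
)

// serveKeys returns the combined set of still-valid "legacy" public keys and
// all available web keys. Caching the response is safe because new keys are
// created ahead of time and only enabled after caches have expired.
func serveKeys(w http.ResponseWriter, legacyKeys, webKeys []jose.JSONWebKey, maxAge time.Duration) {
	set := jose.JSONWebKeySet{Keys: append(legacyKeys, webKeys...)}

	w.Header().Set("Cache-Control", fmt.Sprintf("max-age=%d, must-revalidate", int(maxAge.Seconds())))
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(set)
}
```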
# Additional Changes
- Extend legacy key lifetimes so that tests can be run on an existing
database with runs more than 6 hours apart.
- Discovery endpoint returns all supported algorithms when the Web Key
feature is enabled.
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/8031
- Part of https://github.com/zitadel/zitadel/issues/7809
- After https://github.com/zitadel/oidc/pull/637
- After https://github.com/zitadel/oidc/pull/638
# Which Problems Are Solved
Currently, the OIDC API of ZITADEL only prints parent errors to the logs,
where 4xx statuses are typically logged at warn level and 5xx at error
level. This makes it hard to debug certain errors for clients in
multi-instance environments like ZITADEL Cloud, where there is no direct
access to the logs. In case of support requests we often can't correlate
past log lines with the error that was reported.
This change adds the possibility to return the parent error in the
response to the OIDC client. For the moment this only applies to JSON
body responses, not to error redirects to the RP.
# How the Problems Are Solved
- New instance-level feature flag: `debug_oidc_parent_error`
- Use the new `WithReturnParentToClient()` function from the oidc lib
introduced in https://github.com/zitadel/oidc/pull/629 for all cases
where `WithParent` was already used and the request context is
available.
# Additional Changes
none
# Additional Context
- Depends on: https://github.com/zitadel/oidc/pull/629
- Related to: https://github.com/zitadel/zitadel/issues/8362
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
# Which Problems Are Solved
To have more insight into the performance, CPU and memory usage of
ZITADEL, we want to enable profiling.
# How the Problems Are Solved
- Allow enabling profiling via configuration.
- Provide Google Cloud Profiler as the first implementation (see the sketch below)
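A minimal sketch of wiring up the Google Cloud Profiler behind such a configuration flag (service name and version are illustrative):

```go
package profiling

import (
	"log"

	"cloud.google.com/go/profiler"
)

func startProfiler(enabled bool) {
	if !enabled {
		return
	}
	// profiler.Start runs in the background and periodically uploads
	// CPU and heap profiles to Google Cloud Profiler.
	if err := profiler.Start(profiler.Config{
		Service:        "zitadel",
		ServiceVersion: "v2",
	}); err != nil {
		log.Printf("could not start profiler: %v", err)
	}
}
```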
# Additional Changes
None.
# Additional Context
There were possible memory leaks reported:
https://discord.com/channels/927474939156643850/1273210227918897152
Co-authored-by: Silvan <silvan.reusser@gmail.com>
# Which Problems Are Solved
OIDC redirects have wrong headers
# How the Problems Are Solved
This is fixed with https://github.com/zitadel/oidc/pull/632. This change
updates the OIDC lib to a fixed version.
# Which Problems Are Solved
Allow verification of imported passwords hashed with plain MD5, without
salt. These are password digests typically created by one of:
- `printf "password" | md5sum` on most Linux systems.
- PHP's `md5("password")`
- Python 3's `hashlib.md5(b"password").hexdigest()` (a Go equivalent is sketched below)
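For reference, a Go equivalent producing the same kind of digest that the new verifier accepts:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

func main() {
	sum := md5.Sum([]byte("password"))
	// Prints 5f4dcc3b5aa765d61d8327deb882cf99, an unsalted hex-encoded MD5
	// digest as produced by the commands above.
	fmt.Println(hex.EncodeToString(sum[:]))
}
```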
# How the Problems Are Solved
- Upgrade passwap to
[v0.6.0](https://github.com/zitadel/passwap/releases/tag/v0.6.0)
- Add md5plain as a new verifier option in `defaults.yaml`
# Additional Changes
- Updated the documentation to explain the difference between the `md5` (crypt)
and `md5plain` verifiers.
# Additional Context
- Requested by customer for import case
# Which Problems Are Solved
Improve the performance of the `admin/v1/import` API endpoint,
specifically the import of a large number of project grants.
# How the Problems Are Solved
The `AddProjectGrantWithID` and `AddProjectGrantMember` methods of
`Commands` used to get the current state of the write model to check whether
the GrantID, or the combination of GrantID and UserID, wasn't
already used. However, the Added events already have protection against
duplication through the `UniqueConstraint` methods.
These queries become very slow when there is a great amount of project
grants. Because all the events are pushed to the aggregate ID of the
project, we had to obtain all related project events, including events
of grant IDs we do not care about. This caused O(n) duration for batched import
jobs adding many organization grants to a single project.
This change removes the unnecessary state query to improve performance.
# Additional Changes
- Add integration tests for import
# Additional Context
- reported internally
* implement code exchange
* port tokenexchange to v2 tokens
* implement refresh token
* implement client credentials
* implement jwt profile
* implement device token
* cleanup unused code
* fix current unit tests
* add user agent unit test
* unit test domain package
* need refresh token as argument
* test commands create oidc session
* test commands device auth
* fix device auth build error
* implicit for oidc session API
* implement authorize callback handler for legacy implicit mode
* upgrade oidc module to working draft
* add missing auth methods and time
* handle all errors in defer
* do not fail auth request on error
the oauth2 Go client automagically retries on any error. If we fail the auth request on the first error, the next attempt will always fail with `Errors.AuthRequest.NoCode`, because the auth request state is already set to failed.
The original error is then already lost and the oauth2 library does not return it.
Therefore we should not fail the auth request.
Might be worth discussing and perhaps sending a bug report to oauth2?
* fix code flow tests by explicitly setting code exchanged
* fix unit tests in command package
* return allowed scope from client credential client
* add device auth done reducer
* carry nonce thru session into ID token
* fix token exchange integration tests
* allow project role scope prefix in client credentials client
* gci formatting
* do not return refresh token in client credentials and jwt profile
* check org scope
* solve linting issue on authorize callback error
* end session based on v2 session ID
* use preferred language and user agent ID for v2 access tokens
* pin oidc v3.23.2
* add integration test for jwt profile and client credentials with org scopes
* refresh token v1 to v2
* add user token v2 audit event
* add activity trigger
* cleanup and set panics for unused methods
* use the encrypted code for v1 auth request get by code
* add missing event translation
* fix pipeline errors (hopefully)
* fix another test
* revert pointer usage of preferred language
* solve browser info panic in device auth
* remove duplicate entries in AMRToAuthMethodTypes to prevent future `mfa` claim
* revoke v1 refresh token to prevent reuse
* fix terminate oidc session
* always return a new refresh token in the refresh token grant
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* chore: use pgx v5
* chore: update go version
* remove direct pq dependency
* remove unnecessary type
* scan test
* map scanner
* converter
* uint8 number array
* duration
* most unit tests work
* unit tests work
* chore: coverage
* go 1.21
* linting
* int64, damn it
* retry go 1.22
* retry go 1.22
* revert to go v1.21.5
* update go toolchain to 1.21.8
* go 1.21.8
* remove test flag
* go 1.21.5
* linting
* update toolchain
* use correct array
* use correct array
* add byte array
* correct value
* correct error message
* go 1.21 compatible
* add token exchange feature flag
* allow setting reason and actor to access tokens
* impersonation
* set token types and scopes in response
* upgrade oidc to working draft state
* fix tests
* audience and scope validation
* id token and jwt as input
* return id tokens
* add grant type token exchange to app config
* add integration tests
* check and deny actors in api calls
* fix instance setting tests by triggering projection on write and cleanup
* insert sleep statements again
* solve linting issues
* add translations
* pin oidc v3.15.0
* resolve comments, add event translation
* fix refreshtoken test
* use ValidateAuthReqScopes from oidc
* apparently the linter can't make up its mind
* persist actor thru refresh tokens and check in tests
* remove unneeded triggers
This PR adds the functionality to manage user schemas through the new user schema service.
It includes the possibility to create a basic JSON schema and also provides a way of defining permissions (read, write) for the owner and self context with an annotation.
Further annotations for OIDC claims and SAML attribute mappings will follow.
A guide on how to create a schema and assign permissions has been started. It will be extended throughout the process of implementing the schema and the users based on it.
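A hypothetical minimal schema with such a permission annotation (the annotation key and permission values are assumptions for illustration, not the final API):

```go
package userschema

// exampleSchema grants the owner read/write and the user (self) read-only
// access to the "name" property. Key and value names are illustrative.
const exampleSchema = `{
	"$schema": "https://json-schema.org/draft/2020-12/schema",
	"type": "object",
	"properties": {
		"name": {
			"type": "string",
			"urn:zitadel:schema:permission": {
				"owner": "rw",
				"self": "r"
			}
		}
	}
}`
```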
Note:
This feature is in an early stage and therefore not enabled by default. To test it out, please enable the UserSchema feature flag on your instance / system through the feature service.
* fix(backend): respect start flags in all commands
Currently, flags like --externalDomain only work in the last
registered command, which currently is start-from-setup.
This change creates the flags globally in the init function and uses them for
all start commands (sketched below).
* fix(backend): remove viper defaults in start flags
At this point viper is not yet initialized, so these defaults would have
no effect either.
* Remove flag name variables and run go mod tidy
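A minimal sketch of the approach, registering the flag on the root command's persistent flag set so every start variant sees it (flag and config key names are illustrative):

```go
package cmd

import (
	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

var rootCmd = &cobra.Command{Use: "zitadel"}

func init() {
	// Persistent flags are inherited by all subcommands (start,
	// start-from-init, start-from-setup), not just the last registered one.
	rootCmd.PersistentFlags().String("externalDomain", "", "domain ZITADEL is reachable on")
	_ = viper.BindPFlag("ExternalDomain", rootCmd.PersistentFlags().Lookup("externalDomain"))
}
```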
---------
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
* fix(oidc): ignore public key expiry for ID Token hints
This splits the key sets used for access tokens and ID token hints.
ID token hints should be able to be verified with public keys that are already expired.
However, we do not want to change this behavior for access tokens,
where an error is still returned for an expired public key.
The public key cache is modified to purge public keys based on last use,
instead of expiry.
The cache is shared between both verifiers.
* resolve review comments
* pin oidc 3.11
* chore(deps): upgrade all go deps
Also `go mod tidy`.
Added comments with URLs for package version lists to makefile commands.
* Update Makefile
Co-authored-by: Livio Spring <livio.a@gmail.com>
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* cleanup todo
* pass id token details to oidc
* feat(oidc): id token for device authorization
This change updates to the newest oidc version,
so the Device Authorization grant can return ID tokens when
the scope `openid` is set.
There is also some refactoring done, so that the eventstore can be
queried directly when polling for state.
The projection is cleaned up to a minimum with only data required for the login UI.
* try to be explicit with the timezone to fix GitHub
* pin oidc v3.8.0
* remove TBD entry
This change adds a core_generate_all make target.
It installs the required tools and runs generate on the complete project.
`golang/mock` is no longer maintained; a fork is available
from the Uber folks, so the latter is used as the tool.
All the mock files have been regenerated and are part of the PR.
The obsolete `tools` directory has been removed,
as all the tools are now part of specific make targets.
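A typical `go:generate` directive the target runs might look like this sketch (package, file and source names are illustrative; the tool is the maintained `go.uber.org/mock` fork):

```go
package storage

// Regenerated by the core_generate_all make target via the mockgen tool.
//go:generate go run go.uber.org/mock/mockgen -package mock -destination ./mock/storage.mock.go -source storage.go
```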
Co-authored-by: Silvan <silvan.reusser@gmail.com>
* get key by id and cache them
* userinfo from events for v2 tokens
* improve keyset caching
* concurrent token and client checks
* client and project in single query
* logging and otel
* drop owner_removed column on apps and authN tables
* userinfo and project roles in go routines
* get oidc user info from projections and add actions
* add avatar URL
* some cleanup
* pull oidc work branch
* remove storage from server
* add config flag for experimental introspection
* legacy introspection flag
* drop owner_removed column on user projections
* drop owner_removed column on user_metadata
* query userinfo unit test
* query introspection client test
* add user_grants to the userinfo query
* handle PAT scopes
* bring triggers back
* test instance keys query
* add userinfo unit tests
* unit test keys
* go mod tidy
* solve some bugs
* fix missing preferred login name
* do not run triggers in go routines, they seem to deadlock
* initialize the trigger handlers late with a sync.OnceValue
* Revert "do not run triggers in go routines, they seem to deadlock"
This reverts commit 2a03da2127.
* add missing translations
* chore: update go version for linting
* pin oidc version
* parse a global time location for query test
* fix linter complaints
* upgrade go lint
* fix more linting issues
---------
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
* chore(deps): upgrade all go modules
This change upgrades all go.mod dependencies, as well as the Makefile tools.
There were some imports that still used the old and deprecated
`github.com/golang/protobuf/ptypes` package.
These have been moved to the equivalent
`google.golang.org/protobuf/types/known` packages.
The `internal/proto` package is removed, as it was only used once.
With a simple refactor in the Validator it became completely obsolete.
* fix validate unit test
* cleanup merge
* update otel
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
* feat(oidc): use the new oidc server interface
* rename from provider to server
* pin logging and oidc packages
* use oidc introspection fix branch
* add overloaded methods with tracing
* cleanup unused code
* include latest oidc fixes
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
This PR upgrades oidc to v3. Function signature changes have been migrated as well. Specifically, there are more client calls that take a context now. Where feasible, a context is added to those calls. Where a context is not (easily) available, context.TODO() is used as a reminder for when it becomes available.
Related to #6619
* start feature flags
* base feature events on domain const
* setup default features
* allow setting feature in system api
* allow setting feature in admin api
* set settings in login based on feature
* fix rebasing
* unit tests
* i18n
* update policy after domain discovery
* some changes from review
* check feature and value type
* check feature and value type