# Which Problems Are Solved

The current handling of notifications follows the same pattern as all other projections: created events are handled sequentially (based on "position") by a handler. During the process, a lot of information is aggregated (user, texts, templates, ...). This leads to back pressure on the projection, since the handling of events might take longer than the time before a new event (to be handled) is created.

# How the Problems Are Solved

- The current user notification handler creates separate notification events based on the user / session events.
- These events contain all the present and required information, including the userID.
- These notification events get processed by notification workers, which gather the necessary information (recipient address, texts, templates) to send out the notifications (a rough sketch of this flow is included at the end of this description).
- If a notification fails, a retry event is created based on the current notification request, including the current state of the user (this prevents race conditions where the user is changed in the meantime and the notification would otherwise pick up the new state).
- The retry event will be handled after a backoff delay. This delay increases with every attempt (see the sketch after the configuration block below).
- If the configured number of attempts is reached or the message expired (based on config), a cancel event is created, letting the workers know that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the notification aggregate and the existing "sent" events for the user / session objects are stored.
- The following is added to the defaults.yaml to allow configuration of the notification workers:

```yaml
Notifications:
  # The number of workers processing the notification request events.
  # If set to 0, no notification request events will be handled. This can be useful when running in
  # a multi binary / pod setup and allowing only certain executables to process the events.
  Workers: 1 # ZITADEL_NOTIFICATIONS_WORKERS
  # The number of events a single worker will process in a run.
  BulkLimit: 10 # ZITADEL_NOTIFICATIONS_BULKLIMIT
  # Time interval between scheduled notification runs for request events
  RequeueEvery: 2s # ZITADEL_NOTIFICATIONS_REQUEUEEVERY
  # The number of workers processing the notification retry events.
  # If set to 0, no notification retry events will be handled. This can be useful when running in
  # a multi binary / pod setup and allowing only certain executables to process the events.
  RetryWorkers: 1 # ZITADEL_NOTIFICATIONS_RETRYWORKERS
  # Time interval between scheduled notification runs for retry events
  RetryRequeueEvery: 2s # ZITADEL_NOTIFICATIONS_RETRYREQUEUEEVERY
  # Only instances for which at least one projection-relevant event exists within the timeframe
  # from HandleActiveInstances duration in the past until the projection's current time are projected.
  # If set to 0 (default), every instance is always considered active.
  HandleActiveInstances: 0s # ZITADEL_NOTIFICATIONS_HANDLEACTIVEINSTANCES
  # The maximum duration a transaction remains open
  # before it stops folding additional events
  # and updates the table.
  TransactionDuration: 1m # ZITADEL_NOTIFICATIONS_TRANSACTIONDURATION
  # Automatically cancel the notification after this number of failed attempts
  MaxAttempts: 3 # ZITADEL_NOTIFICATIONS_MAXATTEMPTS
  # Automatically cancel the notification if it cannot be handled within this time
  MaxTtl: 5m # ZITADEL_NOTIFICATIONS_MAXTTL
  # Failed attempts are retried after a configured delay (with exponential backoff).
  # Set a minimum and maximum delay and a factor for the backoff.
  MinRetryDelay: 1s # ZITADEL_NOTIFICATIONS_MINRETRYDELAY
  MaxRetryDelay: 20s # ZITADEL_NOTIFICATIONS_MAXRETRYDELAY
  # Any factor below 1 will be set to 1
  RetryDelayFactor: 1.5 # ZITADEL_NOTIFICATIONS_RETRYDELAYFACTOR
```
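To make the backoff settings above concrete, here is a minimal Go sketch of how a retry delay could be derived from MinRetryDelay, MaxRetryDelay and RetryDelayFactor. It is an illustration only, not ZITADEL's actual implementation; the `retryDelay` helper and its signature are hypothetical.

```go
package notification

import "time"

// retryDelay is a hypothetical helper illustrating the exponential backoff
// described by MinRetryDelay, MaxRetryDelay and RetryDelayFactor above.
// attempt is the number of attempts that have already failed (1 for the
// first retry).
func retryDelay(minDelay, maxDelay time.Duration, factor float64, attempt int) time.Duration {
	if factor < 1 {
		factor = 1 // "Any factor below 1 will be set to 1"
	}
	delay := float64(minDelay)
	for i := 1; i < attempt; i++ {
		delay *= factor // grow the delay with every failed attempt
	}
	if capped := time.Duration(delay); capped < maxDelay {
		return capped
	}
	return maxDelay // never wait longer than MaxRetryDelay
}
```

With the defaults (1s minimum, 20s maximum, factor 1.5), successive retries would wait 1s, 1.5s, 2.25s, and so on, until either the 20s cap or, with `MaxAttempts: 3`, the cancel event cuts the sequence short.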
# Additional Changes

None

# Additional Context

- closes #8931
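To tie the pieces together, the following sketch shows the decision a notification worker makes for a single request event, as described in the list above. All types and event names are illustrative stand-ins, not ZITADEL's actual API; it reuses the hypothetical `retryDelay` helper from the earlier sketch.

```go
package notification

import (
	"context"
	"time"
)

// Request is an illustrative stand-in for a notification request event's
// payload, not ZITADEL's actual type.
type Request struct {
	ID        string
	Attempts  int       // failed attempts so far
	CreatedAt time.Time // used to enforce MaxTtl
}

// Config mirrors the defaults.yaml settings shown above.
type Config struct {
	MaxAttempts      int
	MaxTTL           time.Duration
	MinRetryDelay    time.Duration
	MaxRetryDelay    time.Duration
	RetryDelayFactor float64
}

// Worker is a hypothetical notification worker.
type Worker struct {
	config Config
	// send delivers the notification (email, SMS, ...).
	send func(context.Context, *Request) error
	// emit appends an event (sent, retry requested, canceled) to the
	// notification aggregate; delay is only used for retry events.
	emit func(ctx context.Context, event string, req *Request, delay time.Duration) error
}

// process sketches the per-request decision flow described above.
func (w *Worker) process(ctx context.Context, req *Request) error {
	if req.Attempts >= w.config.MaxAttempts || time.Since(req.CreatedAt) > w.config.MaxTTL {
		// Attempt limit reached or message expired: the cancel event tells
		// all workers the notification must no longer be handled.
		return w.emit(ctx, "notification.canceled", req, 0)
	}
	if err := w.send(ctx, req); err != nil {
		// The retry event carries the user state captured in the original
		// request (avoiding races with concurrent user changes) and is only
		// picked up again after the backoff delay.
		delay := retryDelay(w.config.MinRetryDelay, w.config.MaxRetryDelay,
			w.config.RetryDelayFactor, req.Attempts+1)
		return w.emit(ctx, "notification.retry.requested", req, delay)
	}
	// Success: a sent event marks the notification as delivered.
	return w.emit(ctx, "notification.sent", req, 0)
}
```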