feat(notification): use event worker pool (#8962)
# Which Problems Are Solved
The current handling of notifications follows the same pattern as all
other projections:
created events are handled sequentially (based on "position") by a
handler. During the process, a lot of information is aggregated (user,
texts, templates, ...).
This leads to back pressure on the projection, since handling an event
might take longer than the time until the next event (to be handled) is
created.
# How the Problems Are Solved
- The current user notification handler creates separate notification
events based on the user / session events.
- These events contain all the present and required information,
including the userID.
- These notification events get processed by notification workers, which
gather the necessary information (recipient address, texts, templates)
to send out the notifications (see the worker loop sketch at the end of
this description).
- If a notification fails, a retry event is created based on the current
notification request, including the current state of the user (this
prevents race conditions where the user is changed in the meantime and
the notification would already use the new state).
- The retry event is handled after a backoff delay, which increases with
every attempt (a sketch of this calculation follows the configuration
below).
- If the configured number of attempts is reached or the message has
expired (based on config), a cancel event is created, letting the
workers know that the notification must no longer be handled.
- In case of a successful send, a sent event is created for the
notification aggregate and the existing "sent" events for the user /
session objects are stored.
- The following is added to the defaults.yaml to allow configuration of
the notification workers:
```yaml
Notifications:
  # The number of workers processing the notification request events.
  # If set to 0, no notification request events will be handled. This can be useful when running in a
  # multi binary / pod setup and allowing only certain executables to process the events.
  Workers: 1 # ZITADEL_NOTIFICATIONS_WORKERS
  # The number of events a single worker will process in a run.
  BulkLimit: 10 # ZITADEL_NOTIFICATIONS_BULKLIMIT
  # Time interval between scheduled notifications for request events
  RequeueEvery: 2s # ZITADEL_NOTIFICATIONS_REQUEUEEVERY
  # The number of workers processing the notification retry events.
  # If set to 0, no notification retry events will be handled. This can be useful when running in a
  # multi binary / pod setup and allowing only certain executables to process the events.
  RetryWorkers: 1 # ZITADEL_NOTIFICATIONS_RETRYWORKERS
  # Time interval between scheduled notifications for retry events
  RetryRequeueEvery: 2s # ZITADEL_NOTIFICATIONS_RETRYREQUEUEEVERY
  # Only instances for which at least one projection-relevant event exists within the timeframe
  # from the HandleActiveInstances duration in the past until the projection's current time are projected.
  # If set to 0 (default), every instance is always considered active
  HandleActiveInstances: 0s # ZITADEL_NOTIFICATIONS_HANDLEACTIVEINSTANCES
  # The maximum duration a transaction remains open
  # before it stops collecting additional events
  # and updates the table.
  TransactionDuration: 1m # ZITADEL_NOTIFICATIONS_TRANSACTIONDURATION
  # Automatically cancel the notification after this number of failed attempts
  MaxAttempts: 3 # ZITADEL_NOTIFICATIONS_MAXATTEMPTS
  # Automatically cancel the notification if it cannot be handled within a specific time
  MaxTtl: 5m # ZITADEL_NOTIFICATIONS_MAXTTL
  # Failed attempts are retried after a configured delay (with exponential backoff).
  # Set a minimum and maximum delay and a factor for the backoff
  MinRetryDelay: 1s # ZITADEL_NOTIFICATIONS_MINRETRYDELAY
  MaxRetryDelay: 20s # ZITADEL_NOTIFICATIONS_MAXRETRYDELAY
  # Any factor below 1 will be set to 1
  RetryDelayFactor: 1.5 # ZITADEL_NOTIFICATIONS_RETRYDELAYFACTOR
```
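To make the retry settings concrete: the delay before attempt n grows as MinRetryDelay · RetryDelayFactor^n, capped at MaxRetryDelay. The following is a minimal sketch of that calculation with hypothetical names, not the code the PR adds:
```go
package main

import (
	"fmt"
	"math"
	"time"
)

// retryDelay sketches the exponential backoff described above:
// delay = MinRetryDelay * RetryDelayFactor^attempt, capped at MaxRetryDelay.
func retryDelay(minDelay, maxDelay time.Duration, factor float64, attempt int) time.Duration {
	if factor < 1 {
		factor = 1 // mirrors "Any factor below 1 will be set to 1"
	}
	delay := time.Duration(float64(minDelay) * math.Pow(factor, float64(attempt)))
	if delay > maxDelay {
		return maxDelay
	}
	return delay
}

func main() {
	// With the defaults above (MinRetryDelay 1s, MaxRetryDelay 20s, factor 1.5):
	for attempt := 0; attempt < 3; attempt++ {
		fmt.Println(retryDelay(time.Second, 20*time.Second, 1.5, attempt)) // 1s, 1.5s, 2.25s
	}
}
```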
# Additional Changes
None
# Additional Context
- closes #8931
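The worker loop itself can be pictured roughly as below: a fixed pool of goroutines, each periodically fetching a bulk of pending events. All names (`requestEvent`, `runWorkers`, `fetch`, `send`) are hypothetical; the actual wiring lives in ZITADEL's notification handlers.
```go
package main

import (
	"context"
	"log"
	"sync"
	"time"
)

// requestEvent is a hypothetical stand-in for a notification request event.
type requestEvent struct{ UserID string }

// runWorkers launches the configured number of workers; each polls for up to
// bulkLimit events every requeueEvery, mirroring the Workers, BulkLimit and
// RequeueEvery settings above. Illustrative sketch only.
func runWorkers(ctx context.Context, workers, bulkLimit int, requeueEvery time.Duration,
	fetch func(limit int) []requestEvent, send func(requestEvent) error) {
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			ticker := time.NewTicker(requeueEvery)
			defer ticker.Stop()
			for {
				select {
				case <-ctx.Done():
					return
				case <-ticker.C:
					for _, e := range fetch(bulkLimit) {
						if err := send(e); err != nil {
							// The real flow would create a retry event with
							// a backoff delay here instead of only logging.
							log.Printf("worker %d: send for user %s failed: %v", id, e.UserID, err)
						}
					}
				}
			}
		}(i)
	}
	wg.Wait()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	// A single worker draining an in-memory queue; fetch would need to be
	// made concurrency-safe before raising the worker count.
	queue := []requestEvent{{UserID: "u1"}, {UserID: "u2"}}
	fetch := func(limit int) []requestEvent {
		if limit > len(queue) {
			limit = len(queue)
		}
		batch := queue[:limit]
		queue = queue[limit:]
		return batch
	}
	send := func(e requestEvent) error {
		log.Printf("notification sent for user %s", e.UserID)
		return nil
	}
	runWorkers(ctx, 1, 10, time.Second, fetch, send)
}
```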
package domain

import (
	"time"
)
type NotificationType int32

const (
	NotificationTypeEmail NotificationType = iota
	NotificationTypeSms

	notificationCount
)
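The unexported `notificationCount` follows the common Go pattern of ending an iota block with a counter: declared last, it always equals the number of valid values. A hypothetical helper (not part of this file) shows how such a sentinel enables range checks; the same applies to `notificationProviderCount` and `notificationProviderTypeCount` below.
```go
// Valid reports whether t is one of the declared notification types.
// Hypothetical helper illustrating the notificationCount sentinel.
func (t NotificationType) Valid() bool {
	return t >= 0 && t < notificationCount
}
```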
type NotificationProviderState int32

const (
	NotificationProviderStateUnspecified NotificationProviderState = iota
	NotificationProviderStateActive
	NotificationProviderStateRemoved

	notificationProviderCount
)

// Exists reports whether the provider state denotes an existing (active) provider.
func (s NotificationProviderState) Exists() bool {
	return s == NotificationProviderStateActive
}

type NotificationProviderType int32

const (
	NotificationProviderTypeFile NotificationProviderType = iota
	NotificationProviderTypeLog

	notificationProviderTypeCount
)
type NotificationArguments struct {
	Origin          string        `json:"origin,omitempty"`
	Domain          string        `json:"domain,omitempty"`
	Expiry          time.Duration `json:"expiry,omitempty"`
	TempUsername    string        `json:"tempUsername,omitempty"`
	ApplicationName string        `json:"applicationName,omitempty"`
	CodeID          string        `json:"codeID,omitempty"`
	SessionID       string        `json:"sessionID,omitempty"`
	AuthRequestID   string        `json:"authRequestID,omitempty"`
}
// ToMap creates a type safe map of the notification arguments.
// Since these arguments are used in text templates, all keys must be PascalCase and types must remain the same (e.g. Duration).
func (n *NotificationArguments) ToMap() map[string]interface{} {
	m := make(map[string]interface{})
	if n == nil {
		return m
	}
	m["Origin"] = n.Origin
	m["Domain"] = n.Domain
	m["Expiry"] = n.Expiry
	m["TempUsername"] = n.TempUsername
	m["ApplicationName"] = n.ApplicationName
	m["CodeID"] = n.CodeID
	m["SessionID"] = n.SessionID
	m["AuthRequestID"] = n.AuthRequestID
	return m
}
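As the doc comment notes, the map feeds Go text templates. A short usage sketch, assuming `text/template` and `strings` are added to the imports; the template string is illustrative, not one of ZITADEL's shipped notification texts:
```go
// renderExample demonstrates feeding ToMap's PascalCase keys into a template.
func renderExample() (string, error) {
	args := &NotificationArguments{
		Origin: "https://accounts.example.com", // hypothetical values
		Domain: "example.com",
		Expiry: 5 * time.Minute,
	}
	tmpl, err := template.New("otp").Parse(
		"Your code is valid for {{.Expiry}} on {{.Domain}}.")
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, args.ToMap()); err != nil {
		return "", err
	}
	return b.String(), nil // "Your code is valid for 5m0s on example.com."
}
```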