zitadel/cmd/setup/setup.go

package setup

import (
	"context"
	"embed"
	_ "embed"
	"net/http"
"path/filepath"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/zitadel/logging"
"github.com/zitadel/zitadel/cmd/build"
"github.com/zitadel/zitadel/cmd/encryption"
"github.com/zitadel/zitadel/cmd/key"
"github.com/zitadel/zitadel/cmd/tls"
admin_handler "github.com/zitadel/zitadel/internal/admin/repository/eventsourcing/handler"
admin_view "github.com/zitadel/zitadel/internal/admin/repository/eventsourcing/view"
internal_authz "github.com/zitadel/zitadel/internal/api/authz"
auth_handler "github.com/zitadel/zitadel/internal/auth/repository/eventsourcing/handler"
auth_view "github.com/zitadel/zitadel/internal/auth/repository/eventsourcing/view"
"github.com/zitadel/zitadel/internal/authz"
authz_es "github.com/zitadel/zitadel/internal/authz/repository/eventsourcing/eventstore"
"github.com/zitadel/zitadel/internal/cache/connector"
"github.com/zitadel/zitadel/internal/command"
cryptoDB "github.com/zitadel/zitadel/internal/crypto/database"
"github.com/zitadel/zitadel/internal/database"
"github.com/zitadel/zitadel/internal/database/dialect"
"github.com/zitadel/zitadel/internal/domain"
"github.com/zitadel/zitadel/internal/eventstore"
old_es "github.com/zitadel/zitadel/internal/eventstore/repository/sql"
new_es "github.com/zitadel/zitadel/internal/eventstore/v3"
"github.com/zitadel/zitadel/internal/i18n"
"github.com/zitadel/zitadel/internal/migration"
notify_handler "github.com/zitadel/zitadel/internal/notification"
"github.com/zitadel/zitadel/internal/query"
"github.com/zitadel/zitadel/internal/query/projection"
es_v4 "github.com/zitadel/zitadel/internal/v2/eventstore"
es_v4_pg "github.com/zitadel/zitadel/internal/v2/eventstore/postgres"
"github.com/zitadel/zitadel/internal/webauthn"
)

var (
	//go:embed steps.yaml
	defaultSteps []byte

	stepFiles []string
)
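
// New returns the cobra command for `zitadel setup`. It binds the
// setup-specific flags and runs Setup with the parsed configuration
// and steps.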
func New() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "setup",
		Short: "setup ZITADEL instance",
		Long: `sets up data to start ZITADEL.
Requirements:
- cockroachdb`,
		Run: func(cmd *cobra.Command, args []string) {
			err := tls.ModeFromFlag(cmd)
			logging.OnError(err).Fatal("invalid tlsMode")

			err = BindInitProjections(cmd)
			logging.OnError(err).Fatal("unable to bind \"init-projections\" flag")
			err = bindForMirror(cmd)
			logging.OnError(err).Fatal("unable to bind \"for-mirror\" flag")

			config := MustNewConfig(viper.GetViper())
			steps := MustNewSteps(viper.New())

			masterKey, err := key.MasterKey(cmd)
			logging.OnError(err).Panic("No master key provided")
			Setup(cmd.Context(), config, steps, masterKey)
		},
	}

	cmd.AddCommand(NewCleanup())

	Flags(cmd)

	return cmd
}
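
// Flags registers the setup-specific command line flags, for example:
//
//	zitadel setup --steps ./custom-steps.yaml --init-projections
//
// (the step file path above is illustrative).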
func Flags(cmd *cobra.Command) {
	cmd.PersistentFlags().StringArrayVar(&stepFiles, "steps", nil, "paths to step files to overwrite default steps")
	cmd.Flags().Bool("init-projections", viper.GetBool("InitProjections"), "beta feature: initializes projections after they are created, allows smooth start as projections are up to date")
cmd.Flags().Bool("for-mirror", viper.GetBool("ForMirror"), "use this flag if you want to mirror your existing data")
key.AddMasterKeyFlag(cmd)
tls.AddTLSModeFlag(cmd)
}
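
// BindInitProjections wires the --init-projections flag into viper so the
// value can also be provided through the InitProjections.Enabled
// configuration key.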
func BindInitProjections(cmd *cobra.Command) error {
	return viper.BindPFlag("InitProjections.Enabled", cmd.Flags().Lookup("init-projections"))
}
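
// bindForMirror wires the --for-mirror flag into viper (ForMirror key).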
func bindForMirror(cmd *cobra.Command) error {
	return viper.BindPFlag("ForMirror", cmd.Flags().Lookup("for-mirror"))
}
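
// Setup connects to the database, wires up all setup steps and executes the
// pending migrations. Unless the database is being prepared for mirroring,
// it finally initializes the projections if InitProjections is enabled.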
func Setup(ctx context.Context, config *Config, steps *Steps, masterKey string) {
	logging.Info("setup started")
	i18n.MustLoadSupportedLanguagesFromDir()
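
	// Dedicated connection pools are opened for queries, event pushing and
	// the projection spooler, so the different workloads do not share one pool.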
	queryDBClient, err := database.Connect(config.Database, false, dialect.DBPurposeQuery)
	logging.OnError(err).Fatal("unable to connect to database")
	esPusherDBClient, err := database.Connect(config.Database, false, dialect.DBPurposeEventPusher)
	logging.OnError(err).Fatal("unable to connect to database")
	projectionDBClient, err := database.Connect(config.Database, false, dialect.DBPurposeProjectionSpooler)
	logging.OnError(err).Fatal("unable to connect to database")

	config.Eventstore.Querier = old_es.NewCRDB(queryDBClient)
	esV3 := new_es.NewEventstore(esPusherDBClient)
	config.Eventstore.Pusher = esV3
	config.Eventstore.Searcher = esV3
	eventstoreClient := eventstore.NewEventstore(config.Eventstore)
	logging.OnError(err).Fatal("unable to start eventstore")

	eventstoreV4 := es_v4.NewEventstoreFromOne(es_v4_pg.New(queryDBClient, &es_v4_pg.Config{
		MaxRetries: config.Eventstore.MaxRetries,
	}))

	steps.s1ProjectionTable = &ProjectionTable{dbClient: queryDBClient.DB}
	steps.s2AssetsTable = &AssetTable{dbClient: queryDBClient.DB}
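
	// When preparing a database for mirroring (--for-mirror), the first
	// instance must not be created, so the step is forced to be skipped.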
	steps.FirstInstance.Skip = config.ForMirror || steps.FirstInstance.Skip
	steps.FirstInstance.instanceSetup = config.DefaultInstance
	steps.FirstInstance.userEncryptionKey = config.EncryptionKeys.User
	steps.FirstInstance.smtpEncryptionKey = config.EncryptionKeys.SMTP
	steps.FirstInstance.oidcEncryptionKey = config.EncryptionKeys.OIDC
	steps.FirstInstance.masterKey = masterKey
	steps.FirstInstance.db = queryDBClient
	steps.FirstInstance.es = eventstoreClient
	steps.FirstInstance.defaults = config.SystemDefaults
	steps.FirstInstance.zitadelRoles = config.InternalAuthZ.RolePermissionMappings
	steps.FirstInstance.externalDomain = config.ExternalDomain
	steps.FirstInstance.externalSecure = config.ExternalSecure
	steps.FirstInstance.externalPort = config.ExternalPort

	steps.s5LastFailed = &LastFailed{dbClient: queryDBClient.DB}
	steps.s6OwnerRemoveColumns = &OwnerRemoveColumns{dbClient: queryDBClient.DB}
	steps.s7LogstoreTables = &LogstoreTables{dbClient: queryDBClient.DB, username: config.Database.Username(), dbType: config.Database.Type()}
	steps.s8AuthTokens = &AuthTokenIndexes{dbClient: queryDBClient}
	steps.CorrectCreationDate.dbClient = esPusherDBClient
	steps.s12AddOTPColumns = &AddOTPColumns{dbClient: queryDBClient}
	steps.s13FixQuotaProjection = &FixQuotaConstraints{dbClient: queryDBClient}
	steps.s14NewEventsTable = &NewEventsTable{dbClient: esPusherDBClient}
	steps.s15CurrentStates = &CurrentProjectionState{dbClient: queryDBClient}
	steps.s16UniqueConstraintsLower = &UniqueConstraintToLower{dbClient: queryDBClient}
	steps.s17AddOffsetToUniqueConstraints = &AddOffsetToCurrentStates{dbClient: queryDBClient}
	steps.s18AddLowerFieldsToLoginNames = &AddLowerFieldsToLoginNames{dbClient: queryDBClient}
	steps.s19AddCurrentStatesIndex = &AddCurrentSequencesIndex{dbClient: queryDBClient}
	steps.s20AddByUserSessionIndex = &AddByUserIndexToSession{dbClient: queryDBClient}
	steps.s21AddBlockFieldToLimits = &AddBlockFieldToLimits{dbClient: queryDBClient}
	steps.s22ActiveInstancesIndex = &ActiveInstanceEvents{dbClient: queryDBClient}
	steps.s23CorrectGlobalUniqueConstraints = &CorrectGlobalUniqueConstraints{dbClient: esPusherDBClient}
	steps.s24AddActorToAuthTokens = &AddActorToAuthTokens{dbClient: queryDBClient}
	steps.s25User11AddLowerFieldsToVerifiedEmail = &User11AddLowerFieldsToVerifiedEmail{dbClient: esPusherDBClient}
	steps.s26AuthUsers3 = &AuthUsers3{dbClient: esPusherDBClient}
	steps.s27IDPTemplate6SAMLNameIDFormat = &IDPTemplate6SAMLNameIDFormat{dbClient: esPusherDBClient}
	steps.s28AddFieldTable = &AddFieldTable{dbClient: esPusherDBClient}
	steps.s29FillFieldsForProjectGrant = &FillFieldsForProjectGrant{eventstore: eventstoreClient}
	steps.s30FillFieldsForOrgDomainVerified = &FillFieldsForOrgDomainVerified{eventstore: eventstoreClient}
	steps.s31AddAggregateIndexToFields = &AddAggregateIndexToFields{dbClient: esPusherDBClient}
	steps.s32AddAuthSessionID = &AddAuthSessionID{dbClient: esPusherDBClient}
	steps.s33SMSConfigs3TwilioAddVerifyServiceSid = &SMSConfigs3TwilioAddVerifyServiceSid{dbClient: esPusherDBClient}
	steps.s34AddCacheSchema = &AddCacheSchema{dbClient: queryDBClient}
	steps.s35AddPositionToIndexEsWm = &AddPositionToIndexEsWm{dbClient: esPusherDBClient}
	steps.s36FillV2Milestones = &FillV3Milestones{dbClient: queryDBClient, eventstore: eventstoreClient}
	steps.s37Apps7OIDConfigsBackChannelLogoutURI = &Apps7OIDConfigsBackChannelLogoutURI{dbClient: esPusherDBClient}
	steps.s38BackChannelLogoutNotificationStart = &BackChannelLogoutNotificationStart{dbClient: esPusherDBClient, esClient: eventstoreClient}
	steps.s39DeleteStaleOrgFields = &DeleteStaleOrgFields{dbClient: esPusherDBClient}
	steps.s40InitPushFunc = &InitPushFunc{dbClient: esPusherDBClient}
	steps.s41FillFieldsForInstanceDomains = &FillFieldsForInstanceDomains{eventstore: eventstoreClient}

	err = projection.Create(ctx, projectionDBClient, eventstoreClient, config.Projections, nil, nil, nil)
	logging.OnError(err).Fatal("unable to start projections")
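
	// Repeatable migrations may run on every setup execution, e.g. when the
	// external domain configuration or the build version changes.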
	repeatableSteps := []migration.RepeatableMigration{
		&externalConfigChange{
			es:             eventstoreClient,
			ExternalDomain: config.ExternalDomain,
			ExternalPort:   config.ExternalPort,
			ExternalSecure: config.ExternalSecure,
			defaults:       config.SystemDefaults,
		},
		&projectionTables{
			es:      eventstoreClient,
			Version: build.Version(),
		},
	}
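
	// Note that the steps are not strictly ordered by number: s14 (new events
	// table) and s40 (push function) run first, since the migrations that
	// follow push events themselves.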
	for _, step := range []migration.Migration{
		steps.s14NewEventsTable,
		steps.s40InitPushFunc,
		steps.s1ProjectionTable,
		steps.s2AssetsTable,
		steps.s28AddFieldTable,
		steps.s31AddAggregateIndexToFields,
		steps.FirstInstance,
		steps.s5LastFailed,
		steps.s6OwnerRemoveColumns,
		steps.s7LogstoreTables,
		steps.s8AuthTokens,
		steps.s12AddOTPColumns,
		steps.s13FixQuotaProjection,
		steps.s15CurrentStates,
		steps.s16UniqueConstraintsLower,
		steps.s17AddOffsetToUniqueConstraints,
		steps.s19AddCurrentStatesIndex,
		steps.s20AddByUserSessionIndex,
		steps.s22ActiveInstancesIndex,
		steps.s23CorrectGlobalUniqueConstraints,
		steps.s24AddActorToAuthTokens,
		steps.s26AuthUsers3,
		steps.s29FillFieldsForProjectGrant,
		steps.s30FillFieldsForOrgDomainVerified,
		steps.s34AddCacheSchema,
		steps.s35AddPositionToIndexEsWm,
		steps.s36FillV2Milestones,
		steps.s38BackChannelLogoutNotificationStart,
		steps.s41FillFieldsForInstanceDomains,
	} {
		mustExecuteMigration(ctx, eventstoreClient, step, "migration failed")
	}

	for _, repeatableStep := range repeatableSteps {
		mustExecuteMigration(ctx, eventstoreClient, repeatableStep, "unable to migrate repeatable step")
	}

	// These steps are executed after the repeatable steps because they add fields to projections
	for _, step := range []migration.Migration{
		steps.s18AddLowerFieldsToLoginNames,
		steps.s21AddBlockFieldToLimits,
		steps.s25User11AddLowerFieldsToVerifiedEmail,
		steps.s27IDPTemplate6SAMLNameIDFormat,
		steps.s32AddAuthSessionID,
		steps.s33SMSConfigs3TwilioAddVerifyServiceSid,
		steps.s37Apps7OIDConfigsBackChannelLogoutURI,
		steps.s39DeleteStaleOrgFields,
	} {
		mustExecuteMigration(ctx, eventstoreClient, step, "migration failed")
	}

	// projection initialization must be done last, since the steps above might add required columns to the projections
	if !config.ForMirror && config.InitProjections.Enabled {
		initProjections(
			ctx,
			eventstoreClient,
			eventstoreV4,
			queryDBClient,
			projectionDBClient,
			masterKey,
			config,
		)
	}
}
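
// mustExecuteMigration runs a single migration step and terminates the
// process with errorMsg if the step fails.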
func mustExecuteMigration(ctx context.Context, eventstoreClient *eventstore.Eventstore, step migration.Migration, errorMsg string) {
	err := migration.Migrate(ctx, eventstoreClient, step)
	logging.WithFields("name", step.String()).OnError(err).Fatal(errorMsg)
}
// readStmt reads a single file from the embedded FS,
// located at the folder/typ/filename path.
// typ describes the database dialect and may be left empty if no
// dialect-specific migration is needed.
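// A hypothetical call, assuming an embedded FS named templates that
// contains a file at 45/postgres/01_add_column.sql:
//
//	stmt, err := readStmt(templates, "45", "postgres", "01_add_column.sql")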
func readStmt(fs embed.FS, folder, typ, filename string) (string, error) {
stmt, err := fs.ReadFile(filepath.Join(folder, typ, filename))
return string(stmt), err
}
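// statement pairs a migration file name with the SQL query read from it.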
type statement struct {
file string
query string
}
// readStatements reads all files from the embedded FS,
// under the folder/typ path.
// typ describes the database dialect and may be left empty if no
// dialect-specific migration is needed.
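// A hypothetical call, assuming the same embedded templates FS with
// multiple files below 45/postgres:
//
//	stmts, err := readStatements(templates, "45", "postgres")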
func readStatements(fs embed.FS, folder, typ string) ([]statement, error) {
basePath := filepath.Join(folder, typ)
dir, err := fs.ReadDir(basePath)
if err != nil {
return nil, err
}
statements := make([]statement, len(dir))
for i, file := range dir {
statements[i].file = file.Name()
statements[i].query, err = readStmt(fs, folder, typ, file.Name())
if err != nil {
return nil, err
}
}
return statements, nil
}
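// initProjections starts the query, admin, auth, and notification handlers
// and runs every registered projection once via the migration mechanism.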
func initProjections(
ctx context.Context,
eventstoreClient *eventstore.Eventstore,
eventstoreV4 *es_v4.EventStore,
queryDBClient,
projectionDBClient *database.DB,
masterKey string,
config *Config,
) {
logging.Info("init-projections is currently in beta")
keyStorage, err := cryptoDB.NewKeyStorage(queryDBClient, masterKey)
logging.OnError(err).Fatal("unable to start key storage")
keys, err := encryption.EnsureEncryptionKeys(ctx, config.EncryptionKeys, keyStorage)
logging.OnError(err).Fatal("unable to ensure encryption keys")
err = projection.Create(
ctx,
queryDBClient,
eventstoreClient,
projection.Config{
RetryFailedAfter: config.InitProjections.RetryFailedAfter,
MaxFailureCount: config.InitProjections.MaxFailureCount,
BulkLimit: config.InitProjections.BulkLimit,
},
keys.OIDC,
keys.SAML,
config.SystemAPIUsers,
)
logging.OnError(err).Fatal("unable to start projections")
for _, p := range projection.Projections() {
err := migration.Migrate(ctx, eventstoreClient, p)
logging.WithFields("name", p.String()).OnError(err).Fatal("migration failed")
}
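// The admin handlers additionally require the static asset storage.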
staticStorage, err := config.AssetStorage.NewStorage(queryDBClient.DB)
logging.OnError(err).Fatal("unable to start asset storage")
adminView, err := admin_view.StartView(queryDBClient)
logging.OnError(err).Fatal("unable to start admin view")
admin_handler.Register(ctx,
admin_handler.Config{
Client: queryDBClient,
Eventstore: eventstoreClient,
BulkLimit: config.InitProjections.BulkLimit,
FailureCountUntilSkip: uint64(config.InitProjections.MaxFailureCount),
},
adminView,
staticStorage,
)
for _, p := range admin_handler.Projections() {
err := migration.Migrate(ctx, eventstoreClient, p)
logging.WithFields("name", p.String()).OnError(err).Fatal("migration failed")
}
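// Starting the queries requires the caches and a session token verifier.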
sessionTokenVerifier := internal_authz.SessionTokenVerifier(keys.OIDC)
cacheConnectors, err := connector.StartConnectors(config.Caches, queryDBClient)
logging.OnError(err).Fatal("unable to start caches")
queries, err := query.StartQueries(
ctx,
eventstoreClient,
eventstoreV4.Querier,
queryDBClient,
projectionDBClient,
cacheConnectors,
config.Projections,
config.SystemDefaults,
keys.IDPConfig,
keys.OTP,
keys.OIDC,
keys.SAML,
keys.Target,
config.InternalAuthZ.RolePermissionMappings,
sessionTokenVerifier,
func(q *query.Queries) domain.PermissionCheck {
return func(ctx context.Context, permission, orgID, resourceID string) (err error) {
return internal_authz.CheckPermission(ctx, &authz_es.UserMembershipRepo{Queries: q}, config.InternalAuthZ.RolePermissionMappings, permission, orgID, resourceID)
}
},
0, // not needed for projections
nil, // not needed for projections
false,
)
logging.OnError(err).Fatal("unable to start queries")
authView, err := auth_view.StartView(queryDBClient, keys.OIDC, queries, eventstoreClient)
logging.OnError(err).Fatal("unable to start admin view")
auth_handler.Register(ctx,
auth_handler.Config{
Client: queryDBClient,
Eventstore: eventstoreClient,
BulkLimit: config.InitProjections.BulkLimit,
FailureCountUntilSkip: uint64(config.InitProjections.MaxFailureCount),
},
authView,
queries,
)
for _, p := range auth_handler.Projections() {
err := migration.Migrate(ctx, eventstoreClient, p)
logging.WithFields("name", p.String()).OnError(err).Fatal("migration failed")
}
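// The permission check for the command side is backed by the authz repository.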
authZRepo, err := authz.Start(queries, eventstoreClient, queryDBClient, keys.OIDC, config.ExternalSecure)
logging.OnError(err).Fatal("unable to start authz repo")
permissionCheck := func(ctx context.Context, permission, orgID, resourceID string) (err error) {
return internal_authz.CheckPermission(ctx, authZRepo, config.InternalAuthZ.RolePermissionMappings, permission, orgID, resourceID)
}
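// Commands are needed to register the notification handlers below.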
commands, err := command.StartCommands(ctx,
eventstoreClient,
cacheConnectors,
config.SystemDefaults,
config.InternalAuthZ.RolePermissionMappings,
staticStorage,
&webauthn.Config{
DisplayName: config.WebAuthNName,
ExternalSecure: config.ExternalSecure,
},
config.ExternalDomain,
config.ExternalSecure,
config.ExternalPort,
keys.IDPConfig,
keys.OTP,
keys.SMTP,
keys.SMS,
keys.User,
keys.DomainVerification,
keys.OIDC,
keys.SAML,
keys.Target,
&http.Client{},
permissionCheck,
sessionTokenVerifier,
config.OIDC.DefaultAccessTokenLifetime,
config.OIDC.DefaultRefreshTokenExpiration,
config.OIDC.DefaultRefreshTokenIdleExpiration,
config.DefaultInstance.SecretGenerators,
)
logging.OnError(err).Fatal("unable to start commands")
notify_handler.Register(
ctx,
config.Projections.Customizations["notifications"],
config.Projections.Customizations["notificationsquotas"],
config.Projections.Customizations["backchannel"],
config.Projections.Customizations["telemetry"],
config.Notifications,
*config.Telemetry,
config.ExternalDomain,
config.ExternalPort,
config.ExternalSecure,
commands,
queries,
eventstoreClient,
config.Login.DefaultOTPEmailURLV2,
config.SystemDefaults.Notifications.FileSystemPath,
keys.User,
keys.SMTP,
keys.SMS,
keys.OIDC,
config.OIDC.DefaultBackChannelLogoutLifetime,
queryDBClient,
)
for _, p := range notify_handler.Projections() {
err := migration.Migrate(ctx, eventstoreClient, p)
logging.WithFields("name", p.String()).OnError(err).Fatal("migration failed")
}
}