feat: jobs for projection tables (#1730)

* job queue

* wg improvements

* start handler

* statement

* statements

* improve handler

* improve statement

* statement in separate file

* move handlers

* move query/old to query

* handler

* read models

* bulk works

* cleanup

* contrib

* rename readmodel to projection

* rename read_models schema to projections

* rename read_models schema to projections

* search query as func,
bulk iterates as long as there are new events

* add event sequence less query

* update: check for events between the current sequence and the sequence of the first statement if its previous sequence is 0

* cleanup crdb projection

* refactor projection handler

* start with testing

* tests for handler

* remove todo

* refactor statement: remove table name,
add tests

* improve projection handler shutdown,
no savepoint if noop stmt,
tests for stmt handler

* tests

* start failed events

* separate branch for contrib

* move statement constructors to crdb pkg

* correct import

* Subscribe for eventtypes (#1800)

* fix: is default (#1737)

* fix: use email as username on global org (#1738)

* fix: use email as username on global org

* Update user_human.go

* Update register_handler.go

* chore(deps): update docusaurus (#1739)

* chore: remove PAT and use GH Token (#1716)

* chore: remove PAT and use GH Token

* fix env

* fix env

* fix env

* md lint

* trigger ci

* change user

* fix GH bug

* replace login part

* chore: add GH Token to sem rel (#1746)

* chore: add GH Token to sem rel

* try branch

* add GH Token

* remove test branch again

* docs: changes acme to acme-caos (#1744)

* changes acme to acme-caos

* Apply suggestions from code review

Co-authored-by: Florian Forster <florian@caos.ch>

Co-authored-by: Maximilian Panne <maximilian.panne@gmail.com>
Co-authored-by: Florian Forster <florian@caos.ch>

* feat: add additional origins on applications (#1691)

* feat: add additional origins on applications

* app additional redirects

* chore(deps-dev): bump @angular/cli from 11.2.8 to 11.2.11 in /console (#1706)

* fix: show org with regex (#1688)

* fix: flag mapping (#1699)

* chore(deps-dev): bump @angular/cli from 11.2.8 to 11.2.11 in /console

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.2.8 to 11.2.11.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.2.8...v11.2.11)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump stylelint from 13.10.0 to 13.13.1 in /console (#1703)

* fix: show org with regex (#1688)

* fix: flag mapping (#1699)

* chore(deps-dev): bump stylelint from 13.10.0 to 13.13.1 in /console

Bumps [stylelint](https://github.com/stylelint/stylelint) from 13.10.0 to 13.13.1.
- [Release notes](https://github.com/stylelint/stylelint/releases)
- [Changelog](https://github.com/stylelint/stylelint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/stylelint/stylelint/compare/13.10.0...13.13.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @types/node from 14.14.37 to 15.0.1 in /console (#1702)

* fix: show org with regex (#1688)

* fix: flag mapping (#1699)

* chore(deps-dev): bump @types/node from 14.14.37 to 15.0.1 in /console

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.37 to 15.0.1.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump ts-protoc-gen from 0.14.0 to 0.15.0 in /console (#1701)

* fix: show org with regex (#1688)

* fix: flag mapping (#1699)

* chore(deps): bump ts-protoc-gen from 0.14.0 to 0.15.0 in /console

Bumps [ts-protoc-gen](https://github.com/improbable-eng/ts-protoc-gen) from 0.14.0 to 0.15.0.
- [Release notes](https://github.com/improbable-eng/ts-protoc-gen/releases)
- [Changelog](https://github.com/improbable-eng/ts-protoc-gen/blob/master/CHANGELOG.md)
- [Commits](https://github.com/improbable-eng/ts-protoc-gen/compare/0.14.0...0.15.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @types/jasmine from 3.6.9 to 3.6.10 in /console (#1682)

Bumps [@types/jasmine](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/jasmine) from 3.6.9 to 3.6.10.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/jasmine)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump @types/google-protobuf in /console (#1681)

Bumps [@types/google-protobuf](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/google-protobuf) from 3.7.4 to 3.15.2.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/google-protobuf)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump grpc from 1.24.5 to 1.24.7 in /console (#1666)

Bumps [grpc](https://github.com/grpc/grpc-node) from 1.24.5 to 1.24.7.
- [Release notes](https://github.com/grpc/grpc-node/releases)
- [Commits](https://github.com/grpc/grpc-node/compare/grpc@1.24.5...grpc@1.24.7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* lock

* chore(deps-dev): bump @angular/language-service from 11.2.9 to 11.2.12 in /console (#1704)

* fix: show org with regex (#1688)

* fix: flag mapping (#1699)

* chore(deps-dev): bump @angular/language-service in /console

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.2.9 to 11.2.12.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.2.12/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Silvan <silvan.reusser@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* package lock

* downgrade grpc

* downgrade protobuf types

* revert npm packs 🥸

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Silvan <silvan.reusser@gmail.com>

* docs: update run and start section texts (#1745)

* update run and start section texts

* adds showcase

Co-authored-by: Maximilian Panne <maximilian.panne@gmail.com>

* fix: additional origin list (#1753)

* fix: handle api configs in authz handler (#1755)

* fix(console): add model for api keys, fix toast, binding (#1757)

* fix: add model for api keys, fix toast, binding

* show api clientid

* fix: missing patchvalue (#1758)

* feat: refresh token (#1728)

* begin refresh tokens

* refresh tokens

* list and revoke refresh tokens

* handle remove

* tests for refresh tokens

* uniqueness and default expiration

* rename oidc token methods

* cleanup

* migration version

* Update internal/static/i18n/en.yaml

Co-authored-by: Fabi <38692350+fgerschwiler@users.noreply.github.com>

* fixes

* feat: update oidc pkg for refresh tokens

Co-authored-by: Fabi <38692350+fgerschwiler@users.noreply.github.com>

* fix: correct json name of clientId in key.json (#1760)

* fix: migration version (#1767)

* start subscription

* eventtypes

* fix(login): links (#1778)

* fix(login): href for help

* fix(login): correct link to tos

* fix: access tokens for service users and refresh token infos (#1779)

* fix: access token for service user

* handle info from refresh request

* uniqueness

* postpone access token uniqueness change

* chore(coc): recommend code of conduct (#1782)

* subscribe for events

* feat(console): refresh toggle out of granttype context (#1785)

* refresh toggle

* disable if not code flow, lint

* lint

* fix: change oidc config order

* accept refresh option within flow

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

* fix: refresh token activation (#1795)

* fix: oidc grant type check

* docs: add offline_access scope

* docs: update refresh token status in supported grant types

* fix: update oidc pkg

* fix: check refresh token grant type (#1796)

* configuration structs

* org admins

* failed events

* fixes

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Livio Amstutz <livio.a@gmail.com>
Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: mffap <mpa@caos.ch>
Co-authored-by: Maximilian Panne <maximilian.panne@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Fabi <38692350+fgerschwiler@users.noreply.github.com>

* remove comment

* aggregate reducer

* remove eventtypes

* add protoc-get-validate to mod

* fix translation

* upsert

* add gender on org admins,
allow to retry failed stmts after configurable time

* remove if

* sub queries

* fix: tests

* add builder to tests

* new search query

* rename searchquerybuilder to builder

* remove comment from code

* test with multiple queries

* add filters test

* current sequences

* make org and org_admins work again

* add aggregate type to current sequence

* fix(contribute): listing

* add validate module

* fix: search queries

* feat(eventstore): previous aggregate root sequence (#1810)

* feat(eventstore): previous aggregate root sequence

* fix tests

* fix: eventstore v1 test

* add col to all mocked rows

* next try

* fix mig

* rename aggregate root to aggregate type

* update comment

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

* small refactorings

* allow update multiple current sequences

* unique log id

* fix migrations

* rename org admin to org owner

* improve error handling and logging

* fix(migration): optimize prev agg root seq

* fix: projection handler test

* fix: sub queries

* small fixes

* additional event types

* correct org owner projection

* fix primary key

* feat(eventstore): jobs for projections (#2026)

* fix: template names in login (#1974)

* fix: template names in login

* fix: error.html

* fix: check for features on mgmt only (#1976)

* fix: add sentry in ui, http and projection handlers (#1977)

* fix: add sentry in ui, http and projection handlers

* fix test

* fix(eventstore): sub queries (#1805)

* sub queries

* fix: tests

* add builder to tests

* new search query

* rename searchquerybuilder to builder

* remove comment from code

* test with multiple queries

* add filters test

* fix(contribute): listing

* add validate module

* fix: search queries

* remove unused event type in query

* ignore query if error in marshal

* go mod tidy

* update privacy policy query

* update queries

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

* feat: Extend oidc idp with oauth endpoints (#1980)

* feat: add oauth attributes to oidc idp configuration

* feat: return idpconfig id on create idp

* feat: tests

* feat: descriptions

* feat: docs

* feat: tests

* docs: update to beta 3 (#1984)

* fix: role assertion (#1986)

* fix: enum to display access token role assertion

* improve assertion descriptions

* fix nil pointer

* docs: eventstore (#1982)

* docs: eventstore

* Apply suggestions from code review

Co-authored-by: Florian Forster <florian@caos.ch>

Co-authored-by: Florian Forster <florian@caos.ch>

* fix(sentry): trigger sentry release (#1989)

* feat(send sentry release): send sentry release

* fix(moved step and added releasetag): moved step and added releasetag

* fix: set version for sentry release (#1990)

* feat(send sentry release): send sentry release

* fix(moved step and added releasetag): moved step and added releasetag

* fix(corrected var name): corrected var name

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

* fix: log error reason on terminate session (#1973)

* fix: return default language file, if requested lang does not exist for default login texts (#1988)

* fix: return default language file, if requested lang doesn't exist

* feat: read default translation file

* feat: docs

* fix: race condition in auth request unmarshalling (#1993)

* feat: handle ui_locales in login (#1994)

* fix: handle ui_locales in login

* move supportedlanguage func into i18n package

* update oidc pkg

* fix: handle closed channels on unsubscribe (#1995)

* fix: give restore more time (#1997)

* fix: translation file read (#2009)

* feat: translation file read

* feat: readme

* fix: enable idp add button for iam users (#2010)

* fix: filter event_data (#2011)

* feat: Custom message files (#1992)

* feat: add get custom message text to admin api

* feat: read custom message texts from files

* feat: get languages in apis

* feat: get languages in apis

* feat: get languages in apis

* feat: pr feedback

* feat: docs

* feat: merge main

* fix: sms notification (#2013)

* fix: phone verifications

* feat: fix password reset as sms

* fix: phone verification

* fix: grpc status in sentry and validation interceptors (#2012)

* fix: remove oauth endpoints from oidc config proto (#2014)

* try with view

* fix(console): disable sw (#2021)

* fix: disable sw

* angular.json disable sw

* project projections

* fix typos

* customize projections

* customizable projections,
add change date to projects

Co-authored-by: Livio Amstutz <livio.a@gmail.com>
Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Fabi <38692350+fgerschwiler@users.noreply.github.com>
Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: mffap <mpa@caos.ch>
Co-authored-by: Christian Jakob <47860090+thesephirot@users.noreply.github.com>
Co-authored-by: Elio Bischof <eliobischof@gmail.com>

* env file

* typo

* correct users

* correct migration

* fix: merge fail

* fix test

* fix(tests): unordered matcher

* improve currentSequenceMatcher

* correct certs

* correct certs

* add zitadel database on database list

* refactor switch in match

* enable all handlers

* Delete io.env

* cleanup

* add handlers

* rename view to projection

* rename view to projection

* fix type typo

* remove unnecessary logs

* refactor stmts

* simplify interval calculation

* fix tests

* fix unlock test

* fix migration

* migs

* fix(operator): update cockroach and flyway versions (#2138)

* chore(deps): bump k8s.io/apiextensions-apiserver from 0.19.2 to 0.21.3

Bumps [k8s.io/apiextensions-apiserver](https://github.com/kubernetes/apiextensions-apiserver) from 0.19.2 to 0.21.3.
- [Release notes](https://github.com/kubernetes/apiextensions-apiserver/releases)
- [Commits](https://github.com/kubernetes/apiextensions-apiserver/compare/v0.19.2...v0.21.3)

---
updated-dependencies:
- dependency-name: k8s.io/apiextensions-apiserver
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* chore(deps): bump google.golang.org/api from 0.34.0 to 0.52.0

Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.34.0 to 0.52.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/master/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.34.0...v0.52.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* start update dependencies

* update mods and otlp

* fix(build): update to go 1.16

* old version for k8s mods

* update k8s versions

* update orbos

* fix(operator): update cockroach and flyway version

* Update images.go

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Benz <stefan@caos.ch>

* fix import

* fix typo

* fix(migration): add org projection

* fix(projection): correct table for org events in org owners

* better insert stmt

* fix typo

* fix typo

* set max connection lifetime

* set max conns and conn lifetime in eventstore v1

* configure sql connection settings

* add mig for agg type index

* fix replace tab in yaml

* check requeue at least 500ms

* split column in column and condition

* remove useless comment

* mig versions

* fix migs

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: Livio Amstutz <livio.a@gmail.com>
Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: mffap <mpa@caos.ch>
Co-authored-by: Maximilian Panne <maximilian.panne@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Fabi <38692350+fgerschwiler@users.noreply.github.com>
Co-authored-by: Christian Jakob <47860090+thesephirot@users.noreply.github.com>
Co-authored-by: Elio Bischof <eliobischof@gmail.com>
Co-authored-by: Stefan Benz <stefan@caos.ch>
Commit 37ee5b4bab (parent 9ba8184829), authored by Silvan and committed via GitHub on 2021-08-19 08:31:56 +02:00.
69 changed files with 6449 additions and 547 deletions.
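The core pattern introduced here: projection handlers subscribe to aggregate types, reduce each event into precomputed SQL statements ("jobs"), and a statement handler applies those statements in bulk to the projection tables while tracking a current sequence per aggregate type (see the crdb handler diffs below). A minimal, self-contained sketch of that data flow with simplified stand-in types, not the real zitadel API (the field name EventRedusers is spelled as in the diff):

package main

import "fmt"

// Event and Statement are simplified stand-ins for the eventstore/handler types.
type Event struct {
	AggregateType    string
	Type             string
	Sequence         uint64
	PreviousSequence uint64
}

// Statement carries a ready-to-execute mutation plus the sequence bookkeeping
// the statement handler needs to decide whether it may be applied.
type Statement struct {
	AggregateType    string
	Sequence         uint64
	PreviousSequence uint64
	SQL              string
	Args             []interface{}
}

type Reduce func(Event) ([]Statement, error)

type EventReducer struct {
	Event  string
	Reduce Reduce
}

type AggregateReducer struct {
	Aggregate     string
	EventRedusers []EventReducer // field name as in the diff below
}

// reduceOrgAdded: one event in, one upsert into a projection table out.
// Table and column names are illustrative only.
func reduceOrgAdded(e Event) ([]Statement, error) {
	return []Statement{{
		AggregateType:    e.AggregateType,
		Sequence:         e.Sequence,
		PreviousSequence: e.PreviousSequence,
		SQL:              "UPSERT INTO projections.orgs (id, change_date) VALUES ($1, $2)",
		Args:             []interface{}{"org-id", "2021-08-19"},
	}}, nil
}

func main() {
	reducers := []AggregateReducer{{
		Aggregate:     "org",
		EventRedusers: []EventReducer{{Event: "org.added", Reduce: reduceOrgAdded}},
	}}
	stmts, _ := reducers[0].EventRedusers[0].Reduce(Event{
		AggregateType: "org", Type: "org.added", Sequence: 42,
	})
	fmt.Printf("built %d statement(s) for aggregate %q at seq %d\n",
		len(stmts), stmts[0].AggregateType, stmts[0].Sequence)
}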


@@ -32,6 +32,7 @@ import (
mgmt_es "github.com/caos/zitadel/internal/management/repository/eventsourcing"
"github.com/caos/zitadel/internal/notification"
"github.com/caos/zitadel/internal/query"
"github.com/caos/zitadel/internal/query/projection"
"github.com/caos/zitadel/internal/setup"
"github.com/caos/zitadel/internal/static"
static_config "github.com/caos/zitadel/internal/static/config"
@@ -57,6 +58,7 @@ type Config struct {
EventstoreBase types.SQLBase
Commands command.Config
Queries query.Config
Projections projection.Config
AuthZ authz.Config
Auth auth_es.Config
@@ -148,17 +150,16 @@ func startZitadel(configPaths []string) {
if err != nil {
logging.Log("MAIN-Ddv21").OnError(err).Fatal("cannot start eventstore for queries")
}
queries, err := query.StartQueries(esQueries, conf.SystemDefaults)
if err != nil {
logging.Log("ZITAD-WpeJY").OnError(err).Fatal("cannot start queries")
}
queries, err := query.StartQueries(ctx, esQueries, conf.Projections, conf.SystemDefaults)
logging.Log("MAIN-WpeJY").OnError(err).Fatal("cannot start queries")
authZRepo, err := authz.Start(ctx, conf.AuthZ, conf.InternalAuthZ, conf.SystemDefaults, queries)
logging.Log("MAIN-s9KOw").OnError(err).Fatal("error starting authz repo")
verifier := internal_authz.Start(authZRepo)
esCommands, err := eventstore.StartWithUser(conf.EventstoreBase, conf.Commands.Eventstore)
if err != nil {
logging.Log("ZITAD-iRCMm").OnError(err).Fatal("cannot start eventstore for commands")
}
store, err := conf.AssetStorage.Config.NewStorage()
logging.Log("ZITAD-Bfhe2").OnError(err).Fatal("Unable to start asset storage")
@@ -166,12 +167,14 @@ func startZitadel(configPaths []string) {
if err != nil {
logging.Log("ZITAD-bmNiJ").OnError(err).Fatal("cannot start commands")
}
var authRepo *auth_es.EsRepository
if *authEnabled || *oidcEnabled || *loginEnabled {
authRepo, err = auth_es.Start(conf.Auth, conf.InternalAuthZ, conf.SystemDefaults, commands, queries, authZRepo, esQueries)
logging.Log("MAIN-9oRw6").OnError(err).Fatal("error starting auth repo")
}
verifier := internal_authz.Start(authZRepo)
startAPI(ctx, conf, verifier, authZRepo, authRepo, commands, queries, store)
startUI(ctx, conf, authRepo, commands, queries, store)


@@ -31,15 +31,25 @@ EventstoreBase:
Host: $ZITADEL_EVENTSTORE_HOST
Port: $ZITADEL_EVENTSTORE_PORT
Database: 'eventstore'
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
Cert: $CR_EVENTSTORE_CERT
Key: $CR_EVENTSTORE_KEY
Commands:
Eventstore:
User: 'eventstore'
Password: $CR_EVENTSTORE_PASSWORD
MaxOpenConns: 5
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
Cert: $CR_EVENTSTORE_CERT
Key: $CR_EVENTSTORE_KEY
@@ -47,10 +57,39 @@ Queries:
Eventstore:
User: 'queries'
Password: $CR_QUERIES_PASSWORD
MaxOpenConns: 2
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
Cert: $CR_QUERIES_CERT
Key: $CR_QUERIES_KEY
Projections:
RequeueEvery: 10s
RetryFailedAfter: 1s
MaxFailureCount: 5
BulkLimit: 200
CRDB:
Host: $ZITADEL_EVENTSTORE_HOST
Port: $ZITADEL_EVENTSTORE_PORT
User: 'queries'
Database: 'zitadel'
Schema: 'projections'
Password: $CR_QUERIES_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
Cert: $CR_QUERIES_CERT
Key: $CR_QUERIES_KEY
Customizations:
projects:
BulkLimit: 2000
AuthZ:
Repository:
Eventstore:
@@ -62,6 +101,9 @@ AuthZ:
User: 'authz'
Database: 'eventstore'
Password: $CR_AUTHZ_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -77,6 +119,9 @@ AuthZ:
User: 'authz'
Database: 'authz'
Password: $CR_AUTHZ_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -100,6 +145,9 @@ Auth:
User: 'auth'
Database: 'eventstore'
Password: $CR_AUTH_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -116,6 +164,9 @@ Auth:
User: 'auth'
Database: 'auth'
Password: $CR_AUTH_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -127,6 +178,9 @@ Auth:
User: 'auth'
Database: 'auth'
Password: $CR_AUTH_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -150,6 +204,9 @@ Admin:
User: 'adminapi'
Database: 'eventstore'
Password: $CR_ADMINAPI_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -165,6 +222,9 @@ Admin:
User: 'adminapi'
Database: 'adminapi'
Password: $CR_ADMINAPI_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -188,6 +248,9 @@ Mgmt:
User: 'management'
Database: 'eventstore'
Password: $CR_MANAGEMENT_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -203,6 +266,9 @@ Mgmt:
User: 'management'
Database: 'management'
Password: $CR_MANAGEMENT_PASSWORD
MaxOpenConns: 3
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -312,6 +378,9 @@ Notification:
User: 'notification'
Database: 'eventstore'
Password: $CR_NOTIFICATION_PASSWORD
MaxOpenConns: 2
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
@@ -327,6 +396,9 @@ Notification:
User: 'notification'
Database: 'notification'
Password: $CR_NOTIFICATION_PASSWORD
MaxOpenConns: 2
MaxConnLifetime: 5m
MaxConnIdleTime: 5m
SSL:
Mode: $CR_SSL_MODE
RootCert: $CR_ROOT_CERT
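The new Projections block above (RequeueEvery, RetryFailedAfter, MaxFailureCount, BulkLimit, CRDB, Customizations) is loaded into a config struct that startZitadel passes to query.StartQueries. A rough sketch of the shape that YAML implies, using plain Go types as assumptions; the real struct lives in internal/query/projection and may differ in detail:

package projection

import "time"

// Config mirrors the Projections YAML block above; field types are assumptions
// for illustration (zitadel wraps them in its own types.Duration/types.SQL).
type Config struct {
	RequeueEvery     time.Duration           // how often handlers poll for missed events (10s above)
	RetryFailedAfter time.Duration           // delay before a failed statement is retried (1s above)
	MaxFailureCount  uint                    // skip a statement after this many failures (5 above)
	BulkLimit        uint64                  // max events fetched per iteration (200 above)
	CRDB             CRDBConfig              // connection to the zitadel database, schema "projections"
	Customizations   map[string]CustomConfig // per-projection overrides, e.g. "projects": {BulkLimit: 2000}
}

// CRDBConfig is a simplified stand-in for the SQL connection settings.
type CRDBConfig struct {
	Host, Port, User, Database, Schema, Password string
	MaxOpenConns                                 uint32
	MaxConnLifetime, MaxConnIdleTime             time.Duration
}

// CustomConfig holds the subset of settings a single projection may override
// (pointer fields are an assumption: nil means "use the default above").
type CustomConfig struct {
	RequeueEvery     *time.Duration
	RetryFailedAfter *time.Duration
	MaxFailureCount  *uint
	BulkLimit        *uint64
}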


@@ -4,7 +4,7 @@ import (
"time"
"github.com/caos/zitadel/internal/command"
"github.com/caos/zitadel/internal/eventstore/v1"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/static"
"github.com/caos/zitadel/internal/admin/repository/eventsourcing/view"


@@ -4,31 +4,28 @@ import (
"context"
"strings"
"github.com/caos/zitadel/internal/eventstore/v1"
iam_model "github.com/caos/zitadel/internal/iam/model"
iam_view "github.com/caos/zitadel/internal/iam/repository/view"
es_sdk "github.com/caos/zitadel/internal/eventstore/v1/sdk"
org_view "github.com/caos/zitadel/internal/org/repository/view"
proj_view "github.com/caos/zitadel/internal/project/repository/view"
"github.com/caos/zitadel/internal/user/repository/view"
"github.com/caos/zitadel/internal/user/repository/view/model"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/domain"
"github.com/caos/zitadel/internal/errors"
caos_errs "github.com/caos/zitadel/internal/errors"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
es_models "github.com/caos/zitadel/internal/eventstore/v1/models"
"github.com/caos/zitadel/internal/eventstore/v1/query"
es_sdk "github.com/caos/zitadel/internal/eventstore/v1/sdk"
"github.com/caos/zitadel/internal/eventstore/v1/spooler"
iam_model "github.com/caos/zitadel/internal/iam/model"
iam_es_model "github.com/caos/zitadel/internal/iam/repository/eventsourcing/model"
iam_view "github.com/caos/zitadel/internal/iam/repository/view"
org_model "github.com/caos/zitadel/internal/org/model"
org_es_model "github.com/caos/zitadel/internal/org/repository/eventsourcing/model"
org_view "github.com/caos/zitadel/internal/org/repository/view"
proj_model "github.com/caos/zitadel/internal/project/model"
proj_es_model "github.com/caos/zitadel/internal/project/repository/eventsourcing/model"
proj_view "github.com/caos/zitadel/internal/project/repository/view"
usr_model "github.com/caos/zitadel/internal/user/model"
usr_es_model "github.com/caos/zitadel/internal/user/repository/eventsourcing/model"
"github.com/caos/zitadel/internal/user/repository/view"
"github.com/caos/zitadel/internal/user/repository/view/model"
grant_es_model "github.com/caos/zitadel/internal/usergrant/repository/eventsourcing/model"
view_model "github.com/caos/zitadel/internal/usergrant/repository/view/model"
)
@@ -375,7 +372,7 @@ func (u *UserGrant) setIamProjectID() error {
}
if iam.SetUpDone < domain.StepCount-1 {
return caos_errs.ThrowPreconditionFailed(nil, "HANDL-s5DTs", "Setup not done")
return errors.ThrowPreconditionFailed(nil, "HANDL-s5DTs", "Setup not done")
}
u.iamProjectID = iam.IAMProjectID
return nil
@@ -442,7 +439,7 @@ func (u *UserGrant) OnSuccess() error {
func (u *UserGrant) getUserByID(userID string) (*model.UserView, error) {
user, usrErr := u.view.UserByID(userID)
if usrErr != nil && !caos_errs.IsNotFound(usrErr) {
if usrErr != nil && !errors.IsNotFound(usrErr) {
return nil, usrErr
}
if user == nil {
@@ -459,7 +456,7 @@ func (u *UserGrant) getUserByID(userID string) (*model.UserView, error) {
}
}
if userCopy.State == int32(usr_model.UserStateDeleted) {
return nil, caos_errs.ThrowNotFound(nil, "HANDLER-m9dos", "Errors.User.NotFound")
return nil, errors.ThrowNotFound(nil, "HANDLER-m9dos", "Errors.User.NotFound")
}
return &userCopy, nil
}


@@ -3,11 +3,10 @@ package handler
import (
"time"
"github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/authz/repository/eventsourcing/view"
sd "github.com/caos/zitadel/internal/config/systemdefaults"
"github.com/caos/zitadel/internal/config/types"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/eventstore/v1/query"
)


@@ -47,7 +47,7 @@ func eventPusherToEvents(eventsPushes ...eventstore.EventPusher) []*repository.E
}
events[i] = &repository.Event{
AggregateID: event.Aggregate().ID,
AggregateType: repository.AggregateType(event.Aggregate().Typ),
AggregateType: repository.AggregateType(event.Aggregate().Type),
ResourceOwner: event.Aggregate().ResourceOwner,
EditorService: event.EditorService(),
EditorUser: event.EditorUser(),
@@ -140,7 +140,8 @@ func eventFromEventPusher(event eventstore.EventPusher) *repository.Event {
return &repository.Event{
ID: "",
Sequence: 0,
PreviousSequence: 0,
PreviousAggregateSequence: 0,
PreviousAggregateTypeSequence: 0,
CreationDate: time.Time{},
Type: repository.EventType(event.Type()),
Data: data,
@@ -148,7 +149,7 @@ func eventFromEventPusher(event eventstore.EventPusher) *repository.Event {
EditorUser: event.EditorUser(),
Version: repository.Version(event.Aggregate().Version),
AggregateID: event.Aggregate().ID,
AggregateType: repository.AggregateType(event.Aggregate().Typ),
AggregateType: repository.AggregateType(event.Aggregate().Type),
ResourceOwner: event.Aggregate().ResourceOwner,
}
}


@@ -8,8 +8,7 @@ type Duration struct {
time.Duration
}
func (d *Duration) UnmarshalText(data []byte) error {
var err error
func (d *Duration) UnmarshalText(data []byte) (err error) {
d.Duration, err = time.ParseDuration(string(data))
return err
}


@@ -19,13 +19,18 @@ type SQL struct {
User string
Password string
Database string
Schema string
SSL *ssl
MaxOpenConns uint32
MaxConnLifetime Duration
MaxConnIdleTime Duration
}
type SQLBase struct {
Host string
Port string
Database string
Schema string
SSL sslBase
}
@@ -86,8 +91,10 @@ func (s *SQL) Start() (*sql.DB, error) {
}
// as we open many sql clients we set the max
// open cons deep. now 3(maxconn) * 8(clients) = max 24 conns per pod
client.SetMaxOpenConns(3)
client.SetMaxIdleConns(3)
client.SetMaxOpenConns(int(s.MaxOpenConns))
client.SetConnMaxLifetime(s.MaxConnLifetime.Duration)
client.SetConnMaxIdleTime(s.MaxConnIdleTime.Duration)
return client, nil
}
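The hunk above replaces the hard-coded pool of three connections with the MaxOpenConns, MaxConnLifetime and MaxConnIdleTime values from the configuration shown earlier. A minimal standalone example of the same database/sql calls; the driver and DSN are placeholders, not zitadel's actual wiring:

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // placeholder driver; any database/sql driver exposes the same pool knobs
)

func main() {
	// DSN is a placeholder, not zitadel's real connection string.
	db, err := sql.Open("postgres", "postgresql://queries:secret@localhost:26257/eventstore?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	maxConnLifetime, _ := time.ParseDuration("5m") // same parsing as types.Duration.UnmarshalText above

	db.SetMaxOpenConns(3)                  // MaxOpenConns: cap concurrent connections per client
	db.SetConnMaxLifetime(maxConnLifetime) // MaxConnLifetime: recycle connections after 5 minutes
	db.SetConnMaxIdleTime(5 * time.Minute) // MaxConnIdleTime: close idle connections after 5 minutes
}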


@@ -19,7 +19,7 @@ func NewAggregate(
) *Aggregate {
a := &Aggregate{
ID: id,
Typ: typ,
Type: typ,
ResourceOwner: authz.GetCtxData(ctx).OrgID,
Version: version,
}
@@ -47,7 +47,7 @@ func AggregateFromWriteModel(
) *Aggregate {
return &Aggregate{
ID: wm.AggregateID,
Typ: typ,
Type: typ,
ResourceOwner: wm.ResourceOwner,
Version: version,
}
@@ -57,8 +57,8 @@ func AggregateFromWriteModel(
type Aggregate struct {
//ID is the unique identifier of this aggregate
ID string `json:"-"`
//Typ is the name of the aggregate.
Typ AggregateType `json:"-"`
//Type is the name of the aggregate.
Type AggregateType `json:"-"`
//ResourceOwner is the org this aggregate belongs to
ResourceOwner string `json:"-"`
//Version is the semver this aggregate represents


@@ -36,6 +36,10 @@ type EventReader interface {
Sequence() uint64
CreationDate() time.Time
//PreviousAggregateSequence returns the previous sequence of the aggregate root (e.g. for org.42508134)
PreviousAggregateSequence() uint64
//PreviousAggregateTypeSequence returns the previous sequence of the aggregate type (e.g. for org)
PreviousAggregateTypeSequence() uint64
//DataAsBytes returns the payload of the event. It represent the changed fields by the event
DataAsBytes() []byte
}


@@ -11,16 +11,18 @@ import (
//BaseEvent represents the minimum metadata of an event
type BaseEvent struct {
EventType EventType
EventType EventType `json:"-"`
aggregate Aggregate
sequence uint64
creationDate time.Time
previousAggregateSequence uint64
previousAggregateTypeSequence uint64
//User is the user who created the event
//User who created the event
User string `json:"-"`
//Service is the service which created the event
//Service which created the event
Service string `json:"-"`
Data []byte `json:"-"`
}
@@ -60,18 +62,30 @@ func (e *BaseEvent) DataAsBytes() []byte {
return e.Data
}
//PreviousAggregateSequence implements EventReader
func (e *BaseEvent) PreviousAggregateSequence() uint64 {
return e.previousAggregateSequence
}
//PreviousAggregateTypeSequence implements EventReader
func (e *BaseEvent) PreviousAggregateTypeSequence() uint64 {
return e.previousAggregateTypeSequence
}
//BaseEventFromRepo maps a stored event to a BaseEvent
func BaseEventFromRepo(event *repository.Event) *BaseEvent {
return &BaseEvent{
aggregate: Aggregate{
ID: event.AggregateID,
Typ: AggregateType(event.AggregateType),
Type: AggregateType(event.AggregateType),
ResourceOwner: event.ResourceOwner,
Version: Version(event.Version),
},
EventType: EventType(event.Type),
creationDate: event.CreationDate,
sequence: event.Sequence,
previousAggregateSequence: event.PreviousAggregateSequence,
previousAggregateTypeSequence: event.PreviousAggregateTypeSequence,
Service: event.EditorService,
User: event.EditorUser,
Data: event.Data,
@@ -79,7 +93,7 @@ func BaseEventFromRepo(event *repository.Event) *BaseEvent {
}
//NewBaseEventForPush is the constructor for events which will be pushed into the eventstore
// the resource owner of the aggregate is only used if it's the first event of this aggregateroot
// the resource owner of the aggregate is only used if it's the first event of this aggregate type
// afterwards the resource owner of the first previous events is taken
func NewBaseEventForPush(ctx context.Context, aggregate *Aggregate, typ EventType) *BaseEvent {
return &BaseEvent{


@@ -66,7 +66,7 @@ func eventsToRepository(pushEvents []EventPusher) (events []*repository.Event, c
}
events[i] = &repository.Event{
AggregateID: event.Aggregate().ID,
AggregateType: repository.AggregateType(event.Aggregate().Typ),
AggregateType: repository.AggregateType(event.Aggregate().Type),
ResourceOwner: event.Aggregate().ResourceOwner,
EditorService: event.EditorService(),
EditorUser: event.EditorUser(),


@@ -1329,8 +1329,8 @@ func compareEvents(t *testing.T, want, got *repository.Event) {
if want.Version != got.Version {
t.Errorf("wrong version got %q want %q", got.Version, want.Version)
}
if want.PreviousSequence != got.PreviousSequence {
t.Errorf("wrong previous sequence got %d want %d", got.PreviousSequence, want.PreviousSequence)
if want.PreviousAggregateSequence != got.PreviousAggregateSequence {
t.Errorf("wrong previous sequence got %d want %d", got.PreviousAggregateSequence, want.PreviousAggregateSequence)
}
}


@@ -0,0 +1,71 @@
package crdb
import (
"database/sql"
"strconv"
"strings"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
)
const (
currentSequenceStmtFormat = `SELECT current_sequence, aggregate_type FROM %s WHERE projection_name = $1 FOR UPDATE`
updateCurrentSequencesStmtFormat = `UPSERT INTO %s (projection_name, aggregate_type, current_sequence, timestamp) VALUES `
)
type currentSequences map[eventstore.AggregateType]uint64
func (h *StatementHandler) currentSequences(query func(string, ...interface{}) (*sql.Rows, error)) (currentSequences, error) {
rows, err := query(h.currentSequenceStmt, h.ProjectionName)
if err != nil {
return nil, err
}
defer rows.Close()
sequences := make(currentSequences, len(h.aggregates))
for rows.Next() {
var (
aggregateType eventstore.AggregateType
sequence uint64
)
err = rows.Scan(&sequence, &aggregateType)
if err != nil {
return nil, errors.ThrowInternal(err, "CRDB-dbatK", "scan failed")
}
sequences[aggregateType] = sequence
}
if err = rows.Close(); err != nil {
return nil, errors.ThrowInternal(err, "CRDB-h5i5m", "close rows failed")
}
if err = rows.Err(); err != nil {
return nil, errors.ThrowInternal(err, "CRDB-O8zig", "errors in scanning rows")
}
return sequences, nil
}
func (h *StatementHandler) updateCurrentSequences(tx *sql.Tx, sequences currentSequences) error {
valueQueries := make([]string, 0, len(sequences))
valueCounter := 0
values := make([]interface{}, 0, len(sequences)*3)
for aggregate, sequence := range sequences {
valueQueries = append(valueQueries, "($"+strconv.Itoa(valueCounter+1)+", $"+strconv.Itoa(valueCounter+2)+", $"+strconv.Itoa(valueCounter+3)+", NOW())")
valueCounter += 3
values = append(values, h.ProjectionName, aggregate, sequence)
}
res, err := tx.Exec(h.updateSequencesBaseStmt+strings.Join(valueQueries, ", "), values...)
if err != nil {
return errors.ThrowInternal(err, "CRDB-TrH2Z", "unable to exec update sequence")
}
if rows, _ := res.RowsAffected(); rows < 1 {
return errSeqNotUpdated
}
return nil
}
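For reference, updateCurrentSequences builds one (projection_name, aggregate_type, current_sequence, NOW()) tuple per aggregate type. A tiny self-contained snippet that reproduces the placeholder construction and prints the resulting multi-row UPSERT; the table name projections.current_sequences is assumed for illustration:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Example current sequences for two aggregate types (map order is random).
	sequences := map[string]uint64{"org": 42, "project": 17}

	valueQueries := make([]string, 0, len(sequences))
	values := make([]interface{}, 0, len(sequences)*3)
	counter := 0
	for aggregate, sequence := range sequences {
		valueQueries = append(valueQueries,
			"($"+strconv.Itoa(counter+1)+", $"+strconv.Itoa(counter+2)+", $"+strconv.Itoa(counter+3)+", NOW())")
		counter += 3
		values = append(values, "my_projection", aggregate, sequence)
	}

	// The sequence table name below is illustrative only.
	stmt := "UPSERT INTO projections.current_sequences (projection_name, aggregate_type, current_sequence, timestamp) VALUES " +
		strings.Join(valueQueries, ", ")
	fmt.Println(stmt) // ... VALUES ($1, $2, $3, NOW()), ($4, $5, $6, NOW())
	fmt.Println(values)
}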


@@ -0,0 +1,306 @@
package crdb
import (
"database/sql"
"database/sql/driver"
"log"
"strings"
"time"
"github.com/DATA-DOG/go-sqlmock"
"github.com/caos/zitadel/internal/eventstore"
)
type mockExpectation func(sqlmock.Sqlmock)
func expectFailureCount(tableName string, projectionName string, failedSeq, failureCount uint64) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectQuery(`WITH failures AS \(SELECT failure_count FROM `+tableName+` WHERE projection_name = \$1 AND failed_sequence = \$2\) SELECT IF\(EXISTS\(SELECT failure_count FROM failures\), \(SELECT failure_count FROM failures\), 0\) AS failure_count`).
WithArgs(projectionName, failedSeq).
WillReturnRows(
sqlmock.NewRows([]string{"failure_count"}).
AddRow(failureCount),
)
}
}
func expectUpdateFailureCount(tableName string, projectionName string, seq, failureCount uint64) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec(`UPSERT INTO `+tableName+` \(projection_name, failed_sequence, failure_count, error\) VALUES \(\$1, \$2, \$3, \$4\)`).
WithArgs(projectionName, seq, failureCount, sqlmock.AnyArg()).WillReturnResult(sqlmock.NewResult(1, 1))
}
}
func expectCreate(projectionName string, columnNames, placeholders []string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
args := make([]driver.Value, len(columnNames))
for i := 0; i < len(columnNames); i++ {
args[i] = sqlmock.AnyArg()
placeholders[i] = `\` + placeholders[i]
}
m.ExpectExec("INSERT INTO " + projectionName + ` \(` + strings.Join(columnNames, ", ") + `\) VALUES \(` + strings.Join(placeholders, ", ") + `\)`).
WithArgs(args...).
WillReturnResult(sqlmock.NewResult(1, 1))
}
}
func expectCreateErr(projectionName string, columnNames, placeholders []string, err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
args := make([]driver.Value, len(columnNames))
for i := 0; i < len(columnNames); i++ {
args[i] = sqlmock.AnyArg()
placeholders[i] = `\` + placeholders[i]
}
m.ExpectExec("INSERT INTO " + projectionName + ` \(` + strings.Join(columnNames, ", ") + `\) VALUES \(` + strings.Join(placeholders, ", ") + `\)`).
WithArgs(args...).
WillReturnError(err)
}
}
func expectBegin() func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectBegin()
}
}
func expectBeginErr(err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectBegin().WillReturnError(err)
}
}
func expectCommit() func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectCommit()
}
}
func expectCommitErr(err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectCommit().WillReturnError(err)
}
}
func expectRollback() func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectRollback()
}
}
func expectSavePoint() func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("SAVEPOINT push_stmt").
WillReturnResult(sqlmock.NewResult(1, 1))
}
}
func expectSavePointErr(err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("SAVEPOINT push_stmt").
WillReturnError(err)
}
}
func expectSavePointRollback() func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("ROLLBACK TO SAVEPOINT push_stmt").
WillReturnResult(sqlmock.NewResult(1, 1))
}
}
func expectSavePointRollbackErr(err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("ROLLBACK TO SAVEPOINT push_stmt").
WillReturnError(err)
}
}
func expectSavePointRelease() func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("RELEASE push_stmt").
WillReturnResult(sqlmock.NewResult(1, 1))
}
}
func expectCurrentSequence(tableName, projection string, seq uint64, aggregateType string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectQuery(`SELECT current_sequence, aggregate_type FROM ` + tableName + ` WHERE projection_name = \$1 FOR UPDATE`).
WithArgs(
projection,
).
WillReturnRows(
sqlmock.NewRows([]string{"current_sequence", "aggregate_type"}).
AddRow(seq, aggregateType),
)
}
}
func expectCurrentSequenceErr(tableName, projection string, err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectQuery(`SELECT current_sequence, aggregate_type FROM ` + tableName + ` WHERE projection_name = \$1 FOR UPDATE`).
WithArgs(
projection,
).
WillReturnError(err)
}
}
func expectCurrentSequenceNoRows(tableName, projection string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectQuery(`SELECT current_sequence, aggregate_type FROM ` + tableName + ` WHERE projection_name = \$1 FOR UPDATE`).
WithArgs(
projection,
).
WillReturnRows(
sqlmock.NewRows([]string{"current_sequence", "aggregate_type"}),
)
}
}
func expectCurrentSequenceScanErr(tableName, projection string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectQuery(`SELECT current_sequence, aggregate_type FROM ` + tableName + ` WHERE projection_name = \$1 FOR UPDATE`).
WithArgs(
projection,
).
WillReturnRows(
sqlmock.NewRows([]string{"current_sequence", "aggregate_type"}).
RowError(0, sql.ErrTxDone).
AddRow(0, "agg"),
)
}
}
func expectUpdateCurrentSequence(tableName, projection string, seq uint64, aggregateType string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("UPSERT INTO "+tableName+` \(projection_name, aggregate_type, current_sequence, timestamp\) VALUES \(\$1, \$2, \$3, NOW\(\)\)`).
WithArgs(
projection,
aggregateType,
seq,
).
WillReturnResult(
sqlmock.NewResult(1, 1),
)
}
}
func expectUpdateTwoCurrentSequence(tableName, projection string, sequences currentSequences) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
matcher := &currentSequenceMatcher{seq: sequences}
m.ExpectExec("UPSERT INTO "+tableName+` \(projection_name, aggregate_type, current_sequence, timestamp\) VALUES \(\$1, \$2, \$3, NOW\(\)\), \(\$4, \$5, \$6, NOW\(\)\)`).
WithArgs(
projection,
matcher,
matcher,
projection,
matcher,
matcher,
).
WillReturnResult(
sqlmock.NewResult(1, 1),
)
}
}
type currentSequenceMatcher struct {
seq currentSequences
currentAggregate eventstore.AggregateType
}
func (m *currentSequenceMatcher) Match(value driver.Value) bool {
switch v := value.(type) {
case string:
if m.currentAggregate != "" {
log.Printf("expected sequence of %s but got next aggregate type %s", m.currentAggregate, value)
return false
}
_, ok := m.seq[eventstore.AggregateType(v)]
if !ok {
return false
}
m.currentAggregate = eventstore.AggregateType(v)
return true
default:
seq := m.seq[m.currentAggregate]
delete(m.seq, m.currentAggregate)
m.currentAggregate = ""
return int64(seq) == value.(int64)
}
}
func expectUpdateCurrentSequenceErr(tableName, projection string, seq uint64, err error, aggregateType string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("UPSERT INTO "+tableName+` \(projection_name, aggregate_type, current_sequence, timestamp\) VALUES \(\$1, \$2, \$3, NOW\(\)\)`).
WithArgs(
projection,
aggregateType,
seq,
).
WillReturnError(err)
}
}
func expectUpdateCurrentSequenceNoRows(tableName, projection string, seq uint64, aggregateType string) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec("UPSERT INTO "+tableName+` \(projection_name, aggregate_type, current_sequence, timestamp\) VALUES \(\$1, \$2, \$3, NOW\(\)\)`).
WithArgs(
projection,
aggregateType,
seq,
).
WillReturnResult(
sqlmock.NewResult(0, 0),
)
}
}
func expectLock(lockTable, workerName string, d time.Duration) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec(`INSERT INTO `+lockTable+
` \(locker_id, locked_until, projection_name\) VALUES \(\$1, now\(\)\+\$2::INTERVAL, \$3\)`+
` ON CONFLICT \(projection_name\)`+
` DO UPDATE SET locker_id = \$1, locked_until = now\(\)\+\$2::INTERVAL`+
` WHERE `+lockTable+`\.projection_name = \$3 AND \(`+lockTable+`\.locker_id = \$1 OR `+lockTable+`\.locked_until < now\(\)\)`).
WithArgs(
workerName,
float64(d),
projectionName,
).
WillReturnResult(
sqlmock.NewResult(1, 1),
)
}
}
func expectLockNoRows(lockTable, workerName string, d time.Duration) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec(`INSERT INTO `+lockTable+
` \(locker_id, locked_until, projection_name\) VALUES \(\$1, now\(\)\+\$2::INTERVAL, \$3\)`+
` ON CONFLICT \(projection_name\)`+
` DO UPDATE SET locker_id = \$1, locked_until = now\(\)\+\$2::INTERVAL`+
` WHERE `+lockTable+`\.projection_name = \$3 AND \(`+lockTable+`\.locker_id = \$1 OR `+lockTable+`\.locked_until < now\(\)\)`).
WithArgs(
workerName,
float64(d),
projectionName,
).
WillReturnResult(driver.ResultNoRows)
}
}
func expectLockErr(lockTable, workerName string, d time.Duration, err error) func(sqlmock.Sqlmock) {
return func(m sqlmock.Sqlmock) {
m.ExpectExec(`INSERT INTO `+lockTable+
` \(locker_id, locked_until, projection_name\) VALUES \(\$1, now\(\)\+\$2::INTERVAL, \$3\)`+
` ON CONFLICT \(projection_name\)`+
` DO UPDATE SET locker_id = \$1, locked_until = now\(\)\+\$2::INTERVAL`+
` WHERE `+lockTable+`\.projection_name = \$3 AND \(`+lockTable+`\.locker_id = \$1 OR `+lockTable+`\.locked_until < now\(\)\)`).
WithArgs(
workerName,
float64(d),
projectionName,
).
WillReturnError(err)
}
}


@@ -0,0 +1,53 @@
package crdb
import (
"database/sql"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore/handler"
)
const (
setFailureCountStmtFormat = "UPSERT INTO %s" +
" (projection_name, failed_sequence, failure_count, error)" +
" VALUES ($1, $2, $3, $4)"
failureCountStmtFormat = "WITH failures AS (SELECT failure_count FROM %s WHERE projection_name = $1 AND failed_sequence = $2)" +
" SELECT IF(" +
"EXISTS(SELECT failure_count FROM failures)," +
" (SELECT failure_count FROM failures)," +
" 0" +
") AS failure_count"
)
func (h *StatementHandler) handleFailedStmt(tx *sql.Tx, stmt handler.Statement, execErr error) (shouldContinue bool) {
failureCount, err := h.failureCount(tx, stmt.Sequence)
if err != nil {
logging.LogWithFields("CRDB-WJaFk", "projection", h.ProjectionName, "seq", stmt.Sequence).WithError(err).Warn("unable to get failure count")
return false
}
failureCount += 1
err = h.setFailureCount(tx, stmt.Sequence, failureCount, execErr)
logging.LogWithFields("CRDB-cI0dB", "projection", h.ProjectionName, "seq", stmt.Sequence).OnError(err).Warn("unable to update failure count")
return failureCount >= h.maxFailureCount
}
func (h *StatementHandler) failureCount(tx *sql.Tx, seq uint64) (count uint, err error) {
row := tx.QueryRow(h.failureCountStmt, h.ProjectionName, seq)
if err = row.Err(); err != nil {
return 0, errors.ThrowInternal(err, "CRDB-Unnex", "unable to update failure count")
}
if err = row.Scan(&count); err != nil {
return 0, errors.ThrowInternal(err, "CRDB-RwSMV", "unable to scann count")
}
return count, nil
}
func (h *StatementHandler) setFailureCount(tx *sql.Tx, seq uint64, count uint, err error) error {
_, dbErr := tx.Exec(h.setFailureCountStmt, h.ProjectionName, seq, count, err.Error())
if dbErr != nil {
return errors.ThrowInternal(dbErr, "CRDB-4Ht4x", "set failure count failed")
}
return nil
}


@@ -0,0 +1,268 @@
package crdb
import (
"context"
"database/sql"
"fmt"
"os"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
"github.com/caos/zitadel/internal/id"
)
var (
errSeqNotUpdated = errors.ThrowInternal(nil, "CRDB-79GWt", "current sequence not updated")
)
type StatementHandlerConfig struct {
handler.ProjectionHandlerConfig
Client *sql.DB
SequenceTable string
LockTable string
FailedEventsTable string
MaxFailureCount uint
BulkLimit uint64
Reducers []handler.AggregateReducer
}
type StatementHandler struct {
*handler.ProjectionHandler
client *sql.DB
sequenceTable string
currentSequenceStmt string
updateSequencesBaseStmt string
maxFailureCount uint
failureCountStmt string
setFailureCountStmt string
lockStmt string
aggregates []eventstore.AggregateType
reduces map[eventstore.EventType]handler.Reduce
workerName string
bulkLimit uint64
}
func NewStatementHandler(
ctx context.Context,
config StatementHandlerConfig,
) StatementHandler {
workerName, err := os.Hostname()
if err != nil || workerName == "" {
workerName, err = id.SonyFlakeGenerator.Next()
logging.Log("SPOOL-bdO56").OnError(err).Panic("unable to generate lockID")
}
aggregateTypes := make([]eventstore.AggregateType, 0, len(config.Reducers))
reduces := make(map[eventstore.EventType]handler.Reduce, len(config.Reducers))
for _, aggReducer := range config.Reducers {
aggregateTypes = append(aggregateTypes, aggReducer.Aggregate)
for _, eventReducer := range aggReducer.EventRedusers {
reduces[eventReducer.Event] = eventReducer.Reduce
}
}
h := StatementHandler{
ProjectionHandler: handler.NewProjectionHandler(config.ProjectionHandlerConfig),
client: config.Client,
sequenceTable: config.SequenceTable,
maxFailureCount: config.MaxFailureCount,
currentSequenceStmt: fmt.Sprintf(currentSequenceStmtFormat, config.SequenceTable),
updateSequencesBaseStmt: fmt.Sprintf(updateCurrentSequencesStmtFormat, config.SequenceTable),
failureCountStmt: fmt.Sprintf(failureCountStmtFormat, config.FailedEventsTable),
setFailureCountStmt: fmt.Sprintf(setFailureCountStmtFormat, config.FailedEventsTable),
lockStmt: fmt.Sprintf(lockStmtFormat, config.LockTable),
aggregates: aggregateTypes,
reduces: reduces,
workerName: workerName,
bulkLimit: config.BulkLimit,
}
go h.ProjectionHandler.Process(
ctx,
h.reduce,
h.Update,
h.Lock,
h.Unlock,
h.SearchQuery,
)
h.ProjectionHandler.Handler.Subscribe(h.aggregates...)
return h
}
func (h *StatementHandler) SearchQuery() (*eventstore.SearchQueryBuilder, uint64, error) {
sequences, err := h.currentSequences(h.client.Query)
if err != nil {
return nil, 0, err
}
queryBuilder := eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).Limit(h.bulkLimit)
for _, aggregateType := range h.aggregates {
queryBuilder.
AddQuery().
AggregateTypes(aggregateType).
SequenceGreater(sequences[aggregateType])
}
return queryBuilder, h.bulkLimit, nil
}
//Update implements handler.Update
func (h *StatementHandler) Update(ctx context.Context, stmts []handler.Statement, reduce handler.Reduce) (unexecutedStmts []handler.Statement, err error) {
tx, err := h.client.BeginTx(ctx, nil)
if err != nil {
return stmts, errors.ThrowInternal(err, "CRDB-e89Gq", "begin failed")
}
sequences, err := h.currentSequences(tx.Query)
if err != nil {
tx.Rollback()
return stmts, err
}
//checks for events between create statement and current sequence
// because there could be events between current sequence and a creation event
// and we cannot check via stmt.PreviousSequence
if stmts[0].PreviousSequence == 0 {
previousStmts, err := h.fetchPreviousStmts(ctx, stmts[0].Sequence, sequences, reduce)
if err != nil {
tx.Rollback()
return stmts, err
}
stmts = append(previousStmts, stmts...)
}
lastSuccessfulIdx := h.executeStmts(tx, stmts, sequences)
if lastSuccessfulIdx >= 0 {
err = h.updateCurrentSequences(tx, sequences)
if err != nil {
tx.Rollback()
return stmts, err
}
}
if err = tx.Commit(); err != nil {
return stmts, err
}
if lastSuccessfulIdx == -1 {
return stmts, handler.ErrSomeStmtsFailed
}
unexecutedStmts = make([]handler.Statement, len(stmts)-(lastSuccessfulIdx+1))
copy(unexecutedStmts, stmts[lastSuccessfulIdx+1:])
stmts = nil
if len(unexecutedStmts) > 0 {
return unexecutedStmts, handler.ErrSomeStmtsFailed
}
return unexecutedStmts, nil
}
func (h *StatementHandler) fetchPreviousStmts(
ctx context.Context,
stmtSeq uint64,
sequences currentSequences,
reduce handler.Reduce,
) (previousStmts []handler.Statement, err error) {
query := eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent)
queriesAdded := false
for _, aggregateType := range h.aggregates {
if stmtSeq <= sequences[aggregateType] {
continue
}
query.
AddQuery().
AggregateTypes(aggregateType).
SequenceGreater(sequences[aggregateType]).
SequenceLess(stmtSeq)
queriesAdded = true
}
if !queriesAdded {
return nil, nil
}
events, err := h.Eventstore.FilterEvents(ctx, query)
if err != nil {
return nil, err
}
for _, event := range events {
stmts, err := reduce(event)
if err != nil {
return nil, err
}
previousStmts = append(previousStmts, stmts...)
}
return previousStmts, nil
}
func (h *StatementHandler) executeStmts(
tx *sql.Tx,
stmts []handler.Statement,
sequences currentSequences,
) int {
lastSuccessfulIdx := -1
for i, stmt := range stmts {
if stmt.Sequence <= sequences[stmt.AggregateType] {
continue
}
if stmt.PreviousSequence > 0 && stmt.PreviousSequence != sequences[stmt.AggregateType] {
logging.LogWithFields("CRDB-jJBJn", "projection", h.ProjectionName, "aggregateType", stmt.AggregateType, "seq", stmt.Sequence, "prevSeq", stmt.PreviousSequence, "currentSeq", sequences[stmt.AggregateType]).Warn("sequences do not match")
break
}
err := h.executeStmt(tx, stmt)
if err == nil {
sequences[stmt.AggregateType], lastSuccessfulIdx = stmt.Sequence, i
continue
}
shouldContinue := h.handleFailedStmt(tx, stmt, err)
if !shouldContinue {
break
}
sequences[stmt.AggregateType], lastSuccessfulIdx = stmt.Sequence, i
}
return lastSuccessfulIdx
}
//executeStmt handles sql statements
//an error is returned if the statement could not be inserted properly
func (h *StatementHandler) executeStmt(tx *sql.Tx, stmt handler.Statement) error {
if stmt.IsNoop() {
return nil
}
_, err := tx.Exec("SAVEPOINT push_stmt")
if err != nil {
return errors.ThrowInternal(err, "CRDB-i1wp6", "unable to create savepoint")
}
err = stmt.Execute(tx, h.ProjectionName)
if err != nil {
_, rollbackErr := tx.Exec("ROLLBACK TO SAVEPOINT push_stmt")
if rollbackErr != nil {
return errors.ThrowInternal(rollbackErr, "CRDB-zzp3P", "rollback to savepoint failed")
}
return errors.ThrowInternal(err, "CRDB-oRkaN", "unable execute stmt")
}
_, err = tx.Exec("RELEASE push_stmt")
if err != nil {
return errors.ThrowInternal(err, "CRDB-qWgwT", "unable to release savepoint")
}
return nil
}

File diff suppressed because it is too large.


@@ -0,0 +1,60 @@
package crdb
import (
"context"
"time"
"github.com/caos/zitadel/internal/errors"
)
const (
lockStmtFormat = "INSERT INTO %[1]s" +
" (locker_id, locked_until, projection_name) VALUES ($1, now()+$2::INTERVAL, $3)" +
" ON CONFLICT (projection_name)" +
" DO UPDATE SET locker_id = $1, locked_until = now()+$2::INTERVAL" +
" WHERE %[1]s.projection_name = $3 AND (%[1]s.locker_id = $1 OR %[1]s.locked_until < now())"
)
func (h *StatementHandler) Lock(ctx context.Context, lockDuration time.Duration) <-chan error {
errs := make(chan error)
go h.handleLock(ctx, errs, lockDuration)
return errs
}
func (h *StatementHandler) handleLock(ctx context.Context, errs chan error, lockDuration time.Duration) {
renewLock := time.NewTimer(0)
for {
select {
case <-renewLock.C:
errs <- h.renewLock(ctx, lockDuration)
//refresh the lock 500ms before it times out. 500ms should be enough for one transaction
renewLock.Reset(lockDuration - (500 * time.Millisecond))
case <-ctx.Done():
close(errs)
renewLock.Stop()
return
}
}
}
func (h *StatementHandler) renewLock(ctx context.Context, lockDuration time.Duration) error {
//the unit of crdb interval is seconds (https://www.cockroachlabs.com/docs/stable/interval.html).
res, err := h.client.Exec(h.lockStmt, h.workerName, lockDuration.Seconds(), h.ProjectionName)
if err != nil {
return errors.ThrowInternal(err, "CRDB-uaDoR", "unable to execute lock")
}
if rows, _ := res.RowsAffected(); rows == 0 {
return errors.ThrowAlreadyExists(nil, "CRDB-mmi4J", "projection already locked")
}
return nil
}
func (h *StatementHandler) Unlock() error {
_, err := h.client.Exec(h.lockStmt, h.workerName, float64(0), h.ProjectionName)
if err != nil {
return errors.ThrowUnknown(err, "CRDB-JjfwO", "unlock failed")
}
return nil
}
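A hedged sketch of how a caller might consume the Lock/Unlock pair above: the first value on the channel reports whether the lock was acquired, later values report renewal failures (the lock is refreshed 500ms before it expires), and Unlock releases the lock by setting locked_until to now. This mirrors the intent of the code, not the exact behavior of ProjectionHandler.Process:

package lockexample

import (
	"context"
	"log"
	"time"
)

// locker matches the surface of StatementHandler's locking methods above.
type locker interface {
	Lock(ctx context.Context, lockDuration time.Duration) <-chan error
	Unlock() error
}

// runWithLock acquires the projection lock, watches renewal errors and
// releases the lock once the work is done. Sketch only, under the
// assumptions stated in the lead-in.
func runWithLock(ctx context.Context, l locker, work func(context.Context)) {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	errs := l.Lock(ctx, 10*time.Second) // renewed roughly 500ms before expiry
	if err := <-errs; err != nil {      // first value: did we get the lock at all?
		log.Printf("projection already locked elsewhere: %v", err)
		return
	}

	go func() {
		for err := range errs { // later values: renewal failures
			if err != nil {
				log.Printf("lock renewal failed, stopping: %v", err)
				cancel()
				return
			}
		}
	}()

	work(ctx)

	if err := l.Unlock(); err != nil { // sets locked_until to now so another worker can take over
		log.Printf("unlock failed: %v", err)
	}
}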


@@ -0,0 +1,286 @@
package crdb
import (
"context"
"database/sql"
"errors"
"fmt"
"testing"
"time"
z_errs "github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore/handler"
"github.com/DATA-DOG/go-sqlmock"
)
const (
workerName = "test_worker"
projectionName = "my_projection"
lockTable = "my_lock_table"
)
var (
renewNoRowsAffectedErr = z_errs.ThrowAlreadyExists(nil, "CRDB-mmi4J", "projection already locked")
errLock = errors.New("lock err")
)
func TestStatementHandler_handleLock(t *testing.T) {
type want struct {
expectations []mockExpectation
}
type args struct {
lockDuration time.Duration
ctx context.Context
errMock *errsMock
}
tests := []struct {
name string
want want
args args
}{
{
name: "lock fails",
want: want{
expectations: []mockExpectation{
expectLock(lockTable, workerName, 2),
expectLock(lockTable, workerName, 2),
expectLockErr(lockTable, workerName, 2, errLock),
},
},
args: args{
lockDuration: 2 * time.Second,
ctx: context.Background(),
errMock: &errsMock{
errs: make(chan error),
successfulIters: 2,
shouldErr: true,
},
},
},
{
name: "success",
want: want{
expectations: []mockExpectation{
expectLock(lockTable, workerName, 2),
expectLock(lockTable, workerName, 2),
},
},
args: args{
lockDuration: 2 * time.Second,
ctx: context.Background(),
errMock: &errsMock{
errs: make(chan error),
successfulIters: 2,
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
client, mock, err := sqlmock.New()
if err != nil {
t.Fatal(err)
}
h := &StatementHandler{
ProjectionHandler: &handler.ProjectionHandler{
ProjectionName: projectionName,
},
client: client,
workerName: workerName,
lockStmt: fmt.Sprintf(lockStmtFormat, lockTable),
}
for _, expectation := range tt.want.expectations {
expectation(mock)
}
ctx, cancel := context.WithCancel(tt.args.ctx)
go tt.args.errMock.handleErrs(t, cancel)
go h.handleLock(ctx, tt.args.errMock.errs, tt.args.lockDuration)
<-ctx.Done()
mock.MatchExpectationsInOrder(true)
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("expectations not met: %v", err)
}
})
}
}
func TestStatementHandler_renewLock(t *testing.T) {
type want struct {
expectations []mockExpectation
isErr func(err error) bool
}
type args struct {
lockDuration time.Duration
}
tests := []struct {
name string
want want
args args
}{
{
name: "lock fails",
want: want{
expectations: []mockExpectation{
expectLockErr(lockTable, workerName, 1, sql.ErrTxDone),
},
isErr: func(err error) bool {
return errors.Is(err, sql.ErrTxDone)
},
},
args: args{
lockDuration: 1 * time.Second,
},
},
{
name: "lock no rows",
want: want{
expectations: []mockExpectation{
expectLockNoRows(lockTable, workerName, 2),
},
isErr: func(err error) bool {
return errors.As(err, &renewNoRowsAffectedErr)
},
},
args: args{
lockDuration: 2 * time.Second,
},
},
{
name: "success",
want: want{
expectations: []mockExpectation{
expectLock(lockTable, workerName, 3),
},
isErr: func(err error) bool {
return errors.Is(err, nil)
},
},
args: args{
lockDuration: 3 * time.Second,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
client, mock, err := sqlmock.New()
if err != nil {
t.Fatal(err)
}
h := &StatementHandler{
ProjectionHandler: &handler.ProjectionHandler{
ProjectionName: projectionName,
},
client: client,
workerName: workerName,
lockStmt: fmt.Sprintf(lockStmtFormat, lockTable),
}
for _, expectation := range tt.want.expectations {
expectation(mock)
}
err = h.renewLock(context.Background(), tt.args.lockDuration)
if !tt.want.isErr(err) {
t.Errorf("unexpected error = %v", err)
}
mock.MatchExpectationsInOrder(true)
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("expectations not met: %v", err)
}
})
}
}
func TestStatementHandler_Unlock(t *testing.T) {
type want struct {
expectations []mockExpectation
isErr func(err error) bool
}
tests := []struct {
name string
want want
}{
{
name: "unlock fails",
want: want{
expectations: []mockExpectation{
expectLockErr(lockTable, workerName, 0, sql.ErrTxDone),
},
isErr: func(err error) bool {
return errors.Is(err, sql.ErrTxDone)
},
},
},
{
name: "success",
want: want{
expectations: []mockExpectation{
expectLock(lockTable, workerName, 0),
},
isErr: func(err error) bool {
return errors.Is(err, nil)
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
client, mock, err := sqlmock.New()
if err != nil {
t.Fatal(err)
}
h := &StatementHandler{
ProjectionHandler: &handler.ProjectionHandler{
ProjectionName: projectionName,
},
client: client,
workerName: workerName,
lockStmt: fmt.Sprintf(lockStmtFormat, lockTable),
}
for _, expectation := range tt.want.expectations {
expectation(mock)
}
err = h.Unlock()
if !tt.want.isErr(err) {
t.Errorf("unexpected error = %v", err)
}
mock.MatchExpectationsInOrder(true)
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("expectations not met: %v", err)
}
})
}
}
type errsMock struct {
errs chan error
successfulIters int
shouldErr bool
}
func (m *errsMock) handleErrs(t *testing.T, cancel func()) {
for i := 0; i < m.successfulIters; i++ {
if err := <-m.errs; err != nil {
t.Errorf("unexpected err in iteration %d: %v", i, err)
cancel()
return
}
}
if m.shouldErr {
if err := <-m.errs; err == nil {
t.Error("error must not be nil")
}
}
cancel()
}


@@ -0,0 +1,16 @@
package crdb
import (
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
)
//reduce implements the handler.Reduce function
func (h *StatementHandler) reduce(event eventstore.EventReader) ([]handler.Statement, error) {
reduce, ok := h.reduces[event.Type()]
if !ok {
return []handler.Statement{NewNoOpStatement(event)}, nil
}
return reduce(event)
}
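For illustration only: a reduce function as it might be registered in h.reduces, keyed by an assumed event type such as "user.added". The column name is illustrative; NewCreateStatement and handler.NewCol are the constructors introduced in this change.
func reduceAdded(event eventstore.EventReader) ([]handler.Statement, error) {
	return []handler.Statement{
		NewCreateStatement(event, []handler.Column{
			// the column name is an illustrative assumption
			handler.NewCol("sequence", event.Sequence()),
		}),
	}, nil
}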


@@ -0,0 +1,190 @@
package crdb
import (
"strconv"
"strings"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
)
type execOption func(*execConfig)
type execConfig struct {
tableName string
args []interface{}
err error
}
func WithTableSuffix(name string) func(*execConfig) {
return func(o *execConfig) {
o.tableName += "_" + name
}
}
func NewCreateStatement(event eventstore.EventReader, values []handler.Column, opts ...execOption) handler.Statement {
cols, params, args := columnsToQuery(values)
columnNames := strings.Join(cols, ", ")
valuesPlaceholder := strings.Join(params, ", ")
config := execConfig{
args: args,
}
if len(values) == 0 {
config.err = handler.ErrNoValues
}
q := func(config execConfig) string {
return "INSERT INTO " + config.tableName + " (" + columnNames + ") VALUES (" + valuesPlaceholder + ")"
}
return handler.Statement{
AggregateType: event.Aggregate().Type,
Sequence: event.Sequence(),
PreviousSequence: event.PreviousAggregateTypeSequence(),
Execute: exec(config, q, opts),
}
}
func NewUpsertStatement(event eventstore.EventReader, values []handler.Column, opts ...execOption) handler.Statement {
cols, params, args := columnsToQuery(values)
columnNames := strings.Join(cols, ", ")
valuesPlaceholder := strings.Join(params, ", ")
config := execConfig{
args: args,
}
if len(values) == 0 {
config.err = handler.ErrNoValues
}
q := func(config execConfig) string {
return "UPSERT INTO " + config.tableName + " (" + columnNames + ") VALUES (" + valuesPlaceholder + ")"
}
return handler.Statement{
AggregateType: event.Aggregate().Type,
Sequence: event.Sequence(),
PreviousSequence: event.PreviousAggregateTypeSequence(),
Execute: exec(config, q, opts),
}
}
func NewUpdateStatement(event eventstore.EventReader, values []handler.Column, conditions []handler.Condition, opts ...execOption) handler.Statement {
cols, params, args := columnsToQuery(values)
wheres, whereArgs := conditionsToWhere(conditions, len(params))
args = append(args, whereArgs...)
columnNames := strings.Join(cols, ", ")
valuesPlaceholder := strings.Join(params, ", ")
wheresPlaceholders := strings.Join(wheres, " AND ")
config := execConfig{
args: args,
}
if len(values) == 0 {
config.err = handler.ErrNoValues
}
if len(conditions) == 0 {
config.err = handler.ErrNoCondition
}
q := func(config execConfig) string {
return "UPDATE " + config.tableName + " SET (" + columnNames + ") = (" + valuesPlaceholder + ") WHERE " + wheresPlaceholders
}
return handler.Statement{
AggregateType: event.Aggregate().Type,
Sequence: event.Sequence(),
PreviousSequence: event.PreviousAggregateTypeSequence(),
Execute: exec(config, q, opts),
}
}
func NewDeleteStatement(event eventstore.EventReader, conditions []handler.Condition, opts ...execOption) handler.Statement {
wheres, args := conditionsToWhere(conditions, 0)
wheresPlaceholders := strings.Join(wheres, " AND ")
config := execConfig{
args: args,
}
if len(conditions) == 0 {
config.err = handler.ErrNoCondition
}
q := func(config execConfig) string {
return "DELETE FROM " + config.tableName + " WHERE " + wheresPlaceholders
}
return handler.Statement{
AggregateType: event.Aggregate().Type,
Sequence: event.Sequence(),
PreviousSequence: event.PreviousAggregateTypeSequence(),
Execute: exec(config, q, opts),
}
}
func NewNoOpStatement(event eventstore.EventReader) handler.Statement {
return handler.Statement{
AggregateType: event.Aggregate().Type,
Sequence: event.Sequence(),
PreviousSequence: event.PreviousAggregateTypeSequence(),
}
}
func columnsToQuery(cols []handler.Column) (names []string, parameters []string, values []interface{}) {
names = make([]string, len(cols))
values = make([]interface{}, len(cols))
parameters = make([]string, len(cols))
for i, col := range cols {
names[i] = col.Name
values[i] = col.Value
parameters[i] = "$" + strconv.Itoa(i+1)
}
return names, parameters, values
}
func conditionsToWhere(cols []handler.Condition, paramOffset int) (wheres []string, values []interface{}) {
wheres = make([]string, len(cols))
values = make([]interface{}, len(cols))
for i, col := range cols {
wheres[i] = "(" + col.Name + " = $" + strconv.Itoa(i+1+paramOffset) + ")"
values[i] = col.Value
}
return wheres, values
}
type query func(config execConfig) string
func exec(config execConfig, q query, opts []execOption) func(ex handler.Executer, projectionName string) error {
return func(ex handler.Executer, projectionName string) error {
if projectionName == "" {
return handler.ErrNoProjection
}
if config.err != nil {
return config.err
}
config.tableName = projectionName
for _, opt := range opts {
opt(&config)
}
if _, err := ex.Exec(q(config), config.args...); err != nil {
return errors.ThrowInternal(err, "CRDB-pKtsr", "exec failed")
}
return nil
}
}
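For illustration only: executing a statement with a table suffix. The column name, suffix and projection name are assumptions; ex is any handler.Executer (for example a transaction).
func execExample(ex handler.Executer, event eventstore.EventReader) error {
	stmt := NewCreateStatement(event, []handler.Column{
		handler.NewCol("street", "main st"), // illustrative column
	}, WithTableSuffix("addresses")) // targets <projection>_addresses
	// with projection name "my_projection" the generated query is:
	//   INSERT INTO my_projection_addresses (street) VALUES ($1)
	return stmt.Execute(ex, "my_projection")
}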


@@ -0,0 +1,822 @@
package crdb
import (
"database/sql"
"errors"
"reflect"
"testing"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
)
type wantExecuter struct {
query string
args []interface{}
t *testing.T
wasExecuted bool
shouldExecute bool
}
var errTestErr = errors.New("some error")
func (ex *wantExecuter) check(t *testing.T) {
t.Helper()
if ex.wasExecuted && !ex.shouldExecute {
t.Error("executer should not be executed")
} else if !ex.wasExecuted && ex.shouldExecute {
t.Error("executer should be executed")
} else if ex.wasExecuted != ex.shouldExecute {
t.Errorf("execution mismatch: should be %t, but was %t", ex.shouldExecute, ex.wasExecuted)
}
}
func (ex *wantExecuter) Exec(query string, args ...interface{}) (sql.Result, error) {
ex.t.Helper()
ex.wasExecuted = true
if query != ex.query {
ex.t.Errorf("wrong query:\n expected:\n %q\n got:\n %q", ex.query, query)
}
if !reflect.DeepEqual(ex.args, args) {
ex.t.Errorf("wrong args:\n expected:\n %v\n got:\n %v", ex.args, args)
}
return nil, nil
}
func TestNewCreateStatement(t *testing.T) {
type args struct {
table string
event *testEvent
values []handler.Column
}
type want struct {
aggregateType eventstore.AggregateType
sequence uint64
previousSequence uint64
table string
executer *wantExecuter
isErr func(error) bool
}
tests := []struct {
name string
args args
want want
}{
{
name: "no table",
args: args{
table: "",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
},
want: want{
table: "",
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoProjection)
},
},
},
{
name: "no values",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoValues)
},
},
},
{
name: "correct",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
query: "INSERT INTO my_table (col1) VALUES ($1)",
shouldExecute: true,
args: []interface{}{"val"},
},
isErr: func(err error) bool {
return err == nil
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.want.executer.t = t
stmt := NewCreateStatement(tt.args.event, tt.args.values)
err := stmt.Execute(tt.want.executer, tt.args.table)
if !tt.want.isErr(err) {
t.Errorf("unexpected error: %v", err)
}
tt.want.executer.check(t)
})
}
}
func TestNewUpsertStatement(t *testing.T) {
type args struct {
table string
event *testEvent
values []handler.Column
}
type want struct {
aggregateType eventstore.AggregateType
sequence uint64
previousSequence uint64
table string
executer *wantExecuter
isErr func(error) bool
}
tests := []struct {
name string
args args
want want
}{
{
name: "no table",
args: args{
table: "",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
},
want: want{
table: "",
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoProjection)
},
},
},
{
name: "no values",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoValues)
},
},
},
{
name: "correct",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
query: "UPSERT INTO my_table (col1) VALUES ($1)",
shouldExecute: true,
args: []interface{}{"val"},
},
isErr: func(err error) bool {
return err == nil
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.want.executer.t = t
stmt := NewUpsertStatement(tt.args.event, tt.args.values)
err := stmt.Execute(tt.want.executer, tt.args.table)
if !tt.want.isErr(err) {
t.Errorf("unexpected error: %v", err)
}
tt.want.executer.check(t)
})
}
}
func TestNewUpdateStatement(t *testing.T) {
type args struct {
table string
event *testEvent
values []handler.Column
conditions []handler.Condition
}
type want struct {
table string
aggregateType eventstore.AggregateType
sequence uint64
previousSequence uint64
executer *wantExecuter
isErr func(error) bool
}
tests := []struct {
name string
args args
want want
}{
{
name: "no table",
args: args{
table: "",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
conditions: []handler.Condition{
{
Name: "col2",
Value: 1,
},
},
},
want: want{
table: "",
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoProjection)
},
},
},
{
name: "no values",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{},
conditions: []handler.Condition{
{
Name: "col2",
Value: 1,
},
},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoValues)
},
},
},
{
name: "no conditions",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
conditions: []handler.Condition{},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoCondition)
},
},
},
{
name: "correct",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
values: []handler.Column{
{
Name: "col1",
Value: "val",
},
},
conditions: []handler.Condition{
{
Name: "col2",
Value: 1,
},
},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
query: "UPDATE my_table SET (col1) = ($1) WHERE (col2 = $2)",
shouldExecute: true,
args: []interface{}{"val", 1},
},
isErr: func(err error) bool {
return err == nil
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.want.executer.t = t
stmt := NewUpdateStatement(tt.args.event, tt.args.values, tt.args.conditions)
err := stmt.Execute(tt.want.executer, tt.args.table)
if !tt.want.isErr(err) {
t.Errorf("unexpected error: %v", err)
}
tt.want.executer.check(t)
})
}
}
func TestNewDeleteStatement(t *testing.T) {
type args struct {
table string
event *testEvent
conditions []handler.Condition
}
type want struct {
table string
aggregateType eventstore.AggregateType
sequence uint64
previousSequence uint64
executer *wantExecuter
isErr func(error) bool
}
tests := []struct {
name string
args args
want want
}{
{
name: "no table",
args: args{
table: "",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
conditions: []handler.Condition{
{
Name: "col2",
Value: 1,
},
},
},
want: want{
table: "",
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoProjection)
},
},
},
{
name: "no conditions",
args: args{
table: "my_table",
event: &testEvent{
aggregateType: "agg",
sequence: 1,
previousSequence: 0,
},
conditions: []handler.Condition{},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
shouldExecute: false,
},
isErr: func(err error) bool {
return errors.Is(err, handler.ErrNoCondition)
},
},
},
{
name: "correct",
args: args{
table: "my_table",
event: &testEvent{
sequence: 1,
previousSequence: 0,
aggregateType: "agg",
},
conditions: []handler.Condition{
{
Name: "col1",
Value: 1,
},
},
},
want: want{
table: "my_table",
aggregateType: "agg",
sequence: 1,
previousSequence: 1,
executer: &wantExecuter{
query: "DELETE FROM my_table WHERE (col1 = $1)",
shouldExecute: true,
args: []interface{}{1},
},
isErr: func(err error) bool {
return err == nil
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.want.executer.t = t
stmt := NewDeleteStatement(tt.args.event, tt.args.conditions)
err := stmt.Execute(tt.want.executer, tt.args.table)
if !tt.want.isErr(err) {
t.Errorf("unexpected error: %v", err)
}
tt.want.executer.check(t)
})
}
}
func TestNewNoOpStatement(t *testing.T) {
type args struct {
event *testEvent
}
tests := []struct {
name string
args args
want handler.Statement
}{
{
name: "generate correctly",
args: args{
event: &testEvent{
aggregateType: "agg",
sequence: 5,
previousSequence: 3,
},
},
want: handler.Statement{
AggregateType: "agg",
Execute: nil,
Sequence: 5,
PreviousSequence: 3,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := NewNoOpStatement(tt.args.event); !reflect.DeepEqual(got, tt.want) {
t.Errorf("NewNoOpStatement() = %v, want %v", got, tt.want)
}
})
}
}
func TestStatement_Execute(t *testing.T) {
type fields struct {
execute func(ex handler.Executer, projectionName string) error
}
type want struct {
isErr func(error) bool
}
type args struct {
projectionName string
}
tests := []struct {
name string
args args
fields fields
want want
}{
{
name: "execute returns no error",
fields: fields{
execute: func(ex handler.Executer, projectionName string) error { return nil },
},
args: args{
projectionName: "my_projection",
},
want: want{
isErr: func(err error) bool {
return err == nil
},
},
},
{
name: "execute returns error",
args: args{
projectionName: "my_projection",
},
fields: fields{
execute: func(ex handler.Executer, projectionName string) error { return errTestErr },
},
want: want{
isErr: func(err error) bool {
return errors.Is(err, errTestErr)
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
stmt := &handler.Statement{
Execute: tt.fields.execute,
}
if err := stmt.Execute(nil, tt.args.projectionName); !tt.want.isErr(err) {
t.Errorf("unexpected error: %v", err)
}
})
}
}
func Test_columnsToQuery(t *testing.T) {
type args struct {
cols []handler.Column
}
type want struct {
names []string
params []string
values []interface{}
}
tests := []struct {
name string
args args
want want
}{
{
name: "no columns",
args: args{},
want: want{
names: []string{},
params: []string{},
values: []interface{}{},
},
},
{
name: "one column",
args: args{
cols: []handler.Column{
{
Name: "col1",
Value: 1,
},
},
},
want: want{
names: []string{"col1"},
params: []string{"$1"},
values: []interface{}{1},
},
},
{
name: "multiple columns",
args: args{
cols: []handler.Column{
{
Name: "col1",
Value: 1,
},
{
Name: "col2",
Value: 3.14,
},
},
},
want: want{
names: []string{"col1", "col2"},
params: []string{"$1", "$2"},
values: []interface{}{1, 3.14},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotNames, gotParameters, gotValues := columnsToQuery(tt.args.cols)
if !reflect.DeepEqual(gotNames, tt.want.names) {
t.Errorf("columnsToQuery() gotNames = %v, want %v", gotNames, tt.want.names)
}
if !reflect.DeepEqual(gotParameters, tt.want.params) {
t.Errorf("columnsToQuery() gotParameters = %v, want %v", gotParameters, tt.want.params)
}
if !reflect.DeepEqual(gotValues, tt.want.values) {
t.Errorf("columnsToQuery() gotValues = %v, want %v", gotValues, tt.want.values)
}
})
}
}
func Test_columnsToWhere(t *testing.T) {
type args struct {
conds []handler.Condition
paramOffset int
}
type want struct {
wheres []string
values []interface{}
}
tests := []struct {
name string
args args
want want
}{
{
name: "no wheres",
args: args{},
want: want{
wheres: []string{},
values: []interface{}{},
},
},
{
name: "no offset",
args: args{
conds: []handler.Condition{
{
Name: "col1",
Value: "val1",
},
},
paramOffset: 0,
},
want: want{
wheres: []string{"(col1 = $1)"},
values: []interface{}{"val1"},
},
},
{
name: "multiple cols",
args: args{
conds: []handler.Condition{
{
Name: "col1",
Value: "val1",
},
{
Name: "col2",
Value: "val2",
},
},
paramOffset: 0,
},
want: want{
wheres: []string{"(col1 = $1)", "(col2 = $2)"},
values: []interface{}{"val1", "val2"},
},
},
{
name: "2 offset",
args: args{
conds: []handler.Condition{
{
Name: "col1",
Value: "val1",
},
},
paramOffset: 2,
},
want: want{
wheres: []string{"(col1 = $3)"},
values: []interface{}{"val1"},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotWheres, gotValues := conditionsToWhere(tt.args.conds, tt.args.paramOffset)
if !reflect.DeepEqual(gotWheres, tt.want.wheres) {
t.Errorf("columnsToWhere() gotWheres = %v, want %v", gotWheres, tt.want.wheres)
}
if !reflect.DeepEqual(gotValues, tt.want.values) {
t.Errorf("columnsToWhere() gotValues = %v, want %v", gotValues, tt.want.values)
}
})
}
}


@@ -0,0 +1,29 @@
package handler
import (
"github.com/caos/zitadel/internal/eventstore"
)
type HandlerConfig struct {
Eventstore *eventstore.Eventstore
}
type Handler struct {
Eventstore *eventstore.Eventstore
Sub *eventstore.Subscription
EventQueue chan eventstore.EventReader
}
func NewHandler(config HandlerConfig) Handler {
return Handler{
Eventstore: config.Eventstore,
EventQueue: make(chan eventstore.EventReader, 100),
}
}
func (h *Handler) Subscribe(aggregates ...eventstore.AggregateType) {
h.Sub = eventstore.SubscribeAggregates(h.EventQueue, aggregates...)
}
func (h *Handler) SubscribeEvents(types map[eventstore.AggregateType][]eventstore.EventType) {
h.Sub = eventstore.SubscribeEventTypes(h.EventQueue, types)
}
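For illustration only: wiring a handler to the eventstore subscription. The aggregate and event type names are assumptions.
func exampleSubscribe(es *eventstore.Eventstore) {
	h := NewHandler(HandlerConfig{Eventstore: es})
	// subscribe to selected event types per aggregate;
	// h.Subscribe("user", "org") would subscribe to whole aggregates instead
	h.SubscribeEvents(map[eventstore.AggregateType][]eventstore.EventType{
		"user": {"user.added", "user.changed"},
	})
	for event := range h.EventQueue {
		_ = event // pushed events arrive here and are typically passed to Reduce
	}
}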


@@ -0,0 +1,314 @@
package handler
import (
"context"
"sort"
"sync"
"time"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/eventstore"
)
type ProjectionHandlerConfig struct {
HandlerConfig
ProjectionName string
RequeueEvery time.Duration
RetryFailedAfter time.Duration
}
//Update updates the projection with the given statements
type Update func(context.Context, []Statement, Reduce) (unexecutedStmts []Statement, err error)
//Reduce reduces the given event to a statement
//which is used to update the projection
type Reduce func(eventstore.EventReader) ([]Statement, error)
//Lock is used for mutex handling if needed on the projection
type Lock func(context.Context, time.Duration) <-chan error
//Unlock releases the mutex of the projection
type Unlock func() error
//SearchQuery generates the search query to look up events
type SearchQuery func() (query *eventstore.SearchQueryBuilder, queryLimit uint64, err error)
type ProjectionHandler struct {
Handler
requeueAfter time.Duration
shouldBulk *time.Timer
retryFailedAfter time.Duration
shouldPush *time.Timer
pushSet bool
ProjectionName string
lockMu sync.Mutex
stmts []Statement
}
func NewProjectionHandler(config ProjectionHandlerConfig) *ProjectionHandler {
h := &ProjectionHandler{
Handler: NewHandler(config.HandlerConfig),
ProjectionName: config.ProjectionName,
requeueAfter: config.RequeueEvery,
// first bulk is instant on startup
shouldBulk: time.NewTimer(0),
shouldPush: time.NewTimer(0),
retryFailedAfter: config.RetryFailedAfter,
}
//uninitialized timer
//https://github.com/golang/go/issues/12721
<-h.shouldPush.C
if config.RequeueEvery <= 0 {
if !h.shouldBulk.Stop() {
<-h.shouldBulk.C
}
logging.LogWithFields("HANDL-mC9Xx", "projection", h.ProjectionName).Info("starting handler without requeue")
return h
} else if config.RequeueEvery < 500*time.Millisecond {
logging.LogWithFields("HANDL-IEFsG", "projection", h.ProjectionName).Fatal("requeue every must be greater than 500ms or <= 0")
}
logging.LogWithFields("HANDL-fAC5O", "projection", h.ProjectionName).Info("starting handler")
return h
}
func (h *ProjectionHandler) ResetShouldBulk() {
if h.requeueAfter > 0 {
h.shouldBulk.Reset(h.requeueAfter)
}
}
func (h *ProjectionHandler) triggerShouldPush(after time.Duration) {
if !h.pushSet {
h.pushSet = true
h.shouldPush.Reset(after)
}
}
//Process waits for several conditions:
// if the context is canceled the function gracefully shuts down
// if an event occurs it reduces the event
// if the internal timer expires the handler checks
// for unprocessed events in the eventstore
func (h *ProjectionHandler) Process(
ctx context.Context,
reduce Reduce,
update Update,
lock Lock,
unlock Unlock,
query SearchQuery,
) {
//handle panic
defer func() {
cause := recover()
logging.LogWithFields("HANDL-utWkv", "projection", h.ProjectionName, "cause", cause).Error("projection handler panicked")
}()
execBulk := h.prepareExecuteBulk(query, reduce, update)
for {
select {
case <-ctx.Done():
if h.pushSet {
h.push(context.Background(), update, reduce)
}
h.shutdown()
return
case event := <-h.Handler.EventQueue:
if err := h.processEvent(ctx, event, reduce); err != nil {
continue
}
h.triggerShouldPush(0)
case <-h.shouldBulk.C:
h.bulk(ctx, lock, execBulk, unlock)
h.ResetShouldBulk()
default:
//lower prio select with push
select {
case <-ctx.Done():
if h.pushSet {
h.push(context.Background(), update, reduce)
}
h.shutdown()
return
case event := <-h.Handler.EventQueue:
if err := h.processEvent(ctx, event, reduce); err != nil {
continue
}
h.triggerShouldPush(0)
case <-h.shouldBulk.C:
h.bulk(ctx, lock, execBulk, unlock)
h.ResetShouldBulk()
case <-h.shouldPush.C:
h.push(ctx, update, reduce)
h.ResetShouldBulk()
}
}
}
}
func (h *ProjectionHandler) processEvent(
ctx context.Context,
event eventstore.EventReader,
reduce Reduce,
) error {
stmts, err := reduce(event)
if err != nil {
logging.Log("EVENT-PTr4j").WithError(err).Warn("unable to process event")
return err
}
h.lockMu.Lock()
defer h.lockMu.Unlock()
h.stmts = append(h.stmts, stmts...)
return nil
}
func (h *ProjectionHandler) bulk(
ctx context.Context,
lock Lock,
executeBulk executeBulk,
unlock Unlock,
) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
errs := lock(ctx, h.requeueAfter)
//wait until projection is locked
if err, ok := <-errs; err != nil || !ok {
logging.LogWithFields("HANDL-XDJ4i", "projection", h.ProjectionName).OnError(err).Warn("initial lock failed")
return err
}
go h.cancelOnErr(ctx, errs, cancel)
execErr := executeBulk(ctx)
logging.LogWithFields("EVENT-gwiu4", "projection", h.ProjectionName).OnError(execErr).Warn("unable to execute")
unlockErr := unlock()
logging.LogWithFields("EVENT-boPv1", "projection", h.ProjectionName).OnError(unlockErr).Warn("unable to unlock")
if execErr != nil {
return execErr
}
return unlockErr
}
func (h *ProjectionHandler) cancelOnErr(ctx context.Context, errs <-chan error, cancel func()) {
for {
select {
case err := <-errs:
if err != nil {
logging.LogWithFields("HANDL-cVop2", "projection", h.ProjectionName).WithError(err).Warn("bulk canceled")
cancel()
return
}
case <-ctx.Done():
cancel()
return
}
}
}
type executeBulk func(ctx context.Context) error
func (h *ProjectionHandler) prepareExecuteBulk(
query SearchQuery,
reduce Reduce,
update Update,
) executeBulk {
return func(ctx context.Context) error {
for {
select {
case <-ctx.Done():
return nil
default:
hasLimitExeeded, err := h.fetchBulkStmts(ctx, query, reduce)
if err != nil || len(h.stmts) == 0 {
logging.LogWithFields("HANDL-CzQvn", "projection", h.ProjectionName).OnError(err).Warn("unable to fetch stmts")
return err
}
if err = h.push(ctx, update, reduce); err != nil {
return err
}
if !hasLimitExeeded {
return nil
}
}
}
}
}
func (h *ProjectionHandler) fetchBulkStmts(
ctx context.Context,
query SearchQuery,
reduce Reduce,
) (limitExeeded bool, err error) {
eventQuery, eventsLimit, err := query()
if err != nil {
logging.LogWithFields("HANDL-x6qvs", "projection", h.ProjectionName).WithError(err).Warn("unable to create event query")
return false, err
}
events, err := h.Eventstore.FilterEvents(ctx, eventQuery)
if err != nil {
logging.LogWithFields("HANDL-X8vlo", "projection", h.ProjectionName).WithError(err).Info("Unable to bulk fetch events")
return false, err
}
for _, event := range events {
if err = h.processEvent(ctx, event, reduce); err != nil {
logging.LogWithFields("HANDL-PaKlz", "projection", h.ProjectionName, "seq", event.Sequence()).WithError(err).Warn("unable to process event in bulk")
return false, err
}
}
return len(events) == int(eventsLimit), nil
}
func (h *ProjectionHandler) push(
ctx context.Context,
update Update,
reduce Reduce,
) (err error) {
h.lockMu.Lock()
defer h.lockMu.Unlock()
sort.Slice(h.stmts, func(i, j int) bool {
return h.stmts[i].Sequence < h.stmts[j].Sequence
})
h.stmts, err = update(ctx, h.stmts, reduce)
h.pushSet = len(h.stmts) > 0
if h.pushSet {
h.triggerShouldPush(h.retryFailedAfter)
return nil
}
h.shouldPush.Stop()
return err
}
func (h *ProjectionHandler) shutdown() {
h.lockMu.Lock()
defer h.lockMu.Unlock()
h.Sub.Unsubscribe()
if !h.shouldBulk.Stop() {
<-h.shouldBulk.C
}
if !h.shouldPush.Stop() {
<-h.shouldPush.C
}
logging.Log("EVENT-XG5Og").Info("stop processing")
}
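For illustration only: starting a projection's Process loop. The concrete reduce, update, lock, unlock and query functions are assumptions; any implementations satisfying the Reduce, Update, Lock, Unlock and SearchQuery types above will do (the crdb StatementHandler, for instance, provides Lock and Unlock).
func startProjection(ctx context.Context, es *eventstore.Eventstore,
	reduce Reduce, update Update, lock Lock, unlock Unlock, query SearchQuery) {
	h := NewProjectionHandler(ProjectionHandlerConfig{
		HandlerConfig:    HandlerConfig{Eventstore: es},
		ProjectionName:   "my_projection", // illustrative name
		RequeueEvery:     10 * time.Second,
		RetryFailedAfter: time.Second,
	})
	h.Subscribe("user") // illustrative aggregate type
	go h.Process(ctx, reduce, update, lock, unlock, query)
}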


@@ -0,0 +1,991 @@
package handler
import (
"context"
"errors"
"reflect"
"sync"
"testing"
"time"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/repository"
es_repo_mock "github.com/caos/zitadel/internal/eventstore/repository/mock"
)
var (
ErrQuery = errors.New("query err")
ErrFilter = errors.New("filter err")
ErrReduce = errors.New("reduce err")
ErrLock = errors.New("lock failed")
ErrUnlock = errors.New("unlock failed")
ErrExec = errors.New("exec error")
ErrBulk = errors.New("bulk err")
ErrUpdate = errors.New("update err")
)
func newTestStatement(seq, previousSeq uint64) Statement {
return Statement{
Sequence: seq,
PreviousSequence: previousSeq,
}
}
func initTimer() *time.Timer {
t := time.NewTimer(0)
<-t.C
return t
}
func TestProjectionHandler_processEvent(t *testing.T) {
type fields struct {
stmts []Statement
pushSet bool
shouldPush *time.Timer
}
type args struct {
ctx context.Context
event eventstore.EventReader
reduce Reduce
}
type want struct {
isErr func(err error) bool
stmts []Statement
}
tests := []struct {
name string
fields fields
args args
want want
}{
{
name: "reduce fails",
fields: fields{
stmts: nil,
pushSet: false,
shouldPush: nil,
},
args: args{
reduce: testReduceErr(ErrReduce),
},
want: want{
isErr: func(err error) bool {
return errors.Is(err, ErrReduce)
},
stmts: nil,
},
},
{
name: "no stmts",
fields: fields{
stmts: nil,
pushSet: false,
shouldPush: initTimer(),
},
args: args{
reduce: testReduce(),
},
want: want{
isErr: func(err error) bool {
return err == nil
},
stmts: nil,
},
},
{
name: "existing stmts",
fields: fields{
stmts: []Statement{
newTestStatement(1, 0),
},
pushSet: false,
shouldPush: initTimer(),
},
args: args{
reduce: testReduce(newTestStatement(2, 1)),
},
want: want{
isErr: func(err error) bool {
return err == nil
},
stmts: []Statement{
newTestStatement(1, 0),
newTestStatement(2, 1),
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h := NewProjectionHandler(ProjectionHandlerConfig{
HandlerConfig: HandlerConfig{
Eventstore: nil,
},
ProjectionName: "",
RequeueEvery: -1,
})
h.stmts = tt.fields.stmts
h.pushSet = tt.fields.pushSet
h.shouldPush = tt.fields.shouldPush
err := h.processEvent(tt.args.ctx, tt.args.event, tt.args.reduce)
if !tt.want.isErr(err) {
t.Errorf("unexpected error %v", err)
}
if !reflect.DeepEqual(tt.want.stmts, h.stmts) {
t.Errorf("unexpected stmts\n want: %v\n got: %v", tt.want.stmts, h.stmts)
}
})
}
}
func TestProjectionHandler_fetchBulkStmts(t *testing.T) {
type args struct {
ctx context.Context
query SearchQuery
reduce Reduce
}
type want struct {
shouldLimitExeeded bool
isErr func(error) bool
}
type fields struct {
eventstore *eventstore.Eventstore
}
tests := []struct {
name string
args args
fields fields
want want
}{
{
name: "query returns err",
args: args{
ctx: context.Background(),
query: testQuery(nil, 0, ErrQuery),
reduce: testReduce(),
},
fields: fields{},
want: want{
shouldLimitExeeded: false,
isErr: func(err error) bool {
return errors.Is(err, ErrQuery)
},
},
},
{
name: "eventstore returns err",
args: args{
ctx: context.Background(),
query: testQuery(
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes("test").
Builder(),
5,
nil,
),
reduce: testReduce(),
},
fields: fields{
eventstore: eventstore.NewEventstore(
es_repo_mock.NewRepo(t).ExpectFilterEventsError(ErrFilter),
),
},
want: want{
shouldLimitExeeded: false,
isErr: func(err error) bool {
return errors.Is(err, ErrFilter)
},
},
},
{
name: "no events found",
args: args{
ctx: context.Background(),
query: testQuery(
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes("test").
Builder(),
5,
nil,
),
reduce: testReduce(),
},
fields: fields{
eventstore: eventstore.NewEventstore(
es_repo_mock.NewRepo(t).ExpectFilterEvents(),
),
},
want: want{
shouldLimitExeeded: false,
isErr: func(err error) bool {
return err == nil
},
},
},
{
name: "found events smaller than limit",
args: args{
ctx: context.Background(),
query: testQuery(
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes("test").
Builder(),
5,
nil,
),
reduce: testReduce(),
},
fields: fields{
eventstore: eventstore.NewEventstore(
es_repo_mock.NewRepo(t).ExpectFilterEvents(
&repository.Event{
ID: "id",
Sequence: 1,
PreviousAggregateSequence: 0,
CreationDate: time.Now(),
Type: "test.added",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
&repository.Event{
ID: "id",
Sequence: 2,
PreviousAggregateSequence: 1,
CreationDate: time.Now(),
Type: "test.changed",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
),
),
},
want: want{
shouldLimitExeeded: false,
isErr: func(err error) bool {
return err == nil
},
},
},
{
name: "found events exceed limit",
args: args{
ctx: context.Background(),
query: testQuery(
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes("test").
Builder(),
2,
nil,
),
reduce: testReduce(),
},
fields: fields{
eventstore: eventstore.NewEventstore(
es_repo_mock.NewRepo(t).ExpectFilterEvents(
&repository.Event{
ID: "id",
Sequence: 1,
PreviousAggregateSequence: 0,
CreationDate: time.Now(),
Type: "test.added",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
&repository.Event{
ID: "id",
Sequence: 2,
PreviousAggregateSequence: 1,
CreationDate: time.Now(),
Type: "test.changed",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
),
),
},
want: want{
shouldLimitExeeded: true,
isErr: func(err error) bool {
return err == nil
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h := &ProjectionHandler{
lockMu: sync.Mutex{},
Handler: Handler{
Eventstore: tt.fields.eventstore,
},
shouldPush: initTimer(),
}
gotLimitExeeded, err := h.fetchBulkStmts(tt.args.ctx, tt.args.query, tt.args.reduce)
if !tt.want.isErr(err) {
t.Errorf("ProjectionHandler.fetchBulkStmts() error = %v", err)
return
}
if gotLimitExeeded != tt.want.shouldLimitExeeded {
t.Errorf("ProjectionHandler.fetchBulkStmts() = %v, want %v", gotLimitExeeded, tt.want.shouldLimitExeeded)
}
})
}
}
func TestProjectionHandler_push(t *testing.T) {
type fields struct {
stmts []Statement
pushSet bool
}
type args struct {
ctx context.Context
previousLock time.Duration
update Update
reduce Reduce
}
type want struct {
isErr func(err error) bool
minExecution time.Duration
}
tests := []struct {
name string
fields fields
args args
want want
}{
{
name: "previous lock",
fields: fields{
stmts: []Statement{
newTestStatement(1, 0),
newTestStatement(2, 1),
},
pushSet: true,
},
args: args{
ctx: context.Background(),
previousLock: 200 * time.Millisecond,
update: testUpdate(t, 2, nil),
reduce: testReduce(),
},
want: want{
isErr: func(err error) bool { return err == nil },
minExecution: 200 * time.Millisecond,
},
},
{
name: "error in update",
fields: fields{
stmts: []Statement{
newTestStatement(1, 0),
newTestStatement(2, 1),
},
pushSet: true,
},
args: args{
ctx: context.Background(),
update: testUpdate(t, 2, errors.New("some error")),
reduce: testReduce(),
},
want: want{
isErr: func(err error) bool { return err.Error() == "some error" },
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h := NewProjectionHandler(ProjectionHandlerConfig{
HandlerConfig: HandlerConfig{},
})
h.stmts = tt.fields.stmts
h.pushSet = tt.fields.pushSet
if tt.args.previousLock > 0 {
h.lockMu.Lock()
go func() {
<-time.After(tt.args.previousLock)
h.lockMu.Unlock()
}()
}
start := time.Now()
if err := h.push(tt.args.ctx, tt.args.update, tt.args.reduce); !tt.want.isErr(err) {
t.Errorf("ProjectionHandler.push() error = %v", err)
}
executionTime := time.Since(start)
if tt.want.minExecution.Truncate(executionTime) > 0 {
t.Errorf("expected execution time >= %v got %v", tt.want.minExecution, executionTime)
}
if h.pushSet {
t.Error("expected push set to be false")
}
if len(h.stmts) != 0 {
t.Errorf("expected stmts to be nil but was %v", h.stmts)
}
})
}
}
func Test_cancelOnErr(t *testing.T) {
type args struct {
ctx context.Context
errs chan error
err error
}
tests := []struct {
name string
args args
cancelMocker *cancelMocker
}{
{
name: "error occured",
args: args{
ctx: context.Background(),
errs: make(chan error),
err: ErrNoCondition,
},
cancelMocker: &cancelMocker{
shouldBeCalled: true,
wasCalled: make(chan bool, 1),
},
},
{
name: "ctx done",
args: args{
ctx: canceledCtx(),
errs: make(chan error),
},
cancelMocker: &cancelMocker{
shouldBeCalled: false,
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h := &ProjectionHandler{}
go h.cancelOnErr(tt.args.ctx, tt.args.errs, tt.cancelMocker.mockCancel)
if tt.args.err != nil {
tt.args.errs <- tt.args.err
}
tt.cancelMocker.check(t)
})
}
}
func TestProjectionHandler_bulk(t *testing.T) {
type args struct {
ctx context.Context
executeBulk *executeBulkMock
lock *lockMock
unlock *unlockMock
}
type res struct {
lockCount int
lockCanceled bool
executeBulkCount int
executeBulkCanceled bool
unlockCount int
isErr func(error) bool
}
tests := []struct {
name string
args args
res res
}{
{
name: "lock fails",
args: args{
ctx: context.Background(),
executeBulk: &executeBulkMock{},
lock: &lockMock{
firstErr: ErrLock,
errWait: time.Duration(500 * time.Millisecond),
},
unlock: &unlockMock{},
},
res: res{
lockCount: 1,
executeBulkCount: 0,
unlockCount: 0,
isErr: func(err error) bool {
return errors.Is(err, ErrLock)
},
},
},
{
name: "unlock fails",
args: args{
ctx: context.Background(),
executeBulk: &executeBulkMock{},
lock: &lockMock{
err: nil,
errWait: time.Duration(500 * time.Millisecond),
},
unlock: &unlockMock{
err: ErrUnlock,
},
},
res: res{
lockCount: 1,
executeBulkCount: 1,
unlockCount: 1,
isErr: func(err error) bool {
return errors.Is(err, ErrUnlock)
},
},
},
{
name: "no error",
args: args{
ctx: context.Background(),
executeBulk: &executeBulkMock{},
lock: &lockMock{
err: nil,
errWait: time.Duration(500 * time.Millisecond),
canceled: make(chan bool, 1),
},
unlock: &unlockMock{
err: nil,
},
},
res: res{
lockCount: 1,
executeBulkCount: 1,
unlockCount: 1,
isErr: func(err error) bool {
return errors.Is(err, nil)
},
},
},
{
name: "ctx canceled before lock",
args: args{
ctx: canceledCtx(),
executeBulk: &executeBulkMock{},
lock: &lockMock{
err: nil,
errWait: time.Duration(500 * time.Millisecond),
canceled: make(chan bool, 1),
},
unlock: &unlockMock{
err: nil,
},
},
res: res{
lockCount: 1,
lockCanceled: true,
executeBulkCount: 0,
unlockCount: 0,
isErr: func(err error) bool {
return errors.Is(err, nil)
},
},
},
{
name: "2nd lock fails",
args: args{
ctx: context.Background(),
executeBulk: &executeBulkMock{
canceled: make(chan bool, 1),
waitForCancel: true,
},
lock: &lockMock{
firstErr: nil,
err: ErrLock,
errWait: time.Duration(100 * time.Millisecond),
canceled: make(chan bool, 1),
},
unlock: &unlockMock{
err: nil,
},
},
res: res{
lockCount: 1,
lockCanceled: true,
executeBulkCount: 1,
unlockCount: 1,
isErr: func(err error) bool {
return errors.Is(err, nil)
},
},
},
{
name: "bulk fails",
args: args{
ctx: context.Background(),
executeBulk: &executeBulkMock{
canceled: make(chan bool, 1),
err: ErrBulk,
waitForCancel: false,
},
lock: &lockMock{
firstErr: nil,
err: nil,
errWait: time.Duration(100 * time.Millisecond),
canceled: make(chan bool, 1),
},
unlock: &unlockMock{
err: nil,
},
},
res: res{
lockCount: 1,
lockCanceled: true,
executeBulkCount: 1,
unlockCount: 1,
isErr: func(err error) bool {
return errors.Is(err, ErrBulk)
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h := NewProjectionHandler(ProjectionHandlerConfig{
HandlerConfig: HandlerConfig{},
ProjectionName: "",
RequeueEvery: -1,
})
err := h.bulk(tt.args.ctx, tt.args.lock.lock(), tt.args.executeBulk.executeBulk(), tt.args.unlock.unlock())
if !tt.res.isErr(err) {
t.Errorf("unexpected error %v", err)
}
tt.args.lock.check(t, tt.res.lockCount, tt.res.lockCanceled)
tt.args.executeBulk.check(t, tt.res.executeBulkCount, tt.res.executeBulkCanceled)
tt.args.unlock.check(t, tt.res.unlockCount)
})
}
}
func TestProjectionHandler_prepareExecuteBulk(t *testing.T) {
type fields struct {
Handler Handler
SequenceTable string
stmts []Statement
pushSet bool
shouldPush *time.Timer
}
type args struct {
ctx context.Context
query SearchQuery
reduce Reduce
update Update
}
type want struct {
isErr func(error) bool
}
tests := []struct {
name string
fields fields
args args
want want
}{
{
name: "ctx done",
args: args{
ctx: canceledCtx(),
},
want: want{
isErr: func(err error) bool {
return err == nil
},
},
},
{
name: "fetch fails",
fields: fields{},
args: args{
query: testQuery(nil, 10, ErrNoProjection),
ctx: context.Background(),
},
want: want{
isErr: func(err error) bool {
return errors.Is(err, ErrNoProjection)
},
},
},
{
name: "push fails",
fields: fields{
Handler: NewHandler(HandlerConfig{
eventstore.NewEventstore(
es_repo_mock.NewRepo(t).ExpectFilterEvents(
&repository.Event{
ID: "id2",
Sequence: 1,
PreviousAggregateSequence: 0,
CreationDate: time.Now(),
Type: "test.added",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
&repository.Event{
ID: "id2",
Sequence: 2,
PreviousAggregateSequence: 1,
CreationDate: time.Now(),
Type: "test.changed",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
),
),
},
),
shouldPush: initTimer(),
},
args: args{
update: testUpdate(t, 2, ErrUpdate),
query: testQuery(
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes("testAgg").
Builder(),
10,
nil,
),
reduce: testReduce(
newTestStatement(2, 1),
),
ctx: context.Background(),
},
want: want{
isErr: func(err error) bool {
return errors.Is(err, ErrUpdate)
},
},
},
{
name: "success",
fields: fields{
Handler: NewHandler(HandlerConfig{
eventstore.NewEventstore(
es_repo_mock.NewRepo(t).ExpectFilterEvents(
&repository.Event{
ID: "id2",
Sequence: 1,
PreviousAggregateSequence: 0,
CreationDate: time.Now(),
Type: "test.added",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
&repository.Event{
ID: "id2",
Sequence: 2,
PreviousAggregateSequence: 1,
CreationDate: time.Now(),
Type: "test.changed",
Version: "v1",
AggregateID: "testid",
AggregateType: "testAgg",
},
),
),
},
),
shouldPush: initTimer(),
},
args: args{
update: testUpdate(t, 4, nil),
query: testQuery(
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes("testAgg").
Builder(),
10,
nil,
),
reduce: testReduce(
newTestStatement(1, 0),
newTestStatement(2, 1),
),
ctx: context.Background(),
},
want: want{
isErr: func(err error) bool {
return errors.Is(err, nil)
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
h := &ProjectionHandler{
Handler: tt.fields.Handler,
lockMu: sync.Mutex{},
stmts: tt.fields.stmts,
pushSet: tt.fields.pushSet,
shouldPush: tt.fields.shouldPush,
}
execBulk := h.prepareExecuteBulk(tt.args.query, tt.args.reduce, tt.args.update)
err := execBulk(tt.args.ctx)
if !tt.want.isErr(err) {
t.Errorf("unexpected err %v", err)
}
})
}
}
func testUpdate(t *testing.T, expectedStmtCount int, returnedErr error) Update {
return func(ctx context.Context, stmts []Statement, reduce Reduce) ([]Statement, error) {
if expectedStmtCount != len(stmts) {
t.Errorf("expected %d stmts got %d", expectedStmtCount, len(stmts))
}
return []Statement{}, returnedErr
}
}
func testReduce(stmts ...Statement) Reduce {
return func(event eventstore.EventReader) ([]Statement, error) {
return stmts, nil
}
}
func testReduceErr(err error) Reduce {
return func(event eventstore.EventReader) ([]Statement, error) {
return nil, err
}
}
func testQuery(query *eventstore.SearchQueryBuilder, limit uint64, err error) SearchQuery {
return func() (*eventstore.SearchQueryBuilder, uint64, error) {
return query, limit, err
}
}
type executeBulkMock struct {
callCount int
err error
waitForCancel bool
canceled chan bool
}
func (m *executeBulkMock) executeBulk() executeBulk {
return func(ctx context.Context) error {
m.callCount++
if m.waitForCancel {
select {
case <-ctx.Done():
m.canceled <- true
return nil
case <-time.After(500 * time.Millisecond):
}
}
return m.err
}
}
func (m *executeBulkMock) check(t *testing.T, callCount int, shouldBeCalled bool) {
t.Helper()
if callCount != m.callCount {
t.Errorf("wrong call count: expected %v got: %v", callCount, m.callCount)
}
if shouldBeCalled {
select {
case <-m.canceled:
default:
t.Error("bulk should be canceled but wasn't")
}
}
}
type lockMock struct {
callCount int
canceled chan bool
firstErr error
err error
errWait time.Duration
}
func (m *lockMock) lock() Lock {
return func(ctx context.Context, _ time.Duration) <-chan error {
m.callCount++
errs := make(chan error)
go func() {
for i := 0; ; i++ {
select {
case <-ctx.Done():
m.canceled <- true
close(errs)
return
case <-time.After(m.errWait):
err := m.err
if i == 0 {
err = m.firstErr
}
errs <- err
}
}
}()
return errs
}
}
func (m *lockMock) check(t *testing.T, callCount int, shouldBeCanceled bool) {
t.Helper()
if callCount != m.callCount {
t.Errorf("wrong call count: expected %v got: %v", callCount, m.callCount)
}
if shouldBeCanceled {
select {
case <-m.canceled:
case <-time.After(5 * time.Second):
t.Error("lock should be canceled but wasn't")
}
}
}
type unlockMock struct {
callCount int
err error
}
func (m *unlockMock) unlock() Unlock {
return func() error {
m.callCount++
return m.err
}
}
func (m *unlockMock) check(t *testing.T, callCount int) {
t.Helper()
if callCount != m.callCount {
t.Errorf("wrong call count: expected %v got: %v", callCount, m.callCount)
}
}
func canceledCtx() context.Context {
ctx, cancel := context.WithCancel(context.Background())
cancel()
return ctx
}
type cancelMocker struct {
shouldBeCalled bool
wasCalled chan bool
}
func (m *cancelMocker) mockCancel() {
m.wasCalled <- true
}
func (m *cancelMocker) check(t *testing.T) {
t.Helper()
if m.shouldBeCalled {
if wasCalled := <-m.wasCalled; !wasCalled {
t.Errorf("cancel: should: %t got: %t", m.shouldBeCalled, wasCalled)
}
}
}


@@ -0,0 +1,17 @@
package handler
import "github.com/caos/zitadel/internal/eventstore"
//EventReducer represents the required data
//to work with events
type EventReducer struct {
Event eventstore.EventType
Reduce Reduce
}
//AggregateReducer represents the required data
//to work with aggregates
type AggregateReducer struct {
Aggregate eventstore.AggregateType
EventRedusers []EventReducer
}
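For illustration only: how these descriptors might be filled. The aggregate and event type names are assumptions, and the inline Reduce is a stub.
var exampleReducers = []AggregateReducer{
	{
		Aggregate: "user", // illustrative aggregate type
		EventRedusers: []EventReducer{
			{
				Event: "user.added", // illustrative event type
				Reduce: func(event eventstore.EventReader) ([]Statement, error) {
					return []Statement{{AggregateType: "user", Sequence: event.Sequence()}}, nil
				},
			},
		},
	},
}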


@@ -0,0 +1,52 @@
package handler
import (
"database/sql"
"errors"
"github.com/caos/zitadel/internal/eventstore"
)
var (
ErrNoProjection = errors.New("no projection")
ErrNoValues = errors.New("no values")
ErrNoCondition = errors.New("no condition")
ErrSomeStmtsFailed = errors.New("some statements failed")
)
type Statement struct {
AggregateType eventstore.AggregateType
Sequence uint64
PreviousSequence uint64
Execute func(ex Executer, projectionName string) error
}
func (s *Statement) IsNoop() bool {
return s.Execute == nil
}
type Executer interface {
Exec(string, ...interface{}) (sql.Result, error)
}
type Column struct {
Name string
Value interface{}
}
func NewCol(name string, value interface{}) Column {
return Column{
Name: name,
Value: value,
}
}
type Condition Column
func NewCond(name string, value interface{}) Condition {
return Condition{
Name: name,
Value: value,
}
}
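For illustration only: building columns and conditions with the helpers above; the names and values are assumptions. A Statement whose Execute is nil is reported as a no-op by IsNoop.
func exampleHelpers() {
	cols := []Column{NewCol("email", "gigi@example.com")} // values for an insert/update
	conds := []Condition{NewCond("id", "u1")}             // values for a WHERE clause
	noop := Statement{AggregateType: "user", Sequence: 42, PreviousSequence: 41}
	_, _, _ = cols, conds, noop.IsNoop() // IsNoop() is true because Execute is nil
}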


@@ -1,283 +0,0 @@
package eventstore_test
import (
"encoding/json"
"fmt"
"log"
)
//ReadModel is the minimum representation of a View model.
// it might be saved in a database or in memory
type ReadModel struct {
ProcessedSequence uint64
ID string
events []Event
}
//Append adds all the events to the aggregate.
// The function doesn't compute the new state of the read model
func (a *ReadModel) Append(events ...Event) {
a.events = append(a.events, events...)
}
type ProjectReadModel struct {
ReadModel
Apps []*AppReadModel
Name string
}
func (p *ProjectReadModel) Append(events ...Event) {
for _, event := range events {
switch event.(type) {
case *AddAppEvent:
app := new(AppReadModel)
app.Append(event)
p.Apps = append(p.Apps, app)
case *UpdateAppEvent:
for _, app := range p.Apps {
app.Append(event)
}
}
}
p.events = append(p.events, events...)
}
type AppReadModel struct {
ReadModel
Name string
}
//Reduce calculates the new state of the read model
func (a *AppReadModel) Reduce() error {
for _, event := range a.events {
switch e := event.(type) {
case *AddAppEvent:
a.Name = e.Name
a.ID = e.GetID()
case *UpdateAppEvent:
a.Name = e.Name
}
a.ProcessedSequence = event.GetSequence()
}
return nil
}
//Reduce calculates the new state of the read model
func (p *ProjectReadModel) Reduce() error {
for i := range p.Apps {
if err := p.Apps[i].Reduce(); err != nil {
return err
}
}
for _, event := range p.events {
switch e := event.(type) {
case *CreateProjectEvent:
p.ID = e.ID
p.Name = e.Name
case *RemoveAppEvent:
for i := len(p.Apps) - 1; i >= 0; i-- {
app := p.Apps[i]
if app.ID == e.GetID() {
p.Apps[i] = p.Apps[len(p.Apps)-1]
p.Apps[len(p.Apps)-1] = nil
p.Apps = p.Apps[:len(p.Apps)-1]
}
}
}
p.ProcessedSequence = event.GetSequence()
}
return nil
}
//Event is the minimal representation of a event
// which can be processed by the read models
type Event interface {
//GetSequence returns the event sequence
GetSequence() uint64
//GetID returns the id of the aggregate. It's not the id of the event
GetID() string
}
//DefaultEvent is the implementation of Event
type DefaultEvent struct {
Sequence uint64 `json:"-"`
ID string `json:"-"`
}
func (e *DefaultEvent) GetID() string {
return e.ID
}
func (e *DefaultEvent) GetSequence() uint64 {
return e.Sequence
}
type CreateProjectEvent struct {
DefaultEvent
Name string `json:"name,omitempty"`
}
//CreateProjectEventFromEventstore returns the specific type
// of the general EventstoreEvent
func CreateProjectEventFromEventstore(event *EventstoreEvent) (Event, error) {
e := &CreateProjectEvent{
DefaultEvent: DefaultEvent{Sequence: event.Sequence, ID: event.AggregateID},
}
err := json.Unmarshal(event.Data, e)
return e, err
}
type AddAppEvent struct {
ProjectID string `json:"-"`
AppID string `json:"id"`
Sequence uint64 `json:"-"`
Name string `json:"name,omitempty"`
}
func (e *AddAppEvent) GetID() string {
return e.AppID
}
func (e *AddAppEvent) GetSequence() uint64 {
return e.Sequence
}
func AppAddedEventFromEventstore(event *EventstoreEvent) (Event, error) {
e := &AddAppEvent{
Sequence: event.Sequence,
ProjectID: event.AggregateID,
}
err := json.Unmarshal(event.Data, e)
return e, err
}
type UpdateAppEvent struct {
ProjectID string `json:"-"`
AppID string `json:"id"`
Sequence uint64 `json:"-"`
Name string `json:"name,omitempty"`
}
func (e *UpdateAppEvent) GetID() string {
return e.AppID
}
func (e *UpdateAppEvent) GetSequence() uint64 {
return e.Sequence
}
func AppUpdatedEventFromEventstore(event *EventstoreEvent) (Event, error) {
e := &UpdateAppEvent{
Sequence: event.Sequence,
ProjectID: event.AggregateID,
}
err := json.Unmarshal(event.Data, e)
return e, err
}
type RemoveAppEvent struct {
ProjectID string `json:"-"`
AppID string `json:"id"`
Sequence uint64 `json:"-"`
}
func (e *RemoveAppEvent) GetID() string {
return e.AppID
}
func (e *RemoveAppEvent) GetSequence() uint64 {
return e.Sequence
}
func AppRemovedEventFromEventstore(event *EventstoreEvent) (Event, error) {
e := &RemoveAppEvent{
Sequence: event.Sequence,
ProjectID: event.AggregateID,
}
err := json.Unmarshal(event.Data, e)
return e, err
}
func main() {
eventstore := &Eventstore{
eventMapper: map[string]func(*EventstoreEvent) (Event, error){
"project.added": CreateProjectEventFromEventstore,
"app.added": AppAddedEventFromEventstore,
"app.updated": AppUpdatedEventFromEventstore,
"app.removed": AppRemovedEventFromEventstore,
},
events: []*EventstoreEvent{
{
AggregateID: "p1",
EventType: "project.added",
Sequence: 1,
Data: []byte(`{"name":"hodor"}`),
},
{
AggregateID: "123",
EventType: "app.added",
Sequence: 2,
Data: []byte(`{"id":"a1", "name": "ap 1"}`),
},
{
AggregateID: "123",
EventType: "app.updated",
Sequence: 3,
Data: []byte(`{"id":"a1", "name":"app 1"}`),
},
{
AggregateID: "123",
EventType: "app.added",
Sequence: 4,
Data: []byte(`{"id":"a2", "name": "app 2"}`),
},
{
AggregateID: "123",
EventType: "app.removed",
Sequence: 5,
Data: []byte(`{"id":"a1"}`),
},
},
}
events, err := eventstore.GetEvents()
if err != nil {
log.Panic(err)
}
p := &ProjectReadModel{Apps: []*AppReadModel{}}
p.Append(events...)
p.Reduce()
fmt.Printf("%+v\n", p)
for _, app := range p.Apps {
fmt.Printf("%+v\n", app)
}
}
//Eventstore is a simple abstraction of the eventstore framework
type Eventstore struct {
eventMapper map[string]func(*EventstoreEvent) (Event, error)
events []*EventstoreEvent
}
func (es *Eventstore) GetEvents() (events []Event, err error) {
events = make([]Event, len(es.events))
for i, event := range es.events {
events[i], err = es.eventMapper[event.EventType](event)
if err != nil {
return nil, err
}
}
return events, nil
}
type EventstoreEvent struct {
AggregateID string
Sequence uint64
EventType string
Data []byte
}


@@ -2,7 +2,7 @@ package eventstore
import "time"
//ReadModel is the minimum representation of a View model.
//ReadModel is the minimum representation of a read model.
// It implements a basic reducer
// it might be saved in a database or in memory
type ReadModel struct {
@@ -21,7 +21,7 @@ func (rm *ReadModel) AppendEvents(events ...EventReader) *ReadModel {
return rm
}
//Reduce is the basic implementaion of reducer
//Reduce is the basic implementation of reducer
// If this function is extended the extending function should be the last step
func (rm *ReadModel) Reduce() error {
if len(rm.Events) == 0 {


@@ -12,9 +12,13 @@ type Event struct {
//Sequence is the sequence of the event
Sequence uint64
//PreviousSequence is the sequence of the previous sequence
//PreviousAggregateSequence is the sequence of the previous sequence of the aggregate (e.g. org.250989)
// if it's 0 then it's the first event of this aggregate
PreviousSequence uint64
PreviousAggregateSequence uint64
//PreviousAggregateTypeSequence is the sequence of the previous sequence of the aggregate root (e.g. org)
// the first event of the aggregate has previous aggregate root sequence 0
PreviousAggregateTypeSequence uint64
//CreationDate is the time the event is created
// it's used for human readability.

View File

@@ -24,6 +24,11 @@ func (m *MockRepository) ExpectFilterEvents(events ...*repository.Event) *MockRe
return m
}
func (m *MockRepository) ExpectFilterEventsError(err error) *MockRepository {
m.EXPECT().Filter(gomock.Any(), gomock.Any()).Return(nil, err)
return m
}
func (m *MockRepository) ExpectPush(expectedEvents []*repository.Event, expectedUniqueConstraints ...*repository.UniqueConstraint) *MockRepository {
m.EXPECT().Push(gomock.Any(), gomock.Any(), gomock.Any()).DoAndReturn(
func(ctx context.Context, events []*repository.Event, uniqueConstraints ...*repository.UniqueConstraint) error {

View File

@@ -19,7 +19,29 @@ const (
//as soon as stored procedures are possible in crdb
// we could move the code to migrations and call the procedure
// tracking issue: https://github.com/cockroachdb/cockroach/issues/17511
crdbInsert = "WITH data ( " +
//
//previous_data selects the needed data of the latest event of the aggregate
// and buffers it (crdb inmemory)
crdbInsert = "WITH previous_data (aggregate_type_sequence, aggregate_sequence, resource_owner) AS (" +
"SELECT agg_type.seq, agg.seq, agg.ro FROM " +
"(" +
//max sequence of requested aggregate type
" SELECT MAX(event_sequence) seq, 1 join_me" +
" FROM eventstore.events" +
" WHERE aggregate_type = $2" +
") AS agg_type " +
// combined with
"LEFT JOIN " +
"(" +
// max sequence and resource owner of aggregate root
" SELECT event_sequence seq, resource_owner ro, 1 join_me" +
" FROM eventstore.events" +
" WHERE aggregate_type = $2 AND aggregate_id = $3" +
" ORDER BY event_sequence DESC" +
" LIMIT 1" +
") AS agg USING(join_me)" +
") " +
"INSERT INTO eventstore.events (" +
" event_type," +
" aggregate_type," +
" aggregate_id," +
@@ -29,15 +51,8 @@ const (
" editor_user," +
" editor_service," +
" resource_owner," +
// variables below are calculated
" previous_sequence" +
") AS (" +
//previous_data selects the needed data of the latest event of the aggregate
// and buffers it (crdb inmemory)
" WITH previous_data AS (" +
" SELECT event_sequence AS seq, resource_owner " +
" FROM eventstore.events " +
" WHERE aggregate_type = $2 AND aggregate_id = $3 ORDER BY seq DESC LIMIT 1" +
" previous_aggregate_sequence," +
" previous_aggregate_type_sequence" +
") " +
// defines the data to be inserted
"SELECT" +
@@ -49,43 +64,12 @@ const (
" $5::JSONB AS event_data," +
" $6::VARCHAR AS editor_user," +
" $7::VARCHAR AS editor_service," +
" CASE WHEN EXISTS (SELECT * FROM previous_data) " +
" THEN (SELECT resource_owner FROM previous_data) " +
" ELSE $8::VARCHAR " +
" end AS resource_owner, " +
" CASE WHEN EXISTS (SELECT * FROM previous_data) " +
" THEN (SELECT seq FROM previous_data) " +
" ELSE NULL " +
" end AS previous_sequence" +
") " +
"INSERT INTO eventstore.events " +
" ( " +
" event_type, " +
" aggregate_type," +
" aggregate_id, " +
" aggregate_version, " +
" creation_date, " +
" event_data, " +
" editor_user, " +
" editor_service, " +
" resource_owner, " +
" previous_sequence " +
" ) " +
" ( " +
" SELECT " +
" event_type, " +
" aggregate_type," +
" aggregate_id, " +
" aggregate_version, " +
" COALESCE(creation_date, NOW()), " +
" event_data, " +
" editor_user, " +
" editor_service, " +
" resource_owner, " +
" previous_sequence " +
" FROM data " +
" ) " +
"RETURNING id, event_sequence, previous_sequence, creation_date, resource_owner"
" IFNULL((resource_owner), $8::VARCHAR) AS resource_owner," +
" aggregate_sequence AS previous_aggregate_sequence," +
" aggregate_type_sequence AS previous_aggregate_type_sequence " +
"FROM previous_data " +
"RETURNING id, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, creation_date, resource_owner"
uniqueInsert = `INSERT INTO eventstore.unique_constraints
(
unique_type,
@@ -95,6 +79,7 @@ const (
$1,
$2
)`
uniqueDelete = `DELETE FROM eventstore.unique_constraints
WHERE unique_type = $1 and unique_field = $2`
)
@@ -113,15 +98,13 @@ func (db *CRDB) Health(ctx context.Context) error { return db.client.Ping() }
// This call is transaction safe. The transaction will be rolled back if one event fails
func (db *CRDB) Push(ctx context.Context, events []*repository.Event, uniqueConstraints ...*repository.UniqueConstraint) error {
err := crdb.ExecuteTx(ctx, db.client, nil, func(tx *sql.Tx) error {
stmt, err := tx.PrepareContext(ctx, crdbInsert)
if err != nil {
logging.Log("SQL-3to5p").WithError(err).Warn("prepare failed")
return caos_errs.ThrowInternal(err, "SQL-OdXRE", "prepare failed")
}
var previousSequence Sequence
var (
previousAggregateSequence Sequence
previousAggregateTypeSequence Sequence
)
for _, event := range events {
err = stmt.QueryRowContext(ctx,
err := tx.QueryRowContext(ctx, crdbInsert,
event.Type,
event.AggregateType,
event.AggregateID,
@@ -130,22 +113,22 @@ func (db *CRDB) Push(ctx context.Context, events []*repository.Event, uniqueCons
event.EditorUser,
event.EditorService,
event.ResourceOwner,
).Scan(&event.ID, &event.Sequence, &previousSequence, &event.CreationDate, &event.ResourceOwner)
).Scan(&event.ID, &event.Sequence, &previousAggregateSequence, &previousAggregateTypeSequence, &event.CreationDate, &event.ResourceOwner)
event.PreviousSequence = uint64(previousSequence)
event.PreviousAggregateSequence = uint64(previousAggregateSequence)
event.PreviousAggregateTypeSequence = uint64(previousAggregateTypeSequence)
if err != nil {
logging.LogWithFields("SQL-IP3js",
logging.LogWithFields("SQL-NOqH7",
"aggregate", event.AggregateType,
"aggregateId", event.AggregateID,
"aggregateType", event.AggregateType,
"eventType", event.Type).WithError(err).Info("query failed",
"seq", event.PreviousSequence)
"eventType", event.Type).WithError(err).Info("query failed")
return caos_errs.ThrowInternal(err, "SQL-SBP37", "unable to create event")
}
}
err = db.handleUniqueConstraints(ctx, tx, uniqueConstraints...)
err := db.handleUniqueConstraints(ctx, tx, uniqueConstraints...)
if err != nil {
return err
}
@@ -230,7 +213,8 @@ func (db *CRDB) eventQuery() string {
" creation_date" +
", event_type" +
", event_sequence" +
", previous_sequence" +
", previous_aggregate_sequence" +
", previous_aggregate_type_sequence" +
", event_data" +
", editor_service" +
", editor_user" +
@@ -240,6 +224,7 @@ func (db *CRDB) eventQuery() string {
", aggregate_version" +
" FROM eventstore.events"
}
func (db *CRDB) maxSequenceQuery() string {
return "SELECT MAX(event_sequence) FROM eventstore.events"
}
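
The rewritten crdbInsert returns two values per inserted event: previous_aggregate_sequence, the latest sequence of the same aggregate id, and previous_aggregate_type_sequence, the latest sequence across the whole aggregate type. The Go sketch below only illustrates the semantics of the previous_data CTE for an in-memory slice of events; it is not code from this change, and it reuses the repository.Event fields shown in the diff:

// previousSequences mirrors what the previous_data CTE selects:
// the max sequence of the aggregate type and the max sequence of
// the concrete aggregate (both 0 if no matching event exists yet).
func previousSequences(stored []*repository.Event, aggregateType repository.AggregateType, aggregateID string) (prevAggregateSeq, prevAggregateTypeSeq uint64) {
    for _, e := range stored {
        if e.AggregateType != aggregateType {
            continue
        }
        if e.Sequence > prevAggregateTypeSeq {
            prevAggregateTypeSeq = e.Sequence
        }
        if e.AggregateID == aggregateID && e.Sequence > prevAggregateSeq {
            prevAggregateSeq = e.Sequence
        }
    }
    return prevAggregateSeq, prevAggregateTypeSeq
}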

View File

@@ -49,7 +49,7 @@ func query(ctx context.Context, criteria querier, searchQuery *repository.Search
rows, err := criteria.db().QueryContext(ctx, query, values...)
if err != nil {
logging.Log("SQL-HP3Uk").WithError(err).Info("query failed")
return z_errors.ThrowInternal(err, "SQL-IJuyR", "unable to filter events")
return z_errors.ThrowInternal(err, "SQL-KyeAx", "unable to filter events")
}
defer rows.Close()
@@ -91,7 +91,10 @@ func eventsScanner(scanner scan, dest interface{}) (err error) {
if !ok {
return z_errors.ThrowInvalidArgument(nil, "SQL-4GP6F", "type must be event")
}
var previousSequence Sequence
var (
previousAggregateSequence Sequence
previousAggregateTypeSequence Sequence
)
data := make(Data, 0)
event := new(repository.Event)
@@ -99,7 +102,8 @@ func eventsScanner(scanner scan, dest interface{}) (err error) {
&event.CreationDate,
&event.Type,
&event.Sequence,
&previousSequence,
&previousAggregateSequence,
&previousAggregateTypeSequence,
&data,
&event.EditorService,
&event.EditorUser,
@@ -114,7 +118,8 @@ func eventsScanner(scanner scan, dest interface{}) (err error) {
return z_errors.ThrowInternal(err, "SQL-M0dsf", "unable to scan row")
}
event.PreviousSequence = uint64(previousSequence)
event.PreviousAggregateSequence = uint64(previousAggregateSequence)
event.PreviousAggregateTypeSequence = uint64(previousAggregateTypeSequence)
event.Data = make([]byte, len(data))
copy(event.Data, data)

View File

@@ -129,13 +129,13 @@ func Test_prepareColumns(t *testing.T) {
dest: &[]*repository.Event{},
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
expected: []*repository.Event{
{AggregateID: "hodor", AggregateType: "user", Sequence: 5, Data: make(Data, 0)},
},
},
fields: fields{
dbRow: []interface{}{time.Time{}, repository.EventType(""), uint64(5), Sequence(0), Data(nil), "", "", "", repository.AggregateType("user"), "hodor", repository.Version("")},
dbRow: []interface{}{time.Time{}, repository.EventType(""), uint64(5), Sequence(0), Sequence(0), Data(nil), "", "", "", repository.AggregateType("user"), "hodor", repository.Version("")},
},
},
{
@@ -145,7 +145,7 @@ func Test_prepareColumns(t *testing.T) {
dest: []*repository.Event{},
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
dbErr: errors.IsErrorInvalidArgument,
},
},
@@ -157,7 +157,7 @@ func Test_prepareColumns(t *testing.T) {
dbErr: sql.ErrConnDone,
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
dbErr: errors.IsInternal,
},
},
@@ -591,7 +591,7 @@ func Test_query_events_mocked(t *testing.T) {
},
fields: fields{
mock: newMockClient(t).expectQuery(t,
`SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC`,
`SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC`,
[]driver.Value{repository.AggregateType("user")},
),
},
@@ -620,7 +620,7 @@ func Test_query_events_mocked(t *testing.T) {
},
fields: fields{
mock: newMockClient(t).expectQuery(t,
`SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence LIMIT \$2`,
`SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence LIMIT \$2`,
[]driver.Value{repository.AggregateType("user"), uint64(5)},
),
},
@@ -649,7 +649,7 @@ func Test_query_events_mocked(t *testing.T) {
},
fields: fields{
mock: newMockClient(t).expectQuery(t,
`SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC LIMIT \$2`,
`SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC LIMIT \$2`,
[]driver.Value{repository.AggregateType("user"), uint64(5)},
),
},
@@ -678,7 +678,7 @@ func Test_query_events_mocked(t *testing.T) {
},
fields: fields{
mock: newMockClient(t).expectQueryErr(t,
`SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC`,
`SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC`,
[]driver.Value{repository.AggregateType("user")},
sql.ErrConnDone),
},
@@ -707,7 +707,7 @@ func Test_query_events_mocked(t *testing.T) {
},
fields: fields{
mock: newMockClient(t).expectQuery(t,
`SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC`,
`SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) ORDER BY event_sequence DESC`,
[]driver.Value{repository.AggregateType("user")},
&repository.Event{Sequence: 100}),
},
@@ -775,7 +775,7 @@ func Test_query_events_mocked(t *testing.T) {
},
fields: fields{
mock: newMockClient(t).expectQuery(t,
`SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) OR \( aggregate_type = \$2 AND aggregate_id = \$3 \) ORDER BY event_sequence DESC LIMIT \$4`,
`SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, previous_aggregate_type_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE \( aggregate_type = \$1 \) OR \( aggregate_type = \$2 AND aggregate_id = \$3 \) ORDER BY event_sequence DESC LIMIT \$4`,
[]driver.Value{repository.AggregateType("user"), repository.AggregateType("org"), "asdf42", uint64(5)},
),
},

View File

@@ -19,7 +19,8 @@ type SearchQuery struct {
builder *SearchQueryBuilder
aggregateTypes []AggregateType
aggregateIDs []string
eventSequence uint64
eventSequenceGreater uint64
eventSequenceLess uint64
eventTypes []EventType
eventData map[string]interface{}
}
@@ -40,7 +41,7 @@ type AggregateType repository.AggregateType
// EventType is the description of the change
type EventType repository.EventType
// NewSearchQueryBuilder creates a new factory for event filters
// NewSearchQueryBuilder creates a new builder for event filters
// aggregateTypes must contain at least one aggregate type
func NewSearchQueryBuilder(columns Columns) *SearchQueryBuilder {
return &SearchQueryBuilder{
@@ -103,7 +104,13 @@ func (query *SearchQuery) AggregateTypes(types ...AggregateType) *SearchQuery {
//SequenceGreater filters for events with sequence greater the requested sequence
func (query *SearchQuery) SequenceGreater(sequence uint64) *SearchQuery {
query.eventSequence = sequence
query.eventSequenceGreater = sequence
return query
}
//SequenceLess filters for events with sequence less the requested sequence
func (query *SearchQuery) SequenceLess(sequence uint64) *SearchQuery {
query.eventSequenceLess = sequence
return query
}
@@ -131,21 +138,22 @@ func (query *SearchQuery) Builder() *SearchQueryBuilder {
return query.builder
}
func (factory *SearchQueryBuilder) build() (*repository.SearchQuery, error) {
if factory == nil ||
len(factory.queries) < 1 ||
factory.columns.Validate() != nil {
return nil, errors.ThrowPreconditionFailed(nil, "MODEL-4m9gs", "factory invalid")
func (builder *SearchQueryBuilder) build() (*repository.SearchQuery, error) {
if builder == nil ||
len(builder.queries) < 1 ||
builder.columns.Validate() != nil {
return nil, errors.ThrowPreconditionFailed(nil, "MODEL-4m9gs", "builder invalid")
}
filters := make([][]*repository.Filter, len(factory.queries))
filters := make([][]*repository.Filter, len(builder.queries))
for i, query := range factory.queries {
for i, query := range builder.queries {
for _, f := range []func() *repository.Filter{
query.aggregateTypeFilter,
query.aggregateIDFilter,
query.eventSequenceFilter,
query.eventTypeFilter,
query.eventDataFilter,
query.eventSequenceGreaterFilter,
query.eventSequenceLessFilter,
query.builder.resourceOwnerFilter,
} {
if filter := f(); filter != nil {
@@ -159,9 +167,9 @@ func (factory *SearchQueryBuilder) build() (*repository.SearchQuery, error) {
}
return &repository.SearchQuery{
Columns: factory.columns,
Limit: factory.limit,
Desc: factory.desc,
Columns: builder.columns,
Limit: builder.limit,
Desc: builder.desc,
Filters: filters,
}, nil
}
@@ -201,22 +209,33 @@ func (query *SearchQuery) aggregateTypeFilter() *repository.Filter {
return repository.NewFilter(repository.FieldAggregateType, aggregateTypes, repository.OperationIn)
}
func (query *SearchQuery) eventSequenceFilter() *repository.Filter {
if query.eventSequence == 0 {
func (query *SearchQuery) eventSequenceGreaterFilter() *repository.Filter {
if query.eventSequenceGreater == 0 {
return nil
}
sortOrder := repository.OperationGreater
if query.builder.desc {
sortOrder = repository.OperationLess
}
return repository.NewFilter(repository.FieldSequence, query.eventSequence, sortOrder)
return repository.NewFilter(repository.FieldSequence, query.eventSequenceGreater, sortOrder)
}
func (factory *SearchQueryBuilder) resourceOwnerFilter() *repository.Filter {
if factory.resourceOwner == "" {
func (query *SearchQuery) eventSequenceLessFilter() *repository.Filter {
if query.eventSequenceLess == 0 {
return nil
}
return repository.NewFilter(repository.FieldResourceOwner, factory.resourceOwner, repository.OperationEquals)
sortOrder := repository.OperationLess
if query.builder.desc {
sortOrder = repository.OperationGreater
}
return repository.NewFilter(repository.FieldSequence, query.eventSequenceLess, sortOrder)
}
func (builder *SearchQueryBuilder) resourceOwnerFilter() *repository.Filter {
if builder.resourceOwner == "" {
return nil
}
return repository.NewFilter(repository.FieldResourceOwner, builder.resourceOwner, repository.OperationEquals)
}
func (query *SearchQuery) eventDataFilter() *repository.Filter {
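
With SequenceGreater and SequenceLess both available, a single query can constrain events to a sequence window, which is what the projection handler needs when it catches up between its current sequence and the sequence of the first queued statement. A usage sketch (aggregate type and id are placeholders; package paths as used elsewhere in this PR):

// events with sequence greater than 8 and less than 16 for one user aggregate
query := eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
    AddQuery().
    AggregateTypes("user").
    AggregateIDs("user-id").
    SequenceGreater(8).
    SequenceLess(16).
    Builder()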

View File

@@ -26,10 +26,10 @@ func testSetColumns(columns Columns) func(factory *SearchQueryBuilder) *SearchQu
}
}
func testSetLimit(limit uint64) func(factory *SearchQueryBuilder) *SearchQueryBuilder {
return func(factory *SearchQueryBuilder) *SearchQueryBuilder {
factory = factory.Limit(limit)
return factory
func testSetLimit(limit uint64) func(builder *SearchQueryBuilder) *SearchQueryBuilder {
return func(builder *SearchQueryBuilder) *SearchQueryBuilder {
builder = builder.Limit(limit)
return builder
}
}
@@ -50,13 +50,20 @@ func testSetAggregateTypes(types ...AggregateType) func(*SearchQuery) *SearchQue
}
}
func testSetSequence(sequence uint64) func(*SearchQuery) *SearchQuery {
func testSetSequenceGreater(sequence uint64) func(*SearchQuery) *SearchQuery {
return func(query *SearchQuery) *SearchQuery {
query = query.SequenceGreater(sequence)
return query
}
}
func testSetSequenceLess(sequence uint64) func(*SearchQuery) *SearchQuery {
return func(query *SearchQuery) *SearchQuery {
query = query.SequenceLess(sequence)
return query
}
}
func testSetAggregateIDs(aggregateIDs ...string) func(*SearchQuery) *SearchQuery {
return func(query *SearchQuery) *SearchQuery {
query = query.AggregateIDs(aggregateIDs...)
@@ -89,7 +96,7 @@ func testSetSortOrder(asc bool) func(*SearchQueryBuilder) *SearchQueryBuilder {
}
}
func TestSearchQueryFactorySetters(t *testing.T) {
func TestSearchQuerybuilderSetters(t *testing.T) {
type args struct {
columns Columns
setters []func(*SearchQueryBuilder) *SearchQueryBuilder
@@ -100,7 +107,7 @@ func TestSearchQueryFactorySetters(t *testing.T) {
res *SearchQueryBuilder
}{
{
name: "New factory",
name: "New builder",
args: args{
columns: ColumnsEvent,
},
@@ -127,14 +134,27 @@ func TestSearchQueryFactorySetters(t *testing.T) {
},
},
{
name: "set sequence",
name: "set sequence greater",
args: args{
setters: []func(*SearchQueryBuilder) *SearchQueryBuilder{testAddQuery(testSetSequence(90))},
setters: []func(*SearchQueryBuilder) *SearchQueryBuilder{testAddQuery(testSetSequenceGreater(90))},
},
res: &SearchQueryBuilder{
queries: []*SearchQuery{
{
eventSequence: 90,
eventSequenceGreater: 90,
},
},
},
},
{
name: "set sequence less",
args: args{
setters: []func(*SearchQueryBuilder) *SearchQueryBuilder{testAddQuery(testSetSequenceLess(90))},
},
res: &SearchQueryBuilder{
queries: []*SearchQuery{
{
eventSequenceLess: 90,
},
},
},
@@ -202,7 +222,7 @@ func TestSearchQueryFactorySetters(t *testing.T) {
}
}
func TestSearchQueryFactoryBuild(t *testing.T) {
func TestSearchQuerybuilderBuild(t *testing.T) {
type args struct {
columns Columns
setters []func(*SearchQueryBuilder) *SearchQueryBuilder
@@ -305,7 +325,7 @@ func TestSearchQueryFactoryBuild(t *testing.T) {
testSetLimit(5),
testSetSortOrder(false),
testAddQuery(
testSetSequence(100),
testSetSequenceGreater(100),
testSetAggregateTypes("user"),
),
},
@@ -333,7 +353,7 @@ func TestSearchQueryFactoryBuild(t *testing.T) {
testSetLimit(5),
testSetSortOrder(true),
testAddQuery(
testSetSequence(100),
testSetSequenceGreater(100),
testSetAggregateTypes("user"),
),
},
@@ -362,7 +382,7 @@ func TestSearchQueryFactoryBuild(t *testing.T) {
testSetSortOrder(false),
testSetColumns(repository.ColumnsMaxSequence),
testAddQuery(
testSetSequence(100),
testSetSequenceGreater(100),
testSetAggregateTypes("user"),
),
},
@@ -475,7 +495,7 @@ func TestSearchQueryFactoryBuild(t *testing.T) {
setters: []func(*SearchQueryBuilder) *SearchQueryBuilder{
testAddQuery(
testSetAggregateTypes("user"),
testSetSequence(8),
testSetSequenceGreater(8),
),
},
},
@@ -572,6 +592,34 @@ func TestSearchQueryFactoryBuild(t *testing.T) {
},
},
},
{
name: "filter aggregate type and sequence between",
args: args{
columns: ColumnsEvent,
setters: []func(*SearchQueryBuilder) *SearchQueryBuilder{
testAddQuery(
testSetAggregateTypes("user"),
testSetSequenceGreater(8),
testSetSequenceLess(16),
),
},
},
res: res{
isErr: nil,
query: &repository.SearchQuery{
Columns: repository.ColumnsEvent,
Desc: false,
Limit: 0,
Filters: [][]*repository.Filter{
{
repository.NewFilter(repository.FieldAggregateType, repository.AggregateType("user"), repository.OperationEquals),
repository.NewFilter(repository.FieldSequence, uint64(8), repository.OperationGreater),
repository.NewFilter(repository.FieldSequence, uint64(16), repository.OperationLess),
},
},
},
},
},
{
name: "column invalid",
args: args{
@@ -589,11 +637,11 @@ func TestSearchQueryFactoryBuild(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
factory := NewSearchQueryBuilder(tt.args.columns)
builder := NewSearchQueryBuilder(tt.args.columns)
for _, f := range tt.args.setters {
factory = f(factory)
builder = f(builder)
}
query, err := factory.build()
query, err := builder.build()
if tt.res.isErr != nil && !tt.res.isErr(err) {
t.Errorf("wrong error(%T): %v", err, err)
return
@@ -644,8 +692,11 @@ func assertQuery(t *testing.T, i int, want, got *SearchQuery) {
if !reflect.DeepEqual(got.eventData, want.eventData) {
t.Errorf("wrong eventData in query %d : got: %v want: %v", i, got.eventData, want.eventData)
}
if got.eventSequence != want.eventSequence {
t.Errorf("wrong eventSequence in query %d : got: %v want: %v", i, got.eventSequence, want.eventSequence)
if got.eventSequenceLess != want.eventSequenceLess {
t.Errorf("wrong eventSequenceLess in query %d : got: %v want: %v", i, got.eventSequenceLess, want.eventSequenceLess)
}
if got.eventSequenceGreater != want.eventSequenceGreater {
t.Errorf("wrong eventSequenceGreater in query %d : got: %v want: %v", i, got.eventSequenceGreater, want.eventSequenceGreater)
}
if !reflect.DeepEqual(got.eventTypes, want.eventTypes) {
t.Errorf("wrong eventTypes in query %d : got: %v want: %v", i, got.eventTypes, want.eventTypes)

View File

@@ -14,24 +14,43 @@ var (
type Subscription struct {
Events chan EventReader
aggregates []AggregateType
types map[AggregateType][]EventType
}
func Subscribe(aggregates ...AggregateType) *Subscription {
events := make(chan EventReader, 100)
//SubscribeAggregates subscribes for all events on the given aggregates
func SubscribeAggregates(eventQueue chan EventReader, aggregates ...AggregateType) *Subscription {
types := make(map[AggregateType][]EventType, len(aggregates))
for _, aggregate := range aggregates {
types[aggregate] = nil
}
sub := &Subscription{
Events: events,
aggregates: aggregates,
Events: eventQueue,
types: types,
}
subsMutext.Lock()
defer subsMutext.Unlock()
for _, aggregate := range aggregates {
_, ok := subscriptions[aggregate]
if !ok {
subscriptions[aggregate] = make([]*Subscription, 0, 1)
subscriptions[aggregate] = append(subscriptions[aggregate], sub)
}
return sub
}
//SubscribeEventTypes subscribes for the given event types
// if no event types are provided the subscription is for all events of the aggregate
func SubscribeEventTypes(eventQueue chan EventReader, types map[AggregateType][]EventType) *Subscription {
aggregates := make([]AggregateType, len(types))
sub := &Subscription{
Events: eventQueue,
types: types,
}
subsMutext.Lock()
defer subsMutext.Unlock()
for _, aggregate := range aggregates {
subscriptions[aggregate] = append(subscriptions[aggregate], sub)
}
@@ -43,12 +62,24 @@ func notify(events []EventReader) {
subsMutext.Lock()
defer subsMutext.Unlock()
for _, event := range events {
subs, ok := subscriptions[event.Aggregate().Typ]
subs, ok := subscriptions[event.Aggregate().Type]
if !ok {
continue
}
for _, sub := range subs {
eventTypes := sub.types[event.Aggregate().Type]
//subscription for all events
if len(eventTypes) == 0 {
sub.Events <- event
continue
}
//subscription for certain events
for _, eventType := range eventTypes {
if event.Type() == eventType {
sub.Events <- event
break
}
}
}
}
}
@@ -56,7 +87,7 @@ func notify(events []EventReader) {
func (s *Subscription) Unsubscribe() {
subsMutext.Lock()
defer subsMutext.Unlock()
for _, aggregate := range s.aggregates {
for aggregate := range s.types {
subs, ok := subscriptions[aggregate]
if !ok {
continue
@@ -88,7 +119,7 @@ func mapEventToV1Event(event EventReader) *models.Event {
Sequence: event.Sequence(),
CreationDate: event.CreationDate(),
Type: models.EventType(event.Type()),
AggregateType: models.AggregateType(event.Aggregate().Typ),
AggregateType: models.AggregateType(event.Aggregate().Type),
AggregateID: event.Aggregate().ID,
ResourceOwner: event.Aggregate().ResourceOwner,
EditorService: event.EditorService(),
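
Subscribers now hand in their own event channel and can narrow a subscription to specific event types per aggregate; notify skips events whose type is not in the requested list. A usage sketch (buffer size is the caller's choice, and the package qualifiers are assumed to line up with the reducer registrations shown later in this PR):

queue := make(chan eventstore.EventReader, 100)
sub := eventstore.SubscribeEventTypes(queue, map[eventstore.AggregateType][]eventstore.EventType{
    org.AggregateType: {org.OrgAddedEventType, org.OrgDomainPrimarySetEventType},
})
defer sub.Unsubscribe()

for event := range queue {
    // reduce or forward the event
    _ = event
}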

View File

@@ -18,6 +18,7 @@ func Start(conf Config) (*SQL, *sql.DB, error) {
if err != nil {
return nil, nil, errors.ThrowPreconditionFailed(err, "SQL-9qBtr", "unable to open database connection")
}
return &SQL{
client: client,
}, client, nil

View File

@@ -11,11 +11,11 @@ import (
)
const (
selectEscaped = `SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore\.events WHERE aggregate_type = \$1`
selectEscaped = `SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore\.events WHERE aggregate_type = \$1`
)
var (
eventColumns = []string{"creation_date", "event_type", "event_sequence", "previous_sequence", "event_data", "editor_service", "editor_user", "resource_owner", "aggregate_type", "aggregate_id", "aggregate_version"}
eventColumns = []string{"creation_date", "event_type", "event_sequence", "previous_aggregate_sequence", "event_data", "editor_service", "editor_user", "resource_owner", "aggregate_type", "aggregate_id", "aggregate_version"}
expectedFilterEventsLimitFormat = regexp.MustCompile(selectEscaped + ` ORDER BY event_sequence LIMIT \$2`).String()
expectedFilterEventsDescFormat = regexp.MustCompile(selectEscaped + ` ORDER BY event_sequence DESC`).String()
expectedFilterEventsAggregateIDLimit = regexp.MustCompile(selectEscaped + ` AND aggregate_id = \$2 ORDER BY event_sequence LIMIT \$3`).String()
@@ -23,7 +23,7 @@ var (
expectedGetAllEvents = regexp.MustCompile(selectEscaped + ` ORDER BY event_sequence`).String()
expectedInsertStatement = regexp.MustCompile(`INSERT INTO eventstore\.events ` +
`\(event_type, aggregate_type, aggregate_id, aggregate_version, creation_date, event_data, editor_user, editor_service, resource_owner, previous_sequence\) ` +
`\(event_type, aggregate_type, aggregate_id, aggregate_version, creation_date, event_data, editor_user, editor_service, resource_owner, previous_aggregate_sequence, previous_aggregate_type_sequence\) ` +
`SELECT \$1, \$2, \$3, \$4, COALESCE\(\$5, now\(\)\), \$6, \$7, \$8, \$9, \$10 ` +
`WHERE EXISTS \(` +
`SELECT 1 FROM eventstore\.events WHERE aggregate_type = \$11 AND aggregate_id = \$12 HAVING MAX\(event_sequence\) = \$13 OR \(\$14::BIGINT IS NULL AND COUNT\(\*\) = 0\)\) ` +

View File

@@ -18,7 +18,7 @@ const (
" creation_date" +
", event_type" +
", event_sequence" +
", previous_sequence" +
", previous_aggregate_sequence" +
", event_data" +
", editor_service" +
", editor_user" +

View File

@@ -234,7 +234,7 @@ func Test_prepareColumns(t *testing.T) {
dest: new(es_models.Event),
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
dbRow: []interface{}{time.Time{}, es_models.EventType(""), uint64(5), Sequence(0), Data(nil), "", "", "", es_models.AggregateType("user"), "hodor", es_models.Version("")},
expected: es_models.Event{AggregateID: "hodor", AggregateType: "user", Sequence: 5, Data: make(Data, 0)},
},
@@ -246,7 +246,7 @@ func Test_prepareColumns(t *testing.T) {
dest: new(uint64),
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
dbErr: errors.IsErrorInvalidArgument,
},
},
@@ -258,7 +258,7 @@ func Test_prepareColumns(t *testing.T) {
dbErr: sql.ErrConnDone,
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events",
dbErr: errors.IsInternal,
},
},
@@ -429,7 +429,7 @@ func Test_buildQuery(t *testing.T) {
queryFactory: es_models.NewSearchQueryFactory("user").OrderDesc(),
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE aggregate_type = $1 ORDER BY event_sequence DESC",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE aggregate_type = $1 ORDER BY event_sequence DESC",
rowScanner: true,
values: []interface{}{es_models.AggregateType("user")},
},
@@ -440,7 +440,7 @@ func Test_buildQuery(t *testing.T) {
queryFactory: es_models.NewSearchQueryFactory("user").Limit(5),
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE aggregate_type = $1 ORDER BY event_sequence LIMIT $2",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE aggregate_type = $1 ORDER BY event_sequence LIMIT $2",
rowScanner: true,
values: []interface{}{es_models.AggregateType("user"), uint64(5)},
limit: 5,
@@ -452,7 +452,7 @@ func Test_buildQuery(t *testing.T) {
queryFactory: es_models.NewSearchQueryFactory("user").Limit(5).OrderDesc(),
},
res: res{
query: "SELECT creation_date, event_type, event_sequence, previous_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE aggregate_type = $1 ORDER BY event_sequence DESC LIMIT $2",
query: "SELECT creation_date, event_type, event_sequence, previous_aggregate_sequence, event_data, editor_service, editor_user, resource_owner, aggregate_type, aggregate_id, aggregate_version FROM eventstore.events WHERE aggregate_type = $1 ORDER BY event_sequence DESC LIMIT $2",
rowScanner: true,
values: []interface{}{es_models.AggregateType("user"), uint64(5)},
limit: 5,

View File

@@ -4,12 +4,10 @@ import (
"context"
"time"
"github.com/caos/logging"
"github.com/getsentry/sentry-go"
"github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/logging"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/eventstore/v1/models"
)

View File

@@ -2,7 +2,7 @@ package eventstore
import "time"
//MemberWriteModel is the minimum representation of a command side view model.
//WriteModel is the minimum representation of a command side write model.
// It implements a basic reducer
// it's purpose is to reduce events to create new ones
type WriteModel struct {

View File

@@ -2,9 +2,9 @@ package eventstore
import (
"context"
"github.com/caos/zitadel/internal/query"
iam_model "github.com/caos/zitadel/internal/iam/model"
"github.com/caos/zitadel/internal/query"
)
type IAMRepository struct {

View File

@@ -3,7 +3,7 @@ package handler
import (
"time"
"github.com/caos/zitadel/internal/eventstore/v1"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/static"
"github.com/caos/zitadel/internal/config/systemdefaults"

View File

@@ -4,16 +4,15 @@ import (
"github.com/caos/logging"
"github.com/rakyll/statik/fs"
"github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/query"
"github.com/caos/zitadel/internal/static"
sd "github.com/caos/zitadel/internal/config/systemdefaults"
"github.com/caos/zitadel/internal/config/types"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
es_spol "github.com/caos/zitadel/internal/eventstore/v1/spooler"
"github.com/caos/zitadel/internal/management/repository/eventsourcing/eventstore"
"github.com/caos/zitadel/internal/management/repository/eventsourcing/spooler"
mgmt_view "github.com/caos/zitadel/internal/management/repository/eventsourcing/view"
"github.com/caos/zitadel/internal/query"
"github.com/caos/zitadel/internal/static"
)
type Config struct {

View File

@@ -4,14 +4,13 @@ import (
"net/http"
"time"
"github.com/caos/zitadel/internal/command"
"github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/command"
sd "github.com/caos/zitadel/internal/config/systemdefaults"
"github.com/caos/zitadel/internal/config/types"
"github.com/caos/zitadel/internal/crypto"
v1 "github.com/caos/zitadel/internal/eventstore/v1"
"github.com/caos/zitadel/internal/eventstore/v1/query"
"github.com/caos/zitadel/internal/notification/repository/eventsourcing/view"
)

View File

@@ -0,0 +1,19 @@
package projection
import "github.com/caos/zitadel/internal/config/types"
type Config struct {
RequeueEvery types.Duration
RetryFailedAfter types.Duration
MaxFailureCount uint
BulkLimit uint64
CRDB types.SQL
Customizations map[string]CustomConfig
}
type CustomConfig struct {
RequeueEvery *types.Duration
RetryFailedAfter *types.Duration
MaxFailureCount *uint
BulkLimit *uint64
}
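
Each projection can override the shared defaults through Customizations, keyed by projection name. A sketch of building such a config in code (field values are illustrative, and the literal construction of types.Duration assumes its exported Duration field as used in Start below; in practice the values come from the configuration files):

cfg := projection.Config{
    RequeueEvery:     types.Duration{Duration: 10 * time.Second},
    RetryFailedAfter: types.Duration{Duration: 1 * time.Second},
    MaxFailureCount:  5,
    BulkLimit:        200,
    Customizations: map[string]projection.CustomConfig{
        // give the org_owners projection a longer requeue interval
        "org_owners": {RequeueEvery: &types.Duration{Duration: 30 * time.Second}},
    },
}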

View File

@@ -0,0 +1,175 @@
package projection
import (
"context"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/domain"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
"github.com/caos/zitadel/internal/eventstore/handler/crdb"
"github.com/caos/zitadel/internal/repository/org"
)
type OrgProjection struct {
crdb.StatementHandler
}
func NewOrgProjection(ctx context.Context, config crdb.StatementHandlerConfig) *OrgProjection {
p := &OrgProjection{}
config.ProjectionName = "projections.orgs"
config.Reducers = p.reducers()
p.StatementHandler = crdb.NewStatementHandler(ctx, config)
return p
}
func (p *OrgProjection) reducers() []handler.AggregateReducer {
return []handler.AggregateReducer{
{
Aggregate: org.AggregateType,
EventRedusers: []handler.EventReducer{
{
Event: org.OrgAddedEventType,
Reduce: p.reduceOrgAdded,
},
{
Event: org.OrgChangedEventType,
Reduce: p.reduceOrgChanged,
},
{
Event: org.OrgDeactivatedEventType,
Reduce: p.reduceOrgDeactivated,
},
{
Event: org.OrgReactivatedEventType,
Reduce: p.reduceOrgReactivated,
},
{
Event: org.OrgDomainPrimarySetEventType,
Reduce: p.reducePrimaryDomainSet,
},
},
},
}
}
const (
orgIDCol = "id"
orgCreationDateCol = "creation_date"
orgChangeDateCol = "change_date"
orgResourceOwnerCol = "resource_owner"
orgStateCol = "org_state"
orgSequenceCol = "sequence"
orgDomainCol = "domain"
orgNameCol = "name"
)
func (p *OrgProjection) reduceOrgAdded(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgAddedEvent)
if !ok {
logging.LogWithFields("HANDL-zWCk3", "seq", event.Sequence, "expectedType", org.OrgAddedEventType).Error("was not an event")
return nil, errors.ThrowInvalidArgument(nil, "HANDL-uYq4r", "reduce.wrong.event.type")
}
return []handler.Statement{
crdb.NewCreateStatement(
e,
[]handler.Column{
handler.NewCol(orgIDCol, e.Aggregate().ID),
handler.NewCol(orgCreationDateCol, e.CreationDate()),
handler.NewCol(orgChangeDateCol, e.CreationDate()),
handler.NewCol(orgResourceOwnerCol, e.Aggregate().ResourceOwner),
handler.NewCol(orgSequenceCol, e.Sequence()),
handler.NewCol(orgNameCol, e.Name),
handler.NewCol(orgStateCol, domain.OrgStateActive),
},
),
}, nil
}
func (p *OrgProjection) reduceOrgChanged(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgChangedEvent)
if !ok {
logging.LogWithFields("HANDL-q4oq8", "seq", event.Sequence, "expected", org.OrgChangedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "HANDL-Bg8oM", "reduce.wrong.event.type")
}
values := []handler.Column{
handler.NewCol(orgChangeDateCol, e.CreationDate()),
handler.NewCol(orgSequenceCol, e.Sequence()),
}
if e.Name != "" {
values = append(values, handler.NewCol(orgNameCol, e.Name))
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
values,
[]handler.Condition{
handler.NewCond(orgIDCol, e.Aggregate().ID),
},
),
}, nil
}
func (p *OrgProjection) reduceOrgDeactivated(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgDeactivatedEvent)
if !ok {
logging.LogWithFields("HANDL-1gwdc", "seq", event.Sequence, "expectedType", org.OrgDeactivatedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "HANDL-BApK4", "reduce.wrong.event.type")
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(orgChangeDateCol, e.CreationDate()),
handler.NewCol(orgSequenceCol, e.Sequence()),
handler.NewCol(orgStateCol, domain.OrgStateInactive),
},
[]handler.Condition{
handler.NewCond(orgIDCol, e.Aggregate().ID),
},
),
}, nil
}
func (p *OrgProjection) reduceOrgReactivated(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgReactivatedEvent)
if !ok {
logging.LogWithFields("HANDL-Vjwiy", "seq", event.Sequence, "expectedType", org.OrgReactivatedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "HANDL-o37De", "reduce.wrong.event.type")
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(orgChangeDateCol, e.CreationDate()),
handler.NewCol(orgSequenceCol, e.Sequence()),
handler.NewCol(orgStateCol, domain.OrgStateActive),
},
[]handler.Condition{
handler.NewCond(orgIDCol, e.Aggregate().ID),
},
),
}, nil
}
func (p *OrgProjection) reducePrimaryDomainSet(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.DomainPrimarySetEvent)
if !ok {
logging.LogWithFields("HANDL-79OhB", "seq", event.Sequence, "expectedType", org.OrgDomainPrimarySetEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "HANDL-4TbKT", "reduce.wrong.event.type")
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(orgChangeDateCol, e.CreationDate()),
handler.NewCol(orgSequenceCol, e.Sequence()),
handler.NewCol(orgDomainCol, e.Domain),
},
[]handler.Condition{
handler.NewCond(orgIDCol, e.Aggregate().ID),
},
),
}, nil
}
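
The column constants above imply the shape of the projections.orgs table the statements write into. A possible DDL for it, expressed as a Go constant in the style of the crdbInsert statement (column types are an assumption for illustration; the actual migration is not part of this excerpt):

// assumed table layout matching the orgIDCol..orgNameCol constants above
const createOrgsTable = `CREATE TABLE projections.orgs (
    id STRING PRIMARY KEY,
    creation_date TIMESTAMPTZ,
    change_date TIMESTAMPTZ,
    resource_owner STRING,
    org_state SMALLINT,
    sequence INT8,
    domain STRING,
    name STRING
);`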

View File

@@ -0,0 +1,411 @@
package owner
import (
"context"
"time"
"github.com/caos/logging"
"github.com/caos/zitadel/internal/domain"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
"github.com/caos/zitadel/internal/eventstore/handler/crdb"
"github.com/caos/zitadel/internal/repository/org"
"github.com/caos/zitadel/internal/repository/user"
"golang.org/x/text/language"
)
type OrgOwner struct {
OrgID string `col:"org_id"`
OrgName string `col:"org_name"`
OrgCreationDate time.Time `col:"org_creation_date"`
OwnerID string `col:"owner_id"`
OwnerLanguage *language.Tag `col:"owner_language"`
OwnerEmailAddress string `col:"owner_email"`
OwnerFirstName string `col:"owner_first_name"`
OwnerLastName string `col:"owner_last_name"`
OwnerGender domain.Gender `col:"owner_gender"`
}
type OrgOwnerProjection struct {
crdb.StatementHandler
}
const (
orgTableSuffix = "orgs"
orgIDCol = "id"
orgNameCol = "name"
orgCreationDateCol = "creation_date"
userTableSuffix = "users"
userOrgIDCol = "org_id"
userIDCol = "owner_id"
userLanguageCol = "language"
userEmailCol = "email"
userFirstNameCol = "first_name"
userLastNameCol = "last_name"
userGenderCol = "gender"
)
func NewOrgOwnerProjection(ctx context.Context, config crdb.StatementHandlerConfig) *OrgOwnerProjection {
p := &OrgOwnerProjection{}
config.ProjectionName = "projections.org_owners"
config.Reducers = p.reducers()
p.StatementHandler = crdb.NewStatementHandler(ctx, config)
return p
}
func (p *OrgOwnerProjection) reducers() []handler.AggregateReducer {
return []handler.AggregateReducer{
{
Aggregate: org.AggregateType,
EventRedusers: []handler.EventReducer{
{
Event: org.OrgAddedEventType,
Reduce: p.reduceOrgAdded,
},
{
Event: org.OrgChangedEventType,
Reduce: p.reduceOrgChanged,
},
{
Event: org.OrgRemovedEventType,
Reduce: p.reduceOrgRemoved,
},
{
Event: org.MemberAddedEventType,
Reduce: p.reduceMemberAdded,
},
{
Event: org.MemberChangedEventType,
Reduce: p.reduceMemberChanged,
},
{
Event: org.MemberRemovedEventType,
Reduce: p.reduceMemberRemoved,
},
},
},
{
Aggregate: user.AggregateType,
EventRedusers: []handler.EventReducer{
{
Event: user.HumanEmailChangedType,
Reduce: p.reduceHumanEmailChanged,
},
{
Event: user.UserV1EmailChangedType,
Reduce: p.reduceHumanEmailChanged,
},
{
Event: user.HumanProfileChangedType,
Reduce: p.reduceHumanProfileChanged,
},
{
Event: user.UserV1ProfileChangedType,
Reduce: p.reduceHumanProfileChanged,
},
},
},
}
}
func (p *OrgOwnerProjection) reduceMemberAdded(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.MemberAddedEvent)
if !ok {
logging.LogWithFields("PROJE-kL530", "seq", event.Sequence, "expected", org.MemberAddedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-OkiBV", "reduce.wrong.event.type")
}
if !isOrgOwner(e.Roles) {
return []handler.Statement{crdb.NewNoOpStatement(e)}, nil
}
stmt, err := p.addOwner(e, e.Aggregate().ResourceOwner, e.UserID)
if err != nil {
return nil, err
}
return []handler.Statement{stmt}, nil
}
func (p *OrgOwnerProjection) reduceMemberChanged(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.MemberChangedEvent)
if !ok {
logging.LogWithFields("PROJE-kL530", "seq", event.Sequence, "expected", org.MemberAddedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-OkiBV", "reduce.wrong.event.type")
}
if !isOrgOwner(e.Roles) {
return []handler.Statement{p.deleteOwner(e, e.Aggregate().ID, e.UserID)}, nil
}
stmt, err := p.addOwner(e, e.Aggregate().ResourceOwner, e.UserID)
if err != nil {
return nil, err
}
return []handler.Statement{stmt}, nil
}
func (p *OrgOwnerProjection) reduceMemberRemoved(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.MemberRemovedEvent)
if !ok {
logging.LogWithFields("PROJE-boIbP", "seq", event.Sequence, "expected", org.MemberRemovedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-pk6TS", "reduce.wrong.event.type")
}
return []handler.Statement{p.deleteOwner(e, e.Aggregate().ID, e.UserID)}, nil
}
func (p *OrgOwnerProjection) reduceHumanEmailChanged(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*user.HumanEmailChangedEvent)
if !ok {
logging.LogWithFields("PROJE-IHFwh", "seq", event.Sequence, "expected", user.HumanEmailChangedType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-jMlwT", "reduce.wrong.event.type")
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(userEmailCol, e.EmailAddress),
},
[]handler.Condition{
handler.NewCond(userIDCol, e.Aggregate().ID),
},
crdb.WithTableSuffix(userTableSuffix),
),
}, nil
}
func (p *OrgOwnerProjection) reduceHumanProfileChanged(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*user.HumanProfileChangedEvent)
if !ok {
logging.LogWithFields("PROJE-WqgUS", "seq", event.Sequence, "expected", user.HumanProfileChangedType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-Cdkkf", "reduce.wrong.event.type")
}
values := []handler.Column{}
if e.FirstName != "" {
values = append(values, handler.NewCol(userFirstNameCol, e.FirstName))
}
if e.LastName != "" {
values = append(values, handler.NewCol(userLastNameCol, e.LastName))
}
if e.PreferredLanguage != nil {
values = append(values, handler.NewCol(userLanguageCol, e.PreferredLanguage.String()))
}
if e.Gender != nil {
values = append(values, handler.NewCol(userGenderCol, *e.Gender))
}
if len(values) == 0 {
return []handler.Statement{crdb.NewNoOpStatement(e)}, nil
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
values,
[]handler.Condition{
handler.NewCond(userIDCol, e.Aggregate().ID),
},
crdb.WithTableSuffix(userTableSuffix),
),
}, nil
}
func (p *OrgOwnerProjection) reduceOrgAdded(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgAddedEvent)
if !ok {
logging.LogWithFields("PROJE-wbOrL", "seq", event.Sequence, "expected", org.OrgAddedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-pk6TS", "reduce.wrong.event.type")
}
return []handler.Statement{
crdb.NewCreateStatement(
e,
[]handler.Column{
handler.NewCol(orgIDCol, e.Aggregate().ResourceOwner),
handler.NewCol(orgNameCol, e.Name),
handler.NewCol(orgCreationDateCol, e.CreationDate()),
},
crdb.WithTableSuffix(orgTableSuffix),
),
}, nil
}
func (p *OrgOwnerProjection) reduceOrgChanged(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgChangedEvent)
if !ok {
logging.LogWithFields("PROJE-piy2b", "seq", event.Sequence, "expected", org.OrgChangedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-MGbru", "reduce.wrong.event.type")
}
values := []handler.Column{}
if e.Name != "" {
values = append(values, handler.NewCol(orgNameCol, e.Name))
}
if len(values) == 0 {
return []handler.Statement{crdb.NewNoOpStatement(e)}, nil
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
values,
[]handler.Condition{
handler.NewCond(orgIDCol, e.Aggregate().ResourceOwner),
},
crdb.WithTableSuffix(orgTableSuffix),
),
}, nil
}
func (p *OrgOwnerProjection) reduceOrgRemoved(event eventstore.EventReader) ([]handler.Statement, error) {
e, ok := event.(*org.OrgRemovedEvent)
if !ok {
logging.LogWithFields("PROJE-F1mHQ", "seq", event.Sequence, "expected", org.OrgRemovedEventType).Error("wrong event type")
return nil, errors.ThrowInvalidArgument(nil, "PROJE-9ZR2w", "reduce.wrong.event.type")
}
return []handler.Statement{
//delete org in org table
crdb.NewDeleteStatement(
e,
[]handler.Condition{
handler.NewCond(orgIDCol, e.Aggregate().ResourceOwner),
},
crdb.WithTableSuffix(orgTableSuffix),
),
// delete users of the org
crdb.NewDeleteStatement(
e,
[]handler.Condition{
handler.NewCond(userOrgIDCol, e.Aggregate().ResourceOwner),
},
crdb.WithTableSuffix(userTableSuffix),
),
}, nil
}
func isOrgOwner(roles []string) bool {
for _, role := range roles {
if role == "ORG_OWNER" {
return true
}
}
return false
}
func (p *OrgOwnerProjection) deleteOwner(event eventstore.EventReader, orgID, ownerID string) handler.Statement {
return crdb.NewDeleteStatement(
event,
[]handler.Condition{
handler.NewCond(userOrgIDCol, orgID),
handler.NewCond(userIDCol, ownerID),
},
crdb.WithTableSuffix(userTableSuffix),
)
}
func (p *OrgOwnerProjection) addOwner(event eventstore.EventReader, orgID, userID string) (handler.Statement, error) {
events, err := p.Eventstore.FilterEvents(context.Background(),
eventstore.NewSearchQueryBuilder(eventstore.ColumnsEvent).
AddQuery().
AggregateTypes(user.AggregateType).
EventTypes(
user.HumanAddedType,
user.UserV1AddedType,
user.HumanRegisteredType,
user.UserV1RegisteredType,
user.HumanEmailChangedType,
user.UserV1EmailChangedType,
user.HumanProfileChangedType,
user.UserV1ProfileChangedType,
user.MachineAddedEventType,
user.MachineChangedEventType).
AggregateIDs(userID).
SequenceLess(event.Sequence()).
Builder())
if err != nil {
return handler.Statement{}, err
}
if len(events) == 0 {
logging.LogWithFields("mqd3w", "user", userID, "org", orgID, "seq", event.Sequence()).Warn("no events for user found")
return handler.Statement{}, errors.ThrowInternal(nil, "PROJE-Qk7Tv", "unable to find user events")
}
owner := &OrgOwner{
OrgID: orgID,
OwnerID: userID,
}
p.reduce(owner, events)
values := []handler.Column{
handler.NewCol(userOrgIDCol, owner.OrgID),
handler.NewCol(userIDCol, owner.OwnerID),
handler.NewCol(userEmailCol, owner.OwnerEmailAddress),
handler.NewCol(userFirstNameCol, owner.OwnerFirstName),
handler.NewCol(userLastNameCol, owner.OwnerLastName),
handler.NewCol(userGenderCol, owner.OwnerGender),
}
if owner.OwnerLanguage != nil {
values = append(values, handler.NewCol(userLanguageCol, owner.OwnerLanguage.String()))
}
return crdb.NewUpsertStatement(
event,
values,
crdb.WithTableSuffix(userTableSuffix),
), nil
}
func (p *OrgOwnerProjection) reduce(owner *OrgOwner, events []eventstore.EventReader) {
for _, event := range events {
switch e := event.(type) {
case *user.HumanAddedEvent:
owner.OwnerLanguage = &e.PreferredLanguage
owner.OwnerEmailAddress = e.EmailAddress
owner.OwnerFirstName = e.FirstName
owner.OwnerLastName = e.LastName
owner.OwnerGender = e.Gender
case *user.HumanRegisteredEvent:
owner.OwnerLanguage = &e.PreferredLanguage
owner.OwnerEmailAddress = e.EmailAddress
owner.OwnerFirstName = e.FirstName
owner.OwnerLastName = e.LastName
owner.OwnerGender = e.Gender
case *user.HumanEmailChangedEvent:
owner.OwnerEmailAddress = e.EmailAddress
case *user.HumanProfileChangedEvent:
if e.PreferredLanguage != nil {
owner.OwnerLanguage = e.PreferredLanguage
}
if e.FirstName != "" {
owner.OwnerFirstName = e.FirstName
}
if e.LastName != "" {
owner.OwnerLastName = e.LastName
}
if e.Gender != nil {
owner.OwnerGender = *e.Gender
}
case *user.MachineAddedEvent:
owner.OwnerFirstName = "machine"
owner.OwnerLastName = e.Name
owner.OwnerEmailAddress = e.UserName
case *user.MachineChangedEvent:
if e.Name != nil {
owner.OwnerLastName = *e.Name
}
default:
// This happens only on implementation errors
logging.LogWithFields("PROJE-sKNsR", "eventType", event.Type()).Panic("unexpected event type")
}
}
}

View File

@@ -0,0 +1,154 @@
package projection
import (
"context"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
"github.com/caos/zitadel/internal/eventstore/handler/crdb"
"github.com/caos/zitadel/internal/repository/project"
)
type ProjectProjection struct {
crdb.StatementHandler
}
func NewProjectProjection(ctx context.Context, config crdb.StatementHandlerConfig) *ProjectProjection {
p := &ProjectProjection{}
config.ProjectionName = "projections.projects"
config.Reducers = p.reducers()
p.StatementHandler = crdb.NewStatementHandler(ctx, config)
return p
}
func (p *ProjectProjection) reducers() []handler.AggregateReducer {
return []handler.AggregateReducer{
{
Aggregate: project.AggregateType,
EventRedusers: []handler.EventReducer{
{
Event: project.ProjectAddedType,
Reduce: p.reduceProjectAdded,
},
{
Event: project.ProjectChangedType,
Reduce: p.reduceProjectChanged,
},
{
Event: project.ProjectDeactivatedType,
Reduce: p.reduceProjectDeactivated,
},
{
Event: project.ProjectReactivatedType,
Reduce: p.reduceProjectReactivated,
},
{
Event: project.ProjectRemovedType,
Reduce: p.reduceProjectRemoved,
},
},
},
}
}
type projectState int8
const (
projectIDCol = "id"
projectNameCol = "name"
projectCreationDateCol = "creation_date"
projectChangeDateCol = "change_date"
projectOwnerCol = "owner_id"
projectCreatorCol = "creator_id"
projectStateCol = "state"
projectActive projectState = iota
projectInactive
)
func (p *ProjectProjection) reduceProjectAdded(event eventstore.EventReader) ([]handler.Statement, error) {
e := event.(*project.ProjectAddedEvent)
return []handler.Statement{
crdb.NewCreateStatement(
e,
[]handler.Column{
handler.NewCol(projectIDCol, e.Aggregate().ID),
handler.NewCol(projectNameCol, e.Name),
handler.NewCol(projectCreationDateCol, e.CreationDate()),
handler.NewCol(projectChangeDateCol, e.CreationDate()),
handler.NewCol(projectOwnerCol, e.Aggregate().ResourceOwner),
handler.NewCol(projectCreatorCol, e.EditorUser()),
handler.NewCol(projectStateCol, projectActive),
},
),
}, nil
}
func (p *ProjectProjection) reduceProjectChanged(event eventstore.EventReader) ([]handler.Statement, error) {
e := event.(*project.ProjectChangeEvent)
if e.Name == nil {
return []handler.Statement{crdb.NewNoOpStatement(e)}, nil
}
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(projectNameCol, e.Name),
handler.NewCol(projectChangeDateCol, e.CreationDate()),
},
[]handler.Condition{
handler.NewCond(projectIDCol, e.Aggregate().ID),
},
),
}, nil
}
func (p *ProjectProjection) reduceProjectDeactivated(event eventstore.EventReader) ([]handler.Statement, error) {
e := event.(*project.ProjectDeactivatedEvent)
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(projectStateCol, projectInactive),
handler.NewCol(projectChangeDateCol, e.CreationDate()),
},
[]handler.Condition{
handler.NewCond(projectIDCol, e.Aggregate().ID),
},
),
}, nil
}
func (p *ProjectProjection) reduceProjectReactivated(event eventstore.EventReader) ([]handler.Statement, error) {
e := event.(*project.ProjectReactivatedEvent)
return []handler.Statement{
crdb.NewUpdateStatement(
e,
[]handler.Column{
handler.NewCol(projectStateCol, projectActive),
handler.NewCol(projectChangeDateCol, e.CreationDate()),
},
[]handler.Condition{
handler.NewCond(projectIDCol, e.Aggregate().ID),
},
),
}, nil
}
func (p *ProjectProjection) reduceProjectRemoved(event eventstore.EventReader) ([]handler.Statement, error) {
e := event.(*project.ProjectRemovedEvent)
return []handler.Statement{
crdb.NewDeleteStatement(
e,
[]handler.Condition{
handler.NewCond(projectIDCol, e.Aggregate().ID),
},
),
}, nil
}
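
For reference, the create statement built in reduceProjectAdded is roughly equivalent to the hand-written insert below (a sketch against the projections.projects table added later in this PR; the handler itself generates its SQL through crdb.NewCreateStatement, not through this code):

package example

import (
	"context"
	"database/sql"
	"time"
)

// insertProject mirrors the columns written by reduceProjectAdded:
// id, name, creation_date, change_date, owner_id, creator_id, state.
// The state value 0 corresponds to projectActive.
func insertProject(ctx context.Context, db *sql.DB, id, name, owner, creator string, createdAt time.Time) error {
	_, err := db.ExecContext(ctx,
		`INSERT INTO projections.projects
			(id, name, creation_date, change_date, owner_id, creator_id, state)
		 VALUES ($1, $2, $3, $4, $5, $6, $7)`,
		id, name, createdAt, createdAt, owner, creator, 0,
	)
	return err
}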

View File

@@ -0,0 +1,61 @@
package projection
import (
"context"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/handler"
"github.com/caos/zitadel/internal/eventstore/handler/crdb"
"github.com/caos/zitadel/internal/query/projection/org/owner"
)
const (
currentSeqTable = "projections.current_sequences"
locksTable = "projections.locks"
failedEventsTable = "projections.failed_events"
)
func Start(ctx context.Context, es *eventstore.Eventstore, config Config) error {
sqlClient, err := config.CRDB.Start()
if err != nil {
return err
}
projectionConfig := crdb.StatementHandlerConfig{
ProjectionHandlerConfig: handler.ProjectionHandlerConfig{
HandlerConfig: handler.HandlerConfig{
Eventstore: es,
},
RequeueEvery: config.RequeueEvery.Duration,
RetryFailedAfter: config.RetryFailedAfter.Duration,
},
Client: sqlClient,
SequenceTable: currentSeqTable,
LockTable: locksTable,
FailedEventsTable: failedEventsTable,
MaxFailureCount: config.MaxFailureCount,
BulkLimit: config.BulkLimit,
}
NewOrgProjection(ctx, applyCustomConfig(projectionConfig, config.Customizations["orgs"]))
NewProjectProjection(ctx, applyCustomConfig(projectionConfig, config.Customizations["projects"]))
owner.NewOrgOwnerProjection(ctx, applyCustomConfig(projectionConfig, config.Customizations["org_owners"]))
return nil
}
func applyCustomConfig(config crdb.StatementHandlerConfig, customConfig CustomConfig) crdb.StatementHandlerConfig {
if customConfig.BulkLimit != nil {
config.BulkLimit = *customConfig.BulkLimit
}
if customConfig.MaxFailureCount != nil {
config.MaxFailureCount = *customConfig.MaxFailureCount
}
if customConfig.RequeueEvery != nil {
config.RequeueEvery = customConfig.RequeueEvery.Duration
}
if customConfig.RetryFailedAfter != nil {
config.RetryFailedAfter = customConfig.RetryFailedAfter.Duration
}
return config
}
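
A minimal test-style sketch of the override behaviour (the CustomConfig field types are assumed here, e.g. BulkLimit as *uint64; the real definitions live in the projection config file, which is not part of this hunk):

package projection

import (
	"testing"

	"github.com/caos/zitadel/internal/eventstore/handler/crdb"
)

func TestApplyCustomConfigOverridesBulkLimit(t *testing.T) {
	base := crdb.StatementHandlerConfig{BulkLimit: 200}

	limit := uint64(50) // assumed type of CustomConfig.BulkLimit
	got := applyCustomConfig(base, CustomConfig{BulkLimit: &limit})

	if got.BulkLimit != 50 {
		t.Errorf("expected BulkLimit to be overridden to 50, got %d", got.BulkLimit)
	}
	// applyCustomConfig works on a copy, so the shared defaults stay untouched.
	if base.BulkLimit != 200 {
		t.Errorf("shared defaults must not be mutated, got %d", base.BulkLimit)
	}
}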

View File

@@ -9,7 +9,10 @@ import (
"github.com/caos/zitadel/internal/eventstore"
iam_model "github.com/caos/zitadel/internal/iam/model"
"github.com/caos/zitadel/internal/id"
"github.com/caos/zitadel/internal/query/projection"
iam_repo "github.com/caos/zitadel/internal/repository/iam"
"github.com/caos/zitadel/internal/repository/org"
"github.com/caos/zitadel/internal/repository/project"
usr_repo "github.com/caos/zitadel/internal/repository/user"
"github.com/caos/zitadel/internal/telemetry/tracing"
)
@@ -25,19 +28,27 @@ type Config struct {
Eventstore types.SQLUser
}
func StartQueries(eventstore *eventstore.Eventstore, defaults sd.SystemDefaults) (repo *Queries, err error) {
func StartQueries(ctx context.Context, es *eventstore.Eventstore, projections projection.Config, defaults sd.SystemDefaults) (repo *Queries, err error) {
repo = &Queries{
iamID: defaults.IamID,
eventstore: eventstore,
eventstore: es,
idGenerator: id.SonyFlakeGenerator,
}
iam_repo.RegisterEventMappers(repo.eventstore)
usr_repo.RegisterEventMappers(repo.eventstore)
org.RegisterEventMappers(repo.eventstore)
project.RegisterEventMappers(repo.eventstore)
repo.secretCrypto, err = crypto.NewAESCrypto(defaults.IDPConfigVerificationKey)
if err != nil {
return nil, err
}
err = projection.Start(ctx, es, projections)
if err != nil {
return nil, err
}
return repo, nil
}
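
For context, the new signature is used roughly like this from the start-up code (a sketch only; the systemdefaults import path and alias are assumptions, and the real wiring lives in the zitadel start command, not in this hunk):

package start

import (
	"context"

	sd "github.com/caos/zitadel/internal/config/systemdefaults"
	"github.com/caos/zitadel/internal/eventstore"
	"github.com/caos/zitadel/internal/query"
	"github.com/caos/zitadel/internal/query/projection"
)

// startQuerySide boots the query side; StartQueries now also starts the
// projection handlers via projection.Start, so it needs a context and the
// projection configuration in addition to the eventstore.
func startQuerySide(ctx context.Context, es *eventstore.Eventstore, projections projection.Config, defaults sd.SystemDefaults) (*query.Queries, error) {
	return query.StartQueries(ctx, es, projections, defaults)
}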

View File

@@ -21,7 +21,7 @@ type Aggregate struct {
func NewAggregate() *Aggregate {
return &Aggregate{
Aggregate: eventstore.Aggregate{
Typ: AggregateType,
Type: AggregateType,
Version: AggregateVersion,
ID: domain.IAMID,
ResourceOwner: domain.IAMID,

View File

@@ -20,7 +20,7 @@ type Aggregate struct {
func NewAggregate(id, resourceOwner string) *Aggregate {
return &Aggregate{
Aggregate: eventstore.Aggregate{
Typ: AggregateType,
Type: AggregateType,
Version: AggregateVersion,
ID: id,
ResourceOwner: resourceOwner,

View File

@@ -3,9 +3,9 @@ package org
import (
"context"
"encoding/json"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/repository"
)

View File

@@ -16,7 +16,7 @@ type Aggregate struct {
func NewAggregate(id, resourceOwner string) *Aggregate {
return &Aggregate{
Aggregate: eventstore.Aggregate{
Typ: AggregateType,
Type: AggregateType,
Version: AggregateVersion,
ID: id,
ResourceOwner: resourceOwner,

View File

@@ -16,7 +16,7 @@ type Aggregate struct {
func NewAggregate(id, resourceOwner string) *Aggregate {
return &Aggregate{
Aggregate: eventstore.Aggregate{
Typ: AggregateType,
Type: AggregateType,
Version: AggregateVersion,
ID: id,
ResourceOwner: resourceOwner,

View File

@@ -3,11 +3,11 @@ package user
import (
"context"
"encoding/json"
"github.com/caos/zitadel/internal/eventstore"
"time"
"github.com/caos/zitadel/internal/domain"
"github.com/caos/zitadel/internal/errors"
"github.com/caos/zitadel/internal/eventstore"
"github.com/caos/zitadel/internal/eventstore/repository"
)
@@ -61,7 +61,12 @@ func MachineKeyAddedEventMapper(event *repository.Event) (eventstore.EventReader
}
err := json.Unmarshal(event.Data, machineKeyAdded)
if err != nil {
return nil, errors.ThrowInternal(err, "USER-p0ovS", "unable to unmarshal machine key removed")
// The first machine key events had a wrong payload.
// The keys were removed later, which is why we ignore the unmarshal error here.
if unwrapErr, ok := err.(*json.UnmarshalTypeError); ok && unwrapErr.Field == "publicKey" {
return machineKeyAdded, nil
}
return nil, errors.ThrowInternal(err, "USER-p0ovS", "unable to unmarshal machine key added")
}
return machineKeyAdded, nil
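
The tolerated error is easy to reproduce in isolation; a minimal sketch (the sample payload is made up, the point is that json.Unmarshal reports the offending JSON field via UnmarshalTypeError.Field):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var dst struct {
		PublicKey []byte `json:"publicKey"`
	}
	// A number cannot be decoded into []byte, which is the kind of
	// payload mismatch the legacy machine key events trigger.
	err := json.Unmarshal([]byte(`{"publicKey": 42}`), &dst)
	if typeErr, ok := err.(*json.UnmarshalTypeError); ok {
		fmt.Println(typeErr.Field) // prints "publicKey"
	}
}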

View File

@@ -16,7 +16,7 @@ type Aggregate struct {
func NewAggregate(id, resourceOwner string) *Aggregate {
return &Aggregate{
Aggregate: eventstore.Aggregate{
Typ: AggregateType,
Type: AggregateType,
Version: AggregateVersion,
ID: id,
ResourceOwner: resourceOwner,

View File

@@ -116,7 +116,7 @@ Errors:
Invalid: Refresh Token ist ungültig
NotFound: Refresh Token nicht gefunden
Org:
AlreadyExist: Organisationsname existiert bereits
AlreadyExists: Organisationsname existiert bereits
Invalid: Organisation ist ungültig
AlreadyDeactivated: Organisation ist bereits deaktiviert
AlreadyActive: Organisation ist bereits aktiv

View File

@@ -116,7 +116,7 @@ Errors:
Invalid: Refresh Token is invalid
NotFound: Refresh Token not found
Org:
AlreadyExist: Organisationname already taken
AlreadyExists: Organisation name already taken
Invalid: Organisation is invalid
AlreadyDeactivated: Organisation is already deactivated
AlreadyActive: Organisation is already active

View File

@@ -0,0 +1,40 @@
BEGIN;
ALTER TABLE eventstore.events
RENAME COLUMN previous_sequence TO previous_aggregate_sequence,
ADD COLUMN previous_aggregate_type_sequence INT8,
ADD CONSTRAINT prev_agg_type_seq_unique UNIQUE(previous_aggregate_type_sequence);
COMMIT;
SET CLUSTER SETTING kv.closed_timestamp.target_duration = '2m';
BEGIN;
WITH data AS (
SELECT
event_sequence,
LAG(event_sequence)
OVER (
PARTITION BY aggregate_type
ORDER BY event_sequence
) as prev_seq,
aggregate_type
FROM eventstore.events
ORDER BY event_sequence
) UPDATE eventstore.events
SET previous_aggregate_type_sequence = data.prev_seq
FROM data
WHERE data.event_sequence = events.event_sequence;
COMMIT;
SET CLUSTER SETTING kv.closed_timestamp.target_duration TO DEFAULT;
-- validation by hand:
-- SELECT
-- event_sequence,
-- previous_aggregate_sequence,
-- previous_aggregate_type_sequence,
-- aggregate_type,
-- aggregate_id,
-- event_type
-- FROM eventstore.events ORDER BY event_sequence;
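
A quick way to check the backfill programmatically (a sketch; the connection string is a placeholder): every event except the first one per aggregate_type should now carry a previous_aggregate_type_sequence, so the two counts below must match.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var filled, expected int64
	err = db.QueryRow(`
		SELECT
			count(*) FILTER (WHERE previous_aggregate_type_sequence IS NOT NULL),
			count(*) - count(DISTINCT aggregate_type)
		FROM eventstore.events`).Scan(&filled, &expected)
	if err != nil {
		log.Fatal(err)
	}
	// filled should equal expected: only the first event of each
	// aggregate type is allowed to have a NULL previous sequence.
	fmt.Printf("filled=%d expected=%d\n", filled, expected)
}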

View File

@@ -0,0 +1,31 @@
CREATE DATABASE zitadel;
GRANT SELECT, INSERT, UPDATE, DELETE ON DATABASE zitadel TO queries;
use zitadel;
CREATE SCHEMA zitadel.projections AUTHORIZATION queries;
CREATE TABLE zitadel.projections.locks (
locker_id TEXT,
locked_until TIMESTAMPTZ(3),
projection_name TEXT,
PRIMARY KEY (projection_name)
);
CREATE TABLE zitadel.projections.current_sequences (
projection_name TEXT,
aggregate_type TEXT,
current_sequence BIGINT,
timestamp TIMESTAMPTZ,
PRIMARY KEY (projection_name, aggregate_type)
);
CREATE TABLE zitadel.projections.failed_events (
projection_name TEXT,
failed_sequence BIGINT,
failure_count SMALLINT,
error TEXT,
PRIMARY KEY (projection_name, failed_sequence)
);
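
For reference, the handler's progress can be inspected directly from the new bookkeeping table (a sketch, assuming a *sql.DB connected as the queries user):

package example

import "database/sql"

// currentSequence returns how far a projection has caught up for one
// aggregate type, based on the projections.current_sequences table above.
func currentSequence(db *sql.DB, projectionName, aggregateType string) (uint64, error) {
	var seq uint64
	err := db.QueryRow(
		`SELECT current_sequence
		 FROM zitadel.projections.current_sequences
		 WHERE projection_name = $1 AND aggregate_type = $2`,
		projectionName, aggregateType,
	).Scan(&seq)
	return seq, err
}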

View File

@@ -0,0 +1,60 @@
CREATE TABLE zitadel.projections.orgs (
id TEXT,
creation_date TIMESTAMPTZ,
change_date TIMESTAMPTZ,
resource_owner TEXT,
org_state SMALLINT,
sequence BIGINT,
domain TEXT,
name TEXT,
PRIMARY KEY (id)
);
CREATE TABLE zitadel.projections.org_owners_orgs (
id TEXT,
name TEXT,
creation_date TIMESTAMPTZ,
PRIMARY KEY (id)
);
CREATE TABLE zitadel.projections.org_owners_users (
org_id TEXT,
owner_id TEXT,
language VARCHAR(10),
email TEXT,
first_name TEXT,
last_name TEXT,
gender INT2,
PRIMARY KEY (owner_id, org_id),
CONSTRAINT fk_org FOREIGN KEY (org_id) REFERENCES projections.org_owners_orgs (id) ON DELETE CASCADE
);
CREATE VIEW zitadel.projections.org_owners AS (
SELECT o.id AS org_id,
o.name AS org_name,
o.creation_date,
u.owner_id,
u.language,
u.email,
u.first_name,
u.last_name,
u.gender
FROM projections.org_owners_orgs o
JOIN projections.org_owners_users u ON o.id = u.org_id
);
CREATE TABLE zitadel.projections.projects (
id TEXT,
name TEXT,
creation_date TIMESTAMPTZ,
change_date TIMESTAMPTZ,
owner_id TEXT,
creator_id TEXT,
state INT2,
PRIMARY KEY (id)
);
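
The org_owners view keeps the read side to a single query; a sketch of how a consumer might list the owners of one organisation:

package example

import "database/sql"

type orgOwner struct {
	OrgID, OrgName, OwnerID, Email string
}

// orgOwners reads the projections.org_owners view defined above.
func orgOwners(db *sql.DB, orgID string) ([]orgOwner, error) {
	rows, err := db.Query(
		`SELECT org_id, org_name, owner_id, email
		 FROM zitadel.projections.org_owners
		 WHERE org_id = $1`, orgID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var owners []orgOwner
	for rows.Next() {
		var o orgOwner
		if err := rows.Scan(&o.OrgID, &o.OrgName, &o.OwnerID, &o.Email); err != nil {
			return nil, err
		}
		owners = append(owners, o)
	}
	return owners, rows.Err()
}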

View File

@@ -0,0 +1 @@
CREATE INDEX agg_type ON eventstore.events (aggregate_type);
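
For context, the projection handlers pull events per aggregate type above their current sequence, which is the lookup this index serves. The access pattern is roughly the following (a sketch; column names are taken from the migration above where visible):

package example

import "database/sql"

// eventsSince illustrates the query shape behind the agg_type index:
// fetch new events of one aggregate type beyond the projection's
// current sequence, ordered by event_sequence.
func eventsSince(db *sql.DB, aggregateType string, after uint64) (*sql.Rows, error) {
	return db.Query(
		`SELECT event_sequence, event_type, aggregate_id
		 FROM eventstore.events
		 WHERE aggregate_type = $1 AND event_sequence > $2
		 ORDER BY event_sequence`,
		aggregateType, after)
}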

View File

@@ -9,9 +9,9 @@ type dockerhubImage image
type zitadelImage image
const (
CockroachImage dockerhubImage = "cockroachdb/cockroach:v20.2.3"
CockroachImage dockerhubImage = "cockroachdb/cockroach:v21.1.7"
PostgresImage dockerhubImage = "postgres:9.6.17"
FlywayImage dockerhubImage = "flyway/flyway:7.5.1"
FlywayImage dockerhubImage = "flyway/flyway:7.12.1"
AlpineImage dockerhubImage = "alpine:3.11"
ZITADELImage zitadelImage = "caos/zitadel"
BackupImage zitadelImage = "caos/zitadel-crbackup"

View File

@@ -1,13 +1,14 @@
package zitadel
import (
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/kubernetes"
orbconfig "github.com/caos/orbos/pkg/orb"
"github.com/caos/zitadel/pkg/databases"
kubernetes2 "github.com/caos/zitadel/pkg/kubernetes"
"time"
)
var (
@@ -18,6 +19,7 @@ var (
"authz",
"eventstore",
"management",
"zitadel",
}
userList = []string{
"notification",