# Which Problems Are Solved
Add the ability to keep track of the current counts of projection
resources. We want to avoid calling `SELECT COUNT(*)` on tables, as
that forces a full table scan and causes sudden spikes in DB resource
usage.
# How the Problems Are Solved
- A `resource_counts` table is added
- Triggers that increment and decrement the counted values on inserts
and deletes
- Triggers that delete all counts of a table when the source table is
TRUNCATEd. This is not used by the business logic, but it prevents
wrong counts in case someone wants to force a re-projection.
- Triggers that delete all counts if the parent resource is deleted
- A script to pre-populate the `resource_counts` table when a new
source table is added.
The triggers are reusable for any type of resource, in case we choose
to add more in the future; a minimal sketch follows below.
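For concreteness, here is a minimal sketch of how such a table and the reusable triggers could look in PostgreSQL. All names (`resource_counts` and its columns, `count_resource`, the `users` example table) are illustrative assumptions, not the actual ZITADEL schema, and the parent-deletion trigger is omitted:

```sql
-- Hypothetical sketch (PostgreSQL syntax); names are illustrative only.
CREATE TABLE IF NOT EXISTS resource_counts (
    instance_id TEXT NOT NULL,
    parent_type TEXT NOT NULL,   -- e.g. 'instance' or 'organization'
    parent_id   TEXT NOT NULL,
    table_name  TEXT NOT NULL,   -- the counted source table
    amount      BIGINT NOT NULL DEFAULT 0,
    PRIMARY KEY (instance_id, parent_type, parent_id, table_name)
);

-- Generic trigger function: adjusts the count on INSERT/DELETE.
CREATE OR REPLACE FUNCTION count_resource() RETURNS TRIGGER AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO resource_counts
            (instance_id, parent_type, parent_id, table_name, amount)
        VALUES
            (NEW.instance_id, TG_ARGV[0], NEW.resource_owner, TG_TABLE_NAME, 1)
        ON CONFLICT (instance_id, parent_type, parent_id, table_name)
        DO UPDATE SET amount = resource_counts.amount + 1;
        RETURN NEW;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE resource_counts
        SET amount = amount - 1
        WHERE instance_id = OLD.instance_id
          AND parent_type = TG_ARGV[0]
          AND parent_id = OLD.resource_owner
          AND table_name = TG_TABLE_NAME;
        RETURN OLD;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Row-level trigger on a counted table, e.g. users:
CREATE TRIGGER count_users
AFTER INSERT OR DELETE ON users
FOR EACH ROW EXECUTE FUNCTION count_resource('organization');

-- Statement-level trigger: drop all counts when the source table is
-- truncated, e.g. before a forced re-projection.
CREATE OR REPLACE FUNCTION truncate_resource_counts() RETURNS TRIGGER AS $$
BEGIN
    DELETE FROM resource_counts WHERE table_name = TG_TABLE_NAME;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER truncate_count_users
AFTER TRUNCATE ON users
FOR EACH STATEMENT EXECUTE FUNCTION truncate_resource_counts();

-- Pre-populate counts for an existing table (run once per new source table):
INSERT INTO resource_counts
    (instance_id, parent_type, parent_id, table_name, amount)
SELECT instance_id, 'organization', resource_owner, 'users', COUNT(*)
FROM users
GROUP BY instance_id, resource_owner;
```

Keying the generic trigger function on `TG_TABLE_NAME` and passing the parent type as a trigger argument is what would make the same function reusable for any counted resource.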
Counts are aggregated by a given parent. Currently only `instance` and
`organization` are defined as possible parents. This can later be
extended to other types, such as `project`, should the need arise.
I deliberately chose `parent_id` to distinguish it from the de facto
`resource_owner`, which is usually an organization ID. For example (see
the queries sketched after this list):
- For users the parent is an organization and the `parent_id` matches
`resource_owner`.
- For organizations the parent is an instance, but the `resource_owner`
is the `org_id`. In this case the `parent_id` is the `instance_id`.
- Applications would have a similar problem, where the parent is a
project, but the `resource_owner` is the `org_id`.
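To illustrate the `parent_id` semantics with the same assumed schema as above, reading the counts for the first two cases could look like this:

```sql
-- Users: the parent is an organization, so parent_id equals the
-- resource_owner of the counted users.
SELECT amount FROM resource_counts
WHERE instance_id = $1
  AND parent_type = 'organization'
  AND parent_id   = $2          -- the org_id (= resource_owner)
  AND table_name  = 'users';

-- Organizations: the parent is the instance, so parent_id is the
-- instance_id, even though each org's resource_owner is its own org_id.
SELECT amount FROM resource_counts
WHERE instance_id = $1
  AND parent_type = 'instance'
  AND parent_id   = $1          -- the instance_id
  AND table_name  = 'organizations';
```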
# Additional Context
Closes https://github.com/zitadel/zitadel/issues/9957
Even though this is a feature, it is released as a fix so that we can backport it to earlier revisions.
As reported by multiple users, starting ZITADEL after an upgrade led to downtime and, in the worst case, rollbacks to the previously deployed version.
The problem arises when there are too many events to process after ZITADEL starts. The root cause is changes to projections (database tables), which must be recomputed. This PR solves the problem by adding a new step to the setup phase which prefills the projections. The step can be enabled by adding the `--init-projections` flag to `setup`, `start-from-init` and `start-from-setup`. Setting this flag potentially lengthens the setup phase, but it reduces the risk of the problems mentioned above.
* fix(setup): unmarshal of failed step
* fix(cleanup): cleanup all stuck states
* use lastRun for repeatable steps
* typo
---------
Co-authored-by: Livio Spring <livio.a@gmail.com>
This implementation increases the parallel write capabilities of the eventstore.
Please have a look at the technical advisories: [05](https://zitadel.com/docs/support/advisory/a10005) and [06](https://zitadel.com/docs/support/advisory/a10006).
The implementation of `eventstore.push` is rewritten, and stored events are migrated to a new table, `eventstore.events2`.
If you are using CockroachDB: make sure that ZITADEL's database user has the `VIEWACTIVITY` grant, as it is used to query events.
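For reference, granting that role option could look roughly like this (the exact statement depends on your CockroachDB version, and `zitadel` is an assumed user name):

```sql
-- CockroachDB: give ZITADEL's database user the VIEWACTIVITY role
-- option, which ZITADEL needs in order to query events.
ALTER USER zitadel WITH VIEWACTIVITY;
```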
* begin init checks for projections
* first projection checks
* debug notification providers with query fixes
* more projections and first index
* more projections
* more projections
* finish projections
* fix tests (remove db name)
* create tables in setup
* fix logging / error handling
* add tenant to views
* rename tenant to instance_id
* add instance_id to all projections
* add instance_id to all queries
* correct instance_id on projections
* add instance_id to failed_events
* use separate context for instance
* implement features projection
* implement features projection
* remove unique constraint from setup when migration failed
* add error to failed setup event
* add instance_id to primary keys
* fix IAM projection
* remove old migrations folder
* fix keysFromYAML test