# Which Problems Are Solved
Using the service ping, we want to gain additional insights into how ZITADEL is configured. The current resource count report already contains the amount of some configured policies, such as the login_policy, but it does not tell us whether, for example, MFA is enforced.
# How the Problems Are Solved
- Added the following counts to the report:
- service users per organization
- MFA enforcements (through the login policy)
- notification policies with the password change option enabled
- SCIM provisioned users (using user metadata)
- Since all of the above are conditional on at least one column inside a projection, a new `migration.CountTriggerConditional` has been added, which takes a condition (column values) and an option to also track updates of those columns for the count (see the Go sketch after this list).
- For this to be possible, the following changes had to be made to the existing SQL resources:
- the `resource_name` has been added to the unique constraint on the `projections.resource_counts` table
- triggers have been added / changed to track `INSERT`, `UPDATE` and `DELETE` individually and to handle conditions
- an optional argument has been added to the `projections.count_resource()` function, allowing it to be told whether to count the resource `UP` or `DOWN` on an update.
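To make the new migration concrete, here is a minimal Go sketch of what such a conditional count trigger definition could look like; the type, field, table and resource names are assumptions for illustration, not the actual ZITADEL API:

```go
// Illustrative sketch of a conditional count trigger definition; names are
// assumptions for this example, not the actual ZITADEL API.
package example

// CountTriggerConditional describes which rows of a projection contribute
// to a resource count and whether updates of the condition columns are tracked.
type CountTriggerConditional struct {
	Table        string         // projection table the trigger is created on
	Resource     string         // resource_name written to resource_counts
	Condition    map[string]any // column values a row must match to be counted
	TrackUpdates bool           // also fire on UPDATEs that change the condition columns
}

// mfaEnforcedLoginPolicies counts only login policies that enforce MFA.
func mfaEnforcedLoginPolicies() CountTriggerConditional {
	return CountTriggerConditional{
		Table:     "projections.login_policies",
		Resource:  "login_policy_force_mfa",
		Condition: map[string]any{"force_mfa": true},
		// an UPDATE can move a row into or out of the count, so the trigger
		// calls projections.count_resource() with UP or DOWN accordingly
		TrackUpdates: true,
	}
}
```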
# Additional Changes
None
# Additional Context
- partially solves #10244 (reporting the audit log retention limit will be handled directly in #10245)
- backport to v4.x
(cherry picked from commit 2dbe21fb30)
# Which Problems Are Solved
We noticed that the startup of v4 was way slower than that of v3. A query without an instanceID filter could be traced back to the systemID query of the service ping.
# How the Problems Are Solved
An empty instanceID was added to the query to ensure it uses an appropriate index.
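As a rough illustration of the idea (table, column and function names are assumptions, not the actual ZITADEL code): the systemID row is not bound to an instance, but the relevant index is led by `instance_id`, so filtering on an empty `instance_id` keeps the lookup on that index:

```go
package example

import (
	"context"
	"database/sql"
)

// querySystemID looks up the stored systemID. Passing an empty instance_id
// instead of omitting the filter lets the planner use the composite index
// that starts with instance_id.
func querySystemID(ctx context.Context, db *sql.DB) (string, error) {
	const stmt = `SELECT value FROM settings WHERE instance_id = $1 AND key = $2`
	var systemID string
	err := db.QueryRowContext(ctx, stmt, "", "system_id").Scan(&systemID)
	return systemID, err
}
```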
# Additional Changes
None
# Additional Context
- Closes https://github.com/zitadel/zitadel/issues/10390
- backport to v4.x
(cherry picked from commit 9621d357c0)
# Which Problems Are Solved
Some events that are now unused are clogging the event queue from time
to time.
# How the Problems Are Solved
Removed the events described in #10458.
# Additional Changes
- Updated `stringer` and `enumer` in the Makefile target `core_generate_all` to resolve compilation issues in the generated files
# Notes
It looks like there are a lot of changes, but most of them are fixes to translation files. I suggest reviewing per commit.
# Additional Context
- Closes #10458
- Depends on https://github.com/zitadel/zitadel/pull/10513
(cherry picked from commit e8a9cd6964)
# Which Problems Are Solved
The production endpoint of the service ping was wrong.
Additionally, we discussed in the sprint review that we could randomize the default interval to prevent all systems from reporting data at the very same time, and also require a minimal interval.
# How the Problems Are Solved
- fixed the endpoint
- If the interval is set to @daily (the default), we generate a random time (minute, hour) as a cron expression (see the sketch after this list).
- Check that the interval is more than 30 minutes and return an error if not.
- Fixed the YAML indentation of `ResourceCount`
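A minimal sketch of the two scheduling checks, assuming the configured interval is available as a `time.Duration`; the function names are illustrative, not the actual implementation:

```go
package example

import (
	"fmt"
	"math/rand"
	"time"
)

const minInterval = 30 * time.Minute

// randomDailyCron spreads the @daily default over the day by picking a random
// minute and hour once at startup, so not all systems report at the same time.
func randomDailyCron() string {
	return fmt.Sprintf("%d %d * * *", rand.Intn(60), rand.Intn(24))
}

// validateInterval rejects custom intervals shorter than the 30 minute minimum.
func validateInterval(interval time.Duration) error {
	if interval < minInterval {
		return fmt.Errorf("service ping interval %s is below the minimum of %s", interval, minInterval)
	}
	return nil
}
```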
# Additional Changes
None
# Additional Context
as discussed internally
This PR is still WIP and needs changes to at least the tests.
# Which Problems Are Solved
To be able to report analytical / telemetry data from deployed Zitadel
systems back to a central endpoint, we designed a "service ping"
functionality. See also https://github.com/zitadel/zitadel/issues/9706.
This PR adds the first implementation to allow collecting base data as well as reporting the amount of resources such as organizations, users per organization, and more.
# How the Problems Are Solved
- Added a worker to handle the different `ReportType` variations (see the sketch after this list)
- Scheduled a periodic job to start a `ServicePingReport`
- Added configuration to allow customizing what data will be reported
- Added a setup step to generate and store a `systemID`
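The following Go sketch outlines how such a worker could dispatch on the report type; all type and constant names are assumptions for illustration, not the actual implementation:

```go
package example

import "context"

// ReportType distinguishes the different payloads of a service ping report.
type ReportType int

const (
	ReportTypeBaseInformation ReportType = iota
	ReportTypeResourceCounts
)

// ServicePingReport is the job queued by the periodic scheduler.
type ServicePingReport struct {
	ReportType ReportType
	SystemID   string
}

// handleReport is invoked by the worker for every queued report.
func handleReport(ctx context.Context, report *ServicePingReport) error {
	switch report.ReportType {
	case ReportTypeBaseInformation:
		// collect base data (version, configuration, ...) and send it to the endpoint
	case ReportTypeResourceCounts:
		// page through the resource counts and report them per organization
	}
	return nil
}
```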
# Additional Changes
None
# Additional Context
relates to #9869