Mirror of https://github.com/zitadel/zitadel.git, synced 2025-12-01 19:42:12 +00:00
# Which Problems Are Solved
Replaces Turbo with Nx and lays the foundation for the next CI
improvements. It enables using Nx Cloud to speed up the pipelines
that affect any Node package.
It streamlines the dev experience for frontend and backend developers by
providing the following commands:
| Task | Command | Notes |
|------|---------|--------|
| **Production** | `nx run PROJECT:prod` | Production server |
| **Develop** | `nx run PROJECT:dev` | Hot reloading development server |
| **Test** | `nx run PROJECT:test` | Run all tests |
| **Lint** | `nx run PROJECT:lint` | Check code style |
| **Lint Fix** | `nx run PROJECT:lint-fix` | Auto-fix style issues |
The following values can be used for PROJECT:
- @zitadel/zitadel (root commands)
- @zitadel/api
- @zitadel/login
- @zitadel/console
- @zitadel/docs
- @zitadel/client
- @zitadel/proto
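As a quick sketch, the commands from the table compose from a project name and a task; `@zitadel/login` below is just one of the values listed above:

```shell
# Sketch: composing the Nx task commands from the table above.
# PROJECT can be any of the package names listed; @zitadel/login is an example.
PROJECT="@zitadel/login"
for task in prod dev test lint lint-fix; do
  echo "nx run ${PROJECT}:${task}"
done
```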
The project names and folders are streamlined:
| Old Folder | New Folder |
| --- | --- |
| ./e2e | ./tests/functional-ui |
| ./load-test | ./benchmark |
| ./build/zitadel | ./apps/api |
| ./console | ./apps/console (postponed so the PR is reviewable) |
Also, all references to the TypeScript repo are removed so we can
archive it.
# How the Problems Are Solved
- Ran `npx nx@latest init`
- Replaced all turbo.json files with project.json files and fixed the target configs
- Removed Turbo dependency
- All JavaScript-related code affected by a PR's changes is
quality-checked using the `nx affected` command
- We move PR checks that are runnable with Nx into the `check`
workflow. For workflows where we don't use Nx yet, we restore
previously built dependency artifacts from Nx.
- We only use a single, easy-to-understand dev container
- The CONTRIBUTING.md is streamlined
- The setup with a generated client PAT is orchestrated with Nx
- Everything related to the TypeScript repo is updated or removed. A
**Deploy with Vercel** button is added to the docs and the
CONTRIBUTING.md.
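As an illustrative sketch of the turbo.json-to-project.json migration (not the actual config in the repo), a per-project `project.json` wired to Nx's built-in `nx:run-commands` executor might look like the following; the commands and target names are assumptions for illustration:

```json
{
  "name": "@zitadel/login",
  "targets": {
    "dev": {
      "executor": "nx:run-commands",
      "options": { "command": "next dev" },
      "dependsOn": ["^build"]
    },
    "lint": {
      "executor": "nx:run-commands",
      "options": { "command": "eslint ." }
    }
  }
}
```

The `dependsOn: ["^build"]` entry tells Nx to build upstream workspace dependencies first, which is what lets `nx affected` restore or rebuild only the artifacts a change actually touches.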
# Additional Changes
- NPM package names have a consistent pattern.
- Docker bake is removed. The login container is built and released like
the core container.
- The integration tests build the login container before running, so
they don't rely on the login container action anymore. This fixes
consistently failing checks on PRs from forks.
- The docs build in GitHub actions is removed, as we already build on
Vercel.
# Additional Context
- Internal discussion:
https://zitadel.slack.com/archives/C087ADF8LRX/p1756277884928169
- Workflow dispatch test:
https://github.com/zitadel/zitadel/actions/runs/17760122959
---------
Co-authored-by: Florian Forster <florian@zitadel.com>
Co-authored-by: Tim Möhlmann <tim+github@zitadel.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
(cherry picked from commit f69a6ed4f3)
# Conflicts:
# .github/workflows/build.yml
# .github/workflows/console.yml
# .github/workflows/core.yml
# CONTRIBUTING.md
# Makefile
# backend/v3/storage/database/events_testing/events_test.go
# backend/v3/storage/database/events_testing/id_provider_instance_test.go
# backend/v3/storage/database/events_testing/instance_test.go
# console/README.md
# console/package.json
# internal/api/grpc/group/v2/integration_test/query_test.go
# pnpm-lock.yaml
# Benchmarks

This package contains code for benchmarking specific endpoints of the API using k6.
## Prerequisite
## Structure

The use cases under test are defined in `src/use_cases`. The implementation of ZITADEL resources and calls is located under `src`.
## Execution

### Env vars

- `VUS`: number of parallel processes executing the test (default is 20)
- `DURATION`: defines how long the tests are executed (default is `200s`)
- `ZITADEL_HOST`: URL of ZITADEL (default is `http://localhost:8080`)
- `ADMIN_LOGIN_NAME`: login name of a human user with the `IAM_OWNER` role
- `ADMIN_PASSWORD`: password of the human user
To set up the tests, we use the credentials of Console and log in using an admin. The user must be able to create organizations and all resources inside organizations.

- `ADMIN_LOGIN_NAME`: `zitadel-admin@zitadel.localhost`
- `ADMIN_PASSWORD`: `Password1!`
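Putting the variables together, a run can be configured like this (the values are the documented defaults; adjust them for your deployment):

```shell
# Sketch: exporting the documented configuration before a benchmark run.
export VUS=20                                             # parallel virtual users
export DURATION=200s                                      # total test duration
export ZITADEL_HOST=http://localhost:8080                 # target ZITADEL instance
export ADMIN_LOGIN_NAME=zitadel-admin@zitadel.localhost   # IAM_OWNER login name
export ADMIN_PASSWORD='Password1!'                        # password of that user
# afterwards, run a target such as: make human_password_login
```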
## Test
Before you run the tests you need an initialized user. The tests don't implement the change password screen during login.
- `make human_password_login`
  - setup: creates human users
  - test: uses the previously created humans to sign in using the login UI
- `make machine_pat_login`
  - setup: creates machines and a PAT for each machine
  - test: calls the user info endpoint with the given PATs
- `make machine_client_credentials_login`
  - setup: creates machines and a client credential secret for each machine
  - test: calls the token endpoint with the `client_credentials` grant type
- `make user_info`
  - setup: creates human users and signs them in
  - test: calls the user info endpoint using the given humans
- `make manipulate_user`
  - test: creates a human, updates its profile, locks the user and then deletes it
- `make introspect`
  - setup: creates projects, one API per project, one key per API and generates the JWT from the given keys
  - test: calls the introspection endpoint using the given JWTs
- `make add_session`
  - setup: creates human users
  - test: creates new sessions with user ID check
- `make oidc_session`
  - setup: creates a machine user to create the auth request and session
  - test: creates an auth request, a session and links the session to the auth request. Implementation of this flow.
- `make otp_session`
  - setup: creates 1 human user for each VU and adds email OTP to it
  - test: creates a session based on the login name of the user, sets the email OTP challenge on the session and afterwards checks the OTP code
- `make password_session`
  - setup: creates 1 human user for each VU and adds email OTP to it
  - test: creates a session based on the login name of the user and checks the password in a second step
- `make machine_jwt_profile_grant`
  - setup: generates a private/public key, creates machine users, adds a key
  - test: creates a token and calls user info
- `make machine_jwt_profile_grant_single_user`
  - setup: generates a private/public key, creates a machine user, adds a key
  - test: creates a token and calls user info in parallel for the same user
- `make users_by_metadata_key`
  - setup: creates a human user for half of the VUs and a machine for the other half, adds 3 metadata entries to each user
  - test: calls the list users endpoint and filters by a metadata key
- `make users_by_metadata_value`
  - setup: creates a human user for half of the VUs and a machine for the other half, adds 3 metadata entries to each user
  - test: calls the list users endpoint and filters by a metadata value
- `make verify_all_user_grants_exists`
  - setup: creates 50 projects, 1 machine per VU
  - test: creates a machine and grants all projects to the machine
  - teardown: the organization is not removed, to verify the data of the projections is correct. You can find additional information at the bottom of this file.