Mirror of https://github.com/zitadel/zitadel.git (synced 2025-08-21 10:58:25 +00:00)
docs: deprecation of CockroachDB (#9480)
# Which Problems Are Solved

The docs currently state that Zitadel supports CockroachDB and PostgreSQL. CockroachDB support will be dropped with v3, which must be reflected in the docs.

# How the Problems Are Solved

- Added info/warning boxes to the CockroachDB paragraphs in the docs
- Added a link to the mirror command to guide users through the migration
- Dropped or rewrote docs that were CockroachDB-specific

# Additional Information

- Part of https://github.com/zitadel/zitadel/issues/9414
- Changes (including docs) to the mirror command will be made in a separate PR
- The Knative example and Helm docs will be changed in this issue: https://github.com/zitadel/zitadel-charts/issues/322
.github/ISSUE_TEMPLATE/BUG_REPORT.yaml (2 changes)
@@ -40,7 +40,7 @@ body:
 label: Database
 description: What database are you using? (self-hosters only)
 options:
-- CockroachDB
+- CockroachDB (Zitadel v2)
 - PostgreSQL
 - Other (describe below!)
 - type: input
.gitignore (2 changes)
@@ -36,7 +36,6 @@ load-test/.keys
 # dumps
 .backups

-cockroach-data/*
 .local/*
 .build/
@@ -71,7 +70,6 @@ zitadel-*-*

 # local
 build/local/*.env
-migrations/cockroach/migrate_cloud.go
 .notifications
 /.artifacts/*
 !/.artifacts/zitadel
@@ -165,7 +165,7 @@ ZITADEL serves traffic as soon as you can see the following log line:
 ### Backend/login

 By executing the commands from this section, you run everything you need to develop the ZITADEL backend locally.
-Using [Docker Compose](https://docs.docker.com/compose/), you run a [CockroachDB](https://www.cockroachlabs.com/docs/stable/start-a-local-cluster-in-docker-mac.html) on your local machine.
+Using [Docker Compose](https://docs.docker.com/compose/), you run a [PostgreSQL](https://www.postgresql.org/download/) on your local machine.
 With [make](https://www.gnu.org/software/make/), you build a debuggable ZITADEL binary and run it using [delve](https://github.com/go-delve/delve).
 Then, you test your changes via the console your binary is serving at http://<span because="breaks the link"></span>localhost:8080 and by verifying the database.
 Once you are happy with your changes, you run end-to-end tests and tear everything down.
@@ -200,7 +200,7 @@ make compile

 You can now run and debug the binary in .artifacts/zitadel/zitadel using your favourite IDE, for example GoLand.
 You can test if ZITADEL does what you expect by using the UI at http://localhost:8080/ui/console.
-Also, you can verify the data by running `cockroach sql --database zitadel --insecure` and running SQL queries.
+Also, you can verify the data by running `psql "host=localhost dbname=zitadel sslmode=disable"` and running SQL queries.

 #### Run Local Unit Tests
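For illustration only, here is one way such a verification query could look against the local development database; this is a sketch, and `projections.login_names3` is simply the example table used elsewhere in this diff, not a required check:

```bash
# Connect to the local development database and list a few login names.
# Assumes the PostgreSQL started via Docker Compose is reachable on localhost:5432.
psql "host=localhost dbname=zitadel sslmode=disable" \
  -c "SELECT * FROM projections.login_names3 LIMIT 10;"
```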
@@ -300,7 +300,7 @@ docker compose --file ./e2e/config/host.docker.internal/docker-compose.yaml down
 ### Console

 By executing the commands from this section, you run everything you need to develop the console locally.
-Using [Docker Compose](https://docs.docker.com/compose/), you run [CockroachDB](https://www.cockroachlabs.com/docs/stable/start-a-local-cluster-in-docker-mac.html) and the [latest release of ZITADEL](https://github.com/zitadel/zitadel/releases/latest) on your local machine.
+Using [Docker Compose](https://docs.docker.com/compose/), you run [PostgreSQL](https://www.postgresql.org/download/) and the [latest release of ZITADEL](https://github.com/zitadel/zitadel/releases/latest) on your local machine.
 You use the ZITADEL container as backend for your console.
 The console is run in your [Node](https://nodejs.org/en/about/) environment using [a local development server for Angular](https://angular.io/cli/serve#ng-serve), so you have fast feedback about your changes.
@@ -107,7 +107,7 @@ Yet it offers everything you need for a customer identity ([CIAM](https://zitade
 - [Actions](https://zitadel.com/docs/apis/actions/introduction) to react on events with custom code and extended ZITADEL for you needs
 - [Branding](https://zitadel.com/docs/guides/manage/customize/branding) for a uniform user experience across multiple organizations
 - [Self-service](https://zitadel.com/docs/concepts/features/selfservice) for end-users, business customers, and administrators
-- [CockroachDB](https://www.cockroachlabs.com/) or a [Postgres](https://www.postgresql.org/) database as reliable and widespread storage option
+- [Postgres](https://www.postgresql.org/) database as reliable and widespread storage option

 ## Features
@@ -151,7 +151,7 @@ Self-Service
 - [Administration UI (Console)](https://zitadel.com/docs/guides/manage/console/overview)

 Deployment
-- [Postgres](https://zitadel.com/docs/self-hosting/manage/database#postgres) (version >= 14) or [CockroachDB](https://zitadel.com/docs/self-hosting/manage/database#cockroach) (version latest stable)
+- [Postgres](https://zitadel.com/docs/self-hosting/manage/database#postgres) (version >= 14)
 - [Zero Downtime Updates](https://zitadel.com/docs/concepts/architecture/solution#zero-downtime-updates)
 - [High scalability](https://zitadel.com/docs/self-hosting/manage/production)
@@ -115,8 +115,8 @@ Database:
 Host: localhost # ZITADEL_DATABASE_POSTGRES_HOST
 Port: 5432 # ZITADEL_DATABASE_POSTGRES_PORT
 Database: zitadel # ZITADEL_DATABASE_POSTGRES_DATABASE
-MaxOpenConns: 5 # ZITADEL_DATABASE_POSTGRES_MAXOPENCONNS
-MaxIdleConns: 2 # ZITADEL_DATABASE_POSTGRES_MAXIDLECONNS
+MaxOpenConns: 10 # ZITADEL_DATABASE_POSTGRES_MAXOPENCONNS
+MaxIdleConns: 5 # ZITADEL_DATABASE_POSTGRES_MAXIDLECONNS
 MaxConnLifetime: 30m # ZITADEL_DATABASE_POSTGRES_MAXCONNLIFETIME
 MaxConnIdleTime: 5m # ZITADEL_DATABASE_POSTGRES_MAXCONNIDLETIME
 Options: "" # ZITADEL_DATABASE_POSTGRES_OPTIONS
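As the inline comments in the snippet above indicate, each of these settings can also be supplied through an environment variable. A minimal sketch of the equivalent environment-based configuration, using only the names and the new default values shown above:

```bash
# Equivalent environment-based configuration for the connection pool settings.
# Variable names are taken from the inline comments in the config snippet above.
export ZITADEL_DATABASE_POSTGRES_HOST=localhost
export ZITADEL_DATABASE_POSTGRES_PORT=5432
export ZITADEL_DATABASE_POSTGRES_DATABASE=zitadel
export ZITADEL_DATABASE_POSTGRES_MAXOPENCONNS=10
export ZITADEL_DATABASE_POSTGRES_MAXIDLECONNS=5
export ZITADEL_DATABASE_POSTGRES_MAXCONNLIFETIME=30m
export ZITADEL_DATABASE_POSTGRES_MAXCONNIDLETIME=5m
```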
@@ -48,7 +48,7 @@ TODO: describe the outcome of the test?
 | ZITADEL Version | |
 | ZITADEL Configuration | |
 | ZITADEL feature flags | |
-| Database | type: crdb / psql<br />version: |
+| Database | type: psql<br />version: |
 | Database location | |
 | Database specification | vCPU: <br/> memory: Gb |
 | ZITADEL metrics during test | |
@@ -57,9 +57,9 @@ The following metrics must be collected for each test iteration. The metrics are
 | ZITADEL Version | Setup | The version of zitadel deployed | Semantic version or commit |
 | ZITADEL Configuration | Setup | Configuration of zitadel which deviates from the defaults and is not secret | yaml |
 | ZITADEL feature flags | Setup | Changed feature flags | yaml |
-| Database | Setup | Database type and version | **type**: crdb / psql **version**: semantic version |
+| Database | Setup | Database type and version | **type**: psql **version**: semantic version |
 | Database location | Setup | Region or location of the deployment of the database. If not further specified the hoster is Google Cloud SQL | Location / Region |
-| Database specification | Setup | The description must at least clarify the following metrics: vCPU, Memory and egress bandwidth (Scale) | **vCPU**: Amount of threads ([additional info](https://cloud.google.com/compute/docs/cpu-platforms)) **memory**: GB **egress bandwidth**:Gbps **scale**: Amount of crdb nodes if crdb is used |
+| Database specification | Setup | The description must at least clarify the following metrics: vCPU, Memory and egress bandwidth (Scale) | **vCPU**: Amount of threads ([additional info](https://cloud.google.com/compute/docs/cpu-platforms)) **memory**: GB **egress bandwidth**:Gbps |
 | ZITADEL metrics during test | Result | This metric helps understanding the bottlenecks of the executed test. At least the following metrics must be provided: CPU usage Memory usage | **CPU usage** in percent **Memory usage** in percent |
 | Observed errors | Result | Errors worth mentioning, mostly unexpected errors | description |
 | Top 3 most expensive database queries | Result | The execution plan of the top 3 most expensive database queries during the test execution | database execution plan |
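One way to collect the "Top 3 most expensive database queries" metric on PostgreSQL is the `pg_stat_statements` extension; the following is only an illustrative sketch and assumes the extension is installed and enabled on the test database (it is not prescribed by the ZITADEL docs):

```bash
# Requires the pg_stat_statements extension (CREATE EXTENSION pg_stat_statements;)
# and shared_preload_libraries = 'pg_stat_statements' in postgresql.conf.
psql -d zitadel -c "
  SELECT query, calls, total_exec_time, mean_exec_time
  FROM pg_stat_statements
  ORDER BY total_exec_time DESC
  LIMIT 3;"
```

The execution plans requested in the table can then be captured with `EXPLAIN (ANALYZE)` for each of the returned statements.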
@@ -1,51 +1,51 @@
 ---
-title: ZITADEL's Software Architecture
+title: Zitadel's Software Architecture
 sidebar_label: Software Architecture
 ---

-ZITADEL is built with two essential patterns. Event Sourcing (ES) and Command and Query Responsibility Segregation (CQRS).
-Due to the nature of Event Sourcing ZITADEL provides the unique capability to generate a strong audit trail of ALL the things that happen to its resources, without compromising on storage cost or audit trail length.
+Zitadel is built with two essential patterns. Event Sourcing (ES) and Command and Query Responsibility Segregation (CQRS).
+Due to the nature of Event Sourcing Zitadel provides the unique capability to generate a strong audit trail of ALL the things that happen to its resources, without compromising on storage cost or audit trail length.

-The combination of ES and CQRS makes ZITADEL eventual consistent which, from our perspective, is a great benefit in many ways.
+The combination of ES and CQRS makes Zitadel eventual consistent which, from our perspective, is a great benefit in many ways.
 It allows us to build a Source of Records (SOR) which is the one single point of truth for all computed states.
 The SOR needs to be transaction safe to make sure all operations are in order.
 You can read more about this in our [ES documentation](../eventstore/overview).

-Each ZITADEL binary contains all components necessary to serve traffic
+Each Zitadel binary contains all components necessary to serve traffic
 From serving the API, rendering GUI's, background processing of events and task.
-This All in One (AiO) approach makes operating ZITADEL simple.
+This All in One (AiO) approach makes operating Zitadel simple.

 ## The Architecture

-ZITADELs software architecture is built around multiple components at different levels.
+Zitadels software architecture is built around multiple components at different levels.
 This chapter should give you an idea of the components as well as the different layers.

 

 ### Service Layer

-The service layer includes all components who are potentially exposed to consumers of ZITADEL.
+The service layer includes all components who are potentially exposed to consumers of Zitadel.

 #### HTTP Server

 The http server is responsible for the following functions:

-- serving the management GUI called ZITADEL Console
+- serving the management GUI called Zitadel Console
 - serving the static assets
 - rendering server side html (login, password-reset, verification, ...)

 #### API Server

-The API layer consist of the multiple APIs provided by ZITADEL. Each serves a dedicated purpose.
-All APIs of ZITADEL are always available as gRCP, gRPC-web and REST service.
+The API layer consist of the multiple APIs provided by Zitadel. Each serves a dedicated purpose.
+All APIs of Zitadel are always available as gRCP, gRPC-web and REST service.
 The only exception is the [OpenID Connect & OAuth](/apis/openidoauth/endpoints) and [Asset API](/apis/introduction#assets) due their unique nature.

-- [OpenID Connect & OAuth](/apis/openidoauth/endpoints) - allows to request authentication and authorization of ZITADEL
-- [SAML](/apis/saml/endpoints) - allows to request authentication and authorization of ZITADEL through the SAML standard
+- [OpenID Connect & OAuth](/apis/openidoauth/endpoints) - allows to request authentication and authorization of Zitadel
+- [SAML](/apis/saml/endpoints) - allows to request authentication and authorization of Zitadel through the SAML standard
 - [Authentication API](/apis/introduction#authentication) - allow a user to do operation in its own context
-- [Management API](/apis/introduction#management) - allows an admin or machine to manage the ZITADEL resources on an organization level
-- [Administration API](/apis/introduction#administration) - allows an admin or machine to manage the ZITADEL resources on an instance level
-- [System API](/apis/introduction#system) - allows to create and change new ZITADEL instances
+- [Management API](/apis/introduction#management) - allows an admin or machine to manage the Zitadel resources on an organization level
+- [Administration API](/apis/introduction#administration) - allows an admin or machine to manage the Zitadel resources on an instance level
+- [System API](/apis/introduction#system) - allows to create and change new Zitadel instances
 - [Asset API](/apis/introduction#assets) - is used to upload and download static assets

 ### Core Layer
@@ -61,7 +61,7 @@ The Command Side has some unique requirements, these include:

 ##### Command Handler

-The command handler receives all operations who alter a resource managed by ZITADEL.
+The command handler receives all operations who alter a resource managed by Zitadel.
 For example if a user changes his name. The API Layer will pass the instruction received through the API call to the command handler for further processing.
 The command handler is then responsible of creating the necessary commands.
 After creating the commands the command hand them down to the command validation.
@@ -75,14 +75,14 @@ These events now are being handed down to the storage layer for storage.

 #### Events

-ZITADEL handles events in two ways.
+Zitadel handles events in two ways.
 Events that should be processed in near real time are processed by a in memory pub sub system.
 Some events can be handled asynchronously using the spooler.

 ##### Pub Sub

 The pub sub system job is it to keep a query view up-to-date by feeding a constant stream of events to the projections.
-Our pub sub system built into ZITADEL works by placing events into an in memory queue for its subscribers.
+Our pub sub system built into Zitadel works by placing events into an in memory queue for its subscribers.
 There is no need for specific guarantees from the pub sub system. Since the SOR is the ES everything can be retried without loss of data.
 In case of an error an event can be reapplied in two ways:
@@ -90,8 +90,8 @@ In case of an error an event can be reapplied in two ways:
 - The spooler takes care of background cleanups in a scheduled fashion

 > The decision to incorporate an internal pub sub system with no need for specific guarantees is a deliberate choice.
-> We believe that the toll of operating an additional external service like a MQ system negatively affects the ease of use of ZITADEL as well as its availability guarantees.
-> One of the authors of ZITADEL did his thesis to test this approach against established MQ systems.
+> We believe that the toll of operating an additional external service like a MQ system negatively affects the ease of use of Zitadel as well as its availability guarantees.
+> One of the authors of Zitadel did his thesis to test this approach against established MQ systems.

 ##### Spooler
@@ -136,12 +136,16 @@ It is also responsible to execute authorization checks. To check if a request is

 ### Storage Layer

-As ZITADEL itself is built completely stateless only the storage layer is needed to persist states.
-The storage layer of ZITADEL is responsible for multiple tasks. For example:
+As Zitadel itself is built completely stateless only the storage layer is needed to persist states.
+The storage layer of Zitadel is responsible for multiple tasks. For example:

 - Guarantee strong consistency for the command side
 - Guarantee good query performance for the query side
 - Backup and restore operation for disaster recovery purpose

-ZITADEL currently supports PostgreSQL and CockroachDB..
+Zitadel currently supports PostgreSQL.
 Make sure to read our [Production Guide](/docs/self-hosting/manage/production#prefer-postgresql) before you decide on using one of them.
+
+:::info
+Zitadel v2 supported CockroachDB and PostgreSQL. Zitadel v3 only supports PostgreSQL. Please refer to [the mirror guide](cli/mirror) to migrate to PostgreSQL.
+:::
@@ -1,21 +1,20 @@
 ---
-title: ZITADEL's Deployment Architecture
+title: Zitadel's Deployment Architecture
 sidebar_label: Deployment Architecture
 ---

 ## High Availability

-ZITADEL can be run as high available system with ease.
+Zitadel can be run as high available system with ease.
 Since the storage layer takes the heavy lifting of making sure that data in synched across, server, data centers or regions.

-Depending on your projects needs our general recommendation is to run ZITADEL and ZITADELs storage layer across multiple availability zones in the same region or if you need higher guarantees run the storage layer across multiple regions.
-Consult the [CockroachDB documentation](https://www.cockroachlabs.com/docs/) for more details or use the [CockroachCloud Service](https://www.cockroachlabs.com/docs/cockroachcloud/create-an-account.html)
-Alternatively you can run ZITADEL also with Postgres which is [Enterprise Supported](/docs/support/software-release-cycles-support#partially-supported).
-Make sure to read our [Production Guide](/self-hosting/manage/production#prefer-postgresql) before you decide to use it.
+Depending on your projects needs our general recommendation is to run Zitadel across multiple availability zones in the same region or across multiple regions.
+Make sure to read our [Production Guide](/docs/self-hosting/manage/production#prefer-postgresql) before you decide to use it.
+Consult the [Postgres documentation](https://www.postgresql.org/docs/) for more details.

 ## Scalability

-ZITADEL can be scaled in a linear fashion in multiple dimensions.
+Zitadel can be scaled in a linear fashion in multiple dimensions.

 - Vertical on your compute infrastructure
 - Horizontal in a region
@@ -23,45 +22,38 @@ ZITADEL can be scaled in a linear fashion in multiple dimensions.

 Our customers can reuse the same already known binary or container and scale it across multiple server, data center and regions.
 To distribute traffic an already existing proxy infrastructure can be reused.
-Simply steer traffic by path, hostname, IP address or any other metadata to the ZITADEL of your choice.
+Simply steer traffic by path, hostname, IP address or any other metadata to the Zitadel of your choice.

-> To improve your service quality we recommend steering traffic by path to different ZITADEL deployments
+> To improve your service quality we recommend steering traffic by path to different Zitadel deployments
 > Feel free to [contact us](https://zitadel.com/contact/) for details

 ## Example Deployment Architecture

 ### Single Cluster / Region

-A ZITADEL Cluster is a highly available IAM system with each component critical for serving traffic laid out at least three times.
-As our storage layer (CockroachDB) relies on Raft, it is recommended to operate odd numbers of storage nodes to prevent "split brain" problems.
-Hence our reference design for Kubernetes is to have three application nodes and three storage nodes.
+A Zitadel Cluster is a highly available IAM system with each component critical for serving traffic laid out at least three times.
+Our storage layer (Postgres) is built for single region deployments.
+Hence our reference design for Kubernetes is to have three application nodes and one storage node.

-> If you are using a serverless offering like Google Cloud Run you can scale ZITADEL from 0 to 1000 Pods without the need of deploying the node across multiple availability zones.
-
-:::info
-CockroachDB needs to be configured with locality flags to proper distribute data over the zones
-:::
+> If you are using a serverless offering like Google Cloud Run you can scale Zitadel from 0 to 1000 Pods without the need of deploying the node across multiple availability zones.

 

 ### Multi Cluster / Region

-To scale ZITADEL across regions it is recommend to create at least three cluster.
-We recommend to run an odd number of storage clusters (storage nodes per data center) to compensate for "split brain" scenarios.
-In our reference design we recommend to create one cluster per region or cloud provider with a minimum of three regions.
+To scale Zitadel across regions it is recommend to create at least three clusters.
+Each cluster is a fully independent ZITADEL setup.
+To keep the data in sync across all clusters, we recommend using Postgres with read-only replicas as a storage layer.
+Make sure to read our [Production Guide](/docs/self-hosting/manage/production#prefer-postgresql) before you decide to use it.
+Consult the [Postgres documentation](https://www.postgresql.org/docs/current/high-availability.html) for more details.

 With this design even the outage of a whole data-center would have a minimal impact as all data is still available at the other two locations.

-:::info
-CockroachDB needs to be configured with locality flags to proper distribute data over the zones
-:::

 

 ## Zero Downtime Updates

 Since an Identity system tends to be a critical piece of infrastructure, the "in place zero downtime update" is a well needed feature.
-ZITADEL is built in a way that upgrades can be executed without downtime by just updating to a more recent version.
+Zitadel is built in a way that upgrades can be executed without downtime by just updating to a more recent version.

 The common update involves the following steps and do not need manual intervention of the operator:
@@ -78,5 +70,5 @@ Users who use [Kubernetes/Helm](/docs/self-hosting/deploy/kubernetes) or serverl
 :::info
 As a good practice we recommend creating Database Backups prior to an update.
 It is also recommend to read the release notes on GitHub before upgrading.
-Since ZITADEL utilizes Semantic Versioning Breaking Changes of any kind will always increase the major version (e.g Version 2 would become Version 3).
+Since Zitadel utilizes Semantic Versioning Breaking Changes of any kind will always increase the major version (e.g Version 2 would become Version 3).
 :::
@@ -20,7 +20,6 @@ The following table shows the available integration patterns for streaming audit
 | | Description | Self-hosting | ZITADEL Cloud |
 |-------------------------------------|----------------------------------------------------------------------------------------------------------------|-------------|---------------|
 | Events-API | Pulling events of all ZITADEL resources such as Users, Projects, Apps, etc. (Events = Change Log of Resources) | ✅ | ✅ |
-| Cockroach Change Data Capture | Sending events of all ZITADEL resources such as Users, Projects, Apps, etc. (Events = Change Log of Resources) | ✅ | ❌ |
 | ZITADEL Actions Log to Stdout | Custom log to messages possible on predefined triggers during login / register Flow | ✅ | ❌ |
 | ZITADEL Actions trigger API/Webhook | Custom API/Webhook request on predefined triggers during login / register | ✅ | ✅ |
@@ -34,71 +33,6 @@ This API offers granular control through various filters, enabling you to:

 You can find a comprehensive guide on how to use the events API for different use cases here: [Get Events from ZITADEL](/docs/guides/integrate/zitadel-apis/event-api)

-### Cockroach Change Data Capture
-
-For self-hosted ZITADEL deployments utilizing CockroachDB as the database, [CockroachDB's built-in Change Data Capture (CDC)](https://www.cockroachlabs.com/docs/stable/change-data-capture-overview) functionality provides a streamlined approach to integrate ZITADEL audit logs with external systems.
-
-CDC captures row-level changes in your database and streams them as messages to a configurable destination, such as Google BigQuery or a SIEM/SOC solution. This real-time data stream enables:
-- **Continuous monitoring**: Receive near-instantaneous updates on ZITADEL activity, facilitating proactive threat detection and response.
-- **Simplified integration**: Leverage CockroachDB's native capabilities for real-time data transfer, eliminating the need for additional tools or configurations.
-
-This approach is limited to self-hosted deployments using CockroachDB and requires expertise in managing the database and CDC configuration.
-
-#### Sending events to Google Cloud Storage using Change Data Capture
-
-This example will show you how you can utilize CDC for sending all ZITADEL events to Google Cloud Storage.
-For a detailed description please read [CockroachLab's Get Started Guide](https://www.cockroachlabs.com/docs/v23.2/create-and-configure-changefeeds) and [Cloud Storage Authentication](https://www.cockroachlabs.com/docs/v23.2/cloud-storage-authentication?filters=gcs#set-up-google-cloud-storage-assume-role) from Cockroach.
-
-You will need a Google Cloud Storage Bucket and a service account.
-1. [Create Google Cloud Storage Bucket](https://cloud.google.com/storage/docs/creating-buckets)
-2. [Create Service Account](https://cloud.google.com/iam/docs/service-accounts-create)
-3. Create a role with the `storage.objects.create` permission
-4. Grant service account access to the bucket
-5. Create key for service account and download it
-
-Now we need to enable and create the changefeed in the cockroach DB.
-1. [Enable rangefeeds on cockroach cluster](https://www.cockroachlabs.com/docs/v23.2/create-and-configure-changefeeds#enable-rangefeeds)
-```bash
-SET CLUSTER SETTING kv.rangefeed.enabled = true;
-```
-2. Encode the keyfile from the service account with base64 and replace the placeholder it in the script below
-3. Create Changefeed to send data into Google Cloud Storage
-The following example sends all events without payload to Google Cloud Storage
-Per default we do not want to send the payload of the events, as this could potentially include personally identifiable information (PII)
-If you want to include the payload, you can just add `payload` to the select list in the query.
-```sql
-CREATE CHANGEFEED INTO 'gs://gc-storage-zitadel-data/events?partition_format=flat&AUTH=specified&CREDENTIALS=base64encodedkey'
-AS SELECT instance_id, aggregate_type, aggregate_id, owner, event_type, sequence, created_at
-FROM eventstore.events2;
-```
-
-In some cases you might want the payload of only some specific events.
-This example shows you how to get all events and the instance domain events with the payload:
-```sql
-CREATE CHANGEFEED INTO 'gs://gc-storage-zitadel-data/events?partition_format=flat&AUTH=specified&CREDENTIALS=base64encodedkey'
-AS SELECT instance_id, aggregate_type, aggregate_id, owner, event_type, sequence, created_at
-CASE WHEN event_type IN ('instance.domain.added', 'instance.domain.removed', 'instance.domain.primary.set' )
-THEN payload END AS payload
-FROM eventstore.events2;
-```
-
-The partition format in the example above is flat, this means that all files for each timestamp will be created in the same folder.
-You will have files for different timestamps including the output for the events created in that time.
-Each event is represented as a json row.
-
-Example Output:
-```json lines
-{
-"aggregate_id": "26553987123463875",
-"aggregate_type": "user",
-"created_at": "2023-12-25T10:01:45.600913Z",
-"event_type": "user.human.added",
-"instance_id": "123456789012345667",
-"payload": null,
-"sequence": 1
-}
-```

 ## ZITADEL Actions

 ZITADEL [Actions](/docs/concepts/features/actions) offer a powerful mechanism for extending the platform's capabilities and integrating with external systems tailored to your specific requirements.
@@ -32,7 +32,7 @@ services:

 db:
 restart: 'always'
-image: postgres:16-alpine
+image: postgres:17-alpine
 environment:
 PGUSER: postgres
 POSTGRES_PASSWORD: postgres
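If you want to make sure the `db` service from such a Compose file accepts connections before starting ZITADEL, one option is `pg_isready`; a small illustrative check, assuming the service name `db` and the `postgres` user from the snippet above:

```bash
# Wait until the Postgres container accepts connections.
# Assumes the Compose service is named "db" and PGUSER is "postgres", as in the snippet above.
until docker compose exec db pg_isready -U postgres; do
  echo "waiting for postgres..."
  sleep 2
done
```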
@@ -24,7 +24,7 @@ services:

 db:
 restart: 'always'
-image: postgres:16-alpine
+image: postgres:17-alpine
 environment:
 PGUSER: postgres
 POSTGRES_PASSWORD: postgres
@@ -1,5 +1,5 @@
 ---
-title: Install ZITADEL on Linux
+title: Install Zitadel on Linux
 sidebar_label: Linux
 ---
@@ -11,7 +11,7 @@ import NoteInstanceNotFound from "./troubleshooting/_note_instance_not_found.mdx
 ## Install PostgreSQL

 Download a `postgresql` binary as described [in the PostgreSQL docs](https://www.postgresql.org/download/linux/).
-ZITADEL is tested against PostgreSQL and CockroachDB latest stable tag and Ubuntu 22.04.
+Zitadel is tested against PostgreSQL latest stable tag and latest Ubuntu LTS.

 ## Run PostgreSQL
@@ -20,15 +20,15 @@ sudo systemctl start postgresql
 sudo systemctl enable postgresql
 ```

-## Install ZITADEL
+## Install Zitadel

-Download the ZITADEL release according to your architecture from [Github](https://github.com/zitadel/zitadel/releases/latest), unpack the archive and copy zitadel binary to /usr/local/bin
+Download the Zitadel release according to your architecture from [Github](https://github.com/zitadel/zitadel/releases/latest), unpack the archive and copy zitadel binary to /usr/local/bin

 ```bash
 LATEST=$(curl -i https://github.com/zitadel/zitadel/releases/latest | grep location: | cut -d '/' -f 8 | tr -d '\r'); ARCH=$(uname -m); case $ARCH in armv5*) ARCH="armv5";; armv6*) ARCH="armv6";; armv7*) ARCH="arm";; aarch64) ARCH="arm64";; x86) ARCH="386";; x86_64) ARCH="amd64";; i686) ARCH="386";; i386) ARCH="386";; esac; wget -c https://github.com/zitadel/zitadel/releases/download/$LATEST/zitadel-linux-$ARCH.tar.gz -O - | tar -xz && sudo mv zitadel-linux-$ARCH/zitadel /usr/local/bin
 ```

-## Run ZITADEL
+## Run Zitadel

 ```bash
 ZITADEL_DATABASE_POSTGRES_HOST=localhost ZITADEL_DATABASE_POSTGRES_PORT=5432 ZITADEL_DATABASE_POSTGRES_DATABASE=zitadel ZITADEL_DATABASE_POSTGRES_USER_USERNAME=zitadel ZITADEL_DATABASE_POSTGRES_USER_PASSWORD=zitadel ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE=disable ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME=root ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD=postgres ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE=disable ZITADEL_EXTERNALSECURE=false zitadel start-from-init --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled
@@ -50,7 +50,7 @@ ZITADEL_DATABASE_POSTGRES_HOST=localhost ZITADEL_DATABASE_POSTGRES_PORT=5432 ZIT
 allowfullscreen
 ></iframe>

-### Setup ZITADEL with a service account
+### Setup Zitadel with a service account

 ```bash
 ZITADEL_DATABASE_POSTGRES_HOST=localhost ZITADEL_DATABASE_POSTGRES_PORT=5432 ZITADEL_DATABASE_POSTGRES_DATABASE=zitadel ZITADEL_DATABASE_POSTGRES_USER_USERNAME=zitadel ZITADEL_DATABASE_POSTGRES_USER_PASSWORD=zitadel ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE=disable ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME=root ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE=disable ZITADEL_EXTERNALSECURE=false ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH=/tmp/zitadel-admin-sa.json ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME=zitadel-admin-sa ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME=Admin ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINEKEY_TYPE=1 zitadel start-from-init --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled
@@ -25,7 +25,7 @@ services:
 - './example-zitadel-init-steps.yaml:/example-zitadel-init-steps.yaml:ro'

 db:
-image: postgres:16-alpine
+image: postgres:17-alpine
 restart: always
 environment:
 - POSTGRES_USER=root
@@ -81,5 +81,5 @@ Read more about [the login process](/guides/integrate/login/oidc/login-users).

 ## Troubleshooting

-You can connect to cockroach like this: `docker exec -it loadbalancing-example-my-cockroach-db-1 cockroach sql --host my-cockroach-db --certs-dir /cockroach/certs/`
-For example, to show all login names: `docker exec -it loadbalancing-example-my-cockroach-db-1 cockroach sql --database zitadel --host my-cockroach-db --certs-dir /cockroach/certs/ --execute "select * from projections.login_names3"`
+You can connect to the database like this: `docker exec -it loadbalancing-example-db-1 psql --host localhost`
+For example, to show all login names: `docker exec -it loadbalancing-example-db-1 psql -d zitadel --host localhost -c 'select * from projections.login_names3'`
@@ -11,7 +11,7 @@ import NoteInstanceNotFound from './troubleshooting/_note_instance_not_found.mdx
 ## Install PostgreSQL

 Download a `postgresql` binary as described [in the PostgreSQL docs](https://www.postgresql.org/download/macosx/).
-ZITADEL is tested against PostgreSQL and CockroachDB latest stable tag and Ubuntu 22.04.
+ZITADEL is tested against PostgreSQL latest stable tag and latest Ubuntu LTS.

 ## Run PostgreSQL
@@ -14,7 +14,7 @@ Choose your platform and run ZITADEL with the most minimal configuration possibl
 ## Prerequisites

 - For test environments, ZITADEL does not need many resources, 1 CPU and 512MB memory are more than enough. (With more CPU, the password hashing might be faster)
-- A PostgreSQL or CockroachDB as only needed storage. Make sure to read our [Production Guide](/docs/self-hosting/manage/production#prefer-postgresql) before you decide to use Postgresql.
+- A PostgreSQL as only needed storage. Make sure to read our [Production Guide](/docs/self-hosting/manage/production#prefer-postgresql) before you decide to use Postgresql.

 ## Releases
@@ -110,7 +110,6 @@ Drawbacks:

 - Slowest of the available caching options
 - Might put additional strain on the database server, limiting horizontal scalability
-- CockroachDB does not support unlogged tables. When this connector is enabled against CockroachDB, it does work but little to no performance benefit is to be expected.

 ### Local memory cache
@@ -13,7 +13,7 @@ services:
 - "./example-zitadel-init-steps.yaml:/example-zitadel-init-steps.yaml:ro"

 db:
-image: postgres:16-alpine
+image: postgres:17-alpine
 restart: always
 environment:
 - POSTGRES_USER=root
@@ -1,8 +1,12 @@
-## ZITADEL with Cockroach
+## Zitadel v2 with Cockroach

-The default database of ZITADEL is [CockroachDB](https://www.cockroachlabs.com). The SQL database provides a bunch of features like horizontal scalability, data regionality and many more.
+:::warning
+Zitadel v3 removed CockroachDB support. See the [CLI mirror guide](../cli/mirror) for migrating to PostgreSQL.
+:::

-Currently versions >= 23.2 are supported.
+The default database of Zitadel v2 is [CockroachDB](https://www.cockroachlabs.com). The SQL database provides a bunch of features like horizontal scalability, data regionality and many more.

+Currently versions >= 25.1 are supported.

 The default configuration of the database looks like this:
@@ -1,6 +1,8 @@
 ## ZITADEL with Postgres

-If you want to use a PostgreSQL database you can [overwrite the default configuration](../configure/configure.mdx).
+PostgreSQL is the default database for ZITADEL due to its reliability, robustness, and adherence to SQL standards. It is well-suited for handling the complex data requirements of an identity management system.

+If you are using Zitadel v2 and want to use a PostgreSQL database you can [overwrite the default configuration](../configure/configure.mdx).

 Currently versions >= 14 are supported.
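For quick local experiments, a throwaway PostgreSQL container is often the easiest way to get a supported version running; a sketch using the same image tag as the Compose examples in this diff (container name and credentials are placeholders, not recommendations):

```bash
# Start a disposable PostgreSQL 17 instance for local experiments.
# The image tag matches the Compose examples in this diff; credentials are placeholders.
docker run --name zitadel-db \
  -e POSTGRES_USER=zitadel \
  -e POSTGRES_PASSWORD=zitadel \
  -e POSTGRES_DB=zitadel \
  -p 5432:5432 \
  -d postgres:17-alpine
```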
@@ -11,10 +11,10 @@ import Postgres from './_postgres.mdx'

 <Tabs
 groupId="database-vendor"
-default="cockroach"
+default="postgres"
 values={[
 {'label': 'Postgres', 'value': 'pg'},
-{'label': 'Cockroach', 'value': 'crdb'},
+{'label': 'Cockroach (Zitadel v2)', 'value': 'crdb'},
 ]}
 >
 <TabItem value="pg">
@@ -111,14 +111,15 @@ but in the Projections.Customizations.Telemetry section

 ### Prefer PostgreSQL

-ZITADEL supports [CockroachDB](https://www.cockroachlabs.com/) and [PostgreSQL](https://www.postgresql.org/).
-We recommend using PostgreSQL, as it is the better choice when you want to prioritize performance and latency.
+ZITADEL supports [PostgreSQL](https://www.postgresql.org/).

-However, if [multi-regional data locality](https://www.cockroachlabs.com/docs/stable/multiregion-overview.html) is a critical requirement, CockroachDB might be a suitable option.
+:::info
+ZITADEL v2 supports [CockroachDB](https://www.cockroachlabs.com/) and [PostgreSQL](https://www.postgresql.org/). Please refer to [the mirror guide](cli/mirror) to migrate to postgres.
+:::

 The indexes for the database are optimized using load tests from [ZITADEL Cloud](https://zitadel.com),
 which runs with PostgreSQL.
-If you identify problems with your CockroachDB during load tests that indicate that the indexes are not optimized,
+If you identify problems with your database during load tests that indicate that the indexes are not optimized,
 please create an issue in our [github repository](https://github.com/zitadel/zitadel).

 ### Configure ZITADEL
@@ -129,12 +130,13 @@ Depending on your environment, you maybe would want to tweak some settings about
 Database:
 postgres:
 Host: localhost
-Port: 26257
+Port: 5432
 Database: zitadel
 //highlight-start
-MaxOpenConns: 20
+MaxOpenConns: 10
+MaxIdleConns: 5
 MaxConnLifetime: 30m
-MaxConnIdleTime: 30m
+MaxConnIdleTime: 5m
 //highlight-end
 Options: ""
 ```
@@ -192,9 +194,7 @@ The ZITADEL binary itself is stateless,
 so there is no need for a special backup job.

 Generally, for maintaining your database management system in production,
-please refer to the corresponding docs
-[for CockroachDB](https://www.cockroachlabs.com/docs/stable/recommended-production-settings.html)
-or [for PostgreSQL](https://www.postgresql.org/docs/current/admin.html).
+please refer to the corresponding docs [for PostgreSQL](https://www.postgresql.org/docs/current/admin.html).

 ## Data initialization
@@ -240,8 +240,7 @@ you might want to [limit usage and/or execute tasks on certain usage units and l

 ### General resource usage

-ZITADEL consumes around 512MB RAM and can run with less than 1 CPU core.
-The database consumes around 2 CPU under normal conditions and 6GB RAM with some caching to it.
+ZITADEL itself requires approximately 512MB of RAM and can operate with less than one CPU core. The database component, under typical conditions, utilizes about one CPU core per 100 requests per second (req/s) and 4GB of RAM per core, which includes some caching.

 :::info Password hashing
 Be aware of CPU spikes when hashing passwords. We recommend to have 4 CPU cores available for this purpose.
@@ -249,5 +248,6 @@ Be aware of CPU spikes when hashing passwords. We recommend to have 4 CPU cores

 ### Production HA cluster

-It is recommended to build a minimal high-availability with 3 Nodes with 4 CPU and 16GB memory each.
-Excluding non-essential services, such as log collection, metrics etc, the resources could be reduced to around 4 CPU and 8GB memory each.
+For a minimal high-availability setup, we recommend a cluster of 3 nodes, each with 4 CPU cores and 16GB of memory.
+
+If you exclude non-essential services like log collection and metrics, you can reduce the resources to approximately 4 CPU cores and 8GB of memory per node.
@@ -19,7 +19,9 @@ To apply best practices to your production setup we created a step by step check
 - [ ] Use serverless platform such as Knative or a hyperscaler equivalent (e.g. CloudRun from Google)
 - [ ] Split `zitadel init` and `zitadel setup` for fast start-up times when [scaling](/docs/self-hosting/manage/updating_scaling) ZITADEL
+- [ ] High Availability for database
-- [ ] Follow the [Production Checklist](https://www.cockroachlabs.com/docs/stable/recommended-production-settings.html) for CockroachDB if you selfhost the database or use [CockroachDB cloud](https://www.cockroachlabs.com/docs/cockroachcloud/create-an-account.html)
+- [ ] Follow [this guide](https://www.postgresql.org/docs/current/high-availability.html) to set up the database.
 - [ ] Configure logging
 - [ ] Configure timeouts
 - [ ] Configure backups on a regular basis for the database
 - [ ] Test the restore scenarios before going live
 - [ ] Secure database connections from outside your network and/or use an internal subnet for database connectivity
@@ -121,7 +121,7 @@ services:

 db:
 restart: 'always'
-image: postgres:16-alpine
+image: postgres:17-alpine
 environment:
 POSTGRES_PASSWORD: postgres
 healthcheck:
@@ -53,10 +53,10 @@ The command `zitadel init` ensures the database connection is ready to use for t
 It just needs to be executed once over ZITADELs full life cycle,
 when you install ZITADEL from scratch.
 During `zitadel init`, for connecting to your database,
-ZITADEL uses the privileged and preexisting database user configured in `Database.<cockroach|postgres>.Admin.Username`.
+ZITADEL uses the privileged and preexisting database user configured in `Database.postgres.Admin.Username`.
 , `zitadel init` ensures the following:
 - If it doesn’t exist already, it creates a database with the configured database name.
-- If it doesn’t exist already, it creates the unprivileged user use configured in `Database.<cockroach|postgres>.User.Username`.
+- If it doesn’t exist already, it creates the unprivileged user use configured in `Database.postgres.User.Username`.
 Subsequent phases connect to the database with this user's credentials only.
 - If not already done, it grants the necessary permissions ZITADEL needs to the non privileged user.
 - If they don’t exist already, it creates all schemas and some basic tables.
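For illustration, `zitadel init` can be pointed at the database with the same environment variables used in the install guides earlier in this diff; a minimal sketch that reuses those example values (they are placeholders, not recommendations):

```bash
# One-time initialization with the privileged (admin) database user.
# Variable names and example values are taken from the install guides above.
ZITADEL_DATABASE_POSTGRES_HOST=localhost \
ZITADEL_DATABASE_POSTGRES_PORT=5432 \
ZITADEL_DATABASE_POSTGRES_DATABASE=zitadel \
ZITADEL_DATABASE_POSTGRES_USER_USERNAME=zitadel \
ZITADEL_DATABASE_POSTGRES_USER_PASSWORD=zitadel \
ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE=disable \
ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME=root \
ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD=postgres \
ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE=disable \
zitadel init
```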
docs/docs/support/advisory/a10015.md (new file, 21 lines)
@@ -0,0 +1,21 @@
+---
+title: Technical Advisory 10015
+---
+
+## Date
+
+Versions: >= v3.0.0
+
+Date: 2025-03-31
+
+## Description
+
+CockroachDB was initially chosen due to its distributed nature and SQL compatibility. However, over time, it became apparent that the operational complexity and specific compatibility issues outweighed the benefits for our use case. We decided to focus on PostgreSQL to simplify our infrastructure and leverage its mature ecosystem.
+
+## Impact
+
+Zitadel v3 requires PostgreSQL as a database. Therefore, Zitadel v3 will not start if CockroachDB is configured as the database.
+
+## Mitigation
+
+To upgrade your self-hosted deployment to Zitadel v3 migrate to PostgreSQL. Please refer to [this guide](/docs/self-hosting/manage/cli/mirror) to mirror the data to PostgreSQL before you deploy Zitadel v3.
@@ -226,6 +226,18 @@ We understand that these advisories may include breaking changes, and we aim to
 <td>-</td>
 <td>2025-01-10</td>
 </tr>
+<tr>
+<td>
+<a href="./advisory/a10015">A-10015</a>
+</td>
+<td>Drop CockroachDB support</td>
+<td>Breaking Behavior Change</td>
+<td>
+CockroachDB is no longer supported by Zitadel.
+</td>
+<td>3.0.0</td>
+<td>2025-03-31</td>
+</tr>
 </table>

 ## Subscribe to our Mailing List
docs/static/img/zitadel_cluster_architecture.png (binary file changed, not shown; before: 10 KiB, after: 106 KiB)
Second binary image changed, not shown (before: 7.2 KiB, after: 74 KiB)
@@ -3409,9 +3409,9 @@ caniuse-api@^3.0.0:
 lodash.uniq "^4.5.0"

 caniuse-lite@^1.0.0, caniuse-lite@^1.0.30001599, caniuse-lite@^1.0.30001629:
-version "1.0.30001636"
-resolved "https://registry.yarnpkg.com/caniuse-lite/-/caniuse-lite-1.0.30001636.tgz#b15f52d2bdb95fad32c2f53c0b68032b85188a78"
-integrity sha512-bMg2vmr8XBsbL6Lr0UHXy/21m84FTxDLWn2FSqMd5PrlbMxwJlQnC2YWYxVgp66PZE+BBNF2jYQUBKCo1FDeZg==
+version "1.0.30001702"
+resolved "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001702.tgz"
+integrity sha512-LoPe/D7zioC0REI5W73PeR1e1MLCipRGq/VkovJnd6Df+QVqT+vT33OXCp8QUd7kA7RZrHWxb1B36OQKI/0gOA==

 ccount@^2.0.0:
 version "2.0.1"