Merge branch 'main' into cy10

This commit is contained in:
Elio Bischof 2022-08-03 15:13:46 +02:00
commit 96de67a218
No known key found for this signature in database
GPG Key ID: 7B383FDE4DDBF1BD
45 changed files with 463 additions and 145 deletions

View File

@ -52,12 +52,12 @@ dockers:
- "--platform=linux/arm64"
docker_manifests:
id: zitadel-latest
- id: zitadel-latest
name_template: ghcr.io/zitadel/zitadel:latest
image_templates:
- ghcr.io/zitadel/zitadel:{{ .Tag }}-amd64
- ghcr.io/zitadel/zitadel:{{ .Tag }}-arm64
id: zitadel-Tag
- id: zitadel-Tag
name_template: ghcr.io/zitadel/zitadel:{{ .Tag }}
image_templates:
- ghcr.io/zitadel/zitadel:{{ .Tag }}-amd64

View File

@ -152,3 +152,4 @@ See the policy [here](./SECURITY.md)
See the exact licensing terms [here](./LICENSE)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

View File

@ -15,7 +15,7 @@ For a list of supported or unsupported `Grant Types` please have a look at the t
| Refresh Token | yes |
| Resource Owner Password Credentials | no |
| Security Assertion Markup Language (SAML) 2.0 Profile | no |
| Token Exchange | work in progress |
| Token Exchange | no |
## Authorization Code

View File

@ -1,76 +0,0 @@
---
title: Architecture
---
## Software Architecture
**ZITADEL** is built with two essential patterns: Eventsourcing and CQRS. Due to the nature of eventsourcing, **ZITADEL** provides the unique capability to generate a strong audit trail of ALL the things that happen to its resources, without compromising on storage cost or audit trail length.
The combination with CQRS makes **ZITADEL** eventually consistent which, from our perspective, is a great benefit. It allows us to build a SOR (Source of Records) which is the one single point of truth for all computed states. The SOR needs to be transaction safe to make sure all operations are in order.
Each **ZITADEL** contains all components of the IAM, from serving the API, rendering / serving GUIs and background processing of events and tasks to being a GITOPS-style operator. This AiO (All in One) approach makes scaling from a single machine to a multi-region (multi-cluster) setup seamless.
![Software Architecture](/img/zitadel_software_architecture.png)
### Component Command Side
The **command handler** receives all operations that alter an IAM resource, for example when a user changes their name.
This information is then passed to **command validation** for processing of the business logic, for example to make sure that the user actually may change their name. If this succeeds, all generated events are inserted into the eventstore, when required all within one transaction.
- Transaction safety is a MUST
- Availability MUST be high
> When we classify this with the CAP theorem we would choose **Consistent** and **Available** but leave **Partition Tolerance** aside.
### Component Spooler
The spooler's job is to keep a query view up to date, or at least to make sure that it does not lag too far behind the eventstore.
Each query view has its own spooler that is responsible for looking for the events that are relevant to generate the query view. It does this by triggering the relevant projection.
Spoolers are especially necessary where someone can query datasets instead of single ids.
> The query side has the option to dynamically check the eventstore for newer events on a certain id, see query side for more information
> Each view can have exactly one spooler, but spoolers are dynamically leader elected, so even if a spooler crashes it will be replaced in a short amount of time.
### Component Query Side
The query handler receives all read-relevant operations. These can be either queries or simple `getById` calls.
When receiving a query it will proceed by passing this to the repository, which will call the database and return the dataset.
If a request calls for a specific id, the call will, most of the time, be revalidated against the eventstore. This is achieved by triggering the projection to make sure that the last sequence of an id is loaded into the query view.
- Easy to query
- Short response times (80% of queries below 100ms on the API server)
- Availability MUST be high
> When we classify this with the CAP theorem we would choose **Available** and **Performance** but leave **Consistent** aside
> TODO explain more here
### Component HTTP Server
The http server is responsible for serving the management GUI called **ZITADEL Console**, serving the static assets, as well as rendering server-side HTML (login, password-reset, verification, ...)
## Cluster Architecture
A **ZITADEL Cluster** is a highly available IAM system with each component critical for serving traffic laid out at least three times.
As our storage (CockroachDB) relies on Raft, it is also necessary to always use odd numbers of nodes to address "split brain" scenarios.
Hence our reference design is to have three application nodes and three storage nodes.
If you deploy **ZITADEL** with our GITOPS tooling [**ORBOS**](https://github.com/caos/orbos) we create seven nodes: one management, three application and three storage nodes.
> You can horizontally scale ZITADEL, but we recommend using multiple clusters instead to reduce the blast radius of impacts to a single cluster
![Cluster Architecture](/img/zitadel_cluster_architecture.png)
## Multi Cluster Architecture
To scale **ZITADEL** it is recommended to create smaller clusters (see cluster architecture) and then create a fabric which interconnects the databases.
In our reference design we recommend creating a cluster per cloud provider or availability zone and grouping them into regions.
For example, you can run three clusters for the region Switzerland: one with GCE, one with cloudscale and one with Inventx.
With this design even the outage of a whole data-center would have a minimal impact as all data is still available at the other two locations.
> CockroachDB needs to be configured with locality flags to properly distribute data over the zones
> East-west connectivity for the database can be solved at your discretion. We recommend exposing the public IPs and running traffic directly without any VPN or mesh
> Use mTLS in combination with IP allowlists in the firewalls!
![Multi-Cluster Architecture](/img/zitadel_multicluster_architecture.png)

View File

@ -0,0 +1,147 @@
---
title: Software
---
ZITADEL is built on two essential patterns: Event Sourcing (ES) and Command and Query Responsibility Segregation (CQRS).
Due to the nature of Event Sourcing, ZITADEL provides the unique capability to generate a strong audit trail of ALL the things that happen to its resources, without compromising on storage cost or audit trail length.
The combination of ES and CQRS makes ZITADEL eventually consistent which, from our perspective, is a great benefit in many ways.
It allows us to build a Source of Records (SOR) which is the one single point of truth for all computed states.
The SOR needs to be transaction safe to make sure all operations are in order.
You can read more about this in our [ES documentation](../eventstore/overview).
Each ZITADEL binary contains all components necessary to serve traffic:
from serving the API and rendering GUIs to background processing of events and tasks.
This All in One (AiO) approach makes operating ZITADEL simple.
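To make the ES/CQRS idea concrete, here is a minimal Go sketch. The types and event names are invented for illustration and do not mirror ZITADEL's internal packages: state is never stored directly, it is derived by replaying the events of an aggregate from the append-only log.

```go
package main

import "fmt"

// Event is the only thing that gets persisted; state is derived from it.
// These types are illustrative and do not mirror ZITADEL's internal API.
type Event struct {
	Sequence uint64
	Type     string
	Payload  string
}

// User is a computed state, i.e. a projection of all events of one aggregate.
type User struct {
	Name string
}

// reduce replays the events of one aggregate to compute its current state.
func reduce(events []Event) User {
	var u User
	for _, e := range events {
		switch e.Type {
		case "user.added", "user.name.changed":
			u.Name = e.Payload
		}
	}
	return u
}

func main() {
	// The append-only log acts as the single Source of Records (SOR).
	log := []Event{
		{Sequence: 1, Type: "user.added", Payload: "Gigi"},
		{Sequence: 2, Type: "user.name.changed", Payload: "Gigi Giraffe"},
	}
	fmt.Printf("%+v\n", reduce(log)) // {Name:Gigi Giraffe}
}
```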
## Software Structure
ZITADEL's software architecture is built around multiple components at different levels.
This chapter should give you an idea of the components as well as the different layers.
![Software Architecture](/img/zitadel_software_architecture.png)
### Service Layer
The service layer includes all components that are potentially exposed to consumers of ZITADEL.
#### HTTP Server
The HTTP server is responsible for the following functions:
- serving the management GUI called ZITADEL Console
- serving the static assets
- rendering server-side HTML (login, password-reset, verification, ...)
#### API Server
The API layer consists of the multiple APIs provided by ZITADEL. Each serves a dedicated purpose.
All APIs of ZITADEL are always available as gRPC, gRPC-web and REST services.
The only exceptions are the [OpenID Connect & OAuth](/docs/apis/openidoauth/endpoints) and [Asset API](/docs/apis/introduction#assets) due to their unique nature.
- [OpenID Connect & OAuth](/docs/apis/openidoauth/endpoints) - allows requesting authentication and authorization from ZITADEL
- [Authentication API](/docs/apis/introduction#authentication) - allows a user to perform operations in their own context
- [Management API](/docs/apis/introduction#management) - allows an admin or machine to manage the ZITADEL resources on an organization level
- [Administration API](/docs/apis/introduction#administration) - allows an admin or machine to manage the ZITADEL resources on an instance level
- [System API](/docs/apis/introduction#system) - allows creating and changing ZITADEL instances
- [Asset API](/docs/apis/introduction#assets) - is used to upload and download static assets
### Core Layer
#### Commands
The Command Side has some unique requirements; these include:
- Transaction safety is a MUST
- Availability MUST be high
> When we classify this with the CAP theorem we would choose Consistent and Available but leave Partition Tolerance aside.
##### Command Handler
The command handler receives all operations that alter a resource managed by ZITADEL,
for example when a user changes their name. The API layer will pass the instruction received through the API call to the command handler for further processing.
The command handler is then responsible for creating the necessary commands.
After creating the commands, the command handler hands them down to the command validation.
##### Command Validation
With the received commands the command validation will execute the business logic to verify whether a certain action can take place.
For example, whether the user really may change their name is verified in the command validation.
If this succeeds, the command validation will create the events that reflect the changes.
These events are then handed down to the storage layer for storage.
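As a rough illustration of this flow, the following Go sketch uses hypothetical names rather than ZITADEL's actual command package: validation checks a business rule and only then emits the events that would be appended to the Event Store.

```go
package main

import (
	"errors"
	"fmt"
)

// ChangeName is a command created by the command handler from an API call.
type ChangeName struct {
	UserID  string
	NewName string
}

// Event is what command validation emits when the business rules hold.
type Event struct {
	Aggregate string
	Type      string
	Payload   string
}

// validate implements the business logic: here, callers may only rename themselves.
func validate(cmd ChangeName, callerID string) ([]Event, error) {
	if cmd.UserID != callerID {
		return nil, errors.New("caller may not change another user's name")
	}
	if cmd.NewName == "" {
		return nil, errors.New("name must not be empty")
	}
	return []Event{{Aggregate: cmd.UserID, Type: "user.name.changed", Payload: cmd.NewName}}, nil
}

func main() {
	events, err := validate(ChangeName{UserID: "u1", NewName: "Gigi"}, "u1")
	if err != nil {
		panic(err)
	}
	// In ZITADEL these events would now be appended to the Event Store within one transaction.
	fmt.Printf("%+v\n", events)
}
```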
#### Events
ZITADEL handles events in two ways.
Events that should be processed in near real time are processed by an in-memory pub/sub system.
Some events have to be handled in background processing, for which the spooler is responsible.
##### Pub Sub
The pub/sub system's job is to keep a query view up to date by feeding a constant stream of events to the projections.
Our pub/sub system built into ZITADEL works by placing events into an in-memory queue for its subscribers.
There is no need for specific guarantees from the pub/sub system, since the SOR is the ES and everything can be retried without loss of data.
In case of an error an event can be reapplied in two ways:
- The next event might trigger the projection to apply the whole difference
- The spooler takes care of background cleanups in a scheduled fashion
> The decision to incorporate an internal pub sub system with no need for specific guarantees is a deliberate choice.
> We believe that the toll of operating an additional external service like an MQ system negatively affects the ease of use of ZITADEL as well as its availability guarantees.
> One of the authors of ZITADEL did his thesis to test this approach against established MQ systems.
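A minimal sketch of such a best-effort in-memory pub/sub, built on plain Go channels; the names are invented and ZITADEL's actual implementation differs in detail.

```go
package main

import (
	"fmt"
	"sync"
)

// Event is the message pushed to subscribers after it was stored.
type Event struct {
	Type    string
	Payload string
}

// PubSub fans events out to in-memory subscriber queues.
// Delivery is best effort: if a subscriber misses an event,
// the spooler catches up from the Event Store later.
type PubSub struct {
	mu   sync.Mutex
	subs []chan Event
}

// Subscribe registers a new subscriber with a bounded buffer.
func (p *PubSub) Subscribe(buffer int) <-chan Event {
	p.mu.Lock()
	defer p.mu.Unlock()
	ch := make(chan Event, buffer)
	p.subs = append(p.subs, ch)
	return ch
}

// Publish pushes the event to every subscriber without blocking.
func (p *PubSub) Publish(e Event) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, ch := range p.subs {
		select {
		case ch <- e: // delivered to the in-memory queue
		default: // queue full: drop, the spooler re-projects from the SOR later
		}
	}
}

func main() {
	ps := &PubSub{}
	sub := ps.Subscribe(16)
	ps.Publish(Event{Type: "user.name.changed", Payload: "Gigi"})
	fmt.Println(<-sub)
}
```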
##### Spooler
The spooler's job is to keep a query view up to date, or at least to make sure that it does not lag too far behind the Event Store.
Each query view has its own spooler that is responsible for looking for the events that are relevant to generate the query view. It does this by triggering the relevant projection.
Spoolers are especially necessary where someone can query datasets instead of single ids.
> Each view can have exactly one spooler, but spoolers are dynamically leader elected, so even if a spooler crashes it will be replaced in a short amount of time.
#### Projections
Projections are responsible for normalizing data for the query side or for analytical purposes.
They generally work by being invoked either through a scheduled spooler or the pub/sub subscription.
When they receive events, they create their normalized objects and then store these in the query view and its storage layer.
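The following Go sketch illustrates that idea with invented types (it is not ZITADEL's projection framework): each event is reduced into a normalized row, together with the last processed sequence so lag can be detected.

```go
package main

import "fmt"

// Event is a stored event as delivered by the spooler or pub/sub.
type Event struct {
	AggregateID string
	Sequence    uint64
	Type        string
	Payload     string
}

// userRow is the normalized object stored in the query view.
type userRow struct {
	ID       string
	Name     string
	Sequence uint64 // last processed sequence, used to detect lag
}

// userProjection keeps one row per user aggregate, standing in for a database table.
type userProjection struct {
	rows map[string]userRow
}

// apply is invoked for each event and updates the normalized row.
func (p *userProjection) apply(e Event) {
	row := p.rows[e.AggregateID]
	row.ID = e.AggregateID
	switch e.Type {
	case "user.added", "user.name.changed":
		row.Name = e.Payload
	}
	row.Sequence = e.Sequence
	p.rows[e.AggregateID] = row
}

func main() {
	p := &userProjection{rows: map[string]userRow{}}
	p.apply(Event{AggregateID: "u1", Sequence: 1, Type: "user.added", Payload: "Gigi"})
	p.apply(Event{AggregateID: "u1", Sequence: 2, Type: "user.name.changed", Payload: "Gigi Giraffe"})
	fmt.Printf("%+v\n", p.rows["u1"])
}
```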
#### Queries
The query side is responsible for answering read requests on data.
It has some unique requirements, which include:
- It needs to be easy to query
- Short response times are a MUST (80% of queries below 100ms on the API server)
- Availability MUST be high, even during high loads
- The query view MUST be able to be persisted for most requests
> When we classify this with the CAP theorem we would choose **Available** and **Performance** but leave **Consistent** aside
##### Query Handler
The query handler receives all read-relevant operations. These can be either queries or simple `getById` calls.
When receiving a query it will proceed by passing this to the repository, which will call the database and return the dataset.
If a request calls for a specific id, the call will, most of the time, be revalidated against the Event Store.
This is achieved by triggering the projection to make sure that the last sequence of an id is loaded into the query view.
> The query side has the option to dynamically check the Event Store for newer events on a certain id to ensure consistent responses without delay.
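A hedged sketch of such a `getById` flow, using invented interfaces rather than ZITADEL's query package: the handler triggers the projection to catch up before reading from the view.

```go
package main

import "fmt"

// Projection can be triggered to process all events of an aggregate
// up to the latest sequence in the Event Store.
type Projection interface {
	Trigger(aggregateID string) error
}

// View answers reads from the normalized query storage.
type View interface {
	UserByID(id string) (string, error)
}

// getUserByID revalidates against the Event Store by triggering the
// projection first, so the response is consistent without delay.
func getUserByID(p Projection, v View, id string) (string, error) {
	if err := p.Trigger(id); err != nil {
		return "", err
	}
	return v.UserByID(id)
}

// Trivial in-memory fakes so the sketch runs.

type fakeView struct{ data map[string]string }

func (f *fakeView) UserByID(id string) (string, error) { return f.data[id], nil }

type fakeProjection struct{ view *fakeView }

func (f fakeProjection) Trigger(id string) error { f.view.data[id] = "Gigi Giraffe"; return nil }

func main() {
	v := &fakeView{data: map[string]string{}}
	name, _ := getUserByID(fakeProjection{view: v}, v, "u1")
	fmt.Println(name)
}
```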
##### Query View
The query view is responsible for querying the storage layer with the request from the query handler.
It is also responsible for executing authorization checks, to verify that a request is valid and can be answered.
### Storage Layer
As ZITADEL itself is built completely stateless, only the storage layer is needed for storing things.
The storage layer of ZITADEL is responsible for multiple things. For example:
- Distributing data for high availability over multiple servers, data centers or regions
- Guarantee strong consistency for the command side
- Guarantee good query performance for the query side
- Ability to store data in specific data centers or regions for data residency (This is only supported with CockroachDB Cloud or Enterprise)
- Backup and restore operation for disaster recovery purpose
ZITADEL currently supports CockroachDB as the first choice of storage due to its perfect match for ZITADEL's needs.
PostgreSQL support is work in progress and should be available soon as well.

View File

@ -0,0 +1,59 @@
---
title: Deployment
---
## High Availability
ZITADEL can be run as a highly available system with ease,
since the storage layer does the heavy lifting of making sure that data is synched across servers, data centers or regions.
Depending on your project's needs, our general recommendation is to run ZITADEL and ZITADEL's storage layer across multiple availability zones in the same region, or, if you need higher guarantees, to run the storage layer across multiple regions.
Consult the [CockroachDB documentation](https://www.cockroachlabs.com/docs/) for more details or use the [CockroachCloud Service](https://www.cockroachlabs.com/docs/cockroachcloud/create-an-account.html).
> Soon ZITADEL will also support Postgres as a database.
## Scalability
ZITADEL can be scaled in a linear fashion in multiple dimensions.
- Vertical on your compute infrastructure
- Horizontal in a region
- Horizontal in multiple regions
Our customers can reuse the same already known binary or container and scale it across multiple servers, data centers and regions.
To distribute traffic an already existing proxy infrastructure can be reused.
Simply steer traffic by path, hostname, IP address or any other metadata to the ZITADEL of your choice.
> To improve your service quality we recommend steering traffic by path to different ZITADEL deployments
> Feel free to [contact us](https://zitadel.com/contact/) for details
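For illustration only, here is a minimal Go sketch of the path-based steering mentioned above, using the standard library's reverse proxy. The backend addresses are hypothetical, and TLS and HTTP/2 requirements (see the HTTP/2 guide) are omitted for brevity.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Two independent ZITADEL deployments (hypothetical internal addresses).
	console, _ := url.Parse("http://zitadel-console.internal:8080")
	api, _ := url.Parse("http://zitadel-api.internal:8080")

	mux := http.NewServeMux()
	// Steer UI traffic to one deployment and everything else to another, by path.
	mux.Handle("/ui/", httputil.NewSingleHostReverseProxy(console))
	mux.Handle("/", httputil.NewSingleHostReverseProxy(api))

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```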
## Example Deployment Architecture
### Single Cluster / Region
A ZITADEL Cluster is a highly available IAM system with each component critical for serving traffic laid out at least three times.
As our storage layer (CockroachDB) relies on Raft, it is recommended to operate an odd number of storage nodes to prevent "split brain" problems.
Hence our reference design for Kubernetes is to have three application nodes and three storage nodes.
> If you are using a serverless offering like Google Cloud Run you can scale ZITADEL from 0 to 1000 Pods without the need to deploy the nodes across multiple availability zones.
:::info
CockroachDB needs to be configured with locality flags to properly distribute data over the zones
:::
![Cluster Architecture](/img/zitadel_cluster_architecture.png)
### Multi Cluster / Region
To scale ZITADEL across regions it is recommended to create at least three clusters.
We recommend running an odd number of storage clusters (storage nodes per data center) to compensate for "split brain" scenarios.
In our reference design we recommend creating one cluster per region or cloud provider, with a minimum of three regions.
With this design even the outage of a whole data-center would have a minimal impact as all data is still available at the other two locations.
:::info
CockroachDB needs to be configured with locality flags to properly distribute data over the zones
:::
![Multi-Cluster Architecture](/img/zitadel_multicluster_architecture.png)

View File

@ -1,5 +1,5 @@
---
title: Implementation in ZITADEL
title: Implementation
---
This documentation gives you an insight into the structure of the ZITADEL database.

View File

@ -2,9 +2,9 @@
title: Overview
---
ZITADEL is built on the [event sourcing pattern](../architecture), where changes are stored as events in an eventstore.
ZITADEL is built on the [Event Sourcing pattern](../architecture/software), where changes are stored as events in an Event Store.
## What is an eventstore?
## What is an Event Store?
Traditionally, data is stored in relations as a state
@ -12,7 +12,7 @@ Traditionally, data is stored in relations as a state
- If a relation changes, the requests need to change as well
- That is valid for actual, as well as for historical data
An Eventstore on the other hand stores events, meaning every change that happens to any piece of data relates to an event.
An Event Store on the other hand stores events, meaning every change that happens to any piece of data relates to an event.
The data is stored as events in an append-only log.
- Think of it as a ledger that gets new entries over time, accumulative
@ -28,7 +28,7 @@ The data is stored as events in an append-only log.
## Definitions
Eventsourcing has some specific terms that are often used in our documentation. To understand how ZITADEL works it is important to understand this key definitions.
Event Sourcing has some specific terms that are often used in our documentation. To understand how ZITADEL works it is important to understand these key definitions.
### Events
@ -43,9 +43,10 @@ Possible Events:
### Aggregate
An aggregate consist of multiple events. All events together in will lead to the current state of the aggregate.
The aggregate can be compared with an object or a resources. Aggregates define transaction boundaries.
An aggregate consists of multiple events. All events together from an aggregate will lead to the current state of the aggregate.
The aggregate can be compared with an object or a resource. An aggregate should be used as a transaction boundary.
### Projections
Projections contain the computed objects that will be used on the query side for all the requests.
Think of this as a normalized view of specific events of one or multiple aggregates.

View File

@ -13,8 +13,7 @@ Please be reminded that ZITADEL is open source — and so is the documentation.
<Column>
<ListWrapper title="General">
<ListElement link="./principles" type={ICONTYPE.TASKS} title="Principles" description="Design and engineering principles" />
<ListElement link="./eventstore" type={ICONTYPE.STORAGE} title="Eventstore" description="Learn how ZITADEL stores data" />
<ListElement link="./architecture" type={ICONTYPE.ARCHITECTURE} title="Architecture" description="Sotware-, Cluster- and Multi Cluster Architecture" />
<ListElement link="./architecture/software" type={ICONTYPE.ARCHITECTURE} title="Architecture" description="Sotware-, Cluster- and Multi Cluster Architecture" />
</ListWrapper>
<ListWrapper title="Structure">
<Column>
@ -40,6 +39,6 @@ Please be reminded that ZITADEL is open source — and so is the documentation.
<ListElement link="./features/actions" type={ICONTYPE.FILE} title="Actions" description="Customizing ZITADELs behavior using the actions feature" />
</ListWrapper>
<ListWrapper title="Customer Portal">
<ListElement link="./customerportal/instances" type={ICONTYPE.INSTANCE} title="Instances" description="Manage all your ZITADEL instances" />
<ListElement link="../guides/manage/cloud/instances" type={ICONTYPE.INSTANCE} title="Instances" description="Manage all your ZITADEL instances" />
</ListWrapper>
</Column>

View File

@ -17,13 +17,13 @@ By executing the commands below, you will download the following file:
</details>
```bash
# Download the docker compose example configuration. For example:
# Download the docker compose example configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploy/docker-compose.yaml
# Run the database and application containers
# Run the database and application containers.
docker compose up --detach
```
<DefaultUser components={props.components} />
<Next components={props.components} />
<Disclaimer components={props.components} />
<Disclaimer components={props.components} />

View File

@ -37,20 +37,20 @@ By executing the commands below, you will download the following files:
</details>
```bash
# Download the docker compose example configuration. For example:
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploye/loadbalancing-example/loadbalancing-example/docker-compose.yaml
# Download the docker compose example configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploy/loadbalancing-example/docker-compose.yaml
# Download the docker compose example configuration. For example:
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploye/loadbalancing-example/loadbalancing-example/example-traefik.yaml
# Download the Traefik example configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploy/loadbalancing-example/example-traefik.yaml
# Download and adjust the example configuration file containing standard configuration
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploye/loadbalancing-example/loadbalancing-example/example-zitadel-config.yaml
# Download and adjust the example configuration file containing standard configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploy/loadbalancing-example/example-zitadel-config.yaml
# Download and adjust the example configuration file containing secret configuration
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploye/loadbalancing-example/loadbalancing-example/example-zitadel-secrets.yaml
# Download and adjust the example configuration file containing secret configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploy/loadbalancing-example/example-zitadel-secrets.yaml
# Download and adjust the example configuration file containing database initialization configuration
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploye/loadbalancing-example/loadbalancing-example/example-zitadel-init-steps.yaml
# Download and adjust the example configuration file containing database initialization configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/deploy/loadbalancing-example/example-zitadel-init-steps.yaml
# A single ZITADEL instance always needs the same 32-character masterkey
# If you haven't done so already, you can generate a new one.

View File

@ -14,6 +14,6 @@ By default, it runs a highly available ZITADEL instance along with a secure and
## Prerequisites
- ZITADEL does not need much resource 1 CPU and 512MB memory is more than enough. (With more CPU the password hashing might be faster)
- ZITADEL does not need many resources, 1 CPU and 512MB memory are more than enough. (With more CPU, the password hashing might be faster)
- A CockroachDB or [🚧 Postgresql coming soon](https://github.com/zitadel/zitadel/pull/3998) as the only needed storage
- If you want to front ZTIADEL with a revers proxy, web application firewall or content delivery network make sure to support [HTTP/2](../manage/self-hosted/http2)
- If you want to front ZITADEL with a reverse proxy, web application firewall or content delivery network, make sure to support [HTTP/2](../manage/self-hosted/http2)

View File

@ -0,0 +1,89 @@
---
title: Connect with AzureAD
---
## AzureAD Tenant as Identity Provider for ZITADEL
This guide shows you how to connect an AzureAD Tenant to ZITADEL.
:::info
In ZITADEL you can connect an Identity Provider (IdP) like AzureAD to your instance and provide it as a default to all organizations, or you can register the IdP to a specific organization only. This can also be done by your customers in a self-service fashion.
:::
### Prerequisite
You need to have access to an AzureAD Tenant. If you do not yet have one, follow [this guide from Microsoft](https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-create-new-tenant) to create one for free.
### AzureAD Configuration
#### Create a new Application
Browse to the [App registration menus create dialog](https://portal.azure.com/#view/Microsoft_AAD_RegisteredApps/CreateApplicationBlade/quickStartType~/null/isMSAApp~/false) to create a new app.
![Create an Application](/img/guides/azure_app_register.png)
:::info
Make sure to select `web` as the application type in the `Redirect URI (optional)` section.
You can leave the second field empty since we will change this in the next step.
:::
![Create an Application](/img/guides/azure_app.png)
#### Configure Redirect URIS
For this to work, you need to whitelist the redirect URIs from your ZITADEL Instance.
In this example our test instance has the domain `test-qcon0h.zitadel.cloud`. In this case we need to whitelist these two entries:
- `https://test-qcon0h.zitadel.cloud/ui/login/register/externalidp/callback`
- `https://test-qcon0h.zitadel.cloud/ui/login/login/externalidp/callback`
:::info
To adapt this for your setup, just replace the domain
:::
![Configure Redirect URIS](/img/guides/azure_app_redirects.png)
#### Create Client Secret
To allow your ZITADEL instance to communicate with AzureAD, you need to create a secret.
![Create Client Secret](/img/guides/azure_app_secrets.png)
:::info
Please save this secret for the later configuration of ZITADEL.
:::
#### Configure ID Token Claims
![Configure ID Token Claims](/img/guides/azure_app_token.png)
### ZITADEL Configuration
#### Create IdP
Use the values displayed on the AzureAD Application page in your ZITADEL IdP Settings.
- You can find the `issuer` for ZITADEL of your AzureAD Tenant in the `Endpoints` submenu
- The `Client ID` of ZITADEL corresponds to the `Application (client) ID`
- The `Client Secret` was generated during the `Create Client Secret` step
![Azure Application](/img/guides/azure_app.png)
![Create IdP](/img/guides/azure_zitadel_settings.png)
#### Activate IdP
Once you have created the IdP, you need to activate it to make it usable for your users.
![Activate the AzureAD](/img/guides/azure_zitadel_activate.png)
![Active AzureAD](/img/guides/azure_zitadel_active.png)
### Test the setup
To test the setup, use an incognito window and browse to your login page.
If you succeeded, you should see a new button which redirects you to your AzureAD Tenant.
![AzureAD Button](/img/guides/azure_zitadel_button.png)
![AzureAD Login](/img/guides/azure_login.png)

View File

@ -97,7 +97,7 @@ ZITADEL will show a set of identity providers by default. This configuration can
An organization's login settings will be shown
- as soon as the user has entered the loginname and ZITADEL can identitfy to which organization he belongs; or
- as soon as the user has entered the loginname and ZITADEL can identify to which organization he belongs; or
- by sending a primary domain scope.
To get your own configuration you will have to send the [primary domain scope](../../apis/openidoauth/scopes#reserved-scopes) in your [authorization request](../../guides/integrate/login-users#auth-request).
The primary domain scope will restrict the login to your organization, so only users of your own organization will be able to log in; also, your branding and policies will trigger.

View File

@ -22,16 +22,16 @@ By executing the commands below, you will download the following files:
</details>
```bash
# Download the docker compose example configuration for a secure CockroachDB. For example:
# Download the docker compose example configuration for a secure CockroachDB.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/manage/self-hosted/configure/docker-compose.yaml
# Download and adjust the example configuration file containing standard configuration
# Download and adjust the example configuration file containing standard configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/manage/self-hosted/configure/example-zitadel-config.yaml
# Download and adjust the example configuration file containing secret configuration
# Download and adjust the example configuration file containing secret configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/manage/self-hosted/configure/example-zitadel-secrets.yaml
# Download and adjust the example configuration file containing database initialization configuration
# Download and adjust the example configuration file containing database initialization configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/guides/manage/self-hosted/configure/example-zitadel-init-steps.yaml
# A single ZITADEL instance always needs the same 32-character masterkey

View File

@ -72,7 +72,7 @@ eMail Support | support@zitadel.com
Chat Support | Private chat channel between CAOS and Customer that is opened when Subscription becomes active
Phone Support | +41 43 215 27 34
- ZITADEL Cloud system status, incidents and maintenance windows will be communicated via [our status page](https://status.zitadel.ch).
- ZITADEL Cloud system status, incidents and maintenance windows will be communicated via [our status page](https://status.zitadel.com).
- Questions regarding pricing, billing, and invoicing of our services should be addressed to billing@zitadel.com
- Security related questions and incidents can also be directly addressed to security@zitadel.com

View File

@ -147,7 +147,7 @@ module.exports = {
},
{
label: "Status",
href: "https://status.zitadel.ch/",
href: "https://status.zitadel.com/",
}
],
},

View File

@ -119,6 +119,7 @@ module.exports = {
"guides/integrate/access-zitadel-apis",
"guides/integrate/authenticated-mongodb-charts",
"guides/integrate/auth0",
"guides/integrate/azuread",
"guides/integrate/gitlab-self-hosted",
"guides/integrate/login-users",
"guides/integrate/serviceusers",
@ -222,10 +223,18 @@ module.exports = {
collapsed: false,
items: [
"concepts/eventstore/overview",
"concepts/eventstore/zitadel",
"concepts/eventstore/implementation",
],
},
{
type: "category",
label: "Architecture",
collapsed: false,
items: [
"concepts/architecture/software",
"concepts/architecture/solution",
],
},
"concepts/architecture",
{
type: "category",
label: "Structure",

View File

@ -171,13 +171,7 @@ const features = [
description="Design and engineering principles"
/>
<ListElement
link="./docs/concepts/eventstore"
type={ICONTYPE.STORAGE}
title="Eventstore"
description="Learn how ZITADEL stores data"
/>
<ListElement
link="./docs/concepts/architecture"
link="./docs/concepts/architecture/software"
type={ICONTYPE.ARCHITECTURE}
title="Architecture"
description="Sotware-, Cluster- and Multi Cluster Architecture"

BIN docs/static/img/guides/azure_app.png vendored Normal file (153 KiB, binary file not shown)
BIN docs/static/img/guides/azure_login.png vendored Normal file (444 KiB, binary file not shown)
Eight further binary image files added (20–136 KiB each, file names not shown in this view).

View File

@ -51,6 +51,7 @@ type externalNotFoundOptionData struct {
ExternalIDPUserID string
ExternalIDPUserDisplayName string
ShowUsername bool
ShowUsernameSuffix bool
OrgRegister bool
ExternalEmail string
ExternalEmailVerified bool
@ -274,6 +275,19 @@ func (l *Login) renderExternalNotFoundOption(w http.ResponseWriter, r *http.Requ
human, externalIDP, _ = l.mapExternalUserToLoginUser(orgIAMPolicy, linkingUser, idpConfig)
}
var resourceOwner string
if authReq != nil {
resourceOwner = authReq.RequestedOrgID
}
if resourceOwner == "" {
resourceOwner = authz.GetInstance(r.Context()).DefaultOrganisationID()
}
labelPolicy, err := l.getLabelPolicy(r, resourceOwner)
if err != nil {
l.renderError(w, r, authReq, err)
return
}
data := externalNotFoundOptionData{
baseData: l.getBaseData(r, authReq, "ExternalNotFoundOption", errID, errMessage),
externalNotFoundOptionFormData: externalNotFoundOptionFormData{
@ -292,6 +306,7 @@ func (l *Login) renderExternalNotFoundOption(w http.ResponseWriter, r *http.Requ
ExternalEmail: human.EmailAddress,
ExternalEmailVerified: human.IsEmailVerified,
ShowUsername: orgIAMPolicy.UserLoginMustBeDomain,
ShowUsernameSuffix: !labelPolicy.HideLoginNameSuffix,
OrgRegister: orgIAMPolicy.UserLoginMustBeDomain,
}
if human.Phone != nil {

View File

@ -44,6 +44,7 @@ type externalRegisterData struct {
ExternalIDPUserID string
ExternalIDPUserDisplayName string
ShowUsername bool
ShowUsernameSuffix bool
OrgRegister bool
ExternalEmail string
ExternalEmailVerified bool
@ -121,13 +122,19 @@ func (l *Login) handleExternalUserRegister(w http.ResponseWriter, r *http.Reques
l.renderRegisterOption(w, r, authReq, err)
return
}
labelPolicy, err := l.getLabelPolicy(r, resourceOwner)
if err != nil {
l.renderRegisterOption(w, r, authReq, err)
return
}
user, externalIDP := l.mapTokenToLoginHumanAndExternalIDP(orgIamPolicy, tokens, idpConfig)
if err != nil {
l.renderRegisterOption(w, r, authReq, err)
return
}
if !idpConfig.AutoRegister {
l.renderExternalRegisterOverview(w, r, authReq, orgIamPolicy, user, externalIDP, nil)
l.renderExternalRegisterOverview(w, r, authReq, orgIamPolicy, user, externalIDP, labelPolicy.HideLoginNameSuffix, nil)
return
}
l.registerExternalUser(w, r, authReq, user, externalIDP)
@ -157,7 +164,7 @@ func (l *Login) registerExternalUser(w http.ResponseWriter, r *http.Request, aut
l.renderNextStep(w, r, authReq)
}
func (l *Login) renderExternalRegisterOverview(w http.ResponseWriter, r *http.Request, authReq *domain.AuthRequest, orgIAMPolicy *query.DomainPolicy, human *domain.Human, idp *domain.UserIDPLink, err error) {
func (l *Login) renderExternalRegisterOverview(w http.ResponseWriter, r *http.Request, authReq *domain.AuthRequest, orgIAMPolicy *query.DomainPolicy, human *domain.Human, idp *domain.UserIDPLink, hideLoginNameSuffix bool, err error) {
var errID, errMessage string
if err != nil {
errID, errMessage = l.getErrorMessage(r, err)
@ -180,6 +187,7 @@ func (l *Login) renderExternalRegisterOverview(w http.ResponseWriter, r *http.Re
ExternalEmailVerified: human.IsEmailVerified,
ShowUsername: orgIAMPolicy.UserLoginMustBeDomain,
OrgRegister: orgIAMPolicy.UserLoginMustBeDomain,
ShowUsernameSuffix: !hideLoginNameSuffix,
}
if human.Phone != nil {
data.Phone = human.PhoneNumber

View File

@ -28,3 +28,10 @@ func (l *Login) getLoginPolicy(r *http.Request, orgID string) (*query.LoginPolic
}
return l.query.LoginPolicyByID(r.Context(), false, orgID)
}
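// getLabelPolicy returns the active label policy of the given organization,
// or the instance's default label policy if orgID is empty.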
func (l *Login) getLabelPolicy(r *http.Request, orgID string) (*query.LabelPolicy, error) {
if orgID == "" {
return l.query.DefaultActiveLabelPolicy(r.Context())
}
return l.query.ActiveLabelPolicyByOrg(r.Context(), orgID)
}

View File

@ -37,6 +37,7 @@ type registerData struct {
HasNumber string
HasSymbol string
ShowUsername bool
ShowUsernameSuffix bool
OrgRegister bool
}
@ -149,6 +150,13 @@ func (l *Login) renderRegister(w http.ResponseWriter, r *http.Request, authReque
data.ShowUsername = orgIAMPolicy.UserLoginMustBeDomain
data.OrgRegister = orgIAMPolicy.UserLoginMustBeDomain
labelPolicy, err := l.getLabelPolicy(r, resourceOwner)
if err != nil {
l.renderRegister(w, r, authRequest, formData, err)
return
}
data.ShowUsernameSuffix = !labelPolicy.HideLoginNameSuffix
funcs := map[string]interface{}{
"selectedLanguage": func(l string) bool {
if formData == nil {

View File

@ -39,7 +39,7 @@
<div class="lgn-suffix-wrapper">
<input class="lgn-input lgn-suffix-input" type="text" id="username" name="username"
value="{{ .Username }}" required>
{{if .ShowUsername}}
{{if .ShowUsernameSuffix}}
<span id="default-login-suffix" lgnsuffix class="loginname-suffix">@{{.PrimaryDomain}}</span>
{{end}}
</div>

View File

@ -39,7 +39,7 @@
<div class="lgn-suffix-wrapper">
<input class="lgn-input lgn-suffix-input" type="text" id="username" name="username"
value="{{ .Username }}" required>
{{if .ShowUsername}}
{{if .ShowUsernameSuffix}}
<span id="default-login-suffix" lgnsuffix class="loginname-suffix">@{{.PrimaryDomain}}</span>
{{end}}
</div>

View File

@ -42,7 +42,7 @@
<label class="lgn-label" for="username">{{t "RegistrationUser.UsernameLabel"}}</label>
<div class="lgn-suffix-wrapper">
<input class="lgn-input lgn-suffix-input" type="text" id="username" name="username" autocomplete="email" value="{{ .Email }}" required>
{{if .ShowUsername}}
{{if .ShowUsernameSuffix}}
<span id="default-login-suffix" lgnsuffix class="loginname-suffix">@{{.PrimaryDomain}}</span>
{{end}}
</div>

View File

@ -2,6 +2,7 @@ package command
import (
"context"
"regexp"
"strings"
"github.com/zitadel/zitadel/internal/api/authz"
@ -14,6 +15,10 @@ import (
"github.com/zitadel/zitadel/internal/repository/project"
)
var (
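	// allowDomainRunes matches domains consisting only of ASCII letters, digits, dots and hyphens.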
allowDomainRunes = regexp.MustCompile("^[a-zA-Z0-9\\.\\-]+$")
)
func (c *Commands) AddInstanceDomain(ctx context.Context, instanceDomain string) (*domain.ObjectDetails, error) {
instanceAgg := instance.NewAggregate(authz.GetInstance(ctx).InstanceID())
validation := c.addInstanceDomain(instanceAgg, instanceDomain, false)
@ -84,6 +89,9 @@ func (c *Commands) addInstanceDomain(a *instance.Aggregate, instanceDomain strin
if instanceDomain = strings.TrimSpace(instanceDomain); instanceDomain == "" {
return nil, errors.ThrowInvalidArgument(nil, "INST-28nlD", "Errors.Invalid.Argument")
}
if !allowDomainRunes.MatchString(instanceDomain) {
return nil, errors.ThrowInvalidArgument(nil, "INST-S3v3w", "Errors.Instance.Domain.InvalidCharacter")
}
return func(ctx context.Context, filter preparation.FilterToQueryReducer) ([]eventstore.Command, error) {
domainWriteModel, err := getInstanceDomainWriteModel(ctx, filter, instanceDomain)
if err != nil {

View File

@ -52,6 +52,51 @@ func TestCommandSide_AddInstanceDomain(t *testing.T) {
err: caos_errs.IsErrorInvalidArgument,
},
},
{
name: "invalid domain ', error",
fields: fields{
eventstore: eventstoreExpect(
t,
),
},
args: args{
ctx: context.Background(),
domain: "hodor's-org.localhost",
},
res: res{
err: caos_errs.IsErrorInvalidArgument,
},
},
{
name: "invalid domain umlaut, error",
fields: fields{
eventstore: eventstoreExpect(
t,
),
},
args: args{
ctx: context.Background(),
domain: "bücher.ch",
},
res: res{
err: caos_errs.IsErrorInvalidArgument,
},
},
{
name: "invalid domain other unicode, error",
fields: fields{
eventstore: eventstoreExpect(
t,
),
},
args: args{
ctx: context.Background(),
domain: "🦒.ch",
},
res: res{
err: caos_errs.IsErrorInvalidArgument,
},
},
{
name: "domain already exists, precondition error",
fields: fields{

View File

@ -11,7 +11,7 @@ import (
)
const (
UserMetadataProjectionTable = "projections.user_metadata"
UserMetadataProjectionTable = "projections.user_metadata2"
UserMetadataColumnUserID = "user_id"
UserMetadataColumnCreationDate = "creation_date"
@ -42,7 +42,7 @@ func newUserMetadataProjection(ctx context.Context, config crdb.StatementHandler
crdb.NewColumn(UserMetadataColumnKey, crdb.ColumnTypeText),
crdb.NewColumn(UserMetadataColumnValue, crdb.ColumnTypeBytes, crdb.Nullable()),
},
crdb.NewPrimaryKey(UserMetadataColumnInstanceID, UserMetadataColumnUserID),
crdb.NewPrimaryKey(UserMetadataColumnInstanceID, UserMetadataColumnUserID, UserMetadataColumnKey),
crdb.WithIndex(crdb.NewIndex("ro_idx", []string{UserGrantResourceOwner})),
),
)

View File

@ -41,7 +41,7 @@ func TestUserMetadataProjection_reduces(t *testing.T) {
executer: &testExecuter{
executions: []execution{
{
expectedStmt: "UPSERT INTO projections.user_metadata (user_id, resource_owner, instance_id, creation_date, change_date, sequence, key, value) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)",
expectedStmt: "UPSERT INTO projections.user_metadata2 (user_id, resource_owner, instance_id, creation_date, change_date, sequence, key, value) VALUES ($1, $2, $3, $4, $5, $6, $7, $8)",
expectedArgs: []interface{}{
"agg-id",
"ro-id",
@ -77,7 +77,7 @@ func TestUserMetadataProjection_reduces(t *testing.T) {
executer: &testExecuter{
executions: []execution{
{
expectedStmt: "DELETE FROM projections.user_metadata WHERE (user_id = $1) AND (key = $2)",
expectedStmt: "DELETE FROM projections.user_metadata2 WHERE (user_id = $1) AND (key = $2)",
expectedArgs: []interface{}{
"agg-id",
"key",
@ -105,7 +105,7 @@ func TestUserMetadataProjection_reduces(t *testing.T) {
executer: &testExecuter{
executions: []execution{
{
expectedStmt: "DELETE FROM projections.user_metadata WHERE (user_id = $1)",
expectedStmt: "DELETE FROM projections.user_metadata2 WHERE (user_id = $1)",
expectedArgs: []interface{}{
"agg-id",
},
@ -132,7 +132,7 @@ func TestUserMetadataProjection_reduces(t *testing.T) {
executer: &testExecuter{
executions: []execution{
{
expectedStmt: "DELETE FROM projections.user_metadata WHERE (user_id = $1)",
expectedStmt: "DELETE FROM projections.user_metadata2 WHERE (user_id = $1)",
expectedArgs: []interface{}{
"agg-id",
},

View File

@ -12,13 +12,13 @@ import (
)
var (
userMetadataQuery = `SELECT projections.user_metadata.creation_date,` +
` projections.user_metadata.change_date,` +
` projections.user_metadata.resource_owner,` +
` projections.user_metadata.sequence,` +
` projections.user_metadata.key,` +
` projections.user_metadata.value` +
` FROM projections.user_metadata`
userMetadataQuery = `SELECT projections.user_metadata2.creation_date,` +
` projections.user_metadata2.change_date,` +
` projections.user_metadata2.resource_owner,` +
` projections.user_metadata2.sequence,` +
` projections.user_metadata2.key,` +
` projections.user_metadata2.value` +
` FROM projections.user_metadata2`
userMetadataCols = []string{
"creation_date",
"change_date",
@ -27,14 +27,14 @@ var (
"key",
"value",
}
userMetadataListQuery = `SELECT projections.user_metadata.creation_date,` +
` projections.user_metadata.change_date,` +
` projections.user_metadata.resource_owner,` +
` projections.user_metadata.sequence,` +
` projections.user_metadata.key,` +
` projections.user_metadata.value,` +
userMetadataListQuery = `SELECT projections.user_metadata2.creation_date,` +
` projections.user_metadata2.change_date,` +
` projections.user_metadata2.resource_owner,` +
` projections.user_metadata2.sequence,` +
` projections.user_metadata2.key,` +
` projections.user_metadata2.value,` +
` COUNT(*) OVER ()` +
` FROM projections.user_metadata`
` FROM projections.user_metadata2`
userMetadataListCols = []string{
"creation_date",
"change_date",

View File

@ -172,6 +172,7 @@ Errors:
IdpIsNotOIDC: IDP Konfiguration ist nicht vom Typ OIDC
Domain:
AlreadyExists: Domäne existiert bereits
InvalidCharacter: Nur alphanumerische Zeichen, . und - sind für eine Domäne erlaubt
IDP:
InvalidSearchQuery: Ungültiger Suchparameter
LoginPolicy:

View File

@ -172,6 +172,7 @@ Errors:
IdpIsNotOIDC: IDP configuration is not of type oidc
Domain:
AlreadyExists: Domain already exists
InvalidCharacter: Only alphanumeric characters, . and - are allowed for a domain
IDP:
InvalidSearchQuery: Invalid search query
LoginPolicy:

View File

@ -172,6 +172,7 @@ Errors:
IdpIsNotOIDC: La configuration IDP n'est pas de type oidc
Domain:
AlreadyExists: Le domaine existe déjà
InvalidCharacter: Seuls les caractères alphanumériques, . et - sont autorisés pour un domaine
IDP:
InvalidSearchQuery: Paramètre de recherche non valide
LoginPolicy:

View File

@ -174,6 +174,7 @@ Errors:
AlreadyExists: Il dominio già esistente
IDP:
InvalidSearchQuery: Parametro di ricerca non valido
InvalidCharacter: Per un dominio sono ammessi solo caratteri alfanumerici, . e -
LoginPolicy:
NotFound: Impostazioni di accesso non trovati
Invalid: Impostazioni di accesso non sono validi