
Development

Execute the commands in the following chapters from the ZITADEL root directory.

Prerequisites

Minimum resources (a quick way to check them is shown below):

  • CPUs: 2
  • Memory: 4 GB
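
If Docker runs inside a VM (for example with Docker Desktop), you can check the resources available to the daemon with docker info:

# Show the CPUs and memory available to the Docker daemon
docker info --format 'CPUs: {{.NCPU}}'
docker info --format 'Memory: {{.MemTotal}} bytes'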

Installing goreleaser

If installing goreleaser with apt fails with the error "Failed to fetch https://repo.goreleaser.com/apt/Packages Certificate verification failed: The certificate is NOT trusted. The certificate chain uses expired certificate.", ensure that ca-certificates are installed:

sudo apt install ca-certificates
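
With the certificates in place, goreleaser itself can be installed from its apt repository. The commands below follow the repository setup published in the goreleaser installation docs; verify them against the official documentation in case they have changed:

# Add the goreleaser apt repository and install the package
echo 'deb [trusted=yes] https://repo.goreleaser.com/apt/ /' | sudo tee /etc/apt/sources.list.d/goreleaser.list
sudo apt update
sudo apt install goreleaser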

env variables

You can use the default vars provided in the .env file or create your own and update the paths in the docker compose file.
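
If you create your own env file, one way to point docker compose at it is the --env-file option; the file name my-local.env below is only an example:

# Use a custom env file for variable substitution in the compose file
docker compose --env-file ./my-local.env -f ./build/local/docker-compose-local.yml --profile storage up -d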

Local Build

Simply run goreleaser to build locally. This automatically generates all the required files, such as the Angular console and the gRPC stubs.

goreleaser build --snapshot --rm-dist --single-target
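
The snapshot build writes its artifacts to the dist directory (the goreleaser default). The platform-specific folder and binary name depend on your OS and architecture, so the listing below is only illustrative:

# Locate the locally built binary inside the platform-specific output folder
ls dist/
find dist -maxdepth 2 -type f -name 'zitadel*'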

Production Build & Release

Simply use goreleaser:

goreleaser release

Run

Start storage

Use this if you only want to start the storage services needed by ZITADEL. These services are started in the background (detached).

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 \
&& docker compose -f ./build/local/docker-compose-local.yml --profile storage up -d

On Apple Silicon: restart the affected container (in a second terminal: docker restart zitadel_<SERVICE_NAME>_1) if the db service logs "qemu: uncaught target signal 11 (Segmentation fault) - core dumped" or if db-migrations writes no logs.
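
For example, assuming the compose project is named zitadel and the database service is the one affected (check docker ps for the actual names on your machine), the restart could look like this:

# Find the exact container name, then restart the affected one (the name below is an example)
docker ps --format '{{.Names}}'
docker restart zitadel_db_1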

Initialize the console

This step sets the client id of the console and is only needed for local development. If you don't work with a local backend, you have to set the client id manually.

You must initialise the data first.

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 \
&& docker compose -f ./build/local/docker-compose-local.yml --profile console-stub up --exit-code-from client-id

The command exits as soon as the client id is set.
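
Because of --exit-code-from client-id, docker compose exits with the status of the client-id service, so you can verify the step right after it finishes; zero means the client id was set:

# Run directly after the compose command above; 0 indicates success
echo "client-id exit status: $?"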

Start the Console

The console service is configured for hot reloading. You can also use docker compose for local development.

If you don't use the local backend, you have to configure the environment.json manually.

If you use the local backend, ensure that you have set the correct client id.

Run console in docker compose

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose -f ./build/local/docker-compose-local.yml --profile frontend up

Run backend

Use this if you want to run the backend locally. It's recommended to initialise the data first.

Run backend in docker compose

COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose -f ./build/local/docker-compose-local.yml --profile storage --profile backend up

Run backend locally

Export environment variables
# exports all default env variables
while read -r line; do
    if [[ $line != "#"* ]] && [[ -n $line ]]; then
        export "$line"
    fi
done < build/local/local.env
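
If the env file contains only plain KEY=VALUE lines, which is the assumption here, a shorter alternative is to let the shell auto-export everything it sources:

# Alternative: auto-export every variable assigned while sourcing the file
set -a
source build/local/local.env
set +a
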
Start command for backend
# starts zitadel with default config files
go run cmd/zitadel/main.go -console=false -localDevMode=true -config-files=cmd/zitadel/startup.yaml -config-files=cmd/zitadel/system-defaults.yaml -config-files=cmd/zitadel/authz.yaml start

If you want to run the backend locally and the frontend with docker compose, you have to replace the following variables:

docker compose yaml:

services:
  client-id:
    environment:
      - HOST=backend-run
  grpc-web-gateway:
    environment:
      - BKD_HOST=backend-run

with

services:
  client-id:
    environment:
      - HOST=host.docker.internal
  grpc-web-gateway:
    environment:
      - BKD_HOST=host.docker.internal
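
One way to apply this change without editing the checked-in compose file is an override file that docker compose merges over the original; the file name docker-compose-local-backend.yml is only an example:

# Write an override file (example name) that points the frontend services at the host
cat > ./build/local/docker-compose-local-backend.yml <<'EOF'
services:
  client-id:
    environment:
      - HOST=host.docker.internal
  grpc-web-gateway:
    environment:
      - BKD_HOST=host.docker.internal
EOF

# Pass both files; settings in the later file override the matching ones in the earlier file
docker compose -f ./build/local/docker-compose-local.yml -f ./build/local/docker-compose-local-backend.yml --profile frontend up
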
Setup ZITADEL

The following command runs the setup of ZITADEL with the default config files:

go run cmd/zitadel/main.go -setup-files=cmd/zitadel/setup.yaml -setup-files=cmd/zitadel/system-defaults.yaml -setup-files=cmd/zitadel/authz.yaml setup

Initial login credentials

username: zitadel-admin@caos-ag.zitadel.ch

password: Password1!