feat: Configurable Unique Machine Identification (#3626)

* feat: Configurable Unique Machine Identification

This change fixes "Segfault on AWS App Runner with v2" (#3625).

The change introduces two new dependencies:

* github.com/drone/envsubst, to support AWS ECS, whose metadata endpoint is described by an environment variable
* github.com/jarcoal/jpath, so that only the relevant data from a metadata response is used to identify the machine (see the sketch after this list).
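
For illustration, a rough Go sketch of what the webhook lookup enabled by these dependencies could look like. It uses `os.ExpandEnv` from the standard library as a stand-in for the envsubst dependency and a hypothetical `jsonPathString` helper in place of the JSONPath library, so all names and signatures here are assumptions rather than the actual implementation:

```go
package id

import (
	"errors"
	"io"
	"net/http"
	"os"
)

// webhookID resolves a machine ID from an HTTP metadata endpoint. The URL may
// contain environment variables such as "${ECS_CONTAINER_METADATA_URI}", and
// jsonPath is optional: when empty, the raw response body is used.
func webhookID(rawURL, jsonPath string, headers map[string]string) (string, error) {
	// Stand-in for the envsubst dependency: expand "${var}" references.
	url := os.ExpandEnv(rawURL)

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	for k, v := range headers {
		req.Header.Set(k, v)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if jsonPath == "" {
		return string(body), nil
	}
	return jsonPathString(body, jsonPath)
}

// jsonPathString is a hypothetical helper standing in for the JSONPath
// dependency; the real change delegates the extraction to that library.
func jsonPathString(body []byte, path string) (string, error) {
	return "", errors.New("jsonpath extraction is not implemented in this sketch")
}
```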

The change adds new configuration (see `defaults.yaml`):

* `Machine.Identification` enables configuring how machines are uniquely identified. I'm not sure about the top-level category `Machine`, as I don't have anything else to add to it; happy to hear suggestions for better naming or structure here.
* `Machine.Identification.PrivateId` turns the existing private-IP-based identification on or off. Enabled by default.
* `Machine.Identification.Hostname` turns identification via the OS hostname on or off. Great for most cloud environments, where the hostname tends to uniquely identify the machine. Enabled by default.
* `Machine.Identification.Webhook` configures identification based on the response to an HTTP GET request. Request headers can be configured, a JSONPath can be set for processing the response (no JSON parsing is done if it is not set), and the URL may contain environment variables in the format `"${var}"`. A sketch of how this could look in `defaults.yaml` follows this list.
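
As an illustration only, here is a minimal sketch of how this section of `defaults.yaml` might look; the nested key names, defaults, and the ECS example values are assumptions based on the description above, not a copy of the shipped file:

```yaml
# Sketch only: nested keys and values are assumed, not taken from the real defaults.yaml.
Machine:
  Identification:
    PrivateId:
      Enabled: true
    Hostname:
      Enabled: true
    Webhook:
      Enabled: false
      # The URL may reference environment variables, e.g. on AWS ECS.
      Url: "${ECS_CONTAINER_METADATA_URI_V4}/task"
      # Optional JSONPath; without it the raw response body is used.
      JPath: "$.TaskARN"
      Headers:
        Accept: application/json
```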

The new flow for getting a unique machine ID is (a rough Go sketch follows the list):

1. PrivateIP (if enabled)
2. Hostname (if enabled)
3. Webhook (if enabled, to the configured URL)
4. Give up and error out.
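
A minimal Go sketch of that ordering, with assumed helper names and a simplified boolean config in place of the real structures:

```go
package id

import (
	"errors"
	"os"
)

// Simplified stand-in for the real identification config.
type identificationConfig struct {
	PrivateIP, Hostname, Webhook bool
}

// machineID tries the enabled strategies in order and returns the first hit.
func machineID(cfg identificationConfig) (string, error) {
	if cfg.PrivateIP {
		if ip := privateIP(); ip != "" {
			return ip, nil
		}
	}
	if cfg.Hostname {
		if host, err := os.Hostname(); err == nil && host != "" {
			return host, nil
		}
	}
	if cfg.Webhook {
		if id, err := webhookID(); err == nil && id != "" {
			return id, nil
		}
	}
	return "", errors.New("id: no enabled method could identify this machine")
}

// Placeholders for the real strategies.
func privateIP() string          { return "" }
func webhookID() (string, error) { return "", nil }
```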

It's important that init configures machine identity first; otherwise we could try to get an ID before configuring it. To prevent this from causing difficult-to-debug issues, where for example the default configuration would be used, I've ensured that the application generates an error if the module hasn't been configured and you try to get an ID. A short sketch of that guard follows.
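
A sketch of that guard, assuming hypothetical names rather than the actual ones in the `id` package:

```go
package id

import "errors"

var configured bool

// Configure is expected to run during init, before any ID is requested.
func Configure( /* machine identification settings */ ) {
	configured = true
}

// machineIdentifier fails loudly instead of silently falling back to
// default settings when Configure has not been called yet.
func machineIdentifier() (string, error) {
	if !configured {
		return "", errors.New("id: machine identification has not been configured")
	}
	// ... resolve the identifier via the configured chain ...
	return "", errors.New("id: not implemented in this sketch")
}
```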

Misc changes:

* Spelling and grammatical corrections to the `init.go::New()` long description.
* Spelling corrections to `verify_zitadel.go::newZitadel()`.
* Updated `production.md` and `development.md` based on the new build process. I think the run instructions are also out of date, but I'll leave that for someone else.
* `id.SonyFlakeGenerator` is now a function that sets `id.sonyFlakeGenerator`; this allows us to defer initialization until the configuration has been read (see the sketch after this list).
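
One way to defer building the generator until after configuration has been read is `sync.Once`; the sketch below assumes that pattern and a placeholder machine ID, and may differ from the actual implementation:

```go
package id

import (
	"sync"

	"github.com/sony/sonyflake"
)

var (
	sonyFlakeGenerator *sonyflake.Sonyflake
	once               sync.Once
)

// SonyFlakeGenerator builds the generator lazily, so it only runs after the
// machine identification configuration has been read.
func SonyFlakeGenerator() *sonyflake.Sonyflake {
	once.Do(func() {
		sonyFlakeGenerator = sonyflake.NewSonyflake(sonyflake.Settings{
			// MachineID would delegate to the configured identification
			// chain; this constant is only a placeholder for the sketch.
			MachineID: func() (uint16, error) { return 1, nil },
		})
	})
	return sonyFlakeGenerator
}
```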

* Update internal/id/config.go

Co-authored-by: Alexei-Barnes <82444470+Alexei-Barnes@users.noreply.github.com>

* Fix authored by @livio-a for tests

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

@@ -5,38 +5,40 @@ You should stay in the ZITADEL root directory to execute the statements in the f
## Prerequisite
- Buildkit compatible docker installation
- [go](https://go.dev/doc/install)
- [goreleaser](https://goreleaser.com/install/)
Minimum resources:
- CPUs: 2
- Memory: 4 GB
### Installing goreleaser
If you get the error `Failed to fetch https://repo.goreleaser.com/apt/Packages Certificate verification failed: The certificate is NOT trusted. The certificate chain uses expired certificate.` while installing goreleaser with `apt`, then ensure that ca-certificates are installed:
```sh
sudo apt install ca-certificates
```
### env variables
You can use the default vars provided in [this .env-file](../build/local/local.env) or create your own and update the paths in the [docker compose file](../build/local/docker-compose-local.yml).
## Generate required files
## Local Build
This part is relevant if you start the backend or console without docker compose.
Simply run goreleaser to build locally. This will generate all the required files, such as angular and grpc automatically.
### Console
This command generates the grpc stub for console into the folder console/src/app/proto/generated for local development.
```bash
DOCKER_BUILDKIT=1 docker build -f build/zitadel/Dockerfile . -t zitadel:gen-fe --target js-client -o .
```
```sh
goreleaser build --snapshot --rm-dist --single-target
```
### Start the Backend
## Production Build & Release
With these commands you can generate the stub for the backend.
Simply use goreleaser:
```bash
# generates grpc stub
DOCKER_BUILDKIT=1 docker build -f build/zitadel/Dockerfile . -t zitadel:gen-be --target go-client -o .
# generates keys for cryptography
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 \
&& docker compose -f ./build/local/docker-compose-local.yml --profile backend-stub up --exit-code-from keys
```
```sh
goreleaser release
```
## Run


@@ -1,7 +1,13 @@
# Production Build
This can also be run locally!
To create a production build to run locally, create a snapshot release with goreleaser:
```bash
DOCKER_BUILDKIT=1 docker build -f build/dockerfile . -t zitadel:local --build-arg ENV=prod
```
```sh
goreleaser release --snapshot --rm-dist
```
This can be released to production (if you have credentials configured) using goreleaser as well:
```sh
goreleaser release
```