feat: Merge master (#1260)

* chore(site): dependabot deps (#1148)

* chore(deps): bump highlight.js from 10.4.1 to 10.5.0 in /site (#1143)

Bumps [highlight.js](https://github.com/highlightjs/highlight.js) from 10.4.1 to 10.5.0.
- [Release notes](https://github.com/highlightjs/highlight.js/releases)
- [Changelog](https://github.com/highlightjs/highlight.js/blob/master/CHANGES.md)
- [Commits](https://github.com/highlightjs/highlight.js/compare/10.4.1...10.5.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/plugin-transform-runtime in /site (#1144)

Bumps [@babel/plugin-transform-runtime](https://github.com/babel/babel/tree/HEAD/packages/babel-plugin-transform-runtime) from 7.12.1 to 7.12.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.10/packages/babel-plugin-transform-runtime)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump sirv from 1.0.7 to 1.0.10 in /site (#1145)

Bumps [sirv](https://github.com/lukeed/sirv) from 1.0.7 to 1.0.10.
- [Release notes](https://github.com/lukeed/sirv/releases)
- [Commits](https://github.com/lukeed/sirv/compare/v1.0.7...v1.0.10)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump rollup from 2.34.0 to 2.35.1 in /site (#1142)

Bumps [rollup](https://github.com/rollup/rollup) from 2.34.0 to 2.35.1.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v2.34.0...v2.35.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-node-resolve in /site (#1141)

Bumps [@rollup/plugin-node-resolve](https://github.com/rollup/plugins) from 10.0.0 to 11.0.1.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/node-resolve-v10.0.0...commonjs-v11.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump marked from 1.2.5 to 1.2.7 in /site (#1140)

Bumps [marked](https://github.com/markedjs/marked) from 1.2.5 to 1.2.7.
- [Release notes](https://github.com/markedjs/marked/releases)
- [Changelog](https://github.com/markedjs/marked/blob/master/release.config.js)
- [Commits](https://github.com/markedjs/marked/compare/v1.2.5...v1.2.7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/core from 7.12.9 to 7.12.10 in /site (#1139)

Bumps [@babel/core](https://github.com/babel/babel/tree/HEAD/packages/babel-core) from 7.12.9 to 7.12.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.10/packages/babel-core)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump rollup-plugin-svelte from 6.1.1 to 7.0.0 in /site (#1138)

Bumps [rollup-plugin-svelte](https://github.com/sveltejs/rollup-plugin-svelte) from 6.1.1 to 7.0.0.
- [Release notes](https://github.com/sveltejs/rollup-plugin-svelte/releases)
- [Changelog](https://github.com/sveltejs/rollup-plugin-svelte/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sveltejs/rollup-plugin-svelte/compare/v6.1.1...v7.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/preset-env from 7.12.1 to 7.12.11 in /site (#1137)

Bumps [@babel/preset-env](https://github.com/babel/babel/tree/HEAD/packages/babel-preset-env) from 7.12.1 to 7.12.11.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.11/packages/babel-preset-env)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* downgrade svelte plugin

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(console): dependabot deps (#1147)

* chore(deps-dev): bump @types/node from 14.14.13 to 14.14.19 in /console (#1146)

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.13 to 14.14.19.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump ts-protoc-gen from 0.13.0 to 0.14.0 in /console (#1129)

Bumps [ts-protoc-gen](https://github.com/improbable-eng/ts-protoc-gen) from 0.13.0 to 0.14.0.
- [Release notes](https://github.com/improbable-eng/ts-protoc-gen/releases)
- [Changelog](https://github.com/improbable-eng/ts-protoc-gen/blob/master/CHANGELOG.md)
- [Commits](https://github.com/improbable-eng/ts-protoc-gen/compare/0.13.0...0.14.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/language-service in /console (#1128)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.0.4 to 11.0.5.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.0.5/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.0.4 to 11.0.5 in /console (#1127)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.0.4 to 11.0.5.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.0.4...v11.0.5)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1126)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1100.4 to 0.1100.5.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* audit

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* feat: e-mail templates (#1158)

* View definition added

* Get templates and texts from the database.

* Fill in texts in templates

* Fill in texts in templates

* Client API added

* Weekly backup

* Weekly backup

* Daily backup

* Weekly backup

* Tests added

* Corrections from merge branch

* Fixes from pull request review

* chore(console): dependencies (#1189)

* chore(deps-dev): bump @angular/language-service in /console (#1187)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.0.5 to 11.0.9.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.0.9/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump google-proto-files from 2.3.0 to 2.4.0 in /console (#1186)

Bumps [google-proto-files](https://github.com/googleapis/nodejs-proto-files) from 2.3.0 to 2.4.0.
- [Release notes](https://github.com/googleapis/nodejs-proto-files/releases)
- [Changelog](https://github.com/googleapis/nodejs-proto-files/blob/master/CHANGELOG.md)
- [Commits](https://github.com/googleapis/nodejs-proto-files/compare/v2.3.0...v2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @types/node from 14.14.19 to 14.14.21 in /console (#1185)

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.19 to 14.14.21.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.0.5 to 11.0.7 in /console (#1184)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.0.5 to 11.0.7.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.0.5...v11.0.7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump karma from 5.2.3 to 6.0.0 in /console (#1183)

Bumps [karma](https://github.com/karma-runner/karma) from 5.2.3 to 6.0.0.
- [Release notes](https://github.com/karma-runner/karma/releases)
- [Changelog](https://github.com/karma-runner/karma/blob/master/CHANGELOG.md)
- [Commits](https://github.com/karma-runner/karma/compare/v5.2.3...v6.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1182)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1100.5 to 0.1100.7.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix(console): trigger unauthenticated dialog only once (#1170)

* fix: trigger dialog once

* remove log

* typed trigger

* chore(console): dependencies (#1205)

* chore(deps-dev): bump stylelint from 13.8.0 to 13.9.0 in /console (#1204)

Bumps [stylelint](https://github.com/stylelint/stylelint) from 13.8.0 to 13.9.0.
- [Release notes](https://github.com/stylelint/stylelint/releases)
- [Changelog](https://github.com/stylelint/stylelint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/stylelint/stylelint/compare/13.8.0...13.9.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/language-service in /console (#1203)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.0.9 to 11.1.0.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.1.0/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump karma from 6.0.0 to 6.0.1 in /console (#1202)

Bumps [karma](https://github.com/karma-runner/karma) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/karma-runner/karma/releases)
- [Changelog](https://github.com/karma-runner/karma/blob/master/CHANGELOG.md)
- [Commits](https://github.com/karma-runner/karma/compare/v6.0.0...v6.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.0.7 to 11.1.1 in /console (#1201)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.0.7 to 11.1.1.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.0.7...v11.1.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @types/jasmine from 3.6.2 to 3.6.3 in /console (#1200)

Bumps [@types/jasmine](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/jasmine) from 3.6.2 to 3.6.3.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/jasmine)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* chore(deps-dev): bump @types/node from 14.14.21 to 14.14.22 in /console (#1199)

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.21 to 14.14.22.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1198)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1100.7 to 0.1101.1.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* chore(deps): bump angularx-qrcode from 10.0.11 to 11.0.0 in /console (#1197)

Bumps [angularx-qrcode](https://github.com/cordobo/angularx-qrcode) from 10.0.11 to 11.0.0.
- [Release notes](https://github.com/cordobo/angularx-qrcode/releases)
- [Commits](https://github.com/cordobo/angularx-qrcode/compare/10.0.11...11.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix pack lock

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: handle sequence correctly in subscription (#1209)

* fix: correct master after merges again (#1230)

* chore(docs): correct `iss` claim of jwt profile (#1229)

* chore(docs): correct `iss` claim of jwt profile

* fix: correct master after merges again (#1230)

* feat(login): new palette based styles (#1149)

* chore(deps-dev): bump rollup from 2.33.2 to 2.34.0 in /site (#1040)

Bumps [rollup](https://github.com/rollup/rollup) from 2.33.2 to 2.34.0.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v2.33.2...v2.34.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump svelte-i18n from 3.2.5 to 3.3.0 in /site (#1039)

Bumps [svelte-i18n](https://github.com/kaisermann/svelte-i18n) from 3.2.5 to 3.3.0.
- [Release notes](https://github.com/kaisermann/svelte-i18n/releases)
- [Changelog](https://github.com/kaisermann/svelte-i18n/blob/main/CHANGELOG.md)
- [Commits](https://github.com/kaisermann/svelte-i18n/compare/v3.2.5...v3.3.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-url from 5.0.1 to 6.0.0 in /site (#1038)

Bumps [@rollup/plugin-url](https://github.com/rollup/plugins) from 5.0.1 to 6.0.0.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/url-v5.0.1...url-v6.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump svelte from 3.29.7 to 3.30.1 in /site (#1037)

Bumps [svelte](https://github.com/sveltejs/svelte) from 3.29.7 to 3.30.1.
- [Release notes](https://github.com/sveltejs/svelte/releases)
- [Changelog](https://github.com/sveltejs/svelte/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sveltejs/svelte/compare/v3.29.7...v3.30.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump marked from 1.2.4 to 1.2.5 in /site (#1036)

Bumps [marked](https://github.com/markedjs/marked) from 1.2.4 to 1.2.5.
- [Release notes](https://github.com/markedjs/marked/releases)
- [Changelog](https://github.com/markedjs/marked/blob/master/release.config.js)
- [Commits](https://github.com/markedjs/marked/compare/v1.2.4...v1.2.5)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/core from 7.12.3 to 7.12.9 in /site (#1035)

Bumps [@babel/core](https://github.com/babel/babel/tree/HEAD/packages/babel-core) from 7.12.3 to 7.12.9.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.9/packages/babel-core)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump rollup-plugin-svelte from 6.1.1 to 7.0.0 in /site (#1034)

Bumps [rollup-plugin-svelte](https://github.com/sveltejs/rollup-plugin-svelte) from 6.1.1 to 7.0.0.
- [Release notes](https://github.com/sveltejs/rollup-plugin-svelte/releases)
- [Changelog](https://github.com/sveltejs/rollup-plugin-svelte/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sveltejs/rollup-plugin-svelte/compare/v6.1.1...v7.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-commonjs in /site (#1033)

Bumps [@rollup/plugin-commonjs](https://github.com/rollup/plugins) from 15.1.0 to 17.0.0.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/commonjs-v15.1.0...commonjs-v17.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-node-resolve in /site (#1032)

Bumps [@rollup/plugin-node-resolve](https://github.com/rollup/plugins) from 10.0.0 to 11.0.0.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/node-resolve-v10.0.0...commonjs-v11.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/preset-env from 7.12.1 to 7.12.7 in /site (#1031)

Bumps [@babel/preset-env](https://github.com/babel/babel/tree/HEAD/packages/babel-preset-env) from 7.12.1 to 7.12.7.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.7/packages/babel-preset-env)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* go

* bundle files, lgn-color, legacy theme

* remove old references

* light dark context, button styles, zitadel brand

* button theme, edit templates

* typography theme mixins

* input styles, container, extend light dark palette

* footer, palette, container

* container, label, assets, header

* action container, input, typography label, adapt button theme

* a and footer styles, adapt palette

* user log profile, resourcetempurl

* postinstall againnn

* wrochage

* rm local grpc

* button elevation, helper for components

* radio

* radio button mixins, bundle

* qr code styles, secret clipboard, icon pack

* stroked buttons, icon buttons, header action, typography

* fix password policy styles

* account selection

* account selection, lgn avatar

* mocks

* template fixes, animations scss

* checkbox, register temp

* checkbox appr

* fix checkbox, remove input interference

* select theme

* avatar script, user selection, password policy validation fix

* fix formfield state for register and change pwd

* footer, main style, qr code fix, mfa type fix, account sel, checkbox

* footer tos, user select

* reverse buttons for initial submit action

* theme script, themed error messages, header img source

* content wrapper, i18n, mobile

* emptyline

* idp mixins, fix unstyled html

* register container

* register layout, list themes, policy theme, register org

* massive asset cleanup

* fix source path, add missing icon, fix complexity refs, prefix

* remove material icons, unused assets, fix icon font

* move icon pack

* avatar, contrast theme, error fix

* zitadel css map

* revert go mod

* fix mfa verify actions

* add idp styles

* fix google colors, idp styles

* fix: bugs

* fix register options, google

* fix script, mobile layout

* precompile font selection

* go mod tidy

* assets and cleanup

* input suffix, fix alignment, actions, add progress bar themes

* progress bar mixins, layout fixes

* remove test from loginname

* cleanup comments, scripts

* clear comments

* fix external back button

* fix mfa alignment

* fix actions layout, on dom change listener for suffix

* free tier change, success label

* fix: button font line-height

* remove tabindex

* remove comment

* remove comment

* Update internal/ui/login/handler/password_handler.go

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Maximilian Peintner <csaq7175@uibk.ac.at>
Co-authored-by: Livio Amstutz <livio.a@gmail.com>

* chore(console): dependencies (#1233)

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1214)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1101.1 to 0.1101.2.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump karma from 6.0.1 to 6.0.3 in /console (#1215)

Bumps [karma](https://github.com/karma-runner/karma) from 6.0.1 to 6.0.3.
- [Release notes](https://github.com/karma-runner/karma/releases)
- [Changelog](https://github.com/karma-runner/karma/blob/master/CHANGELOG.md)
- [Commits](https://github.com/karma-runner/karma/compare/v6.0.1...v6.0.3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/language-service in /console (#1216)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.1.0 to 11.1.1.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.1.1/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.1.1 to 11.1.2 in /console (#1217)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.1.1 to 11.1.2.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.1.1...v11.1.2)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* lock

* site deps

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: get email texts with default language (#1238)

* fix(login): mail verification (#1237)

* fix: mail verification

* not block, stroked

* fix: issues of new login ui (#1241)

* fix: i18n of register

* fix: autofocus

* feat(operator): zitadel and database operator (#1208)

* feat(operator): add base for zitadel operator

* fix(operator): changed pipeline to release operator

* fix(operator): fmt with only one parameter

* fix(operator): corrected workflow job name

* fix(zitadelctl): added restore and backuplist command

* fix(zitadelctl): scale for restore

* chore(container): use scratch for deploy container

* fix(zitadelctl): limit image to scratch

* fix(migration): added migration scripts for newer version

* fix(operator): changed handling of kubeconfig in operator logic

* fix(operator): changed handling of secrets in operator logic

* fix(operator): use new version of zitadel

* fix(operator): added path for migrations

* fix(operator): delete doublets of migration scripts

* fix(operator): delete subpaths and integrate logic into init container

* fix(operator): corrected path in dockerfile for local migrations

* fix(operator): added migrations for cockroachdb-secure

* fix(operator): delete logic for ambassador module

* fix(operator): added read and write secret commands

* fix(operator): correct and align operator pipeline with zitadel pipeline

* fix(operator): correct yaml error in operator pipeline

* fix(operator): correct action name in operator pipeline

* fix(operator): correct case-sensitive filename in operator pipeline

* fix(operator): upload artifacts from buildx output

* fix(operator): corrected attribute spelling error

* fix(operator): combined jobs for operator binary and image

* fix(operator): added missing comma in operator pipeline

* fix(operator): added codecov for operator image

* fix(operator): added codecov for operator image

* fix(testing): code changes for testing and several unit-tests (#1009)

* fix(operator): usage of interface of kubernetes client for testing and several unit-tests

* fix(operator): several unit-tests

* fix(operator): several unit-tests

* fix(operator): changed order for the operator logic

* fix(operator): added version of zitadelctl from semantic release

* fix(operator): corrected function call with version of zitadelctl

* fix(operator): corrected function call with version of zitadelctl

* fix(operator): add check output to operator release pipeline

* fix(operator): set --short length everywhere to 12

* fix(operator): zitadel setup in job instead of exec with several unit tests

* fix(operator): fixes to combine newest zitadel and testing branch

* fix(operator): corrected path in Dockerfile

* fix(operator): fixed unit-test that was ignored during changes

* fix(operator): fixed unit-test that was ignored during changes

* fix(operator): corrected Dockerfile to correctly use env variable

* fix(operator): quickfix takeoff deployment

* fix(operator): corrected the clusterrolename in the applied artifacts

* fix: update secure migrations

* fix(operator): migrations (#1057)

* fix(operator): copied migrations from orbos repository

* fix(operator): newest migrations

* chore: use cockroach-secure

* fix: rename migration

* fix: remove insecure cockroach migrations

Co-authored-by: Stefan Benz <stefan@caos.ch>

* fix: finalize labels

* fix(operator): cli logging concurrent and fix deployment of operator during restore

* fix: finalize labels and cli commands

* fix: restore

* chore: cockroachdb is always secure

* chore: use orbos consistent-labels latest commit

* test: make tests compatible with new labels

* fix: default to sa token for start command

* fix: use cockroachdb v12.02

* fix: don't delete flyway user

* test: fix migration test

* fix: use correct table qualifiers

* fix: don't alter sequence ownership

* fix: upgrade flyway

* fix: change ownership of all dbs and tables to admin user

* fix: change defaultdb user

* fix: treat clientid status codes >= 400 as errors

* fix: reconcile specified ZITADEL version, not binary version

* fix: add ca-certs

* fix: use latest orbos code

* fix: use orbos with fixed race condition

* fix: use latest ORBOS code

* fix: use latest ORBOS code

* fix: make migration and scaling around restoring work

* fix(operator): move zitadel operator

* chore(migrations): include owner change migration

* feat(db): add code base for database operator

* fix(db): change used image registry for database operator

* fix(db): generated mock

* fix(db): add accidentally ignored file

* fix(db): add cockroachdb backup image to pipeline

* fix(db): correct pipeline and image versions

* fix(db): correct version of used orbos

* fix(db): correct database import

* fix(db): go mod tidy

* fix(db): use new version for orbos

* fix(migrations): include migrations into zitadelctl binary (#1211)

* fix(db): use statik to integrate migrations into binary

* fix(migrations): corrections unit tests and pipeline for integrated migrations into zitadelctl binary

* fix(migrations): correction in dockerfile for pipeline build

* fix(migrations): correction in dockerfile for pipeline build

* fix(migrations): dockerfile changes for cache optimization

* fix(database): correct used part-of label in database operator

* fix(database): correct used selectable label in zitadel operator

* fix(operator): correct labels for user secrets in zitadel operator

* fix(operator): correct labels for service test in zitadel operator

* fix: don't enable database features for user operations (#1227)

* fix: don't enable database features for user operations

* fix: omit database feature for connection info adapter

* fix: use latest orbos version

* fix: update ORBOS (#1240)

Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: Elio Bischof <eliobischof@gmail.com>

* Merge branch 'new-eventstore' into cascades

# Conflicts:
#	internal/auth/repository/auth_request.go
#	internal/auth/repository/eventsourcing/eventstore/auth_request.go
#	internal/management/repository/eventsourcing/eventstore/user_grant.go
#	internal/management/repository/user_grant.go
#	internal/ui/login/handler/external_login_handler.go
#	internal/ui/login/handler/external_register_handler.go
#	internal/ui/login/handler/init_password_handler.go
#	internal/ui/login/handler/register_handler.go
#	internal/user/repository/view/model/notify_user.go
#	internal/v2/command/org_policy_login.go
#	internal/v2/command/project.go
#	internal/v2/command/user.go
#	internal/v2/command/user_human.go
#	internal/v2/command/user_human_externalidp.go
#	internal/v2/command/user_human_init.go
#	internal/v2/command/user_human_password.go
#	internal/v2/command/user_human_webauthn.go
#	internal/v2/domain/next_step.go
#	internal/v2/domain/policy_login.go
#	internal/v2/domain/request.go

* chore: add local migrate_local.go again (#1261)

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michael Waeger <49439088+michaelulrichwaeger@users.noreply.github.com>
Co-authored-by: Livio Amstutz <livio.a@gmail.com>
Co-authored-by: Maximilian Peintner <csaq7175@uibk.ac.at>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: Elio Bischof <eliobischof@gmail.com>
Committed by Fabi on 2021-02-08 16:48:41 +01:00 via GitHub
parent 320679467b, commit db11cf1da3
646 changed files with 34637 additions and 6507 deletions

operator/adapt.go (new file, 113 lines)

@@ -0,0 +1,113 @@
package operator
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
"gopkg.in/yaml.v3"
)
type AdaptFunc func(monitor mntr.Monitor, desired *tree.Tree, current *tree.Tree) (QueryFunc, DestroyFunc, map[string]*secret.Secret, error)
type EnsureFunc func(k8sClient kubernetes.ClientInt) error
type DestroyFunc func(k8sClient kubernetes.ClientInt) error
type QueryFunc func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (EnsureFunc, error)
func Parse(gitClient *git.Client, file string) (*tree.Tree, error) {
if err := gitClient.Clone(); err != nil {
return nil, err
}
tree := &tree.Tree{}
if err := yaml.Unmarshal(gitClient.Read(file), tree); err != nil {
return nil, err
}
return tree, nil
}
func ResourceDestroyToZitadelDestroy(destroyFunc resources.DestroyFunc) DestroyFunc {
return func(k8sClient kubernetes.ClientInt) error {
return destroyFunc(k8sClient)
}
}
func ResourceQueryToZitadelQuery(queryFunc resources.QueryFunc) QueryFunc {
return func(k8sClient kubernetes.ClientInt, _ map[string]interface{}) (EnsureFunc, error) {
ensure, err := queryFunc(k8sClient)
ensureInternal := ResourceEnsureToZitadelEnsure(ensure)
return func(k8sClient kubernetes.ClientInt) error {
return ensureInternal(k8sClient)
}, err
}
}
func ResourceEnsureToZitadelEnsure(ensureFunc resources.EnsureFunc) EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
return ensureFunc(k8sClient)
}
}
func EnsureFuncToQueryFunc(ensure EnsureFunc) QueryFunc {
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (ensureFunc EnsureFunc, err error) {
return ensure, err
}
}
func QueriersToEnsureFunc(monitor mntr.Monitor, infoLogs bool, queriers []QueryFunc, k8sClient kubernetes.ClientInt, queried map[string]interface{}) (EnsureFunc, error) {
if infoLogs {
monitor.Info("querying...")
} else {
monitor.Debug("querying...")
}
ensurers := make([]EnsureFunc, 0)
for _, querier := range queriers {
ensurer, err := querier(k8sClient, queried)
if err != nil {
return nil, errors.Wrap(err, "error while querying")
}
ensurers = append(ensurers, ensurer)
}
if infoLogs {
monitor.Info("queried")
} else {
monitor.Debug("queried")
}
return func(k8sClient kubernetes.ClientInt) error {
if infoLogs {
monitor.Info("ensuring...")
} else {
monitor.Debug("ensuring...")
}
for _, ensurer := range ensurers {
if err := ensurer(k8sClient); err != nil {
return errors.Wrap(err, "error while ensuring")
}
}
if infoLogs {
monitor.Info("ensured")
} else {
monitor.Debug("ensured")
}
return nil
}, nil
}
func DestroyersToDestroyFunc(monitor mntr.Monitor, destroyers []DestroyFunc) DestroyFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("destroying...")
for _, destroyer := range destroyers {
if err := destroyer(k8sClient); err != nil {
return errors.Wrap(err, "error while destroying")
}
}
monitor.Info("destroyed")
return nil
}
}

operator/api/api.go (new file, 83 lines)

@@ -0,0 +1,83 @@
package api
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/common"
"gopkg.in/yaml.v3"
)
const (
zitadelFile = "zitadel.yml"
databaseFile = "database.yml"
)
type PushDesiredFunc func(monitor mntr.Monitor) error
func ExistsZitadelYml(gitClient *git.Client) (bool, error) {
return existsFileInGit(gitClient, zitadelFile)
}
func ReadZitadelYml(gitClient *git.Client) (*tree.Tree, error) {
return readFileInGit(gitClient, zitadelFile)
}
func PushZitadelYml(monitor mntr.Monitor, msg string, gitClient *git.Client, desired *tree.Tree) (err error) {
return pushFileInGit(monitor, msg, gitClient, desired, zitadelFile)
}
func PushZitadelDesiredFunc(gitClient *git.Client, desired *tree.Tree) PushDesiredFunc {
return func(monitor mntr.Monitor) error {
monitor.Info("Writing zitadel desired state")
return PushZitadelYml(monitor, "Zitadel desired state written", gitClient, desired)
}
}
func ExistsDatabaseYml(gitClient *git.Client) (bool, error) {
return existsFileInGit(gitClient, databaseFile)
}
func ReadDatabaseYml(gitClient *git.Client) (*tree.Tree, error) {
return readFileInGit(gitClient, databaseFile)
}
func PushDatabaseYml(monitor mntr.Monitor, msg string, gitClient *git.Client, desired *tree.Tree) (err error) {
return pushFileInGit(monitor, msg, gitClient, desired, databaseFile)
}
func PushDatabaseDesiredFunc(gitClient *git.Client, desired *tree.Tree) PushDesiredFunc {
return func(monitor mntr.Monitor) error {
monitor.Info("Writing database desired state")
return PushDatabaseYml(monitor, "Database desired state written", gitClient, desired)
}
}
func pushFileInGit(monitor mntr.Monitor, msg string, gitClient *git.Client, desired *tree.Tree, path string) (err error) {
monitor.OnChange = func(_ string, fields map[string]string) {
err = gitClient.UpdateRemote(mntr.SprintCommit(msg, fields), git.File{
Path: path,
Content: common.MarshalYAML(desired),
})
mntr.LogMessage(msg, fields)
}
monitor.Changed(msg)
return err
}
func existsFileInGit(gitClient *git.Client, path string) (bool, error) {
of := gitClient.Read(path)
return len(of) > 0, nil
}
func readFileInGit(gitClient *git.Client, path string) (*tree.Tree, error) {
tree := &tree.Tree{}
if err := yaml.Unmarshal(gitClient.Read(path), tree); err != nil {
return nil, err
}
return tree, nil
}

operator/common/yaml.go (new file, 25 lines)

@@ -0,0 +1,25 @@
package common
import (
"bytes"
"gopkg.in/yaml.v3"
)
func MarshalYAML(sth interface{}) []byte {
if sth == nil {
return nil
}
buf := new(bytes.Buffer)
encoder := yaml.NewEncoder(buf)
// close the encoder before returning; truncating afterwards only resets
// the buffer's length and does not affect the already-returned slice
defer func() {
encoder.Close()
buf.Truncate(0)
}()
encoder.SetIndent(2)
if err := encoder.Encode(sth); err != nil {
panic(err)
}
return buf.Bytes()
}


@@ -0,0 +1,71 @@
package backups
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
func GetQueryAndDestroyFuncs(
monitor mntr.Monitor,
desiredTree *tree.Tree,
currentTree *tree.Tree,
name string,
namespace string,
componentLabels *labels.Component,
checkDBReady operator.EnsureFunc,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
version string,
features []string,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
switch desiredTree.Common.Kind {
case "databases.caos.ch/BucketBackup":
return bucket.AdaptFunc(
name,
namespace,
labels.MustForComponent(
labels.MustReplaceAPI(
labels.GetAPIFromComponent(componentLabels),
"BucketBackup",
desiredTree.Common.Version,
),
"backup"),
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(monitor, desiredTree, currentTree)
default:
return nil, nil, nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
func GetBackupList(
monitor mntr.Monitor,
name string,
desiredTree *tree.Tree,
) (
[]string,
error,
) {
switch desiredTree.Common.Kind {
case "databases.caos.ch/BucketBackup":
return bucket.BackupList()(monitor, name, desiredTree)
default:
return nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}


@@ -0,0 +1,230 @@
package bucket
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/orbos/pkg/labels"
secretpkg "github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
coreDB "github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
const (
secretName = "backup-serviceaccountjson"
secretKey = "serviceaccountjson"
)
func AdaptFunc(
name string,
namespace string,
componentLabels *labels.Component,
checkDBReady operator.EnsureFunc,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
version string,
features []string,
) operator.AdaptFunc {
return func(monitor mntr.Monitor, desired *tree.Tree, current *tree.Tree) (queryFunc operator.QueryFunc, destroyFunc operator.DestroyFunc, secrets map[string]*secretpkg.Secret, err error) {
internalMonitor := monitor.WithField("component", "backup")
desiredKind, err := ParseDesiredV0(desired)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
internalMonitor.Verbose()
}
destroyS, err := secret.AdaptFuncToDestroy(namespace, secretName)
if err != nil {
return nil, nil, nil, err
}
queryS, err := secret.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, secretName), map[string]string{secretKey: desiredKind.Spec.ServiceAccountJSON.Value})
if err != nil {
return nil, nil, nil, err
}
_, destroyB, err := backup.AdaptFunc(
internalMonitor,
name,
namespace,
componentLabels,
[]string{},
checkDBReady,
desiredKind.Spec.Bucket,
desiredKind.Spec.Cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
if err != nil {
return nil, nil, nil, err
}
_, destroyR, err := restore.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
[]string{},
desiredKind.Spec.Bucket,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, nil, nil, err
}
_, destroyC, err := clean.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
[]string{},
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, nil, nil, err
}
destroyers := make([]operator.DestroyFunc, 0)
for _, feature := range features {
switch feature {
case backup.Normal, backup.Instant:
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyS),
destroyB,
)
case clean.Instant:
destroyers = append(destroyers,
destroyC,
)
case restore.Instant:
destroyers = append(destroyers,
destroyR,
)
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
currentDB, err := coreDB.ParseQueriedForDatabase(queried)
if err != nil {
return nil, err
}
databases, err := currentDB.GetListDatabasesFunc()(k8sClient)
if err != nil {
databases = []string{}
}
queryB, _, err := backup.AdaptFunc(
internalMonitor,
name,
namespace,
componentLabels,
databases,
checkDBReady,
desiredKind.Spec.Bucket,
desiredKind.Spec.Cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
if err != nil {
return nil, err
}
queryR, _, err := restore.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
databases,
desiredKind.Spec.Bucket,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, err
}
queryC, _, err := clean.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
databases,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, err
}
queriers := make([]operator.QueryFunc, 0)
if len(databases) > 0 {
for _, feature := range features {
switch feature {
case backup.Normal, backup.Instant:
queriers = append(queriers,
operator.ResourceQueryToZitadelQuery(queryS),
queryB,
)
case clean.Instant:
queriers = append(queriers,
queryC,
)
case restore.Instant:
queriers = append(queriers,
queryR,
)
}
}
}
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
getSecretsMap(desiredKind),
nil
}
}


@@ -0,0 +1,372 @@
package bucket
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
"testing"
)
func TestBucket_Secrets(t *testing.T) {
masterkey := "testMk"
features := []string{backup.Normal}
saJson := "testSA"
bucketName := "testBucket2"
cron := "testCron2"
monitor := mntr.Monitor{}
namespace := "testNs2"
kindVersion := "v0"
kind := "BucketBackup"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", kindVersion), "testComponent")
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/" + kind,
Version: kindVersion,
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
allSecrets := map[string]string{
"serviceaccountjson": saJson,
}
_, _, secrets, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
for key, value := range allSecrets {
assert.Contains(t, secrets, key)
assert.Equal(t, value, secrets[key].Value)
}
}
func TestBucket_AdaptBackup(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{backup.Normal}
saJson := "testSA"
bucketName := "testBucket2"
cron := "testCron2"
monitor := mntr.Monitor{}
namespace := "testNs2"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetBackup(client, namespace, k8sLabels, saJson)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(client))
}
func TestBucket_AdaptInstantBackup(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{backup.Instant}
bucketName := "testBucket1"
cron := "testCron"
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
saJson := "testSA"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetInstantBackup(client, namespace, backupName, k8sLabels, saJson)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NotNil(t, ensure)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBucket_AdaptRestore(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{restore.Instant}
bucketName := "testBucket1"
cron := "testCron"
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
saJson := "testSA"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetRestore(client, namespace, backupName, k8sLabels, saJson)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NotNil(t, ensure)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBucket_AdaptClean(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{clean.Instant}
bucketName := "testBucket1"
cron := "testCron"
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
saJson := "testSA"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetClean(client, namespace, backupName)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NotNil(t, ensure)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,136 @@
package backup
import (
"github.com/caos/zitadel/operator"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/cronjob"
"github.com/caos/orbos/pkg/kubernetes/resources/job"
"github.com/caos/orbos/pkg/labels"
corev1 "k8s.io/api/core/v1"
)
const (
defaultMode int32 = 256
certPath = "/cockroach/cockroach-certs"
secretPath = "/secrets/sa.json"
backupPath = "/cockroach"
backupNameEnv = "BACKUP_NAME"
cronJobNamePrefix = "backup-"
internalSecretName = "client-certs"
image = "ghcr.io/caos/zitadel-crbackup"
rootSecretName = "cockroachdb.client.root"
timeout time.Duration = 60
Normal = "backup"
Instant = "instantbackup"
)
func AdaptFunc(
monitor mntr.Monitor,
backupName string,
namespace string,
componentLabels *labels.Component,
databases []string,
checkDBReady operator.EnsureFunc,
bucketName string,
cron string,
secretName string,
secretKey string,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
features []string,
version string,
) (
queryFunc operator.QueryFunc,
destroyFunc operator.DestroyFunc,
err error,
) {
command := getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
)
jobSpecDef := getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
command,
)
destroyers := []operator.DestroyFunc{}
queriers := []operator.QueryFunc{}
cronJobDef := getCronJob(
namespace,
labels.MustForName(componentLabels, GetJobName(backupName)),
cron,
jobSpecDef,
)
destroyCJ, err := cronjob.AdaptFuncToDestroy(cronJobDef.Namespace, cronJobDef.Name)
if err != nil {
return nil, nil, err
}
queryCJ, err := cronjob.AdaptFuncToEnsure(cronJobDef)
if err != nil {
return nil, nil, err
}
jobDef := getJob(
namespace,
labels.MustForName(componentLabels, cronJobNamePrefix+backupName),
jobSpecDef,
)
destroyJ, err := job.AdaptFuncToDestroy(jobDef.Namespace, jobDef.Name)
if err != nil {
return nil, nil, err
}
queryJ, err := job.AdaptFuncToEnsure(jobDef)
if err != nil {
return nil, nil, err
}
for _, feature := range features {
switch feature {
case Normal:
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyCJ),
)
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryCJ),
)
case Instant:
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyJ),
)
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryJ),
operator.EnsureFuncToQueryFunc(getCleanupFunc(monitor, jobDef.Namespace, jobDef.Name)),
)
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
func GetJobName(backupName string) string {
return cronJobNamePrefix + backupName
}


@@ -0,0 +1,307 @@
package backup
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"testing"
)
func TestBackup_AdaptInstantBackup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Instant}
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
bucketName := "testBucket"
cron := "testCron"
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_AdaptInstantBackup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Instant}
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb2"}
bucketName := "testBucket2"
cron := "testCron2"
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_AdaptBackup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Normal}
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
bucketName := "testBucket"
cron := "testCron"
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getCronJob(
namespace,
nameLabels,
cron,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyCronJob(jobDef).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_AdaptBackup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Normal}
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb2"}
bucketName := "testBucket2"
cron := "testCron2"
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getCronJob(
namespace,
nameLabels,
cron,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyCronJob(jobDef).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,25 @@
package backup
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func getCleanupFunc(monitor mntr.Monitor, namespace string, name string) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("waiting for backup to be completed")
if err := k8sClient.WaitUntilJobCompleted(namespace, name, timeout); err != nil {
monitor.Error(errors.Wrap(err, "error while waiting for backup to be completed"))
return err
}
monitor.Info("backup is completed, cleanup")
if err := k8sClient.DeleteJob(namespace, name); err != nil {
monitor.Error(errors.Wrap(err, "error while trying to cleanup backup"))
return err
}
monitor.Info("backup cleanup is completed")
return nil
}
}


@@ -0,0 +1,40 @@
package backup
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/golang/mock/gomock"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Cleanup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test"
namespace := "testNs"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}
func TestBackup_Cleanup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test2"
namespace := "testNs2"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}


@@ -0,0 +1,33 @@
package backup
import "strings"
func getBackupCommand(
timestamp string,
databases []string,
bucketName string,
backupName string,
) string {
backupCommands := make([]string, 0)
if timestamp != "" {
backupCommands = append(backupCommands, "export "+backupNameEnv+"="+timestamp)
} else {
backupCommands = append(backupCommands, "export "+backupNameEnv+"=$(date +%Y-%m-%dT%H:%M:%SZ)")
}
for _, database := range databases {
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/backup.sh",
backupName,
bucketName,
database,
backupPath,
secretPath,
certPath,
"${" + backupNameEnv + "}",
}, " "))
}
return strings.Join(backupCommands, " && ")
}
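As a side note, the `&&` join above means a failing dump for one database aborts all remaining ones. The chaining can be sketched standalone (the constant values below are placeholders for illustration, not necessarily the package's real ones):

```go
package main

import (
	"fmt"
	"strings"
)

// Placeholder values standing in for the package constants.
const (
	backupNameEnv = "BACKUP_NAME"
	backupPath    = "/backups"
	secretPath    = "/secrets/sa.json"
	certPath      = "/cockroach/cockroach-certs"
)

// buildBackupCommand mirrors getBackupCommand: one export statement,
// then one backup.sh invocation per database, all joined with "&&"
// so a failing step aborts the chain.
func buildBackupCommand(timestamp string, databases []string, bucketName, backupName string) string {
	cmds := make([]string, 0, len(databases)+1)
	if timestamp != "" {
		cmds = append(cmds, "export "+backupNameEnv+"="+timestamp)
	} else {
		cmds = append(cmds, "export "+backupNameEnv+"=$(date +%Y-%m-%dT%H:%M:%SZ)")
	}
	for _, db := range databases {
		cmds = append(cmds, strings.Join([]string{
			"/scripts/backup.sh", backupName, bucketName, db,
			backupPath, secretPath, certPath, "${" + backupNameEnv + "}",
		}, " "))
	}
	return strings.Join(cmds, " && ")
}

func main() {
	fmt.Println(buildBackupCommand("ts", []string{"db1", "db2"}, "bucket", "bkp"))
}
```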


@@ -0,0 +1,53 @@
package backup
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Command1(t *testing.T) {
timestamp := ""
databases := []string{}
bucketName := "test"
backupName := "test"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=$(date +%Y-%m-%dT%H:%M:%SZ)"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command2(t *testing.T) {
timestamp := "test"
databases := []string{}
bucketName := "test"
backupName := "test"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=test"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command3(t *testing.T) {
timestamp := ""
databases := []string{"testDb"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=$(date +%Y-%m-%dT%H:%M:%SZ) && /scripts/backup.sh testBackup testBucket testDb " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "}"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command4(t *testing.T) {
timestamp := "test"
databases := []string{"test1", "test2", "test3"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=test && " +
"/scripts/backup.sh testBackup testBucket test1 " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "} && " +
"/scripts/backup.sh testBackup testBucket test2 " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "} && " +
"/scripts/backup.sh testBackup testBucket test3 " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "}"
assert.Equal(t, equals, cmd)
}


@@ -0,0 +1,101 @@
package backup
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
batchv1 "k8s.io/api/batch/v1"
"k8s.io/api/batch/v1beta1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func getCronJob(
namespace string,
nameLabels *labels.Name,
cron string,
jobSpecDef batchv1.JobSpec,
) *v1beta1.CronJob {
return &v1beta1.CronJob{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: v1beta1.CronJobSpec{
Schedule: cron,
ConcurrencyPolicy: v1beta1.ForbidConcurrent,
JobTemplate: v1beta1.JobTemplateSpec{
Spec: jobSpecDef,
},
},
}
}
func getJob(
namespace string,
nameLabels *labels.Name,
jobSpecDef batchv1.JobSpec,
) *batchv1.Job {
return &batchv1.Job{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: jobSpecDef,
}
}
func getJobSpecDef(
nodeselector map[string]string,
tolerations []corev1.Toleration,
secretName string,
secretKey string,
backupName string,
version string,
command string,
) batchv1.JobSpec {
return batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: backupName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
}
}


@@ -0,0 +1,123 @@
package backup
import (
"github.com/caos/zitadel/operator/helpers"
"github.com/stretchr/testify/assert"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
"testing"
)
func TestBackup_JobSpec1(t *testing.T) {
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
command := "test"
secretKey := "testKey"
secretName := "testSecretName"
equals := batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: backupName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
}
assert.Equal(t, equals, getJobSpecDef(nodeselector, tolerations, secretName, secretKey, backupName, version, command))
}
func TestBackup_JobSpec2(t *testing.T) {
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
command := "test2"
secretKey := "testKey2"
secretName := "testSecretName2"
equals := batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: backupName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
}
assert.Equal(t, equals, getJobSpecDef(nodeselector, tolerations, secretName, secretKey, backupName, version, command))
}


@@ -0,0 +1,86 @@
package clean
import (
"github.com/caos/zitadel/operator"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/job"
"github.com/caos/orbos/pkg/labels"
corev1 "k8s.io/api/core/v1"
)
const (
Instant = "clean"
defaultMode = int32(256)
certPath = "/cockroach/cockroach-certs"
secretPath = "/secrets/sa.json"
internalSecretName = "client-certs"
image = "ghcr.io/caos/zitadel-crbackup"
rootSecretName = "cockroachdb.client.root"
jobPrefix = "backup-"
jobSuffix = "-clean"
timeout time.Duration = 60
)
func AdaptFunc(
monitor mntr.Monitor,
backupName string,
namespace string,
componentLabels *labels.Component,
databases []string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
checkDBReady operator.EnsureFunc,
secretName string,
secretKey string,
version string,
) (
queryFunc operator.QueryFunc,
destroyFunc operator.DestroyFunc,
err error,
) {
command := getCommand(databases)
jobDef := getJob(
namespace,
labels.MustForName(componentLabels, GetJobName(backupName)),
nodeselector,
tolerations,
secretName,
secretKey,
version,
command)
destroyJ, err := job.AdaptFuncToDestroy(jobDef.Namespace, jobDef.Name)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyJ),
}
queryJ, err := job.AdaptFuncToEnsure(jobDef)
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryJ),
operator.EnsureFuncToQueryFunc(getCleanupFunc(monitor, jobDef.Namespace, jobDef.Name)),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
func GetJobName(backupName string) string {
return jobPrefix + backupName + jobSuffix
}


@@ -0,0 +1,134 @@
package clean
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"testing"
)
func TestBackup_Adapt1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
databases,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_Adapt2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb1", "testDb2"}
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
databases,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,29 @@
package clean
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func getCleanupFunc(
monitor mntr.Monitor,
namespace string,
jobName string,
) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("waiting for clean to be completed")
if err := k8sClient.WaitUntilJobCompleted(namespace, jobName, timeout); err != nil {
monitor.Error(errors.Wrap(err, "error while waiting for clean to be completed"))
return err
}
monitor.Info("clean is completed, cleanup")
if err := k8sClient.DeleteJob(namespace, jobName); err != nil {
monitor.Error(errors.Wrap(err, "error while trying to cleanup clean"))
return err
}
monitor.Info("clean cleanup is completed")
return nil
}
}


@@ -0,0 +1,40 @@
package clean
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/golang/mock/gomock"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Cleanup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test"
namespace := "testNs"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}
func TestBackup_Cleanup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test2"
namespace := "testNs2"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}


@@ -0,0 +1,32 @@
package clean
import "strings"
func getCommand(
databases []string,
) string {
backupCommands := make([]string, 0)
for _, database := range databases {
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/clean-db.sh",
certPath,
database,
}, " "))
}
for _, database := range databases {
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/clean-user.sh",
certPath,
database,
}, " "))
}
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/clean-migration.sh",
certPath,
}, " "))
return strings.Join(backupCommands, " && ")
}


@@ -0,0 +1,35 @@
package clean
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestClean_Command1(t *testing.T) {
databases := []string{}
cmd := getCommand(databases)
equals := "/scripts/clean-migration.sh " + certPath
assert.Equal(t, equals, cmd)
}
func TestClean_Command2(t *testing.T) {
databases := []string{"test"}
cmd := getCommand(databases)
equals := "/scripts/clean-db.sh " + certPath + " test && /scripts/clean-user.sh " + certPath + " test && /scripts/clean-migration.sh " + certPath
assert.Equal(t, equals, cmd)
}
func TestClean_Command3(t *testing.T) {
databases := []string{"test1", "test2", "test3"}
cmd := getCommand(databases)
equals := "/scripts/clean-db.sh " + certPath + " test1 && /scripts/clean-db.sh " + certPath + " test2 && /scripts/clean-db.sh " + certPath + " test3 && " +
"/scripts/clean-user.sh " + certPath + " test1 && /scripts/clean-user.sh " + certPath + " test2 && /scripts/clean-user.sh " + certPath + " test3 && " +
"/scripts/clean-migration.sh " + certPath
assert.Equal(t, equals, cmd)
}


@@ -0,0 +1,73 @@
package clean
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func getJob(
namespace string,
nameLabels *labels.Name,
nodeselector map[string]string,
tolerations []corev1.Toleration,
secretName string,
secretKey string,
version string,
command string,
) *batchv1.Job {
return &batchv1.Job{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
NodeSelector: nodeselector,
Tolerations: tolerations,
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{{
Name: nameLabels.Name(),
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
}
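For readers decoding `defaultMode = int32(256)` in the volume definitions above: 256 decimal is 0400 octal, i.e. owner read-only file permissions on the mounted root-certificate secret. A quick sanity check:

```go
package main

import "fmt"

func main() {
	// 0400 (octal, owner read-only) equals 256 decimal — the value
	// passed as DefaultMode for the client-certs secret volume.
	fmt.Println(0400 == 256)
}
```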


@@ -0,0 +1,164 @@
package clean
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
"github.com/stretchr/testify/assert"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestBackup_Job1(t *testing.T) {
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
version := "testVersion"
command := "test"
secretKey := "testKey"
secretName := "testSecretName"
jobName := "testJob"
namespace := "testNs"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testOpVersion",
"caos.ch/apiversion": "testVersion",
"caos.ch/kind": "testKind"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testOpVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}
func TestBackup_Job2(t *testing.T) {
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
version := "testVersion2"
command := "test2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := "testJob2"
namespace := "testNs2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testOpVersion2",
"caos.ch/apiversion": "testVersion2",
"caos.ch/kind": "testKind2"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testOpVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}


@@ -0,0 +1,42 @@
package bucket
import (
secret2 "github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec *Spec
}
type Spec struct {
Verbose bool
Cron string `yaml:"cron,omitempty"`
Bucket string `yaml:"bucket,omitempty"`
ServiceAccountJSON *secret2.Secret `yaml:"serviceAccountJSON,omitempty"`
}
func (s *Spec) IsZero() bool {
return (s.ServiceAccountJSON == nil || s.ServiceAccountJSON.IsZero()) &&
!s.Verbose &&
s.Cron == "" &&
s.Bucket == ""
}
func ParseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{
Common: desiredTree.Common,
Spec: &Spec{},
}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
return desiredKind, nil
}


@@ -0,0 +1,129 @@
package bucket
import (
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/stretchr/testify/assert"
"gopkg.in/yaml.v3"
"testing"
)
const (
masterkey = "testMk"
cron = "testCron"
bucketName = "testBucket"
saJson = "testSa"
yamlFile = `kind: databases.caos.ch/BucketBackup
version: v0
spec:
verbose: true
cron: testCron
bucket: testBucket
serviceAccountJSON:
encryption: AES256
encoding: Base64
value: luyAqtopzwLcaIhJj7KhWmbUsA7cQg==
`
yamlFileWithoutSecret = `kind: databases.caos.ch/BucketBackup
version: v0
spec:
verbose: true
cron: testCron
bucket: testBucket
`
yamlEmpty = `kind: databases.caos.ch/BucketBackup
version: v0`
)
var (
desired = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
Encryption: "AES256",
Encoding: "Base64",
},
},
}
desiredWithoutSecret = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
},
}
desiredEmpty = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: false,
Cron: "",
Bucket: "",
ServiceAccountJSON: &secret.Secret{
Value: "",
},
},
}
desiredNil = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
}
)
func marshalYaml(t *testing.T, masterkey string, struc *DesiredV0) []byte {
secret.Masterkey = masterkey
data, err := yaml.Marshal(struc)
assert.NoError(t, err)
return data
}
func unmarshalYaml(t *testing.T, masterkey string, yamlFile []byte) *tree.Tree {
secret.Masterkey = masterkey
desiredTree := &tree.Tree{}
assert.NoError(t, yaml.Unmarshal(yamlFile, desiredTree))
return desiredTree
}
func getDesiredTree(t *testing.T, masterkey string, desired *DesiredV0) *tree.Tree {
return unmarshalYaml(t, masterkey, marshalYaml(t, masterkey, desired))
}
func TestBucket_DesiredParse(t *testing.T) {
assert.Equal(t, yamlFileWithoutSecret, string(marshalYaml(t, masterkey, &desiredWithoutSecret)))
desiredTree := unmarshalYaml(t, masterkey, []byte(yamlFile))
desiredKind, err := ParseDesiredV0(desiredTree)
assert.NoError(t, err)
assert.Equal(t, &desired, desiredKind)
}
func TestBucket_DesiredNotZero(t *testing.T) {
desiredTree := unmarshalYaml(t, masterkey, []byte(yamlFile))
desiredKind, err := ParseDesiredV0(desiredTree)
assert.NoError(t, err)
assert.False(t, desiredKind.Spec.IsZero())
}
func TestBucket_DesiredZero(t *testing.T) {
desiredTree := unmarshalYaml(t, masterkey, []byte(yamlEmpty))
desiredKind, err := ParseDesiredV0(desiredTree)
assert.NoError(t, err)
assert.True(t, desiredKind.Spec.IsZero())
}


@@ -0,0 +1,63 @@
package bucket
import (
"cloud.google.com/go/storage"
"context"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups/core"
"github.com/pkg/errors"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"strings"
)
func BackupList() core.BackupListFunc {
return func(monitor mntr.Monitor, name string, desired *tree.Tree) ([]string, error) {
desiredKind, err := ParseDesiredV0(desired)
if err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
monitor.Verbose()
}
return listFilesWithFilter(desiredKind.Spec.ServiceAccountJSON.Value, desiredKind.Spec.Bucket, name)
}
}
func listFilesWithFilter(serviceAccountJSON string, bucketName, name string) ([]string, error) {
ctx := context.Background()
client, err := storage.NewClient(ctx, option.WithCredentialsJSON([]byte(serviceAccountJSON)))
if err != nil {
return nil, err
}
defer client.Close()
bkt := client.Bucket(bucketName)
names := make([]string, 0)
it := bkt.Objects(ctx, &storage.Query{Prefix: name + "/"})
for {
attrs, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return nil, err
}
parts := strings.Split(attrs.Name, "/")
found := false
for _, existing := range names {
if existing == parts[1] {
found = true
break
}
}
if !found {
names = append(names, parts[1])
}
}
return names, nil
}
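The iterator loop above keeps only the distinct second path segment of each object name (the backup timestamp under the `<name>/` prefix). Isolated as a pure helper (a sketch; the GCS client calls are assumed out of scope), the filter looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// secondSegments extracts the second "/"-separated segment of each object
// name and de-duplicates while preserving first-seen order — the same
// filtering listFilesWithFilter applies to the bucket iterator results.
func secondSegments(objectNames []string) []string {
	seen := map[string]bool{}
	out := make([]string, 0, len(objectNames))
	for _, obj := range objectNames {
		parts := strings.Split(obj, "/")
		if len(parts) < 2 { // guard: assumes "<name>/<timestamp>/..." layout
			continue
		}
		if !seen[parts[1]] {
			seen[parts[1]] = true
			out = append(out, parts[1])
		}
	}
	return out
}

func main() {
	fmt.Println(secondSegments([]string{"bkp/t1/a", "bkp/t1/b", "bkp/t2/a"})) // [t1 t2]
}
```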


@@ -0,0 +1,97 @@
package bucket
import (
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/golang/mock/gomock"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)
func SetQueriedForDatabases(databases []string) map[string]interface{} {
queried := map[string]interface{}{}
core.SetQueriedForDatabaseDBList(queried, databases)
return queried
}
func SetInstantBackup(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
backupName string,
labels map[string]string,
saJson string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespace,
Labels: labels,
},
StringData: map[string]string{secretKey: saJson},
Type: "Opaque",
}).Times(1).Return(nil)
k8sClient.EXPECT().ApplyJob(gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().GetJob(namespace, backup.GetJobName(backupName)).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, backup.GetJobName(backupName)))
k8sClient.EXPECT().WaitUntilJobCompleted(namespace, backup.GetJobName(backupName), gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().DeleteJob(namespace, backup.GetJobName(backupName)).Times(1).Return(nil)
}
func SetBackup(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
labels map[string]string,
saJson string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespace,
Labels: labels,
},
StringData: map[string]string{secretKey: saJson},
Type: "Opaque",
}).Times(1).Return(nil)
k8sClient.EXPECT().ApplyCronJob(gomock.Any()).Times(1).Return(nil)
}
func SetClean(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
backupName string,
) {
k8sClient.EXPECT().ApplyJob(gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().GetJob(namespace, clean.GetJobName(backupName)).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, clean.GetJobName(backupName)))
k8sClient.EXPECT().WaitUntilJobCompleted(namespace, clean.GetJobName(backupName), gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().DeleteJob(namespace, clean.GetJobName(backupName)).Times(1).Return(nil)
}
func SetRestore(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
backupName string,
labels map[string]string,
saJson string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespace,
Labels: labels,
},
StringData: map[string]string{secretKey: saJson},
Type: "Opaque",
}).Times(1).Return(nil)
k8sClient.EXPECT().ApplyJob(gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().GetJob(namespace, restore.GetJobName(backupName)).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, restore.GetJobName(backupName)))
k8sClient.EXPECT().WaitUntilJobCompleted(namespace, restore.GetJobName(backupName), gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().DeleteJob(namespace, restore.GetJobName(backupName)).Times(1).Return(nil)
}


@@ -0,0 +1,95 @@
package restore
import (
"github.com/caos/zitadel/operator"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/job"
"github.com/caos/orbos/pkg/labels"
corev1 "k8s.io/api/core/v1"
)
const (
Instant = "restore"
defaultMode = int32(256) // octal 0400: mounted secret files are owner read-only
certPath = "/cockroach/cockroach-certs"
secretPath = "/secrets/sa.json"
jobPrefix = "backup-"
jobSuffix = "-restore"
image = "ghcr.io/caos/zitadel-crbackup"
internalSecretName = "client-certs"
rootSecretName = "cockroachdb.client.root"
timeout time.Duration = 60
)
func AdaptFunc(
monitor mntr.Monitor,
backupName string,
namespace string,
componentLabels *labels.Component,
databases []string,
bucketName string,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
checkDBReady operator.EnsureFunc,
secretName string,
secretKey string,
version string,
) (
queryFunc operator.QueryFunc,
destroyFunc operator.DestroyFunc,
err error,
) {
jobName := GetJobName(backupName)
command := getCommand(
timestamp,
databases,
bucketName,
backupName,
)
jobdef := getJob(
namespace,
labels.MustForName(componentLabels, GetJobName(backupName)),
nodeselector,
tolerations,
secretName,
secretKey,
version,
command)
destroyJ, err := job.AdaptFuncToDestroy(jobName, namespace)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyJ),
}
queryJ, err := job.AdaptFuncToEnsure(jobdef)
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryJ),
operator.EnsureFuncToQueryFunc(getCleanupFunc(monitor, jobdef.Namespace, jobdef.Name)),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
func GetJobName(backupName string) string {
return jobPrefix + backupName + jobSuffix
}
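The restore job name is just the backup name wrapped in fixed affixes. A standalone sketch of the convention (prefix and suffix copied from the constants above; the `main` wrapper is added only for illustration):

```go
package main

import "fmt"

const (
	jobPrefix = "backup-"
	jobSuffix = "-restore"
)

// getJobName mirrors GetJobName above: wrap the backup name in the
// fixed prefix and suffix used for the restore job.
func getJobName(backupName string) string {
	return jobPrefix + backupName + jobSuffix
}

func main() {
	fmt.Println(getJobName("nightly")) // backup-nightly-restore
}
```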


@@ -0,0 +1,148 @@
package restore
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"testing"
)
func TestRestore_Adapt1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
timestamp := "testTs"
backupName := "testName2"
bucketName := "testBucket2"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
timestamp,
databases,
bucketName,
backupName,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
bucketName,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestRestore_Adapt2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb1", "testDb2"}
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
timestamp := "testTs"
backupName := "testName2"
bucketName := "testBucket2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
timestamp,
databases,
bucketName,
backupName,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
bucketName,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,25 @@
package restore
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func getCleanupFunc(monitor mntr.Monitor, namespace, jobName string) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("waiting for restore to be completed")
if err := k8sClient.WaitUntilJobCompleted(namespace, jobName, timeout); err != nil {
monitor.Error(errors.Wrap(err, "error while waiting for restore to be completed"))
return err
}
monitor.Info("restore is completed, cleanup")
if err := k8sClient.DeleteJob(namespace, jobName); err != nil {
monitor.Error(errors.Wrap(err, "error while trying to cleanup restore"))
return err
}
monitor.Info("restore cleanup is completed")
return nil
}
}


@@ -0,0 +1,40 @@
package restore
import (
"errors"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"testing"
)
func TestRestore_Cleanup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test"
namespace := "testNs"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}
func TestRestore_Cleanup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test2"
namespace := "testNs2"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}


@@ -0,0 +1,28 @@
package restore
import "strings"
func getCommand(
timestamp string,
databases []string,
bucketName string,
backupName string,
) string {
backupCommands := make([]string, 0, len(databases))
for _, database := range databases {
parts := []string{
"/scripts/restore.sh",
bucketName,
backupName,
}
// an empty timestamp would otherwise leave a double space in the command
if timestamp != "" {
parts = append(parts, timestamp)
}
parts = append(parts,
database,
secretPath,
certPath,
)
backupCommands = append(backupCommands, strings.Join(parts, " "))
}
return strings.Join(backupCommands, " && ")
}
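Each database gets its own `/scripts/restore.sh` invocation, chained with `&&` so the first failure aborts the remaining restores. A standalone mirror of the construction (paths copied from the package constants; an empty timestamp is skipped, matching the single-space expectations in the command tests):

```go
package main

import (
	"fmt"
	"strings"
)

const (
	secretPath = "/secrets/sa.json"
	certPath   = "/cockroach/cockroach-certs"
)

// restoreCommand mirrors getCommand above: one restore.sh call per
// database, joined with " && "; an empty timestamp is skipped so the
// command contains no empty argument slot.
func restoreCommand(timestamp string, databases []string, bucketName, backupName string) string {
	cmds := make([]string, 0, len(databases))
	for _, db := range databases {
		parts := []string{"/scripts/restore.sh", bucketName, backupName}
		if timestamp != "" {
			parts = append(parts, timestamp)
		}
		parts = append(parts, db, secretPath, certPath)
		cmds = append(cmds, strings.Join(parts, " "))
	}
	return strings.Join(cmds, " && ")
}

func main() {
	fmt.Println(restoreCommand("ts1", []string{"adminapi", "auth"}, "bucket", "nightly"))
}
```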


@@ -0,0 +1,50 @@
package restore
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestRestore_Command1(t *testing.T) {
timestamp := ""
databases := []string{}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := ""
assert.Equal(t, equals, cmd)
}
func TestRestore_Command2(t *testing.T) {
timestamp := ""
databases := []string{"testDb"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := "/scripts/restore.sh testBucket testBackup testDb /secrets/sa.json /cockroach/cockroach-certs"
assert.Equal(t, equals, cmd)
}
func TestRestore_Command3(t *testing.T) {
timestamp := "test"
databases := []string{"testDb"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := "/scripts/restore.sh testBucket testBackup test testDb /secrets/sa.json /cockroach/cockroach-certs"
assert.Equal(t, equals, cmd)
}
func TestRestore_Command4(t *testing.T) {
timestamp := ""
databases := []string{}
bucketName := "test"
backupName := "test"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := ""
assert.Equal(t, equals, cmd)
}


@@ -0,0 +1,72 @@
package restore
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func getJob(
namespace string,
nameLabels *labels.Name,
nodeselector map[string]string,
tolerations []corev1.Toleration,
secretName string,
secretKey string,
version string,
command string,
) *batchv1.Job {
return &batchv1.Job{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
NodeSelector: nodeselector,
Tolerations: tolerations,
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{{
Name: nameLabels.Name(),
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
}


@@ -0,0 +1,163 @@
package restore
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
"github.com/stretchr/testify/assert"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestRestore_Job1(t *testing.T) {
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
version := "testVersion"
command := "test"
secretKey := "testKey"
secretName := "testSecretName"
jobName := "testJob"
namespace := "testNs"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testOpVersion",
"caos.ch/apiversion": "testVersion",
"caos.ch/kind": "testKind"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testOpVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}
func TestRestore_Job2(t *testing.T) {
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
version := "testVersion2"
command := "test2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := "testJob2"
namespace := "testNs2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testOpVersion2",
"caos.ch/apiversion": "testVersion2",
"caos.ch/kind": "testKind2"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testOpVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}


@@ -0,0 +1,19 @@
package bucket
import (
"github.com/caos/orbos/pkg/secret"
)
func getSecretsMap(desiredKind *DesiredV0) map[string]*secret.Secret {
secrets := make(map[string]*secret.Secret)
if desiredKind.Spec == nil {
desiredKind.Spec = &Spec{}
}
if desiredKind.Spec.ServiceAccountJSON == nil {
desiredKind.Spec.ServiceAccountJSON = &secret.Secret{}
}
secrets["serviceaccountjson"] = desiredKind.Spec.ServiceAccountJSON
return secrets
}
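getSecretsMap lazily initializes a nil Spec and ServiceAccountJSON so callers always receive a non-nil secret pointer they can populate in place. A standalone sketch of the pattern (`Secret`, `Spec`, and `Desired` are hypothetical stand-ins for the orbos and bucket types):

```go
package main

import "fmt"

// Secret, Spec, and Desired are hypothetical stand-ins for the orbos
// secret.Secret and the bucket DesiredV0 types used above.
type Secret struct{ Value string }
type Spec struct{ ServiceAccountJSON *Secret }
type Desired struct{ Spec *Spec }

// secretsMap mirrors getSecretsMap: nil fields are replaced with empty
// values first, so the returned map never contains nil pointers and
// writes through the map mutate the desired state.
func secretsMap(d *Desired) map[string]*Secret {
	if d.Spec == nil {
		d.Spec = &Spec{}
	}
	if d.Spec.ServiceAccountJSON == nil {
		d.Spec.ServiceAccountJSON = &Secret{}
	}
	return map[string]*Secret{"serviceaccountjson": d.Spec.ServiceAccountJSON}
}

func main() {
	d := &Desired{}
	m := secretsMap(d)
	m["serviceaccountjson"].Value = "sa-json"
	fmt.Println(d.Spec.ServiceAccountJSON.Value) // sa-json
}
```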


@@ -0,0 +1,22 @@
package bucket
import (
"github.com/caos/orbos/pkg/secret"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBucket_getSecretsFull(t *testing.T) {
secrets := getSecretsMap(&desired)
assert.Equal(t, desired.Spec.ServiceAccountJSON, secrets["serviceaccountjson"])
}
func TestBucket_getSecretsEmpty(t *testing.T) {
secrets := getSecretsMap(&desiredWithoutSecret)
assert.Equal(t, &secret.Secret{}, secrets["serviceaccountjson"])
}
func TestBucket_getSecretsNil(t *testing.T) {
secrets := getSecretsMap(&desiredNil)
assert.Equal(t, &secret.Secret{}, secrets["serviceaccountjson"])
}


@@ -0,0 +1,8 @@
package core
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
)
type BackupListFunc func(monitor mntr.Monitor, name string, desired *tree.Tree) ([]string, error)


@@ -0,0 +1,64 @@
package core
import (
"crypto/rsa"
"errors"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
)
const queriedName = "database"
type DatabaseCurrent interface {
GetURL() string
GetPort() string
GetReadyQuery() operator.EnsureFunc
GetCertificateKey() *rsa.PrivateKey
SetCertificateKey(*rsa.PrivateKey)
GetCertificate() []byte
SetCertificate([]byte)
GetAddUserFunc() func(user string) (operator.QueryFunc, error)
GetDeleteUserFunc() func(user string) (operator.DestroyFunc, error)
GetListUsersFunc() func(k8sClient kubernetes.ClientInt) ([]string, error)
GetListDatabasesFunc() func(k8sClient kubernetes.ClientInt) ([]string, error)
}
func ParseQueriedForDatabase(queried map[string]interface{}) (DatabaseCurrent, error) {
queriedDB, ok := queried[queriedName]
if !ok {
return nil, errors.New("no current state for database found")
}
currentDBTree, ok := queriedDB.(*tree.Tree)
if !ok {
return nil, errors.New("current state does not fulfill interface")
}
currentDB, ok := currentDBTree.Parsed.(DatabaseCurrent)
if !ok {
return nil, errors.New("current state does not fulfill interface")
}
return currentDB, nil
}
func SetQueriedForDatabase(queried map[string]interface{}, databaseCurrent *tree.Tree) {
queried[queriedName] = databaseCurrent
}
func SetQueriedForDatabaseDBList(queried map[string]interface{}, databases []string) {
currentDBList := &CurrentDBList{
Common: &tree.Common{
Kind: "DBList",
Version: "V0",
},
Current: &DatabaseCurrentDBList{
Databases: databases,
},
}
currentDB := &tree.Tree{
Parsed: currentDBList,
}
SetQueriedForDatabase(queried, currentDB)
}
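The `queried` map is an untyped bag shared between query funcs: one side stores under a well-known key, the other looks it up and narrows with type assertions. A standalone sketch of the round trip with simplified stand-in types (the real code asserts to `*tree.Tree` and then to `DatabaseCurrent`):

```go
package main

import (
	"errors"
	"fmt"
)

const queriedName = "database"

// current is a hypothetical stand-in for the DatabaseCurrent interface.
type current interface{ Databases() []string }

type dbList struct{ dbs []string }

func (d *dbList) Databases() []string { return d.dbs }

// set mirrors SetQueriedForDatabase: store under the well-known key.
func set(queried map[string]interface{}, c current) {
	queried[queriedName] = c
}

// parse mirrors ParseQueriedForDatabase: look up the key and assert
// the stored value back to the expected interface.
func parse(queried map[string]interface{}) (current, error) {
	v, ok := queried[queriedName]
	if !ok {
		return nil, errors.New("no current state for database found")
	}
	c, ok := v.(current)
	if !ok {
		return nil, errors.New("current state does not fulfill interface")
	}
	return c, nil
}

func main() {
	queried := map[string]interface{}{}
	set(queried, &dbList{dbs: []string{"adminapi", "auth"}})
	c, err := parse(queried)
	if err != nil {
		panic(err)
	}
	fmt.Println(c.Databases()) // [adminapi auth]
}
```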


@@ -0,0 +1,65 @@
package core
import (
"crypto/rsa"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
)
var current DatabaseCurrent = &CurrentDBList{}
type CurrentDBList struct {
Common *tree.Common `yaml:",inline"`
Current *DatabaseCurrentDBList
}
type DatabaseCurrentDBList struct {
Databases []string
}
func (c *CurrentDBList) GetURL() string {
return ""
}
func (c *CurrentDBList) GetPort() string {
return ""
}
func (c *CurrentDBList) GetReadyQuery() operator.EnsureFunc {
return nil
}
func (c *CurrentDBList) GetCertificateKey() *rsa.PrivateKey {
return nil
}
func (c *CurrentDBList) SetCertificateKey(key *rsa.PrivateKey) {
}
func (c *CurrentDBList) GetCertificate() []byte {
return nil
}
func (c *CurrentDBList) SetCertificate(cert []byte) {
}
func (c *CurrentDBList) GetListDatabasesFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return func(k8sClient kubernetes.ClientInt) ([]string, error) {
return c.Current.Databases, nil
}
}
func (c *CurrentDBList) GetListUsersFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return nil
}
func (c *CurrentDBList) GetAddUserFunc() func(user string) (operator.QueryFunc, error) {
return nil
}
func (c *CurrentDBList) GetDeleteUserFunc() func(user string) (operator.DestroyFunc, error) {
return nil
}


@@ -0,0 +1,3 @@
package core
//go:generate mockgen -source current.go -package coremock -destination mock/current.mock.go github.com/caos/internal/operator/database/kinds/databases/core DatabaseCurrent


@@ -0,0 +1,186 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: current.go
// Package coremock is a generated GoMock package.
package coremock
import (
rsa "crypto/rsa"
kubernetes "github.com/caos/orbos/pkg/kubernetes"
operator "github.com/caos/zitadel/operator"
gomock "github.com/golang/mock/gomock"
reflect "reflect"
)
// MockDatabaseCurrent is a mock of DatabaseCurrent interface
type MockDatabaseCurrent struct {
ctrl *gomock.Controller
recorder *MockDatabaseCurrentMockRecorder
}
// MockDatabaseCurrentMockRecorder is the mock recorder for MockDatabaseCurrent
type MockDatabaseCurrentMockRecorder struct {
mock *MockDatabaseCurrent
}
// NewMockDatabaseCurrent creates a new mock instance
func NewMockDatabaseCurrent(ctrl *gomock.Controller) *MockDatabaseCurrent {
mock := &MockDatabaseCurrent{ctrl: ctrl}
mock.recorder = &MockDatabaseCurrentMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use
func (m *MockDatabaseCurrent) EXPECT() *MockDatabaseCurrentMockRecorder {
return m.recorder
}
// GetURL mocks base method
func (m *MockDatabaseCurrent) GetURL() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetURL")
ret0, _ := ret[0].(string)
return ret0
}
// GetURL indicates an expected call of GetURL
func (mr *MockDatabaseCurrentMockRecorder) GetURL() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetURL", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetURL))
}
// GetPort mocks base method
func (m *MockDatabaseCurrent) GetPort() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPort")
ret0, _ := ret[0].(string)
return ret0
}
// GetPort indicates an expected call of GetPort
func (mr *MockDatabaseCurrentMockRecorder) GetPort() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPort", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetPort))
}
// GetReadyQuery mocks base method
func (m *MockDatabaseCurrent) GetReadyQuery() operator.EnsureFunc {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetReadyQuery")
ret0, _ := ret[0].(operator.EnsureFunc)
return ret0
}
// GetReadyQuery indicates an expected call of GetReadyQuery
func (mr *MockDatabaseCurrentMockRecorder) GetReadyQuery() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetReadyQuery", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetReadyQuery))
}
// GetCertificateKey mocks base method
func (m *MockDatabaseCurrent) GetCertificateKey() *rsa.PrivateKey {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCertificateKey")
ret0, _ := ret[0].(*rsa.PrivateKey)
return ret0
}
// GetCertificateKey indicates an expected call of GetCertificateKey
func (mr *MockDatabaseCurrentMockRecorder) GetCertificateKey() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCertificateKey", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetCertificateKey))
}
// SetCertificateKey mocks base method
func (m *MockDatabaseCurrent) SetCertificateKey(arg0 *rsa.PrivateKey) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetCertificateKey", arg0)
}
// SetCertificateKey indicates an expected call of SetCertificateKey
func (mr *MockDatabaseCurrentMockRecorder) SetCertificateKey(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetCertificateKey", reflect.TypeOf((*MockDatabaseCurrent)(nil).SetCertificateKey), arg0)
}
// GetCertificate mocks base method
func (m *MockDatabaseCurrent) GetCertificate() []byte {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCertificate")
ret0, _ := ret[0].([]byte)
return ret0
}
// GetCertificate indicates an expected call of GetCertificate
func (mr *MockDatabaseCurrentMockRecorder) GetCertificate() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCertificate", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetCertificate))
}
// SetCertificate mocks base method
func (m *MockDatabaseCurrent) SetCertificate(arg0 []byte) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetCertificate", arg0)
}
// SetCertificate indicates an expected call of SetCertificate
func (mr *MockDatabaseCurrentMockRecorder) SetCertificate(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetCertificate", reflect.TypeOf((*MockDatabaseCurrent)(nil).SetCertificate), arg0)
}
// GetAddUserFunc mocks base method
func (m *MockDatabaseCurrent) GetAddUserFunc() func(string) (operator.QueryFunc, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAddUserFunc")
ret0, _ := ret[0].(func(string) (operator.QueryFunc, error))
return ret0
}
// GetAddUserFunc indicates an expected call of GetAddUserFunc
func (mr *MockDatabaseCurrentMockRecorder) GetAddUserFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAddUserFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetAddUserFunc))
}
// GetDeleteUserFunc mocks base method
func (m *MockDatabaseCurrent) GetDeleteUserFunc() func(string) (operator.DestroyFunc, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetDeleteUserFunc")
ret0, _ := ret[0].(func(string) (operator.DestroyFunc, error))
return ret0
}
// GetDeleteUserFunc indicates an expected call of GetDeleteUserFunc
func (mr *MockDatabaseCurrentMockRecorder) GetDeleteUserFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetDeleteUserFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetDeleteUserFunc))
}
// GetListUsersFunc mocks base method
func (m *MockDatabaseCurrent) GetListUsersFunc() func(kubernetes.ClientInt) ([]string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetListUsersFunc")
ret0, _ := ret[0].(func(kubernetes.ClientInt) ([]string, error))
return ret0
}
// GetListUsersFunc indicates an expected call of GetListUsersFunc
func (mr *MockDatabaseCurrentMockRecorder) GetListUsersFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetListUsersFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetListUsersFunc))
}
// GetListDatabasesFunc mocks base method
func (m *MockDatabaseCurrent) GetListDatabasesFunc() func(kubernetes.ClientInt) ([]string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetListDatabasesFunc")
ret0, _ := ret[0].(func(kubernetes.ClientInt) ([]string, error))
return ret0
}
// GetListDatabasesFunc indicates an expected call of GetListDatabasesFunc
func (mr *MockDatabaseCurrentMockRecorder) GetListDatabasesFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetListDatabasesFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetListDatabasesFunc))
}


@@ -0,0 +1,68 @@
package databases
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/managed"
"github.com/caos/zitadel/operator/database/kinds/databases/provided"
"github.com/pkg/errors"
core "k8s.io/api/core/v1"
)
const (
component = "database"
)
func ComponentSelector() *labels.Selector {
return labels.OpenComponentSelector("ZITADEL", component)
}
func GetQueryAndDestroyFuncs(
monitor mntr.Monitor,
desiredTree *tree.Tree,
currentTree *tree.Tree,
namespace string,
apiLabels *labels.API,
timestamp string,
nodeselector map[string]string,
tolerations []core.Toleration,
version string,
features []string,
) (
query operator.QueryFunc,
destroy operator.DestroyFunc,
secrets map[string]*secret.Secret,
err error,
) {
componentLabels := labels.MustForComponent(apiLabels, component)
internalMonitor := monitor.WithField("component", component)
switch desiredTree.Common.Kind {
case "databases.caos.ch/CockroachDB":
return managed.AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(internalMonitor, desiredTree, currentTree)
case "databases.caos.ch/ProvidedDatabase":
return provided.AdaptFunc()(internalMonitor, desiredTree, currentTree)
default:
return nil, nil, nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
func GetBackupList(
monitor mntr.Monitor,
desiredTree *tree.Tree,
) (
[]string,
error,
) {
switch desiredTree.Common.Kind {
case "databases.caos.ch/CockroachDB":
return managed.BackupList()(monitor, desiredTree)
case "databases.caos.ch/ProvidedDatabase":
return nil, errors.Errorf("no backups supported for database kind %s", desiredTree.Common.Kind)
default:
return nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
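Both GetQueryAndDestroyFuncs and GetBackupList route on the tree's Kind string. A standalone sketch of the dispatch (kind strings copied from the switch above; `backupLister` is a hypothetical stand-in for the per-kind backup-list funcs):

```go
package main

import "fmt"

// backupLister is a hypothetical stand-in for the per-kind BackupList funcs.
type backupLister func() ([]string, error)

// dispatch mirrors GetBackupList: route on the kind string, reject kinds
// that are known but unsupported, and fail on entirely unknown kinds.
func dispatch(kind string, cockroach backupLister) ([]string, error) {
	switch kind {
	case "databases.caos.ch/CockroachDB":
		return cockroach()
	case "databases.caos.ch/ProvidedDatabase":
		return nil, fmt.Errorf("no backups supported for database kind %s", kind)
	default:
		return nil, fmt.Errorf("unknown database kind %s", kind)
	}
}

func main() {
	list, err := dispatch("databases.caos.ch/CockroachDB", func() ([]string, error) {
		return []string{"daily"}, nil
	})
	fmt.Println(list, err) // [daily] <nil>
}
```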


@@ -0,0 +1,256 @@
package managed
import (
"github.com/caos/zitadel/operator"
"strconv"
"strings"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate"
corev1 "k8s.io/api/core/v1"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/pdb"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/rbac"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/services"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/statefulset"
"github.com/pkg/errors"
)
const (
SfsName = "cockroachdb"
pdbName = SfsName + "-budget"
serviceAccountName = SfsName
PublicServiceName = SfsName + "-public"
privateServiceName = SfsName
cockroachPort = int32(26257)
cockroachHTTPPort = int32(8080)
image = "cockroachdb/cockroach:v20.2.3"
)
func AdaptFunc(
componentLabels *labels.Component,
namespace string,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
version string,
features []string,
) func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
return func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
internalMonitor := monitor.WithField("kind", "cockroachdb")
allSecrets := map[string]*secret.Secret{}
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
internalMonitor.Verbose()
}
var (
isFeatureDatabase bool
isFeatureRestore bool
)
for _, feature := range features {
switch feature {
case "database":
isFeatureDatabase = true
case "restore":
isFeatureRestore = true
}
}
queryCert, destroyCert, addUser, deleteUser, listUsers, err := certificate.AdaptFunc(internalMonitor, namespace, componentLabels, desiredKind.Spec.ClusterDns, isFeatureDatabase)
if err != nil {
return nil, nil, nil, err
}
addRoot, err := addUser("root")
if err != nil {
return nil, nil, nil, err
}
destroyRoot, err := deleteUser("root")
if err != nil {
return nil, nil, nil, err
}
queryRBAC, destroyRBAC, err := rbac.AdaptFunc(internalMonitor, namespace, labels.MustForName(componentLabels, serviceAccountName))
if err != nil {
return nil, nil, nil, err
}
cockroachNameLabels := labels.MustForName(componentLabels, SfsName)
cockroachSelector := labels.DeriveNameSelector(cockroachNameLabels, false)
cockroachSelectable := labels.AsSelectable(cockroachNameLabels)
querySFS, destroySFS, ensureInit, checkDBReady, listDatabases, err := statefulset.AdaptFunc(
internalMonitor,
cockroachSelectable,
cockroachSelector,
desiredKind.Spec.Force,
namespace,
image,
serviceAccountName,
desiredKind.Spec.ReplicaCount,
desiredKind.Spec.StorageCapacity,
cockroachPort,
cockroachHTTPPort,
desiredKind.Spec.StorageClass,
desiredKind.Spec.NodeSelector,
desiredKind.Spec.Tolerations,
desiredKind.Spec.Resources,
)
if err != nil {
return nil, nil, nil, err
}
queryS, destroyS, err := services.AdaptFunc(
internalMonitor,
namespace,
labels.MustForName(componentLabels, PublicServiceName),
labels.MustForName(componentLabels, privateServiceName),
cockroachSelector,
cockroachPort,
cockroachHTTPPort,
)
if err != nil {
return nil, nil, nil, err
}
//externalName := "cockroachdb-public." + namespaceStr + ".svc.cluster.local"
//queryES, destroyES, err := service.AdaptFunc("cockroachdb-public", "default", labels, []service.Port{}, "ExternalName", map[string]string{}, false, "", externalName)
//if err != nil {
// return nil, nil, err
//}
queryPDB, err := pdb.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, pdbName), cockroachSelector, "1")
if err != nil {
return nil, nil, nil, err
}
destroyPDB, err := pdb.AdaptFuncToDestroy(namespace, pdbName)
if err != nil {
return nil, nil, nil, err
}
currentDB := &Current{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Current: &CurrentDB{
CA: &certificate.Current{},
},
}
current.Parsed = currentDB
queriers := make([]operator.QueryFunc, 0)
if isFeatureDatabase {
queriers = append(queriers,
queryRBAC,
queryCert,
addRoot,
operator.ResourceQueryToZitadelQuery(querySFS),
operator.ResourceQueryToZitadelQuery(queryPDB),
queryS,
operator.EnsureFuncToQueryFunc(ensureInit),
)
}
destroyers := make([]operator.DestroyFunc, 0)
if isFeatureDatabase {
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyPDB),
destroyS,
operator.ResourceDestroyToZitadelDestroy(destroySFS),
destroyRBAC,
destroyCert,
destroyRoot,
)
}
if desiredKind.Spec.Backups != nil {
oneBackup := false
for backupName := range desiredKind.Spec.Backups {
if timestamp != "" && strings.HasPrefix(timestamp, backupName) {
oneBackup = true
}
}
for backupName, desiredBackup := range desiredKind.Spec.Backups {
currentBackup := &tree.Tree{}
if timestamp == "" || !oneBackup || (timestamp != "" && strings.HasPrefix(timestamp, backupName)) {
queryB, destroyB, secrets, err := backups.GetQueryAndDestroyFuncs(
internalMonitor,
desiredBackup,
currentBackup,
backupName,
namespace,
componentLabels,
checkDBReady,
strings.TrimPrefix(timestamp, backupName+"."),
nodeselector,
tolerations,
version,
features,
)
if err != nil {
return nil, nil, nil, err
}
secret.AppendSecrets(backupName, allSecrets, secrets)
destroyers = append(destroyers, destroyB)
queriers = append(queriers, queryB)
}
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
if !isFeatureRestore {
queriedCurrentDB, err := core.ParseQueriedForDatabase(queried)
if err != nil || queriedCurrentDB == nil {
// TODO: query system state
currentDB.Current.Port = strconv.Itoa(int(cockroachPort))
currentDB.Current.URL = PublicServiceName
currentDB.Current.ReadyFunc = checkDBReady
currentDB.Current.AddUserFunc = addUser
currentDB.Current.DeleteUserFunc = deleteUser
currentDB.Current.ListUsersFunc = listUsers
currentDB.Current.ListDatabasesFunc = listDatabases
core.SetQueriedForDatabase(queried, current)
internalMonitor.Info("set current state of managed database")
}
}
return operator.QueriersToEnsureFunc(internalMonitor, true, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
allSecrets,
nil
}
}


@@ -0,0 +1,177 @@
package managed
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
"testing"
"time"
)
func getTreeWithDBAndBackup(t *testing.T, masterkey string, saJson string, backupName string) *tree.Tree {
bucketDesired := getDesiredTree(t, masterkey, &bucket.DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &bucket.Spec{
Verbose: true,
Cron: "testCron",
Bucket: "testBucket",
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
bucketDesiredKind, err := bucket.ParseDesiredV0(bucketDesired)
assert.NoError(t, err)
bucketDesired.Parsed = bucketDesiredKind
return getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Spec: Spec{
Verbose: false,
ReplicaCount: 1,
StorageCapacity: "368Gi",
StorageClass: "testSC",
NodeSelector: map[string]string{},
ClusterDns: "testDns",
Backups: map[string]*tree.Tree{backupName: bucketDesired},
},
})
}
func TestManaged_AdaptBucketBackup(t *testing.T) {
monitor := mntr.Monitor{}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
labels := map[string]string{
"app.kubernetes.io/component": "backup",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
backupName := "testBucket"
saJson := "testSA"
masterkey := "testMk"
desired := getTreeWithDBAndBackup(t, masterkey, saJson, backupName)
features := []string{backup.Normal}
bucket.SetBackup(k8sClient, namespace, labels, saJson)
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := bucket.SetQueriedForDatabases(databases)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestManaged_AdaptBucketInstantBackup(t *testing.T) {
monitor := mntr.Monitor{}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
labels := map[string]string{
"app.kubernetes.io/component": "backup",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
masterkey := "testMk"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
saJson := "testSA"
backupName := "testBucket"
features := []string{backup.Instant}
bucket.SetInstantBackup(k8sClient, namespace, backupName, labels, saJson)
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
desired := getTreeWithDBAndBackup(t, masterkey, saJson, backupName)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := bucket.SetQueriedForDatabases(databases)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestManaged_AdaptBucketCleanAndRestore(t *testing.T) {
monitor := mntr.Monitor{}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
labels := map[string]string{
"app.kubernetes.io/component": "backup",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
masterkey := "testMk"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
saJson := "testSA"
backupName := "testBucket"
features := []string{restore.Instant, clean.Instant}
bucket.SetRestore(k8sClient, namespace, backupName, labels, saJson)
bucket.SetClean(k8sClient, namespace, backupName)
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60)).Times(2)
desired := getTreeWithDBAndBackup(t, masterkey, saJson, backupName)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := bucket.SetQueriedForDatabases(databases)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,255 @@
package managed
import (
"gopkg.in/yaml.v3"
"testing"
"time"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
policy "k8s.io/api/policy/v1beta1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
func getDesiredTree(t *testing.T, masterkey string, desired interface{}) *tree.Tree {
secret.Masterkey = masterkey
desiredTree := &tree.Tree{}
data, err := yaml.Marshal(desired)
assert.NoError(t, err)
assert.NoError(t, yaml.Unmarshal(data, desiredTree))
return desiredTree
}
func TestManaged_Adapt1(t *testing.T) {
monitor := mntr.Monitor{}
nodeLabels := map[string]string{
"app.kubernetes.io/component": "database",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
cockroachLabels := map[string]string{
"app.kubernetes.io/component": "database",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb-budget",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "testKind",
}
cockroachSelectorLabels := map[string]string{
"app.kubernetes.io/component": "database",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
features := []string{"database"}
masterkey := "testMk"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
queried := map[string]interface{}{}
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Spec: Spec{
Verbose: false,
ReplicaCount: 1,
StorageCapacity: "368Gi",
StorageClass: "testSC",
NodeSelector: map[string]string{},
ClusterDns: "testDns",
},
})
unav := intstr.FromInt(1)
k8sClient.EXPECT().ApplyPodDisruptionBudget(&policy.PodDisruptionBudget{
ObjectMeta: metav1.ObjectMeta{
Name: "cockroachdb-budget",
Namespace: namespace,
Labels: cockroachLabels,
},
Spec: policy.PodDisruptionBudgetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: cockroachSelectorLabels,
},
MaxUnavailable: &unav,
},
})
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ApplyService(gomock.Any()).Times(3)
k8sClient.EXPECT().ApplyServiceAccount(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRoleBinding(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRoleBinding(gomock.Any()).Times(1)
//statefulset
k8sClient.EXPECT().ApplyStatefulSet(gomock.Any(), gomock.Any()).Times(1)
//running for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, false, time.Duration(60))
//not ready for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(1))
//ready after setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
//client
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
//node
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestManaged_Adapt2(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
timestamp := "testTs"
nodeLabels := map[string]string{
"app.kubernetes.io/component": "database2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
cockroachLabels := map[string]string{
"app.kubernetes.io/component": "database2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": "cockroachdb-budget",
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v1",
"caos.ch/kind": "testKind2",
}
cockroachSelectorLabels := map[string]string{
"app.kubernetes.io/component": "database2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": "cockroachdb",
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "v1"), "database2")
nodeselector := map[string]string{"test2": "test2"}
var tolerations []corev1.Toleration
version := "testVersion2"
features := []string{"database"}
masterkey := "testMk2"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
queried := map[string]interface{}{}
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Spec: Spec{
Verbose: false,
ReplicaCount: 1,
StorageCapacity: "368Gi",
StorageClass: "testSC",
NodeSelector: map[string]string{},
ClusterDns: "testDns",
},
})
unav := intstr.FromInt(1)
k8sClient.EXPECT().ApplyPodDisruptionBudget(&policy.PodDisruptionBudget{
ObjectMeta: metav1.ObjectMeta{
Name: "cockroachdb-budget",
Namespace: namespace,
Labels: cockroachLabels,
},
Spec: policy.PodDisruptionBudgetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: cockroachSelectorLabels,
},
MaxUnavailable: &unav,
},
})
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ApplyService(gomock.Any()).Times(3)
k8sClient.EXPECT().ApplyServiceAccount(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRoleBinding(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRoleBinding(gomock.Any()).Times(1)
//statefulset
k8sClient.EXPECT().ApplyStatefulSet(gomock.Any(), gomock.Any()).Times(1)
//running for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, false, time.Duration(60))
//not ready for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(1))
//ready after setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
//client
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
//node
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,91 @@
package certificate
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/client"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/node"
)
var (
nodeSecret = "cockroachdb.node"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
componentLabels *labels.Component,
clusterDns string,
generateNodeIfNotExists bool,
) (
operator.QueryFunc,
operator.DestroyFunc,
func(user string) (operator.QueryFunc, error),
func(user string) (operator.DestroyFunc, error),
func(k8sClient kubernetes.ClientInt) ([]string, error),
error,
) {
cMonitor := monitor.WithField("type", "certificates")
queryNode, destroyNode, err := node.AdaptFunc(
cMonitor,
namespace,
labels.MustForName(componentLabels, nodeSecret),
clusterDns,
generateNodeIfNotExists,
)
if err != nil {
return nil, nil, nil, nil, nil, err
}
queriers := []operator.QueryFunc{
queryNode,
}
destroyers := []operator.DestroyFunc{
destroyNode,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(cMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(cMonitor, destroyers),
func(user string) (operator.QueryFunc, error) {
query, _, err := client.AdaptFunc(
cMonitor,
namespace,
componentLabels,
)
if err != nil {
return nil, err
}
queryClient := query(user)
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
_, err := queryNode(k8sClient, queried)
if err != nil {
return nil, err
}
return queryClient(k8sClient, queried)
}, nil
},
func(user string) (operator.DestroyFunc, error) {
_, destroy, err := client.AdaptFunc(
cMonitor,
namespace,
componentLabels,
)
if err != nil {
return nil, err
}
return destroy(user), nil
},
func(k8sClient kubernetes.ClientInt) ([]string, error) {
return client.QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
},
nil
}


@@ -0,0 +1,305 @@
package certificate
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
const (
caPem = `-----BEGIN CERTIFICATE-----
MIIE9TCCAt2gAwIBAgICB+MwDQYJKoZIhvcNAQELBQAwKzESMBAGA1UEChMJQ29j
a3JvYWNoMRUwEwYDVQQDEwxDb2Nrcm9hY2ggQ0EwHhcNMjAxMjAxMTAyMjA0WhcN
MzAxMjAxMTAyMjA0WjArMRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENv
Y2tyb2FjaCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOja5IXJ
GUY9sFgyvWkav+O74gzcv8y69uJzSOx0piOP+sfpZWVeGEjqO4JgdcS5NPMrT5Tb
aJ52CjiOdlHVTyR87i5JmOIvA2qA4dWQmSbX7AQ8r9ptYDe9xMn+qFegcbr4YxCz
K9mmbDZUhlLO7cz3QV6nvRxGFWbzffo8BXZnOUCAOyOHrbnPpLumnfZlL5BckdtY
pS7jAlUpKSMBTK4AHcmrouFsNKHqlUopYXeJFdg9g1F0DuCnVP9x7+XcUW8dAVut
Q7Jswy+++GAXs6mPVsYFLXUSYNyW+Bfl/jKwx0XTQx/6iyNpK0XtAzjBZFjXwmot
0mODkqnfE3BB4lXxZ5knomAQEGSScUhCUb9upbF4uJJF27xr/kIkwtWxMGpCXds0
IxI+wNRCenhfFZEIQCzri0zn6WdN8b/gbv1BErNcccYwolPUv1oUgYbzowbQ5O2D
aLQPqO1VAZiZHLxb787bRywpCl33VZ1ptMHi2ogKjcsh4DQ9SsRj+rU/Tk5lyk7G
FHteyHcq12TGpz9/CQYMacl8yeRRHfNO3Rq0jFTYeD4+ZdVBPKeuTHyXGzy2T157
pgqMFzwqxlNYPzpuz7xsZBExJzCtcomB8fMlsCJnxV/kuMTTsWsPrRc7hsqzBCq3
plfYT8a7EBCVmJmDu+6mMVh+A9M2zZCqtV57AgMBAAGjIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQDRmkBYDk/F
lYTgMaL7yhRAowDR8mtOl06Oba/MCDDsmBvfOVIqY9Sa9sJtb78Uvecv9WAE4qpb
PNSaIvNmujmGQrBxtGwQdeHZvtXQ1lKAGBM+ukfz/NFU+6rwOdTuLgS5jTaiM4r1
uMoIG6S5z/brWGHAw8Qv2Fq05dSLnevmPWK903V/RnwmCKT9as5Qpfkaie2PNory
euxVGeGolxzgbSps3ZJlDSSymQBl10iJYYdGIsgcw5FHcCdpS4AutdpQbIFuCGk2
CHTcTTa4KMaVfRm/gKm1gh8UnuVDLBKQS8+1CHFYUnih8ozBNUyBo5r5L4BHJK/n
f0gnrYqaxtqPAyHkfo0PRc50HlAQRS/gW8ESv2PQmcSWlDggEzXt5MqFIYOfEXWE
gtC7Ct1P7gzIRxolYVsNSgBR62sJM1PUa39E5v0QJsmuM51txHuw/PGqkkBlEwx9
4u3FFXD9Pslg/g1l50Ek5O3/TWMta4Q+NagDDaOdkb4LRQr522ySgGIinBw00Xr+
Ics2L42rP1LQyaE7+BljDEwmGaT7mrl8QGZR9uv/9mbtk2+q7wAOQzcMSFFACfwx
KdKENmK0GpQZ6HaeDZO7wxGiAyFKhTNpd39XNOEhaKpWLSUxnZTLGlMIzvSyMGCj
PV3xm+Xg1Xvki7f/qC1djysMcaokoJzNdg==
-----END CERTIFICATE-----
`
caPrivPem = `-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA2Owc3IOljq3hkKO9q6jT1kFZp/+Y9dv/lTIbsMhV7gPrn2WS
Os2KQphcrB9DvGoZAxUh1aD8MUO7USIMhFodQEq7vfycDT16jrTnc1fSDDcfk3Ra
DvqZGcBkj0lc6w5LK3FNl2y6rA/26teGbyNaVXJfW01dKKy4E2or2+1+tUamyjiU
cvPzNWEoPPJT5HcadLTxZr/SKDvDXFyej4nKT9+j1pKmaqQrIrL4KXKq75LhU+kH
L4TG+kJ35isiv5OTpd2jG6ssz0i+ZEtX4hjIM6eCnZaiFT//33nSL3zl2LUjmyou
ks2FDuX9m90mg8UtcrpA/eVlwyg8nJ/d7/Yn3ZjuHVOgzE8YuQXauJ16HuLcFgEW
WQg1uwhOhrc5b0YJndZbJ2Re4qfaBoC5fODRXoUPvqG9k9kNp98xtclNCIyW8EpA
Su8QmK42ksOJox+OQamHzatrGppgIK77TY3ZFcBHETHClabgsfjv/1GNXXAexJI4
Cjntnou+yoc+LUj87WJfD3ERGgm0nDfnE7uZ7kccO5Lj/oajeO+Q+QJaLZ4I2jz+
a05k16naGGT29AnM+iwBqqoTODr4Z7905niZc6+fEOPml1V7wSuJs2eE4jOa3ixX
5tnruw74rN82Zfrkg6kWPOEfBBXzSotRiHv+BAV2tFbnC55ItnHn1ZE68V0CAwEA
AQKCAgEAssWsZ4PLXpIo8qYve5hQtSP4er7oVb8wnMnGDmSchOMQPbZc1D9uscGV
pnjBvzcFVAgHcWMSVJuIda4E+NK3hrPQlBvqk/LV3WRz1xhKUKzhRgm+6tdWc+We
OoRwonuOMchX9PKzyXgCu7pR3agaG499zOYuX4Yw0jdO3BqXsVf/v2rv1Oj9yEFB
AzGHOCN8VzCEPnTaAzR1pdnjB1K8vCUIhp8nrX2M2zT51lbdT0ISl6/VrzDTN46t
97AXHCHIrgrCENx6un4uAsQhMoHQBNoJiEyLWc371zYzpdVeK8HlDUyvQ2dDQGsF
Hn4c7r4C3aloRJbYzgSMJ1yNcOTCJpciQsq1VmCQFOHfbum2ejquXJ7BbeRGeHdM
145epckL+hbECTCpSs0Z5t90NdfoJspvr+3sOEt6h3DMUGjvobrf/s3KiRY5hHdc
x86Htx3QgWNCG6a+ga7h7s3t+4ZtoPPWn+vlAoxD/2eCzsDgE/XLDmC2T4yS8UOb
LIb4UN7wl2sNM8zhm4BfoiKfjGu4dPZIlsPP+ZKRby1O4ogHHoPREBTH1VSEplVM
fA/KSITV+rUfO3T/qXIFZ4/Wa5YZoULiMCOJOWNgXQzWvTf7Qr31LhhfXd+uIw30
LDtjdkpT43zlKxRosQFLiV7q3fVbvKPVQfxzBz7M1Gl74IllpmECggEBAOnAimKl
w/9mcMkEd2L2ZzDI+57W8DU6grUUILZfjJG+uvOiMmIA5M1wzsVHil3ihZthrJ5A
UUAKG97JgwjpIAliAzgtPZY2AK/bJz7Hht1EwewrV+RyHM+dMv6em2OA+XUjFSPX
VsecFDaUQebkypw236KdA4jgRG6hr4xObdXgTN6RFCv1mI1EijZ/YbKgPgTJaPI4
b2N5QokYFygUCwRxKIt7/Z4hQs9LbdW680NcXtPRPnS1SmwYJbi7wTX+o5f6nfNV
YvojborjXwNrZe0p+FfaEuD44wf6kNlRGfcKXoaAncXV/M5/oXf1dktKP630eq9g
0MAKFYJ6MAUheakCggEBAO2Rgfy/tEydBuP9BIfC84BXk8EwnRJ0ZfmXv77HFj3F
VT5QNU97I5gZJOFkL7OO8bL0/WljlqeBEht0DmHNmOt5Pyew4LRLpfeN6rClZgRN
V4wqKXjoZAIYa9khQcwwFNER1RdI+PkuusJtrvY6CbwwG9LbBq2NR4C1YSgaQnhV
NqdXK5dwrYEky6lI31sDD4BYeiJVKlkkNCQAVOC+04Mrsa9F0NG7TKXzji5hU8l5
x8squjvJ6vmobhmsRTL1LMpafUrt5pHL9jcWIZYxJJo9mB0zxJKcsLI8IOg2QPoj
tQ395FZ2YtjNzZa1CYeUOUiaQu+uvztfE36AdW/vUpUCggEAMV7bW66LURw/4hUx
ahOFBAbPLmNTZMqw5LIVnq9br0TLk73ESnLJ4KJc6coMbXv0oDbnEJ2hC5eW/10s
cetbOuAasfjMMzfAuWPeTCI0V/O3ybv12mhHsYoQRTsWstOA3L7GLkXDLHHIyyZR
LQVRzeDBJ0Vmg7hqe7tmqom+JRg05CVcT1SWHfBGCPCqn+G8d6Jaqh5FWIs6BF60
NWDWWt/TonJTxNxdkg7qaeQMkUOnO7HMMTZBO8d14Ci3zEG2J9llFwoH17E4Hdmc
Lcq3QnpE27lRl3a57Ot9QIkipMzp3hq4OBrURIEsh3uuuoQ6IvGqH/Sg4o6+sEpC
bjL90QKCAQBDc/0kdooK9sruEPkoUwIwfq1FPThb9RC/PYcD9CMshssdVkjMuHny
xbDjDj89DGk0FrudINm11b/+a4Vp36Z7tYFpE5+5kYEeOP1aCpxcvFkPQyljWxiK
P8TfccHs5/oBIr8OTXnjxpDgg6QZ5YC+HirIQ8gxntuef+GGMW6OHCPYf7ew2B1r
fbcV6csBXG0aVATZmrTbepwTXMS8y3Hi3JUm3vvbkQLCW9US9i+EFT/VP9yA/WPq
Xxhj0bYUMej1y5unmsTMwMy392Cx9GIgKTz3jatStYq2ELyHMmBgpaLSxjP/GL4Y
MNce42hBRqS9KI+43jUN9oDiejbeAWXBAoIBACZKnS6EppNDSdos0f32J5TxEUxv
lF9AuVXEAPDR/m3+PlSRzrlf7uowdHTfsemHMSduL8weLNhduprwz4TmW9Fo6fSF
UePLNbXcMX3omAk+AKKOiexLG0fCXGW2zr4nHZJzTbK2+La3yLeUcoPu1puNpLiq
LVj2bH3zKWVRA9/ovuN6V5w18ojjdqOw4bw5qXcdZhoWLxI8Q9Oqua24f/dnRpuI
I8mRtPQ3+vuOKbTT+/80eAUpSfEKwAg1Mjgury9q1/B4Ib6hAGzpJuXxG7xQjnsJ
EFcN1kvdg5WGK41+fYMdexPaLamjhDGN0e1vxJfAukWIAsBMwp8wfEWZvzA=
-----END RSA PRIVATE KEY-----
`
)
func TestCertificate_AdaptWithCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
dbCurrent.EXPECT().SetCertificate(ca).Times(1)
dbCurrent.EXPECT().SetCertificateKey(caPriv).Times(1)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, _, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptWithoutCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1).Return(nil)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, _, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptAlreadyExisting(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
caCertKey := "ca.crt"
caPrivKeyKey := "ca.key"
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{},
Data: map[string][]byte{
caCertKey: []byte(caPem),
caPrivKeyKey: []byte(caPrivPem),
},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, _, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptCreateUser(t *testing.T) {
monitor := mntr.Monitor{}
clusterDns := "testDns"
namespace := "testNs"
user := "test"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
caCertKey := "ca.crt"
caPrivKeyKey := "ca.key"
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{},
Data: map[string][]byte{
caCertKey: []byte(caPem),
caPrivKeyKey: []byte(caPrivPem),
},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any())
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
_, _, createUser, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
query, err := createUser(user)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,111 @@
package certificates
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"math/big"
"net"
"time"
)
func NewCA() (*rsa.PrivateKey, []byte, error) {
ca := &x509.Certificate{
SerialNumber: big.NewInt(2019),
Subject: pkix.Name{
Organization: []string{"Cockroach"},
CommonName: "Cockroach CA",
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(10, 0, 0),
IsCA: true,
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
caPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
caBytes, err := x509.CreateCertificate(rand.Reader, ca, ca, &caPrivKey.PublicKey, caPrivKey)
if err != nil {
return nil, nil, err
}
return caPrivKey, caBytes, nil
}
func NewClient(caPrivKey *rsa.PrivateKey, ca []byte, user string) (*rsa.PrivateKey, []byte, error) {
cert := &x509.Certificate{
SerialNumber: big.NewInt(1658),
Subject: pkix.Name{
Organization: []string{"Cockroach"},
CommonName: user,
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(10, 0, 0),
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
}
certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
caCert, err := x509.ParseCertificate(ca)
if err != nil {
return nil, nil, err
}
certBytes, err := x509.CreateCertificate(rand.Reader, cert, caCert, &certPrivKey.PublicKey, caPrivKey)
if err != nil {
return nil, nil, err
}
return certPrivKey, certBytes, nil
}
func NewNode(caPrivKey *rsa.PrivateKey, ca []byte, namespace string, clusterDns string) (*rsa.PrivateKey, []byte, error) {
cert := &x509.Certificate{
SerialNumber: big.NewInt(1658),
Subject: pkix.Name{
Organization: []string{"Cockroach"},
CommonName: "node",
},
IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1)},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(10, 0, 0),
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
DNSNames: []string{
"localhost",
"cockroachdb-public",
"cockroachdb-public.default",
"cockroachdb-public." + namespace,
"cockroachdb-public." + namespace + ".svc." + clusterDns,
"*.cockroachdb",
"*.cockroachdb." + namespace,
"*.cockroachdb." + namespace + ".svc." + clusterDns,
},
}
certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
caCert, err := x509.ParseCertificate(ca)
if err != nil {
return nil, nil, err
}
certBytes, err := x509.CreateCertificate(rand.Reader, cert, caCert, &certPrivKey.PublicKey, caPrivKey)
if err != nil {
return nil, nil, err
}
return certPrivKey, certBytes, nil
}


@@ -0,0 +1,54 @@
package certificates
import (
"crypto/x509"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/stretchr/testify/assert"
"testing"
)
func TestCertificates_CAE(t *testing.T) {
priv, rootCa, err := NewCA()
assert.NoError(t, err)
assert.NotNil(t, priv)
pemCa, err := pem.EncodeCertificate(rootCa)
assert.NoError(t, err)
pemKey, err := pem.EncodeKey(priv)
assert.NoError(t, err)
assert.NotNil(t, pemCa)
assert.NotNil(t, pemKey)
_, err = x509.ParseCertificate(rootCa)
assert.NoError(t, err)
}
func TestCertificates_CA(t *testing.T) {
_, rootCa, err := NewCA()
assert.NoError(t, err)
_, err = x509.ParseCertificate(rootCa)
assert.NoError(t, err)
}
func TestCertificates_Chain(t *testing.T) {
rootKey, rootCert, err := NewCA()
assert.NoError(t, err)
rootPem, err := pem.EncodeCertificate(rootCert)
assert.NoError(t, err)
roots := x509.NewCertPool()
ok := roots.AppendCertsFromPEM(rootPem)
assert.True(t, ok)
_, clientCert, err := NewClient(rootKey, rootCert, "test")
assert.NoError(t, err)
cert, err := x509.ParseCertificate(clientCert)
assert.NoError(t, err)
opts := x509.VerifyOptions{
Roots: roots,
KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
}
_, err = cert.Verify(opts)
assert.NoError(t, err)
}


@@ -0,0 +1,101 @@
package client
import (
"errors"
"github.com/caos/zitadel/operator"
"strings"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/certificates"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
)
const (
clientSecretPrefix = "cockroachdb.client."
caCertKey = "ca.crt"
clientCertKeyPrefix = "client."
clientCertKeySuffix = ".crt"
clientPrivKeyKeyPrefix = "client."
clientPrivKeyKeySuffix = ".key"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
componentLabels *labels.Component,
) (
func(client string) operator.QueryFunc,
func(client string) operator.DestroyFunc,
error,
) {
return func(client string) operator.QueryFunc {
clientSecret := clientSecretPrefix + client
nameLabels := labels.MustForName(componentLabels, strings.ReplaceAll(clientSecret, "_", "-"))
clientCertKey := clientCertKeyPrefix + client + clientCertKeySuffix
clientPrivKeyKey := clientPrivKeyKeyPrefix + client + clientPrivKeyKeySuffix
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
queriers := make([]operator.QueryFunc, 0)
currentDB, err := core.ParseQueriedForDatabase(queried)
if err != nil {
return nil, err
}
caCert := currentDB.GetCertificate()
caKey := currentDB.GetCertificateKey()
if caKey == nil || len(caCert) == 0 {
return nil, errors.New("no ca-certificate found")
}
clientPrivKey, clientCert, err := certificates.NewClient(caKey, caCert, client)
if err != nil {
return nil, err
}
pemClientPrivKey, err := pem.EncodeKey(clientPrivKey)
if err != nil {
return nil, err
}
pemClientCert, err := pem.EncodeCertificate(clientCert)
if err != nil {
return nil, err
}
pemCaCert, err := pem.EncodeCertificate(caCert)
if err != nil {
return nil, err
}
clientSecretData := map[string]string{
caCertKey: string(pemCaCert),
clientPrivKeyKey: string(pemClientPrivKey),
clientCertKey: string(pemClientCert),
}
queryClientSecret, err := secret.AdaptFuncToEnsure(namespace, labels.AsSelectable(nameLabels), clientSecretData)
if err != nil {
return nil, err
}
queriers = append(queriers, operator.ResourceQueryToZitadelQuery(queryClientSecret))
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
}
}, func(client string) operator.DestroyFunc {
clientSecret := clientSecretPrefix + client
destroy, err := secret.AdaptFuncToDestroy(namespace, clientSecret)
if err != nil {
// surface the construction error instead of silently returning a nil DestroyFunc
return func(k8sClient kubernetes.ClientInt) error { return err }
}
return operator.ResourceDestroyToZitadelDestroy(destroy)
},
nil
}


@@ -0,0 +1,169 @@
package client
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"testing"
)
const (
caPem = `-----BEGIN CERTIFICATE-----
MIIE9TCCAt2gAwIBAgICB+MwDQYJKoZIhvcNAQELBQAwKzESMBAGA1UEChMJQ29j
a3JvYWNoMRUwEwYDVQQDEwxDb2Nrcm9hY2ggQ0EwHhcNMjAxMjAxMTAyMjA0WhcN
MzAxMjAxMTAyMjA0WjArMRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENv
Y2tyb2FjaCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOja5IXJ
GUY9sFgyvWkav+O74gzcv8y69uJzSOx0piOP+sfpZWVeGEjqO4JgdcS5NPMrT5Tb
aJ52CjiOdlHVTyR87i5JmOIvA2qA4dWQmSbX7AQ8r9ptYDe9xMn+qFegcbr4YxCz
K9mmbDZUhlLO7cz3QV6nvRxGFWbzffo8BXZnOUCAOyOHrbnPpLumnfZlL5BckdtY
pS7jAlUpKSMBTK4AHcmrouFsNKHqlUopYXeJFdg9g1F0DuCnVP9x7+XcUW8dAVut
Q7Jswy+++GAXs6mPVsYFLXUSYNyW+Bfl/jKwx0XTQx/6iyNpK0XtAzjBZFjXwmot
0mODkqnfE3BB4lXxZ5knomAQEGSScUhCUb9upbF4uJJF27xr/kIkwtWxMGpCXds0
IxI+wNRCenhfFZEIQCzri0zn6WdN8b/gbv1BErNcccYwolPUv1oUgYbzowbQ5O2D
aLQPqO1VAZiZHLxb787bRywpCl33VZ1ptMHi2ogKjcsh4DQ9SsRj+rU/Tk5lyk7G
FHteyHcq12TGpz9/CQYMacl8yeRRHfNO3Rq0jFTYeD4+ZdVBPKeuTHyXGzy2T157
pgqMFzwqxlNYPzpuz7xsZBExJzCtcomB8fMlsCJnxV/kuMTTsWsPrRc7hsqzBCq3
plfYT8a7EBCVmJmDu+6mMVh+A9M2zZCqtV57AgMBAAGjIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQDRmkBYDk/F
lYTgMaL7yhRAowDR8mtOl06Oba/MCDDsmBvfOVIqY9Sa9sJtb78Uvecv9WAE4qpb
PNSaIvNmujmGQrBxtGwQdeHZvtXQ1lKAGBM+ukfz/NFU+6rwOdTuLgS5jTaiM4r1
uMoIG6S5z/brWGHAw8Qv2Fq05dSLnevmPWK903V/RnwmCKT9as5Qpfkaie2PNory
euxVGeGolxzgbSps3ZJlDSSymQBl10iJYYdGIsgcw5FHcCdpS4AutdpQbIFuCGk2
CHTcTTa4KMaVfRm/gKm1gh8UnuVDLBKQS8+1CHFYUnih8ozBNUyBo5r5L4BHJK/n
f0gnrYqaxtqPAyHkfo0PRc50HlAQRS/gW8ESv2PQmcSWlDggEzXt5MqFIYOfEXWE
gtC7Ct1P7gzIRxolYVsNSgBR62sJM1PUa39E5v0QJsmuM51txHuw/PGqkkBlEwx9
4u3FFXD9Pslg/g1l50Ek5O3/TWMta4Q+NagDDaOdkb4LRQr522ySgGIinBw00Xr+
Ics2L42rP1LQyaE7+BljDEwmGaT7mrl8QGZR9uv/9mbtk2+q7wAOQzcMSFFACfwx
KdKENmK0GpQZ6HaeDZO7wxGiAyFKhTNpd39XNOEhaKpWLSUxnZTLGlMIzvSyMGCj
PV3xm+Xg1Xvki7f/qC1djysMcaokoJzNdg==
-----END CERTIFICATE-----
`
caPrivPem = `-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA2Owc3IOljq3hkKO9q6jT1kFZp/+Y9dv/lTIbsMhV7gPrn2WS
Os2KQphcrB9DvGoZAxUh1aD8MUO7USIMhFodQEq7vfycDT16jrTnc1fSDDcfk3Ra
DvqZGcBkj0lc6w5LK3FNl2y6rA/26teGbyNaVXJfW01dKKy4E2or2+1+tUamyjiU
cvPzNWEoPPJT5HcadLTxZr/SKDvDXFyej4nKT9+j1pKmaqQrIrL4KXKq75LhU+kH
L4TG+kJ35isiv5OTpd2jG6ssz0i+ZEtX4hjIM6eCnZaiFT//33nSL3zl2LUjmyou
ks2FDuX9m90mg8UtcrpA/eVlwyg8nJ/d7/Yn3ZjuHVOgzE8YuQXauJ16HuLcFgEW
WQg1uwhOhrc5b0YJndZbJ2Re4qfaBoC5fODRXoUPvqG9k9kNp98xtclNCIyW8EpA
Su8QmK42ksOJox+OQamHzatrGppgIK77TY3ZFcBHETHClabgsfjv/1GNXXAexJI4
Cjntnou+yoc+LUj87WJfD3ERGgm0nDfnE7uZ7kccO5Lj/oajeO+Q+QJaLZ4I2jz+
a05k16naGGT29AnM+iwBqqoTODr4Z7905niZc6+fEOPml1V7wSuJs2eE4jOa3ixX
5tnruw74rN82Zfrkg6kWPOEfBBXzSotRiHv+BAV2tFbnC55ItnHn1ZE68V0CAwEA
AQKCAgEAssWsZ4PLXpIo8qYve5hQtSP4er7oVb8wnMnGDmSchOMQPbZc1D9uscGV
pnjBvzcFVAgHcWMSVJuIda4E+NK3hrPQlBvqk/LV3WRz1xhKUKzhRgm+6tdWc+We
OoRwonuOMchX9PKzyXgCu7pR3agaG499zOYuX4Yw0jdO3BqXsVf/v2rv1Oj9yEFB
AzGHOCN8VzCEPnTaAzR1pdnjB1K8vCUIhp8nrX2M2zT51lbdT0ISl6/VrzDTN46t
97AXHCHIrgrCENx6un4uAsQhMoHQBNoJiEyLWc371zYzpdVeK8HlDUyvQ2dDQGsF
Hn4c7r4C3aloRJbYzgSMJ1yNcOTCJpciQsq1VmCQFOHfbum2ejquXJ7BbeRGeHdM
145epckL+hbECTCpSs0Z5t90NdfoJspvr+3sOEt6h3DMUGjvobrf/s3KiRY5hHdc
x86Htx3QgWNCG6a+ga7h7s3t+4ZtoPPWn+vlAoxD/2eCzsDgE/XLDmC2T4yS8UOb
LIb4UN7wl2sNM8zhm4BfoiKfjGu4dPZIlsPP+ZKRby1O4ogHHoPREBTH1VSEplVM
fA/KSITV+rUfO3T/qXIFZ4/Wa5YZoULiMCOJOWNgXQzWvTf7Qr31LhhfXd+uIw30
LDtjdkpT43zlKxRosQFLiV7q3fVbvKPVQfxzBz7M1Gl74IllpmECggEBAOnAimKl
w/9mcMkEd2L2ZzDI+57W8DU6grUUILZfjJG+uvOiMmIA5M1wzsVHil3ihZthrJ5A
UUAKG97JgwjpIAliAzgtPZY2AK/bJz7Hht1EwewrV+RyHM+dMv6em2OA+XUjFSPX
VsecFDaUQebkypw236KdA4jgRG6hr4xObdXgTN6RFCv1mI1EijZ/YbKgPgTJaPI4
b2N5QokYFygUCwRxKIt7/Z4hQs9LbdW680NcXtPRPnS1SmwYJbi7wTX+o5f6nfNV
YvojborjXwNrZe0p+FfaEuD44wf6kNlRGfcKXoaAncXV/M5/oXf1dktKP630eq9g
0MAKFYJ6MAUheakCggEBAO2Rgfy/tEydBuP9BIfC84BXk8EwnRJ0ZfmXv77HFj3F
VT5QNU97I5gZJOFkL7OO8bL0/WljlqeBEht0DmHNmOt5Pyew4LRLpfeN6rClZgRN
V4wqKXjoZAIYa9khQcwwFNER1RdI+PkuusJtrvY6CbwwG9LbBq2NR4C1YSgaQnhV
NqdXK5dwrYEky6lI31sDD4BYeiJVKlkkNCQAVOC+04Mrsa9F0NG7TKXzji5hU8l5
x8squjvJ6vmobhmsRTL1LMpafUrt5pHL9jcWIZYxJJo9mB0zxJKcsLI8IOg2QPoj
tQ395FZ2YtjNzZa1CYeUOUiaQu+uvztfE36AdW/vUpUCggEAMV7bW66LURw/4hUx
ahOFBAbPLmNTZMqw5LIVnq9br0TLk73ESnLJ4KJc6coMbXv0oDbnEJ2hC5eW/10s
cetbOuAasfjMMzfAuWPeTCI0V/O3ybv12mhHsYoQRTsWstOA3L7GLkXDLHHIyyZR
LQVRzeDBJ0Vmg7hqe7tmqom+JRg05CVcT1SWHfBGCPCqn+G8d6Jaqh5FWIs6BF60
NWDWWt/TonJTxNxdkg7qaeQMkUOnO7HMMTZBO8d14Ci3zEG2J9llFwoH17E4Hdmc
Lcq3QnpE27lRl3a57Ot9QIkipMzp3hq4OBrURIEsh3uuuoQ6IvGqH/Sg4o6+sEpC
bjL90QKCAQBDc/0kdooK9sruEPkoUwIwfq1FPThb9RC/PYcD9CMshssdVkjMuHny
xbDjDj89DGk0FrudINm11b/+a4Vp36Z7tYFpE5+5kYEeOP1aCpxcvFkPQyljWxiK
P8TfccHs5/oBIr8OTXnjxpDgg6QZ5YC+HirIQ8gxntuef+GGMW6OHCPYf7ew2B1r
fbcV6csBXG0aVATZmrTbepwTXMS8y3Hi3JUm3vvbkQLCW9US9i+EFT/VP9yA/WPq
Xxhj0bYUMej1y5unmsTMwMy392Cx9GIgKTz3jatStYq2ELyHMmBgpaLSxjP/GL4Y
MNce42hBRqS9KI+43jUN9oDiejbeAWXBAoIBACZKnS6EppNDSdos0f32J5TxEUxv
lF9AuVXEAPDR/m3+PlSRzrlf7uowdHTfsemHMSduL8weLNhduprwz4TmW9Fo6fSF
UePLNbXcMX3omAk+AKKOiexLG0fCXGW2zr4nHZJzTbK2+La3yLeUcoPu1puNpLiq
LVj2bH3zKWVRA9/ovuN6V5w18ojjdqOw4bw5qXcdZhoWLxI8Q9Oqua24f/dnRpuI
I8mRtPQ3+vuOKbTT+/80eAUpSfEKwAg1Mjgury9q1/B4Ib6hAGzpJuXxG7xQjnsJ
EFcN1kvdg5WGK41+fYMdexPaLamjhDGN0e1vxJfAukWIAsBMwp8wfEWZvzA=
-----END RSA PRIVATE KEY-----
`
)
func TestClient_Adapt1(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
user := "test"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
k8sClient.EXPECT().ApplySecret(gomock.Any())
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
createUser, _, err := AdaptFunc(monitor, namespace, componentLabels)
assert.NoError(t, err)
query := createUser(user)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestClient_Adapt2(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs2"
user := "test2"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
k8sClient.EXPECT().ApplySecret(gomock.Any())
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
createUser, _, err := AdaptFunc(monitor, namespace, componentLabels)
assert.NoError(t, err)
query := createUser(user)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,32 @@
package client
import (
"strings"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/kubernetes"
)
func QueryCertificates(
namespace string,
selector *labels.Selector,
k8sClient kubernetes.ClientInt,
) (
[]string,
error,
) {
list, err := k8sClient.ListSecrets(namespace, labels.MustK8sMap(selector))
if err != nil {
return nil, err
}
certs := []string{}
for _, secret := range list.Items {
if strings.HasPrefix(secret.Name, clientSecretPrefix) {
certs = append(certs, strings.TrimPrefix(secret.Name, clientSecretPrefix))
}
}
return certs, nil
}


@@ -0,0 +1,95 @@
package client
import (
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestClient_Query0(t *testing.T) {
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "testComponent")
clientLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ListSecrets(namespace, clientLabels).Times(1).Return(secretList, nil)
users, err := QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
assert.NoError(t, err)
assert.Equal(t, users, []string{})
}
func TestClient_Query(t *testing.T) {
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "testComponent")
clientLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{
Name: clientSecretPrefix + "test",
},
Data: map[string][]byte{},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, clientLabels).Times(1).Return(secretList, nil)
users, err := QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
assert.NoError(t, err)
assert.Contains(t, users, "test")
}
func TestClient_Query2(t *testing.T) {
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "testComponent")
clientLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{
Name: clientSecretPrefix + "test1",
},
Data: map[string][]byte{},
Type: "Opaque",
}, {
ObjectMeta: metav1.ObjectMeta{
Name: clientSecretPrefix + "test2",
},
Data: map[string][]byte{},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, clientLabels).Times(1).Return(secretList, nil)
users, err := QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
assert.NoError(t, err)
assert.Equal(t, users, []string{"test1", "test2"})
}


@@ -0,0 +1,10 @@
package certificate
import (
"crypto/rsa"
)
type Current struct {
CertificateKey *rsa.PrivateKey
Certificate []byte
}


@@ -0,0 +1,150 @@
package node
import (
"crypto/rsa"
"errors"
"github.com/caos/zitadel/operator"
"reflect"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/certificates"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
)
const (
caCertKey = "ca.crt"
caPrivKeyKey = "ca.key"
nodeCertKey = "node.crt"
nodePrivKeyKey = "node.key"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
nameLabels *labels.Name,
clusterDns string,
generateIfNotExists bool,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
caPrivKey := new(rsa.PrivateKey)
caCert := make([]byte, 0)
nodeSecretSelector := labels.MustK8sMap(labels.DeriveNameSelector(nameLabels, false))
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
queriers := make([]operator.QueryFunc, 0)
currentDB, err := core.ParseQueriedForDatabase(queried)
if err != nil {
return nil, err
}
allNodeSecrets, err := k8sClient.ListSecrets(namespace, nodeSecretSelector)
if err != nil {
return nil, err
}
if len(allNodeSecrets.Items) == 0 {
if !generateIfNotExists {
return nil, errors.New("node secret not found")
}
emptyCert := true
emptyKey := true
if currentCaCert := currentDB.GetCertificate(); currentCaCert != nil && len(currentCaCert) != 0 {
emptyCert = false
caCert = currentCaCert
}
if currentCaCertKey := currentDB.GetCertificateKey(); currentCaCertKey != nil && !reflect.DeepEqual(currentCaCertKey, &rsa.PrivateKey{}) {
emptyKey = false
caPrivKey = currentCaCertKey
}
if emptyCert || emptyKey {
caPrivKeyInternal, caCertInternal, err := certificates.NewCA()
if err != nil {
return nil, err
}
caPrivKey = caPrivKeyInternal
caCert = caCertInternal
nodePrivKey, nodeCert, err := certificates.NewNode(caPrivKey, caCert, namespace, clusterDns)
if err != nil {
return nil, err
}
pemNodePrivKey, err := pem.EncodeKey(nodePrivKey)
if err != nil {
return nil, err
}
pemCaPrivKey, err := pem.EncodeKey(caPrivKey)
if err != nil {
return nil, err
}
pemCaCert, err := pem.EncodeCertificate(caCert)
if err != nil {
return nil, err
}
pemNodeCert, err := pem.EncodeCertificate(nodeCert)
if err != nil {
return nil, err
}
nodeSecretData := map[string]string{
caPrivKeyKey: string(pemCaPrivKey),
caCertKey: string(pemCaCert),
nodePrivKeyKey: string(pemNodePrivKey),
nodeCertKey: string(pemNodeCert),
}
queryNodeSecret, err := secret.AdaptFuncToEnsure(namespace, labels.AsSelectable(nameLabels), nodeSecretData)
if err != nil {
return nil, err
}
queriers = append(queriers, operator.ResourceQueryToZitadelQuery(queryNodeSecret))
}
} else {
key, err := pem.DecodeKey(allNodeSecrets.Items[0].Data[caPrivKeyKey])
if err != nil {
return nil, err
}
caPrivKey = key
cert, err := pem.DecodeCertificate(allNodeSecrets.Items[0].Data[caCertKey])
if err != nil {
return nil, err
}
caCert = cert
}
currentDB.SetCertificate(caCert)
currentDB.SetCertificateKey(caPrivKey)
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
}, func(k8sClient kubernetes.ClientInt) error {
allNodeSecrets, err := k8sClient.ListSecrets(namespace, nodeSecretSelector)
if err != nil {
return err
}
for _, deleteSecret := range allNodeSecrets.Items {
destroyer, err := secret.AdaptFuncToDestroy(namespace, deleteSecret.Name)
if err != nil {
return err
}
if err := destroyer(k8sClient); err != nil {
return err
}
}
return nil
}, nil
}


@@ -0,0 +1,245 @@
package node
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
const (
caPem = `-----BEGIN CERTIFICATE-----
MIIE9TCCAt2gAwIBAgICB+MwDQYJKoZIhvcNAQELBQAwKzESMBAGA1UEChMJQ29j
a3JvYWNoMRUwEwYDVQQDEwxDb2Nrcm9hY2ggQ0EwHhcNMjAxMjAxMTAyMjA0WhcN
MzAxMjAxMTAyMjA0WjArMRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENv
Y2tyb2FjaCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOja5IXJ
GUY9sFgyvWkav+O74gzcv8y69uJzSOx0piOP+sfpZWVeGEjqO4JgdcS5NPMrT5Tb
aJ52CjiOdlHVTyR87i5JmOIvA2qA4dWQmSbX7AQ8r9ptYDe9xMn+qFegcbr4YxCz
K9mmbDZUhlLO7cz3QV6nvRxGFWbzffo8BXZnOUCAOyOHrbnPpLumnfZlL5BckdtY
pS7jAlUpKSMBTK4AHcmrouFsNKHqlUopYXeJFdg9g1F0DuCnVP9x7+XcUW8dAVut
Q7Jswy+++GAXs6mPVsYFLXUSYNyW+Bfl/jKwx0XTQx/6iyNpK0XtAzjBZFjXwmot
0mODkqnfE3BB4lXxZ5knomAQEGSScUhCUb9upbF4uJJF27xr/kIkwtWxMGpCXds0
IxI+wNRCenhfFZEIQCzri0zn6WdN8b/gbv1BErNcccYwolPUv1oUgYbzowbQ5O2D
aLQPqO1VAZiZHLxb787bRywpCl33VZ1ptMHi2ogKjcsh4DQ9SsRj+rU/Tk5lyk7G
FHteyHcq12TGpz9/CQYMacl8yeRRHfNO3Rq0jFTYeD4+ZdVBPKeuTHyXGzy2T157
pgqMFzwqxlNYPzpuz7xsZBExJzCtcomB8fMlsCJnxV/kuMTTsWsPrRc7hsqzBCq3
plfYT8a7EBCVmJmDu+6mMVh+A9M2zZCqtV57AgMBAAGjIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQDRmkBYDk/F
lYTgMaL7yhRAowDR8mtOl06Oba/MCDDsmBvfOVIqY9Sa9sJtb78Uvecv9WAE4qpb
PNSaIvNmujmGQrBxtGwQdeHZvtXQ1lKAGBM+ukfz/NFU+6rwOdTuLgS5jTaiM4r1
uMoIG6S5z/brWGHAw8Qv2Fq05dSLnevmPWK903V/RnwmCKT9as5Qpfkaie2PNory
euxVGeGolxzgbSps3ZJlDSSymQBl10iJYYdGIsgcw5FHcCdpS4AutdpQbIFuCGk2
CHTcTTa4KMaVfRm/gKm1gh8UnuVDLBKQS8+1CHFYUnih8ozBNUyBo5r5L4BHJK/n
f0gnrYqaxtqPAyHkfo0PRc50HlAQRS/gW8ESv2PQmcSWlDggEzXt5MqFIYOfEXWE
gtC7Ct1P7gzIRxolYVsNSgBR62sJM1PUa39E5v0QJsmuM51txHuw/PGqkkBlEwx9
4u3FFXD9Pslg/g1l50Ek5O3/TWMta4Q+NagDDaOdkb4LRQr522ySgGIinBw00Xr+
Ics2L42rP1LQyaE7+BljDEwmGaT7mrl8QGZR9uv/9mbtk2+q7wAOQzcMSFFACfwx
KdKENmK0GpQZ6HaeDZO7wxGiAyFKhTNpd39XNOEhaKpWLSUxnZTLGlMIzvSyMGCj
PV3xm+Xg1Xvki7f/qC1djysMcaokoJzNdg==
-----END CERTIFICATE-----
`
caPrivPem = `-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA2Owc3IOljq3hkKO9q6jT1kFZp/+Y9dv/lTIbsMhV7gPrn2WS
Os2KQphcrB9DvGoZAxUh1aD8MUO7USIMhFodQEq7vfycDT16jrTnc1fSDDcfk3Ra
DvqZGcBkj0lc6w5LK3FNl2y6rA/26teGbyNaVXJfW01dKKy4E2or2+1+tUamyjiU
cvPzNWEoPPJT5HcadLTxZr/SKDvDXFyej4nKT9+j1pKmaqQrIrL4KXKq75LhU+kH
L4TG+kJ35isiv5OTpd2jG6ssz0i+ZEtX4hjIM6eCnZaiFT//33nSL3zl2LUjmyou
ks2FDuX9m90mg8UtcrpA/eVlwyg8nJ/d7/Yn3ZjuHVOgzE8YuQXauJ16HuLcFgEW
WQg1uwhOhrc5b0YJndZbJ2Re4qfaBoC5fODRXoUPvqG9k9kNp98xtclNCIyW8EpA
Su8QmK42ksOJox+OQamHzatrGppgIK77TY3ZFcBHETHClabgsfjv/1GNXXAexJI4
Cjntnou+yoc+LUj87WJfD3ERGgm0nDfnE7uZ7kccO5Lj/oajeO+Q+QJaLZ4I2jz+
a05k16naGGT29AnM+iwBqqoTODr4Z7905niZc6+fEOPml1V7wSuJs2eE4jOa3ixX
5tnruw74rN82Zfrkg6kWPOEfBBXzSotRiHv+BAV2tFbnC55ItnHn1ZE68V0CAwEA
AQKCAgEAssWsZ4PLXpIo8qYve5hQtSP4er7oVb8wnMnGDmSchOMQPbZc1D9uscGV
pnjBvzcFVAgHcWMSVJuIda4E+NK3hrPQlBvqk/LV3WRz1xhKUKzhRgm+6tdWc+We
OoRwonuOMchX9PKzyXgCu7pR3agaG499zOYuX4Yw0jdO3BqXsVf/v2rv1Oj9yEFB
AzGHOCN8VzCEPnTaAzR1pdnjB1K8vCUIhp8nrX2M2zT51lbdT0ISl6/VrzDTN46t
97AXHCHIrgrCENx6un4uAsQhMoHQBNoJiEyLWc371zYzpdVeK8HlDUyvQ2dDQGsF
Hn4c7r4C3aloRJbYzgSMJ1yNcOTCJpciQsq1VmCQFOHfbum2ejquXJ7BbeRGeHdM
145epckL+hbECTCpSs0Z5t90NdfoJspvr+3sOEt6h3DMUGjvobrf/s3KiRY5hHdc
x86Htx3QgWNCG6a+ga7h7s3t+4ZtoPPWn+vlAoxD/2eCzsDgE/XLDmC2T4yS8UOb
LIb4UN7wl2sNM8zhm4BfoiKfjGu4dPZIlsPP+ZKRby1O4ogHHoPREBTH1VSEplVM
fA/KSITV+rUfO3T/qXIFZ4/Wa5YZoULiMCOJOWNgXQzWvTf7Qr31LhhfXd+uIw30
LDtjdkpT43zlKxRosQFLiV7q3fVbvKPVQfxzBz7M1Gl74IllpmECggEBAOnAimKl
w/9mcMkEd2L2ZzDI+57W8DU6grUUILZfjJG+uvOiMmIA5M1wzsVHil3ihZthrJ5A
UUAKG97JgwjpIAliAzgtPZY2AK/bJz7Hht1EwewrV+RyHM+dMv6em2OA+XUjFSPX
VsecFDaUQebkypw236KdA4jgRG6hr4xObdXgTN6RFCv1mI1EijZ/YbKgPgTJaPI4
b2N5QokYFygUCwRxKIt7/Z4hQs9LbdW680NcXtPRPnS1SmwYJbi7wTX+o5f6nfNV
YvojborjXwNrZe0p+FfaEuD44wf6kNlRGfcKXoaAncXV/M5/oXf1dktKP630eq9g
0MAKFYJ6MAUheakCggEBAO2Rgfy/tEydBuP9BIfC84BXk8EwnRJ0ZfmXv77HFj3F
VT5QNU97I5gZJOFkL7OO8bL0/WljlqeBEht0DmHNmOt5Pyew4LRLpfeN6rClZgRN
V4wqKXjoZAIYa9khQcwwFNER1RdI+PkuusJtrvY6CbwwG9LbBq2NR4C1YSgaQnhV
NqdXK5dwrYEky6lI31sDD4BYeiJVKlkkNCQAVOC+04Mrsa9F0NG7TKXzji5hU8l5
x8squjvJ6vmobhmsRTL1LMpafUrt5pHL9jcWIZYxJJo9mB0zxJKcsLI8IOg2QPoj
tQ395FZ2YtjNzZa1CYeUOUiaQu+uvztfE36AdW/vUpUCggEAMV7bW66LURw/4hUx
ahOFBAbPLmNTZMqw5LIVnq9br0TLk73ESnLJ4KJc6coMbXv0oDbnEJ2hC5eW/10s
cetbOuAasfjMMzfAuWPeTCI0V/O3ybv12mhHsYoQRTsWstOA3L7GLkXDLHHIyyZR
LQVRzeDBJ0Vmg7hqe7tmqom+JRg05CVcT1SWHfBGCPCqn+G8d6Jaqh5FWIs6BF60
NWDWWt/TonJTxNxdkg7qaeQMkUOnO7HMMTZBO8d14Ci3zEG2J9llFwoH17E4Hdmc
Lcq3QnpE27lRl3a57Ot9QIkipMzp3hq4OBrURIEsh3uuuoQ6IvGqH/Sg4o6+sEpC
bjL90QKCAQBDc/0kdooK9sruEPkoUwIwfq1FPThb9RC/PYcD9CMshssdVkjMuHny
xbDjDj89DGk0FrudINm11b/+a4Vp36Z7tYFpE5+5kYEeOP1aCpxcvFkPQyljWxiK
P8TfccHs5/oBIr8OTXnjxpDgg6QZ5YC+HirIQ8gxntuef+GGMW6OHCPYf7ew2B1r
fbcV6csBXG0aVATZmrTbepwTXMS8y3Hi3JUm3vvbkQLCW9US9i+EFT/VP9yA/WPq
Xxhj0bYUMej1y5unmsTMwMy392Cx9GIgKTz3jatStYq2ELyHMmBgpaLSxjP/GL4Y
MNce42hBRqS9KI+43jUN9oDiejbeAWXBAoIBACZKnS6EppNDSdos0f32J5TxEUxv
lF9AuVXEAPDR/m3+PlSRzrlf7uowdHTfsemHMSduL8weLNhduprwz4TmW9Fo6fSF
UePLNbXcMX3omAk+AKKOiexLG0fCXGW2zr4nHZJzTbK2+La3yLeUcoPu1puNpLiq
LVj2bH3zKWVRA9/ovuN6V5w18ojjdqOw4bw5qXcdZhoWLxI8Q9Oqua24f/dnRpuI
I8mRtPQ3+vuOKbTT+/80eAUpSfEKwAg1Mjgury9q1/B4Ib6hAGzpJuXxG7xQjnsJ
EFcN1kvdg5WGK41+fYMdexPaLamjhDGN0e1vxJfAukWIAsBMwp8wfEWZvzA=
-----END RSA PRIVATE KEY-----
`
)
func TestNode_AdaptWithCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb"), "testNode")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "testNode",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
dbCurrent.EXPECT().SetCertificate(ca).Times(1)
dbCurrent.EXPECT().SetCertificateKey(caPriv).Times(1)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, err := AdaptFunc(monitor, namespace, nameLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptWithoutCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb"), "testNode")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "testNode",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1).Return(nil)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, err := AdaptFunc(monitor, namespace, nameLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptAlreadyExisting(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb"), "testNode")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "testNode",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{},
Data: map[string][]byte{
caCertKey: []byte(caPem),
caPrivKeyKey: []byte(caPrivPem),
},
Type: "Opaque",
}},
}
caCert, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPrivKey, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
dbCurrent.EXPECT().SetCertificate(caCert).Times(1)
dbCurrent.EXPECT().SetCertificateKey(caPrivKey).Times(1)
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, err := AdaptFunc(monitor, namespace, nameLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,48 @@
package pem
import (
"bytes"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
)
func EncodeCertificate(data []byte) ([]byte, error) {
certPem := new(bytes.Buffer)
if err := pem.Encode(certPem, &pem.Block{
Type: "CERTIFICATE",
Bytes: data,
}); err != nil {
return nil, err
}
return certPem.Bytes(), nil
}
func EncodeKey(key *rsa.PrivateKey) ([]byte, error) {
keyPem := new(bytes.Buffer)
if err := pem.Encode(keyPem, &pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: x509.MarshalPKCS1PrivateKey(key),
}); err != nil {
return nil, err
}
return keyPem.Bytes(), nil
}
func DecodeKey(data []byte) (*rsa.PrivateKey, error) {
block, _ := pem.Decode(data)
if block == nil || block.Type != "RSA PRIVATE KEY" {
return nil, errors.New("failed to decode PEM block containing RSA private key")
}
return x509.ParsePKCS1PrivateKey(block.Bytes)
}
func DecodeCertificate(data []byte) ([]byte, error) {
block, _ := pem.Decode(data)
if block == nil || block.Type != "CERTIFICATE" {
return nil, errors.New("failed to decode PEM block containing certificate")
}
return block.Bytes, nil
}


@@ -0,0 +1,73 @@
package managed
import (
"crypto/rsa"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate"
)
type Current struct {
Common *tree.Common `yaml:",inline"`
Current *CurrentDB
}
type CurrentDB struct {
URL string
Port string
ReadyFunc operator.EnsureFunc
CA *certificate.Current
AddUserFunc func(user string) (operator.QueryFunc, error)
DeleteUserFunc func(user string) (operator.DestroyFunc, error)
ListUsersFunc func(k8sClient kubernetes.ClientInt) ([]string, error)
ListDatabasesFunc func(k8sClient kubernetes.ClientInt) ([]string, error)
}
func (c *Current) GetURL() string {
return c.Current.URL
}
func (c *Current) GetPort() string {
return c.Current.Port
}
func (c *Current) GetReadyQuery() operator.EnsureFunc {
return c.Current.ReadyFunc
}
func (c *Current) GetCA() *certificate.Current {
return c.Current.CA
}
func (c *Current) GetCertificateKey() *rsa.PrivateKey {
return c.Current.CA.CertificateKey
}
func (c *Current) SetCertificateKey(key *rsa.PrivateKey) {
c.Current.CA.CertificateKey = key
}
func (c *Current) GetCertificate() []byte {
return c.Current.CA.Certificate
}
func (c *Current) SetCertificate(cert []byte) {
c.Current.CA.Certificate = cert
}
func (c *Current) GetListDatabasesFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return c.Current.ListDatabasesFunc
}
func (c *Current) GetListUsersFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return c.Current.ListUsersFunc
}
func (c *Current) GetAddUserFunc() func(user string) (operator.QueryFunc, error) {
return c.Current.AddUserFunc
}
func (c *Current) GetDeleteUserFunc() func(user string) (operator.DestroyFunc, error) {
return c.Current.DeleteUserFunc
}


@@ -0,0 +1,50 @@
package database
import (
"fmt"
"github.com/caos/zitadel/operator"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
deployName string,
containerName string,
certsDir string,
userName string,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
cmdSql := fmt.Sprintf("cockroach sql --certs-dir=%s", certsDir)
createSql := fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", userName)
deleteSql := fmt.Sprintf("DROP DATABASE IF EXISTS %s", userName)
ensureDatabase := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, createSql))
}
destroyDatabase := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, deleteSql))
}
queriers := []operator.QueryFunc{
operator.EnsureFuncToQueryFunc(ensureDatabase),
}
destroyers := []operator.DestroyFunc{
destroyDatabase,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}


@@ -0,0 +1,39 @@
package managed
import (
"github.com/caos/orbos/pkg/kubernetes/k8s"
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec Spec
}
type Spec struct {
Verbose bool
Force bool `yaml:"force,omitempty"`
ReplicaCount int `yaml:"replicaCount,omitempty"`
StorageCapacity string `yaml:"storageCapacity,omitempty"`
StorageClass string `yaml:"storageClass,omitempty"`
NodeSelector map[string]string `yaml:"nodeSelector,omitempty"`
Tolerations []corev1.Toleration `yaml:"tolerations,omitempty"`
ClusterDns string `yaml:"clusterDNS,omitempty"`
Backups map[string]*tree.Tree `yaml:"backups,omitempty"`
Resources *k8s.Resources `yaml:"resources,omitempty"`
}
func parseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{
Common: desiredTree.Common,
Spec: Spec{},
}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
return desiredKind, nil
}


@@ -0,0 +1,36 @@
package managed
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups"
"github.com/pkg/errors"
)
func BackupList() func(monitor mntr.Monitor, desired *tree.Tree) ([]string, error) {
return func(monitor mntr.Monitor, desired *tree.Tree) ([]string, error) {
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
monitor.Verbose()
}
backuplists := make([]string, 0)
if desiredKind.Spec.Backups != nil {
for name, def := range desiredKind.Spec.Backups {
backuplist, err := backups.GetBackupList(monitor, name, def)
if err != nil {
return nil, err
}
for _, backup := range backuplist {
backuplists = append(backuplists, name+"."+backup)
}
}
}
return backuplists, nil
}
}


@@ -0,0 +1,106 @@
package rbac
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/clusterrole"
"github.com/caos/orbos/pkg/kubernetes/resources/clusterrolebinding"
"github.com/caos/orbos/pkg/kubernetes/resources/role"
"github.com/caos/orbos/pkg/kubernetes/resources/rolebinding"
"github.com/caos/orbos/pkg/kubernetes/resources/serviceaccount"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
nameLabels *labels.Name,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("component", "rbac")
serviceAccountLabels := nameLabels
roleLabels := nameLabels
clusterRoleLabels := nameLabels
destroySA, err := serviceaccount.AdaptFuncToDestroy(namespace, serviceAccountLabels.Name())
if err != nil {
return nil, nil, err
}
destroyR, err := role.AdaptFuncToDestroy(namespace, roleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyCR, err := clusterrole.AdaptFuncToDestroy(clusterRoleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyRB, err := rolebinding.AdaptFuncToDestroy(namespace, roleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyCRB, err := clusterrolebinding.AdaptFuncToDestroy(roleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyR),
operator.ResourceDestroyToZitadelDestroy(destroyCR),
operator.ResourceDestroyToZitadelDestroy(destroyRB),
operator.ResourceDestroyToZitadelDestroy(destroyCRB),
operator.ResourceDestroyToZitadelDestroy(destroySA),
}
querySA, err := serviceaccount.AdaptFuncToEnsure(namespace, serviceAccountLabels)
if err != nil {
return nil, nil, err
}
queryR, err := role.AdaptFuncToEnsure(namespace, roleLabels, []string{""}, []string{"secrets"}, []string{"create", "get"})
if err != nil {
return nil, nil, err
}
queryCR, err := clusterrole.AdaptFuncToEnsure(clusterRoleLabels, []string{"certificates.k8s.io"}, []string{"certificatesigningrequests"}, []string{"create", "get", "watch"})
if err != nil {
return nil, nil, err
}
subjects := []rolebinding.Subject{{Kind: "ServiceAccount", Name: serviceAccountLabels.Name(), Namespace: namespace}}
queryRB, err := rolebinding.AdaptFuncToEnsure(namespace, roleLabels, subjects, roleLabels.Name())
if err != nil {
return nil, nil, err
}
subjectsCRB := []clusterrolebinding.Subject{{Kind: "ServiceAccount", Name: serviceAccountLabels.Name(), Namespace: namespace}}
queryCRB, err := clusterrolebinding.AdaptFuncToEnsure(roleLabels, subjectsCRB, roleLabels.Name())
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
//serviceaccount
operator.ResourceQueryToZitadelQuery(querySA),
//rbac
operator.ResourceQueryToZitadelQuery(queryR),
operator.ResourceQueryToZitadelQuery(queryCR),
operator.ResourceQueryToZitadelQuery(queryRB),
operator.ResourceQueryToZitadelQuery(queryCRB),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}


@@ -0,0 +1,207 @@
package rbac
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestRbac_Adapt1(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
name := "testName"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "testComponent"), name)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyServiceAccount(&corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
}})
k8sClient.EXPECT().ApplyRole(&rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{"create", "get"},
},
},
})
k8sClient.EXPECT().ApplyClusterRole(&rbacv1.ClusterRole{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{"certificates.k8s.io"},
Resources: []string{"certificatesigningrequests"},
Verbs: []string{"create", "get", "watch"},
},
},
})
k8sClient.EXPECT().ApplyRoleBinding(&rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
Name: name,
Kind: "Role",
APIGroup: "rbac.authorization.k8s.io",
},
})
k8sClient.EXPECT().ApplyClusterRoleBinding(&rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Name: name,
Kind: "ClusterRole",
},
})
query, _, err := AdaptFunc(monitor, namespace, nameLabels)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestRbac_Adapt2(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
name := "testName2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "cockroachdb", "v0"), "testComponent2"), name)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyServiceAccount(&corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
}})
k8sClient.EXPECT().ApplyRole(&rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{"create", "get"},
},
},
})
k8sClient.EXPECT().ApplyClusterRole(&rbacv1.ClusterRole{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{"certificates.k8s.io"},
Resources: []string{"certificatesigningrequests"},
Verbs: []string{"create", "get", "watch"},
},
},
})
k8sClient.EXPECT().ApplyRoleBinding(&rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
Name: name,
Kind: "Role",
APIGroup: "rbac.authorization.k8s.io",
},
})
k8sClient.EXPECT().ApplyClusterRoleBinding(&rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Name: name,
Kind: "ClusterRole",
},
})
query, _, err := AdaptFunc(monitor, namespace, nameLabels)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,79 @@
package services
import (
"github.com/caos/zitadel/operator"
"strconv"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/service"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
publicServiceNameLabels *labels.Name,
privateServiceNameLabels *labels.Name,
cockroachSelector *labels.Selector,
cockroachPort int32,
cockroachHTTPPort int32,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("type", "services")
publicServiceSelectable := labels.AsSelectable(publicServiceNameLabels)
destroySPD, err := service.AdaptFuncToDestroy("default", publicServiceSelectable.Name())
if err != nil {
return nil, nil, err
}
destroySP, err := service.AdaptFuncToDestroy(namespace, publicServiceSelectable.Name())
if err != nil {
return nil, nil, err
}
destroyS, err := service.AdaptFuncToDestroy(namespace, privateServiceNameLabels.Name())
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroySPD),
operator.ResourceDestroyToZitadelDestroy(destroySP),
operator.ResourceDestroyToZitadelDestroy(destroyS),
}
ports := []service.Port{
{Port: 26257, TargetPort: strconv.Itoa(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: strconv.Itoa(int(cockroachHTTPPort)), Name: "http"},
}
querySPD, err := service.AdaptFuncToEnsure("default", publicServiceSelectable, ports, "", cockroachSelector, false, "", "")
if err != nil {
return nil, nil, err
}
querySP, err := service.AdaptFuncToEnsure(namespace, publicServiceSelectable, ports, "", cockroachSelector, false, "", "")
if err != nil {
return nil, nil, err
}
queryS, err := service.AdaptFuncToEnsure(namespace, privateServiceNameLabels, ports, "", cockroachSelector, true, "None", "")
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
operator.ResourceQueryToZitadelQuery(querySPD),
operator.ResourceQueryToZitadelQuery(querySP),
operator.ResourceQueryToZitadelQuery(queryS),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}


@@ -0,0 +1,218 @@
package services
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"testing"
)
func TestService_Adapt1(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "testComponent")
name := "testSvc"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(componentLabels, name)
publicName := "testPublic"
k8sPublicLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": publicName,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
publicNameLabels := labels.MustForName(componentLabels, publicName)
cdbName := "testCdbName"
k8sCdbLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": cdbName,
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
cdbNameLabels := labels.MustForName(componentLabels, cdbName)
cockroachPort := int32(25267)
cockroachHttpPort := int32(8080)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: namespace,
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: "default",
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: true,
ClusterIP: "None",
},
})
query, _, err := AdaptFunc(monitor, namespace, publicNameLabels, nameLabels, labels.DeriveNameSelector(cdbNameLabels, false), cockroachPort, cockroachHttpPort)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestService_Adapt2(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "cockroachdb", "v0"), "testComponent2")
name := "testSvc2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(componentLabels, name)
publicName := "testPublic2"
k8sPublicLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": publicName,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
publicNameLabels := labels.MustForName(componentLabels, publicName)
cdbName := "testCdbName2"
k8sCdbLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": cdbName,
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
cdbNameLabels := labels.MustForName(componentLabels, cdbName)
cockroachPort := int32(23)
cockroachHttpPort := int32(24)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: namespace,
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: "default",
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: true,
ClusterIP: "None",
},
})
query, _, err := AdaptFunc(monitor, namespace, publicNameLabels, nameLabels, labels.DeriveNameSelector(cdbNameLabels, false), cockroachPort, cockroachHttpPort)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,352 @@
package statefulset
import (
"fmt"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/helpers"
"k8s.io/apimachinery/pkg/util/intstr"
"sort"
"strings"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/k8s"
"github.com/caos/orbos/pkg/kubernetes/resources"
"github.com/caos/orbos/pkg/kubernetes/resources/statefulset"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
certPath = "/cockroach/cockroach-certs"
clientCertPath = "/cockroach/cockroach-client-certs"
datadirPath = "/cockroach/cockroach-data"
datadirInternal = "datadir"
certsInternal = "certs"
clientCertsInternal = "client-certs"
defaultMode = int32(256)
nodeSecret = "cockroachdb.node"
rootSecret = "cockroachdb.client.root"
)
type Affinity struct {
key string
value string
}
type Affinitys []metav1.LabelSelectorRequirement
func (a Affinitys) Len() int { return len(a) }
func (a Affinitys) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a Affinitys) Less(i, j int) bool { return a[i].Key < a[j].Key }
func AdaptFunc(
monitor mntr.Monitor,
sfsSelectable *labels.Selectable,
podSelector *labels.Selector,
force bool,
namespace string,
image string,
serviceAccountName string,
replicaCount int,
storageCapacity string,
dbPort int32,
httpPort int32,
storageClass string,
nodeSelector map[string]string,
tolerations []corev1.Toleration,
resourcesSFS *k8s.Resources,
) (
resources.QueryFunc,
resources.DestroyFunc,
operator.EnsureFunc,
operator.EnsureFunc,
func(k8sClient kubernetes.ClientInt) ([]string, error),
error,
) {
internalMonitor := monitor.WithField("component", "statefulset")
quantity, err := resource.ParseQuantity(storageCapacity)
if err != nil {
return nil, nil, nil, nil, nil, err
}
name := sfsSelectable.Name()
k8sSelectable := labels.MustK8sMap(sfsSelectable)
statefulsetDef := &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sSelectable,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: helpers.PointerInt32(int32(replicaCount)),
Selector: &metav1.LabelSelector{
MatchLabels: labels.MustK8sMap(podSelector),
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: k8sSelectable,
},
Spec: corev1.PodSpec{
NodeSelector: nodeSelector,
Tolerations: tolerations,
ServiceAccountName: serviceAccountName,
Affinity: getAffinity(k8sSelectable),
Containers: []corev1.Container{{
Name: name,
Image: image,
ImagePullPolicy: "IfNotPresent",
Ports: []corev1.ContainerPort{
{ContainerPort: dbPort, Name: "grpc"},
{ContainerPort: httpPort, Name: "http"},
},
LivenessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 30,
PeriodSeconds: 5,
},
ReadinessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health?ready=1",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 10,
PeriodSeconds: 5,
FailureThreshold: 2,
},
VolumeMounts: []corev1.VolumeMount{{
Name: datadirInternal,
MountPath: datadirPath,
}, {
Name: certsInternal,
MountPath: certPath,
}, {
Name: clientCertsInternal,
MountPath: clientCertPath,
}},
Env: []corev1.EnvVar{{
Name: "COCKROACH_CHANNEL",
Value: "kubernetes-multiregion",
}},
Command: []string{
"/bin/bash",
"-ecx",
getJoinExec(
namespace,
name,
int(dbPort),
replicaCount,
),
},
Resources: getResources(resourcesSFS),
}},
Volumes: []corev1.Volume{{
Name: datadirInternal,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: datadirInternal,
},
},
}, {
Name: certsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: nodeSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: clientCertsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}},
},
},
PodManagementPolicy: appsv1.ParallelPodManagement,
UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
Type: "RollingUpdate",
},
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
ObjectMeta: metav1.ObjectMeta{
Name: datadirInternal,
},
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{
corev1.PersistentVolumeAccessMode("ReadWriteOnce"),
},
Resources: corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"storage": quantity,
},
},
StorageClassName: &storageClass,
},
}},
},
}
query, err := statefulset.AdaptFuncToEnsure(statefulsetDef, force)
if err != nil {
return nil, nil, nil, nil, nil, err
}
destroy, err := statefulset.AdaptFuncToDestroy(namespace, name)
if err != nil {
return nil, nil, nil, nil, nil, err
}
wrapedQuery, wrapedDestroy, err := resources.WrapFuncs(internalMonitor, query, destroy)
if err != nil {
return nil, nil, nil, nil, nil, err
}
checkDBRunning := func(k8sClient kubernetes.ClientInt) error {
internalMonitor.Info("waiting for statefulset to be running")
if err := k8sClient.WaitUntilStatefulsetIsReady(namespace, name, true, false, 60); err != nil {
internalMonitor.Error(errors.Wrap(err, "error while waiting for statefulset to be running"))
return err
}
internalMonitor.Info("statefulset is running")
return nil
}
checkDBNotReady := func(k8sClient kubernetes.ClientInt) error {
internalMonitor.Info("checking for statefulset to not be ready")
if err := k8sClient.WaitUntilStatefulsetIsReady(namespace, name, true, true, 1); err != nil {
internalMonitor.Info("statefulset is not ready")
return nil
}
internalMonitor.Info("statefulset is ready")
return errors.New("statefulset is ready")
}
ensureInit := func(k8sClient kubernetes.ClientInt) error {
if err := checkDBRunning(k8sClient); err != nil {
return err
}
if err := checkDBNotReady(k8sClient); err != nil {
return nil
}
command := "/cockroach/cockroach init --certs-dir=" + clientCertPath + " --host=" + name + "-0." + name
if err := k8sClient.ExecInPod(namespace, name+"-0", name, command); err != nil {
return err
}
return nil
}
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
internalMonitor.Info("waiting for statefulset to be ready")
if err := k8sClient.WaitUntilStatefulsetIsReady(namespace, name, true, true, 60); err != nil {
internalMonitor.Error(errors.Wrap(err, "error while waiting for statefulset to be ready"))
return err
}
internalMonitor.Info("statefulset is ready")
return nil
}
getAllDBs := func(k8sClient kubernetes.ClientInt) ([]string, error) {
if err := checkDBRunning(k8sClient); err != nil {
return nil, err
}
if err := checkDBReady(k8sClient); err != nil {
return nil, err
}
command := "/cockroach/cockroach sql --certs-dir=" + clientCertPath + " --host=" + name + "-0." + name + " -e 'SHOW DATABASES;'"
databasesStr, err := k8sClient.ExecInPodWithOutput(namespace, name+"-0", name, command)
if err != nil {
return nil, err
}
databases := strings.Split(databasesStr, "\n")
dbAndOwners := databases[1 : len(databases)-1]
dbs := []string{}
for _, dbAndOwner := range dbAndOwners {
parts := strings.Split(dbAndOwner, "\t")
if parts[1] != "node" {
dbs = append(dbs, parts[0])
}
}
return dbs, nil
}
return wrapedQuery, wrapedDestroy, ensureInit, checkDBReady, getAllDBs, err
}
func getJoinExec(namespace string, name string, dbPort int, replicaCount int) string {
joinList := make([]string, 0)
for i := 0; i < replicaCount; i++ {
joinList = append(joinList, fmt.Sprintf("%s-%d.%s.%s:%d", name, i, name, namespace, dbPort))
}
joinListStr := strings.Join(joinList, ",")
locality := "zone=" + namespace
return "exec /cockroach/cockroach start --logtostderr --certs-dir " + certPath + " --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join " + joinListStr + " --locality " + locality + " --cache 25% --max-sql-memory 25%"
}
func getResources(resourcesSFS *k8s.Resources) corev1.ResourceRequirements {
internalResources := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
}
if resourcesSFS != nil {
internalResources = corev1.ResourceRequirements{}
if resourcesSFS.Requests != nil {
internalResources.Requests = resourcesSFS.Requests
}
if resourcesSFS.Limits != nil {
internalResources.Limits = resourcesSFS.Limits
}
}
return internalResources
}
func getAffinity(labels map[string]string) *corev1.Affinity {
affinity := Affinitys{}
for k, v := range labels {
affinity = append(affinity, metav1.LabelSelectorRequirement{
Key: k,
Operator: metav1.LabelSelectorOpIn,
Values: []string{
v,
}})
}
sort.Sort(affinity)
return &corev1.Affinity{
PodAntiAffinity: &corev1.PodAntiAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
LabelSelector: &metav1.LabelSelector{
MatchExpressions: affinity,
},
TopologyKey: "kubernetes.io/hostname",
}},
},
}
}
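The label keys iterated in `getAffinity` come from a Go map, whose iteration order is randomized, which is why the requirements are sorted before being placed into the anti-affinity term. A minimal sketch of that idea, using a simplified local `requirement` type and `sort.Slice` in place of the `Affinitys` sort implementation (both names here are illustrative, not the real API):

```go
package main

import (
	"fmt"
	"sort"
)

// requirement stands in for metav1.LabelSelectorRequirement in this sketch.
type requirement struct {
	Key    string
	Values []string
}

// sortedRequirements mirrors getAffinity's sort step: map iteration order is
// random in Go, so sorting by key keeps the generated match expressions
// stable across reconciliation runs instead of producing spurious diffs.
func sortedRequirements(labels map[string]string) []requirement {
	reqs := make([]requirement, 0, len(labels))
	for k, v := range labels {
		reqs = append(reqs, requirement{Key: k, Values: []string{v}})
	}
	sort.Slice(reqs, func(i, j int) bool { return reqs[i].Key < reqs[j].Key })
	return reqs
}

func main() {
	for _, r := range sortedRequirements(map[string]string{"b": "2", "a": "1", "c": "3"}) {
		fmt.Println(r.Key) // always a, b, c regardless of map order
	}
}
```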


@@ -0,0 +1,506 @@
package statefulset
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes/k8s"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"testing"
)
func TestStatefulset_JoinExec0(t *testing.T) {
namespace := "testNs"
name := "test"
dbPort := 26257
replicaCount := 0
equals := "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join --locality zone=testNs --cache 25% --max-sql-memory 25%"
assert.Equal(t, equals, getJoinExec(namespace, name, dbPort, replicaCount))
}
func TestStatefulset_JoinExec1(t *testing.T) {
namespace := "testNs2"
name := "test2"
dbPort := 26257
replicaCount := 1
equals := "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join test2-0.test2.testNs2:26257 --locality zone=testNs2 --cache 25% --max-sql-memory 25%"
assert.Equal(t, equals, getJoinExec(namespace, name, dbPort, replicaCount))
}
func TestStatefulset_JoinExec2(t *testing.T) {
namespace := "testNs"
name := "test"
dbPort := 23
replicaCount := 2
equals := "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join test-0.test.testNs:23,test-1.test.testNs:23 --locality zone=testNs --cache 25% --max-sql-memory 25%"
assert.Equal(t, equals, getJoinExec(namespace, name, dbPort, replicaCount))
}
func TestStatefulset_Resources0(t *testing.T) {
equals := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
}
assert.Equal(t, equals, getResources(nil))
}
func TestStatefulset_Resources1(t *testing.T) {
res := &k8s.Resources{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("200m"),
"memory": resource.MustParse("600Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("500m"),
"memory": resource.MustParse("126Mi"),
},
}
equals := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("200m"),
"memory": resource.MustParse("600Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("500m"),
"memory": resource.MustParse("126Mi"),
},
}
assert.Equal(t, equals, getResources(res))
}
func TestStatefulset_Resources2(t *testing.T) {
res := &k8s.Resources{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("300m"),
"memory": resource.MustParse("670Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("600m"),
"memory": resource.MustParse("256Mi"),
},
}
equals := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("300m"),
"memory": resource.MustParse("670Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("600m"),
"memory": resource.MustParse("256Mi"),
},
}
assert.Equal(t, equals, getResources(res))
}
func TestStatefulset_Adapt1(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
name := "test"
image := "cockroach"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "testComponent"), name)
k8sSelectableLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
k8sSelectorLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
selector := labels.DeriveNameSelector(nameLabels, false)
selectable := labels.AsSelectable(nameLabels)
serviceAccountName := "testSA"
replicaCount := 1
storageCapacity := "20Gi"
dbPort := int32(26257)
httpPort := int32(8080)
storageClass := "testSC"
nodeSelector := map[string]string{}
tolerations := []corev1.Toleration{}
resourcesSFS := &k8s.Resources{}
quantity, err := resource.ParseQuantity(storageCapacity)
assert.NoError(t, err)
sfs := &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sSelectableLabels,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: helpers.PointerInt32(int32(replicaCount)),
Selector: &metav1.LabelSelector{
MatchLabels: k8sSelectorLabels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: k8sSelectableLabels,
},
Spec: corev1.PodSpec{
NodeSelector: nodeSelector,
Tolerations: tolerations,
ServiceAccountName: serviceAccountName,
Affinity: getAffinity(k8sSelectableLabels),
Containers: []corev1.Container{{
Name: name,
Image: image,
ImagePullPolicy: "IfNotPresent",
Ports: []corev1.ContainerPort{
{ContainerPort: dbPort, Name: "grpc"},
{ContainerPort: httpPort, Name: "http"},
},
LivenessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 30,
PeriodSeconds: 5,
},
ReadinessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health?ready=1",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 10,
PeriodSeconds: 5,
FailureThreshold: 2,
},
VolumeMounts: []corev1.VolumeMount{{
Name: datadirInternal,
MountPath: datadirPath,
}, {
Name: certsInternal,
MountPath: certPath,
}, {
Name: clientCertsInternal,
MountPath: clientCertPath,
}},
Env: []corev1.EnvVar{{
Name: "COCKROACH_CHANNEL",
Value: "kubernetes-multiregion",
}},
Command: []string{
"/bin/bash",
"-ecx",
getJoinExec(
namespace,
name,
int(dbPort),
replicaCount,
),
},
Resources: getResources(resourcesSFS),
}},
Volumes: []corev1.Volume{{
Name: datadirInternal,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: datadirInternal,
},
},
}, {
Name: certsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: nodeSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: clientCertsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}},
},
},
PodManagementPolicy: appsv1.PodManagementPolicyType("Parallel"),
UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
Type: "RollingUpdate",
},
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
ObjectMeta: metav1.ObjectMeta{
Name: datadirInternal,
},
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{
corev1.PersistentVolumeAccessMode("ReadWriteOnce"),
},
Resources: corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"storage": quantity,
},
},
StorageClassName: &storageClass,
},
}},
},
}
k8sClient.EXPECT().ApplyStatefulSet(sfs, false)
query, _, _, _, _, err := AdaptFunc(
monitor,
selectable,
selector,
false,
namespace,
image,
serviceAccountName,
replicaCount,
storageCapacity,
dbPort,
httpPort,
storageClass,
nodeSelector,
tolerations,
resourcesSFS,
)
assert.NoError(t, err)
ensure, err := query(k8sClient)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestStatefulset_Adapt2(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
name := "test2"
image := "cockroach2"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "cockroachdb", "v0"), "testComponent2"), name)
k8sSelectableLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
k8sSelectorLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
selector := labels.DeriveNameSelector(nameLabels, false)
selectable := labels.AsSelectable(nameLabels)
serviceAccountName := "testSA2"
replicaCount := 2
storageCapacity := "40Gi"
dbPort := int32(23)
httpPort := int32(24)
storageClass := "testSC2"
nodeSelector := map[string]string{}
tolerations := []corev1.Toleration{}
resourcesSFS := &k8s.Resources{}
quantity, err := resource.ParseQuantity(storageCapacity)
assert.NoError(t, err)
sfs := &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sSelectableLabels,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: helpers.PointerInt32(int32(replicaCount)),
Selector: &metav1.LabelSelector{
MatchLabels: k8sSelectorLabels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: k8sSelectableLabels,
},
Spec: corev1.PodSpec{
NodeSelector: nodeSelector,
Tolerations: tolerations,
ServiceAccountName: serviceAccountName,
Affinity: getAffinity(k8sSelectableLabels),
Containers: []corev1.Container{{
Name: name,
Image: image,
ImagePullPolicy: "IfNotPresent",
Ports: []corev1.ContainerPort{
{ContainerPort: dbPort, Name: "grpc"},
{ContainerPort: httpPort, Name: "http"},
},
LivenessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 30,
PeriodSeconds: 5,
},
ReadinessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health?ready=1",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 10,
PeriodSeconds: 5,
FailureThreshold: 2,
},
VolumeMounts: []corev1.VolumeMount{{
Name: datadirInternal,
MountPath: datadirPath,
}, {
Name: certsInternal,
MountPath: certPath,
}, {
Name: clientCertsInternal,
MountPath: clientCertPath,
}},
Env: []corev1.EnvVar{{
Name: "COCKROACH_CHANNEL",
Value: "kubernetes-multiregion",
}},
Command: []string{
"/bin/bash",
"-ecx",
getJoinExec(
namespace,
name,
int(dbPort),
replicaCount,
),
},
Resources: getResources(resourcesSFS),
}},
Volumes: []corev1.Volume{{
Name: datadirInternal,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: datadirInternal,
},
},
}, {
Name: certsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: nodeSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: clientCertsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}},
},
},
PodManagementPolicy: appsv1.PodManagementPolicyType("Parallel"),
UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
Type: "RollingUpdate",
},
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
ObjectMeta: metav1.ObjectMeta{
Name: datadirInternal,
},
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{
corev1.PersistentVolumeAccessMode("ReadWriteOnce"),
},
Resources: corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"storage": quantity,
},
},
StorageClassName: &storageClass,
},
}},
},
}
k8sClient.EXPECT().ApplyStatefulSet(sfs, false)
query, _, _, _, _, err := AdaptFunc(
monitor,
selectable,
selector,
false,
namespace,
image,
serviceAccountName,
replicaCount,
storageCapacity,
dbPort,
httpPort,
storageClass,
nodeSelector,
tolerations,
resourcesSFS,
)
assert.NoError(t, err)
ensure, err := query(k8sClient)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
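Both Adapt tests pass an empty `&k8s.Resources{}` and build the expected StatefulSet with `getResources(resourcesSFS)` rather than the defaults, because `getResources` replaces the defaults wholesale whenever its argument is non-nil. A simplified sketch of that merge rule, with plain local types standing in for `corev1.ResourceRequirements` and `k8s.Resources` (the type and quantity representations here are illustrative):

```go
package main

import "fmt"

// ResourceList stands in for corev1.ResourceList in this sketch.
type ResourceList map[string]string

// Resources stands in for k8s.Resources.
type Resources struct {
	Requests ResourceList
	Limits   ResourceList
}

// getResources mirrors the merge semantics above: a nil input yields the
// defaults, while any non-nil input discards the defaults entirely, so an
// empty &Resources{} produces empty requirements, not the defaults.
func getResources(in *Resources) Resources {
	out := Resources{
		Requests: ResourceList{"cpu": "100m", "memory": "512Mi"},
		Limits:   ResourceList{"cpu": "100m", "memory": "512Mi"},
	}
	if in != nil {
		out = Resources{}
		if in.Requests != nil {
			out.Requests = in.Requests
		}
		if in.Limits != nil {
			out.Limits = in.Limits
		}
	}
	return out
}

func main() {
	fmt.Println(getResources(nil).Requests["cpu"])           // → 100m
	fmt.Println(getResources(&Resources{}).Requests == nil)  // → true
}
```

Note the consequence: a caller that sets only `Requests` silently loses the default `Limits`, which the tests above exercise implicitly.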


@@ -0,0 +1,72 @@
package user
import (
"fmt"
"github.com/caos/zitadel/operator"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
deployName string,
containerName string,
certsDir string,
userName string,
password string,
componentLabels *labels.Component,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
cmdSql := fmt.Sprintf("cockroach sql --certs-dir=%s", certsDir)
createSql := fmt.Sprintf("CREATE USER IF NOT EXISTS %s ", userName)
if password != "" {
createSql = fmt.Sprintf("%s WITH PASSWORD %s", createSql, password)
}
deleteSql := fmt.Sprintf("DROP USER IF EXISTS %s", userName)
_, _, addUserFunc, deleteUserFunc, _, err := certificate.AdaptFunc(monitor, namespace, componentLabels, "", false)
if err != nil {
return nil, nil, err
}
addUser, err := addUserFunc(userName)
if err != nil {
return nil, nil, err
}
ensureUser := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, createSql))
}
deleteUser, err := deleteUserFunc(userName)
if err != nil {
return nil, nil, err
}
destroyUser := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, deleteSql))
}
queriers := []operator.QueryFunc{
addUser,
operator.EnsureFuncToQueryFunc(ensureUser),
}
destroyers := []operator.DestroyFunc{
destroyUser,
deleteUser,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}


@@ -0,0 +1,59 @@
package provided
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func AdaptFunc() func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
return func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
currentDB := &Current{
Common: &tree.Common{
Kind: "databases.caos.ch/ProvidedDatabase",
Version: "v0",
},
}
current.Parsed = currentDB
return func(k8sClient kubernetes.ClientInt, _ map[string]interface{}) (operator.EnsureFunc, error) {
currentDB.Current.URL = desiredKind.Spec.URL
currentDB.Current.Port = desiredKind.Spec.Port
return func(k8sClient kubernetes.ClientInt) error {
return nil
}, nil
}, func(k8sClient kubernetes.ClientInt) error {
return nil
},
map[string]*secret.Secret{},
nil
}
}


@@ -0,0 +1,21 @@
package provided
import (
"github.com/caos/orbos/pkg/tree"
)
type Current struct {
Common *tree.Common `yaml:",inline"`
Current struct {
URL string
Port string
}
}
func (c *Current) GetURL() string {
return c.Current.URL
}
func (c *Current) GetPort() string {
return c.Current.Port
}


@@ -0,0 +1,32 @@
package provided
import (
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec Spec
}
type Spec struct {
Verbose bool
Namespace string
URL string
Port string
Users []string
}
func parseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{
Common: desiredTree.Common,
Spec: Spec{},
}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
return desiredKind, nil
}


@@ -0,0 +1,110 @@
package orb
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/namespace"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/orbos/pkg/treelabels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases"
"github.com/pkg/errors"
)
const (
NamespaceStr = "caos-zitadel"
)
func OperatorSelector() *labels.Selector {
return labels.OpenOperatorSelector("ZITADEL", "database.caos.ch")
}
func AdaptFunc(timestamp string, binaryVersion *string, features ...string) operator.AdaptFunc {
return func(monitor mntr.Monitor, orbDesiredTree *tree.Tree, currentTree *tree.Tree) (queryFunc operator.QueryFunc, destroyFunc operator.DestroyFunc, secrets map[string]*secret.Secret, err error) {
defer func() {
err = errors.Wrapf(err, "building %s failed", orbDesiredTree.Common.Kind)
}()
orbMonitor := monitor.WithField("kind", "orb")
desiredKind, err := parseDesiredV0(orbDesiredTree)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
orbDesiredTree.Parsed = desiredKind
currentTree = &tree.Tree{}
if desiredKind.Spec.Verbose && !orbMonitor.IsVerbose() {
orbMonitor = orbMonitor.Verbose()
}
queryNS, err := namespace.AdaptFuncToEnsure(NamespaceStr)
if err != nil {
return nil, nil, nil, err
}
destroyNS, err := namespace.AdaptFuncToDestroy(NamespaceStr)
if err != nil {
return nil, nil, nil, err
}
databaseCurrent := &tree.Tree{}
operatorLabels := mustDatabaseOperator(binaryVersion)
queryDB, destroyDB, secrets, err := databases.GetQueryAndDestroyFuncs(
orbMonitor,
desiredKind.Database,
databaseCurrent,
NamespaceStr,
treelabels.MustForAPI(desiredKind.Database, operatorLabels),
timestamp,
desiredKind.Spec.NodeSelector,
desiredKind.Spec.Tolerations,
desiredKind.Spec.Version,
features,
)
if err != nil {
return nil, nil, nil, err
}
queriers := []operator.QueryFunc{
operator.ResourceQueryToZitadelQuery(queryNS),
queryDB,
}
if desiredKind.Spec.SelfReconciling {
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(Reconcile(monitor, orbDesiredTree)),
)
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyNS),
destroyDB,
}
currentTree.Parsed = &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/Orb",
Version: "v0",
},
Database: databaseCurrent,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
if queried == nil {
queried = map[string]interface{}{}
}
monitor.WithField("queriers", len(queriers)).Info("Querying")
return operator.QueriersToEnsureFunc(monitor, true, queriers, k8sClient, queried)
},
func(k8sClient kubernetes.ClientInt) error {
monitor.WithField("destroyers", len(destroyers)).Info("Destroy")
return operator.DestroyersToDestroyFunc(monitor, destroyers)(k8sClient)
},
secrets,
nil
}
}


@@ -0,0 +1,24 @@
package orb
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases"
"github.com/pkg/errors"
)
func BackupListFunc() func(monitor mntr.Monitor, desiredTree *tree.Tree) (strings []string, err error) {
return func(monitor mntr.Monitor, desiredTree *tree.Tree) (strings []string, err error) {
desiredKind, err := parseDesiredV0(desiredTree)
if err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desiredTree.Parsed = desiredKind
if desiredKind.Spec.Verbose && !monitor.IsVerbose() {
monitor = monitor.Verbose()
}
return databases.GetBackupList(monitor, desiredKind.Database)
}
}


@@ -0,0 +1,33 @@
package orb
import (
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec struct {
Verbose bool
NodeSelector map[string]string `yaml:"nodeSelector,omitempty"`
Tolerations []corev1.Toleration `yaml:"tolerations,omitempty"`
Version string `yaml:"version,omitempty"`
SelfReconciling bool `yaml:"selfReconciling"`
// Use this registry to pull the ZITADEL operator image from
// @default: ghcr.io
CustomImageRegistry string `json:"customImageRegistry,omitempty" yaml:"customImageRegistry,omitempty"`
}
Database *tree.Tree
}
func parseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{Common: desiredTree.Common}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desiredKind.Common.Version = "v0"
return desiredKind, nil
}


@@ -0,0 +1,13 @@
package orb
import "github.com/caos/orbos/pkg/labels"
func mustDatabaseOperator(binaryVersion *string) *labels.Operator {
version := "unknown"
if binaryVersion != nil {
version = *binaryVersion
}
return labels.MustForOperator("ZITADEL", "database.caos.ch", version)
}


@@ -0,0 +1,48 @@
package orb
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/orbos/pkg/treelabels"
"github.com/caos/zitadel/operator"
zitadelKubernetes "github.com/caos/zitadel/pkg/kubernetes"
"github.com/pkg/errors"
)
func Reconcile(monitor mntr.Monitor, desiredTree *tree.Tree) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) (err error) {
defer func() {
err = errors.Wrapf(err, "building %s failed", desiredTree.Common.Kind)
}()
desiredKind, err := parseDesiredV0(desiredTree)
if err != nil {
return errors.Wrap(err, "parsing desired state failed")
}
desiredTree.Parsed = desiredKind
recMonitor := monitor.WithField("version", desiredKind.Spec.Version)
if desiredKind.Spec.Version == "" {
err := errors.New("No version set in database.yml")
monitor.Error(err)
return err
}
imageRegistry := desiredKind.Spec.CustomImageRegistry
if imageRegistry == "" {
imageRegistry = "ghcr.io"
}
if err := zitadelKubernetes.EnsureDatabaseArtifacts(monitor, treelabels.MustForAPI(desiredTree, mustDatabaseOperator(&desiredKind.Spec.Version)), k8sClient, desiredKind.Spec.Version, desiredKind.Spec.NodeSelector, desiredKind.Spec.Tolerations, imageRegistry); err != nil {
recMonitor.Error(errors.Wrap(err, "Failed to deploy database-operator into k8s-cluster"))
return err
}
recMonitor.Info("Applied database-operator")
return nil
}
}


@@ -0,0 +1,45 @@
package database
import (
"errors"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
)
func Takeoff(monitor mntr.Monitor, gitClient *git.Client, adapt operator.AdaptFunc, k8sClient *kubernetes.Client) func() {
return func() {
internalMonitor := monitor.WithField("operator", "database")
internalMonitor.Info("Takeoff")
treeDesired, err := operator.Parse(gitClient, "database.yml")
if err != nil {
monitor.Error(err)
return
}
treeCurrent := &tree.Tree{}
if !k8sClient.Available() {
internalMonitor.Error(errors.New("kubeclient is not available"))
return
}
query, _, _, err := adapt(internalMonitor, treeDesired, treeCurrent)
if err != nil {
internalMonitor.Error(err)
return
}
ensure, err := query(k8sClient, map[string]interface{}{})
if err != nil {
internalMonitor.Error(err)
return
}
if err := ensure(k8sClient); err != nil {
internalMonitor.Error(err)
return
}
}
}


@@ -0,0 +1,13 @@
package helpers
import "k8s.io/apimachinery/pkg/util/intstr"
func IntToIntStr(value int) *intstr.IntOrString {
v := intstr.FromInt(value)
return &v
}
func StringToIntStr(value string) *intstr.IntOrString {
v := intstr.FromString(value)
return &v
}

operator/helpers/path.go

@@ -0,0 +1,17 @@
package helpers
import (
"os"
"strings"
)
func PruneHome(pwd string) string {
if strings.HasPrefix(pwd, "~") {
userhome, err := os.UserHomeDir()
if err != nil {
panic(err)
}
pwd = userhome + pwd[1:]
}
return pwd
}


@@ -0,0 +1,16 @@
package helpers
func PointerInt32(value int32) *int32 {
pointer := value
return &pointer
}
func PointerInt64(value int64) *int64 {
pointer := value
return &pointer
}
func PointerBool(value bool) *bool {
pointer := value
return &pointer
}
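These pointer helpers exist because Kubernetes API types use pointer fields (e.g. `StatefulSetSpec.Replicas` is `*int32`) to distinguish "unset" from the zero value, and Go forbids taking the address of a literal like `&int32(3)`. A minimal sketch (the lowercase `pointerInt32` name is illustrative):

```go
package main

import "fmt"

// pointerInt32 mirrors helpers.PointerInt32: the parameter is copied into an
// addressable local so its address can be returned, since &3 is not legal Go.
func pointerInt32(value int32) *int32 {
	pointer := value
	return &pointer
}

func main() {
	replicas := pointerInt32(3) // usable wherever the API expects *int32
	fmt.Println(*replicas)      // → 3
}
```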


@@ -0,0 +1,97 @@
package secrets
import (
"errors"
orbdb "github.com/caos/zitadel/operator/database/kinds/orb"
"strings"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/orb"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/api"
zitadelOrb "github.com/caos/zitadel/operator/zitadel/kinds/orb"
)
const (
zitadel string = "zitadel"
database string = "database"
)
func GetAllSecretsFunc(orb *orb.Orb) func(monitor mntr.Monitor, gitClient *git.Client) (map[string]*secret.Secret, map[string]*tree.Tree, error) {
return func(monitor mntr.Monitor, gitClient *git.Client) (map[string]*secret.Secret, map[string]*tree.Tree, error) {
allSecrets := make(map[string]*secret.Secret, 0)
allTrees := make(map[string]*tree.Tree, 0)
foundZitadel, err := api.ExistsZitadelYml(gitClient)
if err != nil {
return nil, nil, err
}
if foundZitadel {
zitadelYML, err := api.ReadZitadelYml(gitClient)
if err != nil {
return nil, nil, err
}
allTrees[zitadel] = zitadelYML
_, _, zitadelSecrets, err := zitadelOrb.AdaptFunc(orb, "secret", nil, []string{})(monitor, zitadelYML, &tree.Tree{})
if err != nil {
return nil, nil, err
}
if zitadelSecrets != nil && len(zitadelSecrets) > 0 {
secret.AppendSecrets(zitadel, allSecrets, zitadelSecrets)
}
} else {
monitor.Info("no file for zitadel found")
}
foundDB, err := api.ExistsDatabaseYml(gitClient)
if err != nil {
return nil, nil, err
}
if foundDB {
dbYML, err := api.ReadDatabaseYml(gitClient)
if err != nil {
return nil, nil, err
}
allTrees[database] = dbYML
_, _, dbSecrets, err := orbdb.AdaptFunc("", nil, "database", "backup")(monitor, dbYML, nil)
if err != nil {
return nil, nil, err
}
if dbSecrets != nil && len(dbSecrets) > 0 {
secret.AppendSecrets(database, allSecrets, dbSecrets)
}
} else {
monitor.Info("no file for database found")
}
return allSecrets, allTrees, nil
}
}
func PushFunc() func(monitor mntr.Monitor, gitClient *git.Client, trees map[string]*tree.Tree, path string) error {
return func(monitor mntr.Monitor, gitClient *git.Client, trees map[string]*tree.Tree, path string) error {
operator := ""
if strings.HasPrefix(path, zitadel) {
operator = zitadel
} else if strings.HasPrefix(path, database) {
operator = database
} else {
return errors.New("Operator unknown")
}
desired, found := trees[operator]
if !found {
return errors.New("Operator file not found")
}
if operator == zitadel {
return api.PushZitadelDesiredFunc(gitClient, desired)(monitor)
} else if operator == database {
return api.PushDatabaseDesiredFunc(gitClient, desired)(monitor)
}
return errors.New("Operator push function unknown")
}
}
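`PushFunc` routes a secret write by the path's prefix: paths beginning with `zitadel` go to the zitadel desired-state file, paths beginning with `database` to the database one, anything else errors. A sketch of that dispatch, assuming (as an assumption, since `secret.AppendSecrets` is defined in orbos and not shown here) that secret paths are prefixed with the operator name; `operatorForPath` is an illustrative helper name:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// operatorForPath mirrors PushFunc's routing: the secret path's prefix
// decides which operator's desired-state tree receives the write.
func operatorForPath(path string) (string, error) {
	switch {
	case strings.HasPrefix(path, "zitadel"):
		return "zitadel", nil
	case strings.HasPrefix(path, "database"):
		return "database", nil
	default:
		return "", errors.New("Operator unknown")
	}
}

func main() {
	op, err := operatorForPath("database.kinds.orb")
	fmt.Println(op, err) // routes to the database operator
}
```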

operator/start/start.go

@@ -0,0 +1,155 @@
package start
import (
"context"
"github.com/caos/zitadel/operator/database"
orbdb "github.com/caos/zitadel/operator/database/kinds/orb"
"github.com/caos/zitadel/operator/zitadel"
"runtime/debug"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/kubernetes"
orbconfig "github.com/caos/orbos/pkg/orb"
"github.com/caos/zitadel/operator/zitadel/kinds/orb"
"github.com/caos/zitadel/pkg/databases"
kubernetes2 "github.com/caos/zitadel/pkg/kubernetes"
)
func Operator(monitor mntr.Monitor, orbConfigPath string, k8sClient *kubernetes.Client, version *string) error {
takeoffChan := make(chan struct{})
go func() {
takeoffChan <- struct{}{}
}()
for range takeoffChan {
orbConfig, err := orbconfig.ParseOrbConfig(orbConfigPath)
if err != nil {
monitor.Error(err)
return err
}
gitClient := git.New(context.Background(), monitor, "orbos", "orbos@caos.ch")
if err := gitClient.Configure(orbConfig.URL, []byte(orbConfig.Repokey)); err != nil {
monitor.Error(err)
return err
}
takeoff := zitadel.Takeoff(monitor, gitClient, orb.AdaptFunc(orbConfig, "ensure", version, []string{"operator", "iam"}), k8sClient)
go func() {
started := time.Now()
takeoff()
monitor.WithFields(map[string]interface{}{
"took": time.Since(started),
}).Info("Iteration done")
debug.FreeOSMemory()
takeoffChan <- struct{}{}
}()
}
return nil
}
func Restore(monitor mntr.Monitor, gitClient *git.Client, orbCfg *orbconfig.Orb, k8sClient *kubernetes.Client, backup string, version *string) error {
databasesList := []string{
"notification",
"adminapi",
"auth",
"authz",
"eventstore",
"management",
}
if err := kubernetes2.ScaleZitadelOperator(monitor, k8sClient, 0); err != nil {
return err
}
if err := zitadel.Takeoff(monitor, gitClient, orb.AdaptFunc(orbCfg, "scaledown", version, []string{"scaledown"}), k8sClient)(); err != nil {
return err
}
if err := databases.Clear(monitor, k8sClient, gitClient, databasesList); err != nil {
return err
}
if err := zitadel.Takeoff(monitor, gitClient, orb.AdaptFunc(orbCfg, "migration", version, []string{"migration"}), k8sClient)(); err != nil {
return err
}
if err := databases.Restore(
monitor,
k8sClient,
gitClient,
backup,
databasesList,
); err != nil {
return err
}
if err := zitadel.Takeoff(monitor, gitClient, orb.AdaptFunc(orbCfg, "scaleup", version, []string{"scaleup"}), k8sClient)(); err != nil {
return err
}
if err := kubernetes2.ScaleZitadelOperator(monitor, k8sClient, 1); err != nil {
return err
}
return nil
}
func Database(monitor mntr.Monitor, orbConfigPath string, k8sClient *kubernetes.Client, binaryVersion *string) error {
takeoffChan := make(chan struct{})
go func() {
takeoffChan <- struct{}{}
}()
for range takeoffChan {
orbConfig, err := orbconfig.ParseOrbConfig(orbConfigPath)
if err != nil {
monitor.Error(err)
return err
}
gitClient := git.New(context.Background(), monitor, "orbos", "orbos@caos.ch")
if err := gitClient.Configure(orbConfig.URL, []byte(orbConfig.Repokey)); err != nil {
monitor.Error(err)
return err
}
takeoff := database.Takeoff(monitor, gitClient, orbdb.AdaptFunc("", binaryVersion, "database", "backup"), k8sClient)
go func() {
started := time.Now()
takeoff()
monitor.WithFields(map[string]interface{}{
"took": time.Since(started),
}).Info("Iteration done")
takeoffChan <- struct{}{}
}()
}
return nil
}
func Backup(monitor mntr.Monitor, orbConfigPath string, k8sClient *kubernetes.Client, backup string, binaryVersion *string) error {
orbConfig, err := orbconfig.ParseOrbConfig(orbConfigPath)
if err != nil {
monitor.Error(err)
return err
}
gitClient := git.New(context.Background(), monitor, "orbos", "orbos@caos.ch")
if err := gitClient.Configure(orbConfig.URL, []byte(orbConfig.Repokey)); err != nil {
monitor.Error(err)
return err
}
database.Takeoff(monitor, gitClient, orbdb.AdaptFunc(backup, binaryVersion, "instantbackup"), k8sClient)()
return nil
}


@@ -0,0 +1,49 @@
package iam
import (
"fmt"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/orb"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel"
"github.com/pkg/errors"
core "k8s.io/api/core/v1"
)
func GetQueryAndDestroyFuncs(
monitor mntr.Monitor,
operatorLabels *labels.Operator,
desiredTree *tree.Tree,
currentTree *tree.Tree,
nodeselector map[string]string,
tolerations []core.Toleration,
orbconfig *orb.Orb,
action string,
version *string,
features []string,
) (
query operator.QueryFunc,
destroy operator.DestroyFunc,
secrets map[string]*secret.Secret,
err error,
) {
defer func() {
if err != nil {
err = fmt.Errorf("adapting %s failed: %w", desiredTree.Common.Kind, err)
}
}()
switch desiredTree.Common.Kind {
case "zitadel.caos.ch/ZITADEL":
apiLabels := labels.MustForAPI(operatorLabels, "ZITADEL", desiredTree.Common.Version)
return zitadel.AdaptFunc(apiLabels, nodeselector, tolerations, orbconfig, action, version, features)(monitor, desiredTree, currentTree)
default:
return nil, nil, nil, errors.Errorf("unknown iam kind %s", desiredTree.Common.Kind)
}
}


@@ -0,0 +1,280 @@
package zitadel
import (
"strconv"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/orb"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/database"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/setup"
core "k8s.io/api/core/v1"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/namespace"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/deployment"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/migration"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/services"
"github.com/pkg/errors"
)
func AdaptFunc(
apiLabels *labels.API,
nodeselector map[string]string,
tolerations []core.Toleration,
orbconfig *orb.Orb,
action string,
version *string,
features []string,
) operator.AdaptFunc {
return func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
allSecrets := make(map[string]*secret.Secret)
internalMonitor := monitor.WithField("kind", "iam")
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, nil, allSecrets, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
secret.AppendSecrets("", allSecrets, getSecretsMap(desiredKind))
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
internalMonitor.Verbose()
}
namespaceStr := "caos-zitadel"
// shared elements
cmName := "zitadel-vars"
secretName := "zitadel-secret"
consoleCMName := "console-config"
secretVarsName := "zitadel-secrets-vars"
secretPasswordName := "zitadel-passwords"
// paths used in the configuration and for mounting the corresponding files
certPath := "/home/zitadel/dbsecrets-zitadel"
secretPath := "/secret"
// services created as kubernetes resources and referenced by the ambassador elements
grpcServiceName := "grpc-v1"
var grpcPort uint16 = 80
httpServiceName := "http-v1"
var httpPort uint16 = 80
uiServiceName := "ui-v1"
var uiPort uint16 = 80
users := getAllUsers(desiredKind)
allZitadelUsers := getZitadelUserList()
dbClient, err := database.NewClient(monitor, orbconfig.URL, orbconfig.Repokey)
if err != nil {
return nil, nil, allSecrets, err
}
queryNS, err := namespace.AdaptFuncToEnsure(namespaceStr)
if err != nil {
return nil, nil, allSecrets, err
}
destroyNS, err := namespace.AdaptFuncToDestroy(namespaceStr)
if err != nil {
return nil, nil, allSecrets, err
}
zitadelComponent := labels.MustForComponent(apiLabels, "ZITADEL")
zitadelDeploymentName := labels.MustForName(zitadelComponent, "zitadel")
zitadelPodSelector := labels.DeriveNameSelector(zitadelDeploymentName, false)
queryS, destroyS, err := services.AdaptFunc(
internalMonitor,
zitadelComponent,
zitadelPodSelector,
namespaceStr,
grpcServiceName,
grpcPort,
httpServiceName,
httpPort,
uiServiceName,
uiPort)
if err != nil {
return nil, nil, allSecrets, err
}
queryC, destroyC, getConfigurationHashes, err := configuration.AdaptFunc(
internalMonitor,
zitadelComponent,
namespaceStr,
desiredKind.Spec.Configuration,
cmName,
certPath,
secretName,
secretPath,
consoleCMName,
secretVarsName,
secretPasswordName,
users,
services.GetClientIDFunc(namespaceStr, httpServiceName, httpPort),
dbClient,
)
if err != nil {
return nil, nil, allSecrets, err
}
queryDB, err := database.AdaptFunc(
monitor,
dbClient,
)
if err != nil {
return nil, nil, allSecrets, err
}
queryM, destroyM, err := migration.AdaptFunc(
internalMonitor,
labels.MustForComponent(apiLabels, "database"),
namespaceStr,
action,
secretPasswordName,
migrationUser,
allZitadelUsers,
nodeselector,
tolerations,
)
if err != nil {
return nil, nil, allSecrets, err
}
querySetup, destroySetup, err := setup.AdaptFunc(
internalMonitor,
zitadelComponent,
namespaceStr,
action,
desiredKind.Spec.NodeSelector,
desiredKind.Spec.Tolerations,
desiredKind.Spec.Resources,
version,
cmName,
certPath,
secretName,
secretPath,
consoleCMName,
secretVarsName,
secretPasswordName,
allZitadelUsers,
migration.GetDoneFunc(monitor, namespaceStr, action),
configuration.GetReadyFunc(monitor, namespaceStr, secretName, secretVarsName, secretPasswordName, cmName, consoleCMName),
getConfigurationHashes,
)
if err != nil {
return nil, nil, allSecrets, err
}
queryD, destroyD, err := deployment.AdaptFunc(
internalMonitor,
zitadelDeploymentName,
zitadelPodSelector,
desiredKind.Spec.Force,
version,
namespaceStr,
desiredKind.Spec.ReplicaCount,
desiredKind.Spec.Affinity,
cmName,
certPath,
secretName,
secretPath,
consoleCMName,
secretVarsName,
secretPasswordName,
allZitadelUsers,
desiredKind.Spec.NodeSelector,
desiredKind.Spec.Tolerations,
desiredKind.Spec.Resources,
migration.GetDoneFunc(monitor, namespaceStr, action),
configuration.GetReadyFunc(monitor, namespaceStr, secretName, secretVarsName, secretPasswordName, cmName, consoleCMName),
setup.GetDoneFunc(monitor, namespaceStr, action),
getConfigurationHashes,
)
if err != nil {
return nil, nil, allSecrets, err
}
queryAmbassador, destroyAmbassador, err := ambassador.AdaptFunc(
internalMonitor,
labels.MustForComponent(apiLabels, "apiGateway"),
namespaceStr,
grpcServiceName+"."+namespaceStr+":"+strconv.Itoa(int(grpcPort)),
"http://"+httpServiceName+"."+namespaceStr+":"+strconv.Itoa(int(httpPort)),
"http://"+uiServiceName+"."+namespaceStr,
desiredKind.Spec.Configuration.DNS,
)
if err != nil {
return nil, nil, allSecrets, err
}
destroyers := make([]operator.DestroyFunc, 0)
queriers := make([]operator.QueryFunc, 0)
for _, feature := range features {
switch feature {
case "migration":
queriers = append(queriers,
queryDB,
//configuration
queryC,
//migration
queryM,
//wait until migration is completed
operator.EnsureFuncToQueryFunc(migration.GetDoneFunc(monitor, namespaceStr, action)),
)
destroyers = append(destroyers,
destroyM,
)
case "iam":
queriers = append(queriers,
operator.ResourceQueryToZitadelQuery(queryNS),
queryDB,
//configuration
queryC,
//migration
queryM,
//services
queryS,
querySetup,
queryD,
operator.EnsureFuncToQueryFunc(deployment.GetReadyFunc(monitor, namespaceStr, zitadelDeploymentName)),
queryAmbassador,
)
destroyers = append(destroyers,
destroyAmbassador,
destroyS,
destroyM,
destroyD,
destroySetup,
destroyC,
operator.ResourceDestroyToZitadelDestroy(destroyNS),
)
case "scaledown":
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(deployment.GetScaleFunc(monitor, namespaceStr, zitadelDeploymentName)(0)),
)
case "scaleup":
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(deployment.GetScaleFunc(monitor, namespaceStr, zitadelDeploymentName)(desiredKind.Spec.ReplicaCount)),
)
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(internalMonitor, true, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
allSecrets,
nil
}
}


@@ -0,0 +1,69 @@
package ambassador
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/grpc"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/hosts"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/http"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/ui"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
)
func AdaptFunc(
monitor mntr.Monitor,
componentLabels *labels.Component,
namespace string,
grpcURL string,
httpURL string,
uiURL string,
dns *configuration.DNS,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("type", "ambassador")
queryGRPC, destroyGRPC, err := grpc.AdaptFunc(internalMonitor, componentLabels, namespace, grpcURL, dns)
if err != nil {
return nil, nil, err
}
queryUI, destroyUI, err := ui.AdaptFunc(internalMonitor, componentLabels, namespace, uiURL, dns)
if err != nil {
return nil, nil, err
}
queryHTTP, destroyHTTP, err := http.AdaptFunc(internalMonitor, componentLabels, namespace, httpURL, dns)
if err != nil {
return nil, nil, err
}
queryHosts, destroyHosts, err := hosts.AdaptFunc(internalMonitor, componentLabels, namespace, dns)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
destroyGRPC,
destroyHTTP,
destroyUI,
destroyHosts,
}
queriers := []operator.QueryFunc{
queryHosts,
queryGRPC,
queryUI,
queryHTTP,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(internalMonitor, true, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}


@@ -0,0 +1,146 @@
package ambassador
import (
"testing"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels/mocklabels"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/grpc"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/hosts"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/http"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/ambassador/ui"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
apixv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
func SetReturnResourceVersion(
k8sClient *kubernetesmock.MockClientInt,
group,
version,
kind,
namespace,
name string,
resourceVersion string,
) {
ret := &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"resourceVersion": resourceVersion,
},
},
}
k8sClient.EXPECT().GetNamespacedCRDResource(group, version, kind, namespace, name).Return(ret, nil)
}
func SetMappingsUI(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
) {
group := "getambassador.io"
version := "v2"
kind := "Mapping"
k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ui.AccountsName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ui.AccountsName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ui.ConsoleName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ui.ConsoleName, gomock.Any()).Times(1)
}
func SetMappingsHTTP(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
) {
group := "getambassador.io"
version := "v2"
kind := "Mapping"
k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.AdminRName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.AdminRName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.AuthorizeName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.AuthorizeName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.AuthRName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.AuthRName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.EndsessionName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.EndsessionName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.IssuerName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.IssuerName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.MgmtName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.MgmtName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, http.OauthName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, http.OauthName, gomock.Any()).Times(1)
}
func SetMappingsGRPC(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
) {
group := "getambassador.io"
version := "v2"
kind := "Mapping"
k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, grpc.AdminMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, grpc.AdminMName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, grpc.AuthMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, grpc.AuthMName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, grpc.MgmtMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, grpc.MgmtMName, gomock.Any()).Times(1)
}
func SetHosts(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
) {
group := "getambassador.io"
version := "v2"
kind := "Host"
k8sClient.EXPECT().CheckCRD("hosts.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, hosts.AccountsHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, hosts.AccountsHostName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, hosts.ApiHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, hosts.ApiHostName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, hosts.ConsoleHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, hosts.ConsoleHostName, gomock.Any()).Times(1)
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, hosts.IssuerHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, hosts.IssuerHostName, gomock.Any()).Times(1)
}
func TestAmbassador_Adapt(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "test"
grpcURL := "grpc"
httpURL := "http"
uiURL := "ui"
dns := &configuration.DNS{
Domain: "",
TlsSecret: "",
Subdomains: &configuration.Subdomains{
Accounts: "",
API: "",
Console: "",
Issuer: "",
},
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
SetMappingsUI(k8sClient, namespace)
SetMappingsHTTP(k8sClient, namespace)
SetMappingsGRPC(k8sClient, namespace)
SetHosts(k8sClient, namespace)
query, _, err := AdaptFunc(monitor, mocklabels.Component, namespace, grpcURL, httpURL, uiURL, dns)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,127 @@
package grpc
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/ambassador/mapping"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
)
const (
AdminMName = "admin-grpc-v1"
AuthMName = "auth-grpc-v1"
MgmtMName = "mgmt-grpc-v1"
)
func AdaptFunc(
monitor mntr.Monitor,
componentLabels *labels.Component,
namespace string,
grpcURL string,
dns *configuration.DNS,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("part", "grpc")
destroyAdminG, err := mapping.AdaptFuncToDestroy(namespace, AdminMName)
if err != nil {
return nil, nil, err
}
destroyAuthG, err := mapping.AdaptFuncToDestroy(namespace, AuthMName)
if err != nil {
return nil, nil, err
}
destroyMgmtGRPC, err := mapping.AdaptFuncToDestroy(namespace, MgmtMName)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyAdminG),
operator.ResourceDestroyToZitadelDestroy(destroyAuthG),
operator.ResourceDestroyToZitadelDestroy(destroyMgmtGRPC),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
crd, err := k8sClient.CheckCRD("mappings.getambassador.io")
if crd == nil || err != nil {
return func(k8sClient kubernetes.ClientInt) error { return nil }, nil
}
apiDomain := dns.Subdomains.API + "." + dns.Domain
cors := &mapping.CORS{
Origins: "*",
Methods: "POST, GET, OPTIONS, DELETE, PUT",
Headers: "*",
Credentials: true,
ExposedHeaders: "*",
MaxAge: "86400",
}
queryAdminG, err := mapping.AdaptFuncToEnsure(
namespace,
labels.MustForName(componentLabels, AdminMName),
true,
apiDomain,
"/caos.zitadel.admin.api.v1.AdminService/",
"",
grpcURL,
30000,
30000,
cors,
)
if err != nil {
return nil, err
}
queryAuthG, err := mapping.AdaptFuncToEnsure(
namespace,
labels.MustForName(componentLabels, AuthMName),
true,
apiDomain,
"/caos.zitadel.auth.api.v1.AuthService/",
"",
grpcURL,
30000,
30000,
cors,
)
if err != nil {
return nil, err
}
queryMgmtGRPC, err := mapping.AdaptFuncToEnsure(
namespace,
labels.MustForName(componentLabels, MgmtMName),
true,
apiDomain,
"/caos.zitadel.management.api.v1.ManagementService/",
"",
grpcURL,
30000,
30000,
cors,
)
if err != nil {
return nil, err
}
queriers := []operator.QueryFunc{
operator.ResourceQueryToZitadelQuery(queryAdminG),
operator.ResourceQueryToZitadelQuery(queryAuthG),
operator.ResourceQueryToZitadelQuery(queryMgmtGRPC),
}
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}


@@ -0,0 +1,268 @@
package grpc
import (
"testing"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels/mocklabels"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
apixv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
func SetReturnResourceVersion(
k8sClient *kubernetesmock.MockClientInt,
group,
version,
kind,
namespace,
name string,
resourceVersion string,
) {
ret := &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"resourceVersion": resourceVersion,
},
},
}
k8sClient.EXPECT().GetNamespacedCRDResource(group, version, kind, namespace, name).Return(ret, nil)
}
func TestGrpc_Adapt(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "test"
url := "url"
dns := &configuration.DNS{
Domain: "",
TlsSecret: "",
Subdomains: &configuration.Subdomains{
Accounts: "",
API: "",
Console: "",
Issuer: "",
},
}
componentLabels := mocklabels.Component
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
group := "getambassador.io"
version := "v2"
kind := "Mapping"
cors := map[string]interface{}{
"origins": "*",
"methods": "POST, GET, OPTIONS, DELETE, PUT",
"headers": "*",
"credentials": true,
"exposed_headers": "*",
"max_age": "86400",
}
adminMName := labels.MustForName(componentLabels, AdminMName)
adminM := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": group + "/" + version,
"kind": kind,
"metadata": map[string]interface{}{
"labels": labels.MustK8sMap(adminMName),
"name": adminMName.Name(),
"namespace": namespace,
},
"spec": map[string]interface{}{
"connect_timeout_ms": 30000,
"host": ".",
"prefix": "/caos.zitadel.admin.api.v1.AdminService/",
"rewrite": "",
"service": url,
"timeout_ms": 30000,
"cors": cors,
"grpc": true,
},
},
}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AdminMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AdminMName, adminM).Times(1)
authMName := labels.MustForName(componentLabels, AuthMName)
authM := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": group + "/" + version,
"kind": kind,
"metadata": map[string]interface{}{
"labels": labels.MustK8sMap(authMName),
"name": authMName.Name(),
"namespace": namespace,
},
"spec": map[string]interface{}{
"connect_timeout_ms": 30000,
"host": ".",
"prefix": "/caos.zitadel.auth.api.v1.AuthService/",
"rewrite": "",
"service": url,
"timeout_ms": 30000,
"cors": cors,
"grpc": true,
},
},
}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AuthMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AuthMName, authM).Times(1)
mgmtMName := labels.MustForName(componentLabels, MgmtMName)
mgmtM := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": group + "/" + version,
"kind": kind,
"metadata": map[string]interface{}{
"labels": labels.MustK8sMap(mgmtMName),
"name": mgmtMName.Name(),
"namespace": namespace,
},
"spec": map[string]interface{}{
"connect_timeout_ms": 30000,
"host": ".",
"prefix": "/caos.zitadel.management.api.v1.ManagementService/",
"rewrite": "",
"service": url,
"timeout_ms": 30000,
"cors": cors,
"grpc": true,
},
},
}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, MgmtMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, MgmtMName, mgmtM).Times(1)
query, _, err := AdaptFunc(monitor, componentLabels, namespace, url, dns)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(k8sClient))
}
func TestGrpc_Adapt2(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "test"
url := "url"
dns := &configuration.DNS{
Domain: "domain",
TlsSecret: "tls",
Subdomains: &configuration.Subdomains{
Accounts: "accounts",
API: "api",
Console: "console",
Issuer: "issuer",
},
}
componentLabels := mocklabels.Component
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
group := "getambassador.io"
version := "v2"
kind := "Mapping"
cors := map[string]interface{}{
"origins": "*",
"methods": "POST, GET, OPTIONS, DELETE, PUT",
"headers": "*",
"credentials": true,
"exposed_headers": "*",
"max_age": "86400",
}
adminMName := labels.MustForName(componentLabels, AdminMName)
adminM := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": group + "/" + version,
"kind": kind,
"metadata": map[string]interface{}{
"labels": labels.MustK8sMap(adminMName),
"name": adminMName.Name(),
"namespace": namespace,
},
"spec": map[string]interface{}{
"connect_timeout_ms": 30000,
"host": "api.domain",
"prefix": "/caos.zitadel.admin.api.v1.AdminService/",
"rewrite": "",
"service": url,
"timeout_ms": 30000,
"cors": cors,
"grpc": true,
},
},
}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AdminMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AdminMName, adminM).Times(1)
authMName := labels.MustForName(componentLabels, AuthMName)
authM := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": group + "/" + version,
"kind": kind,
"metadata": map[string]interface{}{
"labels": labels.MustK8sMap(authMName),
"name": authMName.Name(),
"namespace": namespace,
},
"spec": map[string]interface{}{
"connect_timeout_ms": 30000,
"host": "api.domain",
"prefix": "/caos.zitadel.auth.api.v1.AuthService/",
"rewrite": "",
"service": url,
"timeout_ms": 30000,
"cors": cors,
"grpc": true,
},
},
}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AuthMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AuthMName, authM).Times(1)
mgmtMName := labels.MustForName(componentLabels, MgmtMName)
mgmtM := &unstructured.Unstructured{
Object: map[string]interface{}{
"apiVersion": group + "/" + version,
"kind": kind,
"metadata": map[string]interface{}{
"labels": labels.MustK8sMap(mgmtMName),
"name": mgmtMName.Name(),
"namespace": namespace,
},
"spec": map[string]interface{}{
"connect_timeout_ms": 30000,
"host": "api.domain",
"prefix": "/caos.zitadel.management.api.v1.ManagementService/",
"rewrite": "",
"service": url,
"timeout_ms": 30000,
"cors": cors,
"grpc": true,
},
},
}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, MgmtMName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, MgmtMName, mgmtM).Times(1)
query, _, err := AdaptFunc(monitor, componentLabels, namespace, url, dns)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,113 @@
package hosts
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/ambassador/host"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
)
const (
AccountsHostName = "accounts"
ApiHostName = "api"
ConsoleHostName = "console"
IssuerHostName = "issuer"
)
func AdaptFunc(
monitor mntr.Monitor,
componentLabels *labels.Component,
namespace string,
dns *configuration.DNS,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("part", "hosts")
destroyAccounts, err := host.AdaptFuncToDestroy(namespace, AccountsHostName)
if err != nil {
return nil, nil, err
}
destroyAPI, err := host.AdaptFuncToDestroy(namespace, ApiHostName)
if err != nil {
return nil, nil, err
}
destroyConsole, err := host.AdaptFuncToDestroy(namespace, ConsoleHostName)
if err != nil {
return nil, nil, err
}
destroyIssuer, err := host.AdaptFuncToDestroy(namespace, IssuerHostName)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyAccounts),
operator.ResourceDestroyToZitadelDestroy(destroyAPI),
operator.ResourceDestroyToZitadelDestroy(destroyConsole),
operator.ResourceDestroyToZitadelDestroy(destroyIssuer),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
crd, err := k8sClient.CheckCRD("hosts.getambassador.io")
if crd == nil || err != nil {
return func(k8sClient kubernetes.ClientInt) error { return nil }, nil
}
accountsDomain := dns.Subdomains.Accounts + "." + dns.Domain
apiDomain := dns.Subdomains.API + "." + dns.Domain
consoleDomain := dns.Subdomains.Console + "." + dns.Domain
issuerDomain := dns.Subdomains.Issuer + "." + dns.Domain
originCASecretName := dns.TlsSecret
accountsSelector := map[string]string{
"hostname": accountsDomain,
}
queryAccounts, err := host.AdaptFuncToEnsure(namespace, AccountsHostName, labels.MustForNameK8SMap(componentLabels, AccountsHostName), accountsDomain, "none", "", accountsSelector, originCASecretName)
if err != nil {
return nil, err
}
apiSelector := map[string]string{
"hostname": apiDomain,
}
queryAPI, err := host.AdaptFuncToEnsure(namespace, ApiHostName, labels.MustForNameK8SMap(componentLabels, ApiHostName), apiDomain, "none", "", apiSelector, originCASecretName)
if err != nil {
return nil, err
}
consoleSelector := map[string]string{
"hostname": consoleDomain,
}
queryConsole, err := host.AdaptFuncToEnsure(namespace, ConsoleHostName, labels.MustForNameK8SMap(componentLabels, ConsoleHostName), consoleDomain, "none", "", consoleSelector, originCASecretName)
if err != nil {
return nil, err
}
issuerSelector := map[string]string{
"hostname": issuerDomain,
}
queryIssuer, err := host.AdaptFuncToEnsure(namespace, IssuerHostName, labels.MustForNameK8SMap(componentLabels, IssuerHostName), issuerDomain, "none", "", issuerSelector, originCASecretName)
if err != nil {
return nil, err
}
queriers := []operator.QueryFunc{
operator.ResourceQueryToZitadelQuery(queryAccounts),
operator.ResourceQueryToZitadelQuery(queryAPI),
operator.ResourceQueryToZitadelQuery(queryConsole),
operator.ResourceQueryToZitadelQuery(queryIssuer),
}
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}


@@ -0,0 +1,379 @@
package hosts
import (
"testing"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/labels/mocklabels"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
apixv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)
func SetReturnResourceVersion(
k8sClient *kubernetesmock.MockClientInt,
group,
version,
kind,
namespace,
name string,
resourceVersion string,
) {
ret := &unstructured.Unstructured{
Object: map[string]interface{}{
"metadata": map[string]interface{}{
"resourceVersion": resourceVersion,
},
},
}
k8sClient.EXPECT().GetNamespacedCRDResource(group, version, kind, namespace, name).Return(ret, nil)
}

// TestHosts_AdaptFunc ensures that an empty DNS configuration still yields all
// four Ambassador Host resources, with empty hostnames and TLS secret names.
func TestHosts_AdaptFunc(t *testing.T) {
	monitor := mntr.Monitor{}
	namespace := "test"
	dns := &configuration.DNS{
		Domain: "",
		TlsSecret: "",
		Subdomains: &configuration.Subdomains{
			Accounts: "",
			API: "",
			Console: "",
			Issuer: "",
		},
	}
	componentLabels := mocklabels.Component

	k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
	k8sClient.EXPECT().CheckCRD("hosts.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)

	group := "getambassador.io"
	version := "v2"
	kind := "Host"

	issuerHostName := labels.MustForName(componentLabels, IssuerHostName)
	issuerHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": issuerHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(issuerHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": ".",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": ".",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, IssuerHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, IssuerHostName, issuerHost).Times(1)

	consoleHostName := labels.MustForName(componentLabels, ConsoleHostName)
	consoleHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": consoleHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(consoleHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": ".",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": ".",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ConsoleHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ConsoleHostName, consoleHost).Times(1)

	apiHostName := labels.MustForName(componentLabels, ApiHostName)
	apiHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": apiHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(apiHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": ".",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": ".",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ApiHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ApiHostName, apiHost).Times(1)

	accountsHostName := labels.MustForName(componentLabels, AccountsHostName)
	accountsHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": accountsHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(accountsHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": ".",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": ".",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AccountsHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AccountsHostName, accountsHost).Times(1)

	query, _, err := AdaptFunc(monitor, componentLabels, namespace, dns)
	assert.NoError(t, err)
	queried := map[string]interface{}{}
	ensure, err := query(k8sClient, queried)
	assert.NoError(t, err)
	assert.NoError(t, ensure(k8sClient))
}

// TestHosts_AdaptFunc2 covers the same flow with a fully populated DNS
// configuration, expecting concrete hostnames and the shared TLS secret.
func TestHosts_AdaptFunc2(t *testing.T) {
	monitor := mntr.Monitor{}
	namespace := "test"
	dns := &configuration.DNS{
		Domain: "domain",
		TlsSecret: "tls",
		Subdomains: &configuration.Subdomains{
			Accounts: "accounts",
			API: "api",
			Console: "console",
			Issuer: "issuer",
		},
	}
	componentLabels := mocklabels.Component

	k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
	k8sClient.EXPECT().CheckCRD("hosts.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)

	group := "getambassador.io"
	version := "v2"
	kind := "Host"

	issuerHostName := labels.MustForName(componentLabels, IssuerHostName)
	issuerHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": issuerHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(issuerHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": "issuer.domain",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": "issuer.domain",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "tls",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, IssuerHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, IssuerHostName, issuerHost).Times(1)

	consoleHostName := labels.MustForName(componentLabels, ConsoleHostName)
	consoleHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": consoleHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(consoleHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": "console.domain",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": "console.domain",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "tls",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ConsoleHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ConsoleHostName, consoleHost).Times(1)

	apiHostName := labels.MustForName(componentLabels, ApiHostName)
	apiHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": apiHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(apiHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": "api.domain",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": "api.domain",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "tls",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ApiHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ApiHostName, apiHost).Times(1)

	accountsHostName := labels.MustForName(componentLabels, AccountsHostName)
	accountsHost := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"kind": kind,
			"apiVersion": group + "/" + version,
			"metadata": map[string]interface{}{
				"name": accountsHostName.Name(),
				"namespace": namespace,
				"labels": labels.MustK8sMap(accountsHostName),
				"annotations": map[string]interface{}{
					"aes_res_changed": "true",
				},
			},
			"spec": map[string]interface{}{
				"hostname": "accounts.domain",
				"acmeProvider": map[string]interface{}{
					"authority": "none",
				},
				"ambassadorId": []string{
					"default",
				},
				"selector": map[string]interface{}{
					"matchLabels": map[string]interface{}{
						"hostname": "accounts.domain",
					},
				},
				"tlsSecret": map[string]interface{}{
					"name": "tls",
				},
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AccountsHostName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AccountsHostName, accountsHost).Times(1)

	query, _, err := AdaptFunc(monitor, componentLabels, namespace, dns)
	assert.NoError(t, err)
	queried := map[string]interface{}{}
	ensure, err := query(k8sClient, queried)
	assert.NoError(t, err)
	assert.NoError(t, ensure(k8sClient))
}
func TestHosts_AdaptFunc2(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "test"
dns := &configuration.DNS{
Domain: "domain",
TlsSecret: "tls",
Subdomains: &configuration.Subdomains{
Accounts: "accounts",
API: "api",
Console: "console",
Issuer: "issuer",
},
}
componentLabels := mocklabels.Component
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
k8sClient.EXPECT().CheckCRD("hosts.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
group := "getambassador.io"
version := "v2"
kind := "Host"
issuerHostName := labels.MustForName(componentLabels, IssuerHostName)
issuerHost := &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": kind,
"apiVersion": group + "/" + version,
"metadata": map[string]interface{}{
"name": issuerHostName.Name(),
"namespace": namespace,
"labels": labels.MustK8sMap(issuerHostName),
"annotations": map[string]interface{}{
"aes_res_changed": "true",
},
},
"spec": map[string]interface{}{
"hostname": "issuer.domain",
"acmeProvider": map[string]interface{}{
"authority": "none",
},
"ambassadorId": []string{
"default",
},
"selector": map[string]interface{}{
"matchLabels": map[string]interface{}{
"hostname": "issuer.domain",
},
},
"tlsSecret": map[string]interface{}{
"name": "tls",
},
},
}}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, IssuerHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, IssuerHostName, issuerHost).Times(1)
consoleHostName := labels.MustForName(componentLabels, ConsoleHostName)
consoleHost := &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": kind,
"apiVersion": group + "/" + version,
"metadata": map[string]interface{}{
"name": consoleHostName.Name(),
"namespace": namespace,
"labels": labels.MustK8sMap(consoleHostName),
"annotations": map[string]interface{}{
"aes_res_changed": "true",
},
},
"spec": map[string]interface{}{
"hostname": "console.domain",
"acmeProvider": map[string]interface{}{
"authority": "none",
},
"ambassadorId": []string{
"default",
},
"selector": map[string]interface{}{
"matchLabels": map[string]interface{}{
"hostname": "console.domain",
},
},
"tlsSecret": map[string]interface{}{
"name": "tls",
},
},
}}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ConsoleHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ConsoleHostName, consoleHost).Times(1)
apiHostName := labels.MustForName(componentLabels, ApiHostName)
apiHost := &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": kind,
"apiVersion": group + "/" + version,
"metadata": map[string]interface{}{
"name": apiHostName.Name(),
"namespace": namespace,
"labels": labels.MustK8sMap(apiHostName),
"annotations": map[string]interface{}{
"aes_res_changed": "true",
},
},
"spec": map[string]interface{}{
"hostname": "api.domain",
"acmeProvider": map[string]interface{}{
"authority": "none",
},
"ambassadorId": []string{
"default",
},
"selector": map[string]interface{}{
"matchLabels": map[string]interface{}{
"hostname": "api.domain",
},
},
"tlsSecret": map[string]interface{}{
"name": "tls",
},
},
}}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ApiHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ApiHostName, apiHost).Times(1)
accountsHostName := labels.MustForName(componentLabels, AccountsHostName)
accountsHost := &unstructured.Unstructured{
Object: map[string]interface{}{
"kind": kind,
"apiVersion": group + "/" + version,
"metadata": map[string]interface{}{
"name": accountsHostName.Name(),
"namespace": namespace,
"labels": labels.MustK8sMap(accountsHostName),
"annotations": map[string]interface{}{
"aes_res_changed": "true",
},
},
"spec": map[string]interface{}{
"hostname": "accounts.domain",
"acmeProvider": map[string]interface{}{
"authority": "none",
},
"ambassadorId": []string{
"default",
},
"selector": map[string]interface{}{
"matchLabels": map[string]interface{}{
"hostname": "accounts.domain",
},
},
"tlsSecret": map[string]interface{}{
"name": "tls",
},
},
}}
SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AccountsHostName, "")
k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AccountsHostName, accountsHost).Times(1)
query, _, err := AdaptFunc(monitor, componentLabels, namespace, dns)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,225 @@
package http

import (
	"github.com/caos/orbos/mntr"
	"github.com/caos/orbos/pkg/kubernetes"
	"github.com/caos/orbos/pkg/kubernetes/resources/ambassador/mapping"
	"github.com/caos/orbos/pkg/labels"
	"github.com/caos/zitadel/operator"
	"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
)

const (
	AdminRName = "admin-rest-v1"
	MgmtName = "mgmt-v1"
	OauthName = "oauth-v1"
	AuthRName = "auth-rest-v1"
	AuthorizeName = "authorize-v1"
	EndsessionName = "endsession-v1"
	IssuerName = "issuer-v1"
)

// AdaptFunc returns the query and destroy functions for the Ambassador
// mappings that route ZITADEL's REST, OAuth and issuer HTTP endpoints.
func AdaptFunc(
	monitor mntr.Monitor,
	componentLabels *labels.Component,
	namespace string,
	httpUrl string,
	dns *configuration.DNS,
) (
	operator.QueryFunc,
	operator.DestroyFunc,
	error,
) {
	internalMonitor := monitor.WithField("part", "http")

	destroyAdminR, err := mapping.AdaptFuncToDestroy(namespace, AdminRName)
	if err != nil {
		return nil, nil, err
	}
	destroyMgmtRest, err := mapping.AdaptFuncToDestroy(namespace, MgmtName)
	if err != nil {
		return nil, nil, err
	}
	destroyOAuthv2, err := mapping.AdaptFuncToDestroy(namespace, OauthName)
	if err != nil {
		return nil, nil, err
	}
	destroyAuthR, err := mapping.AdaptFuncToDestroy(namespace, AuthRName)
	if err != nil {
		return nil, nil, err
	}
	destroyAuthorize, err := mapping.AdaptFuncToDestroy(namespace, AuthorizeName)
	if err != nil {
		return nil, nil, err
	}
	destroyEndsession, err := mapping.AdaptFuncToDestroy(namespace, EndsessionName)
	if err != nil {
		return nil, nil, err
	}
	destroyIssuer, err := mapping.AdaptFuncToDestroy(namespace, IssuerName)
	if err != nil {
		return nil, nil, err
	}

	destroyers := []operator.DestroyFunc{
		operator.ResourceDestroyToZitadelDestroy(destroyAdminR),
		operator.ResourceDestroyToZitadelDestroy(destroyMgmtRest),
		operator.ResourceDestroyToZitadelDestroy(destroyOAuthv2),
		operator.ResourceDestroyToZitadelDestroy(destroyAuthR),
		operator.ResourceDestroyToZitadelDestroy(destroyAuthorize),
		operator.ResourceDestroyToZitadelDestroy(destroyEndsession),
		operator.ResourceDestroyToZitadelDestroy(destroyIssuer),
	}

	return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
			// Without the Mapping CRD there is nothing to ensure.
			crd, err := k8sClient.CheckCRD("mappings.getambassador.io")
			if crd == nil || err != nil {
				return func(k8sClient kubernetes.ClientInt) error { return nil }, nil
			}

			accountsDomain := dns.Subdomains.Accounts + "." + dns.Domain
			apiDomain := dns.Subdomains.API + "." + dns.Domain
			issuerDomain := dns.Subdomains.Issuer + "." + dns.Domain

			cors := &mapping.CORS{
				Origins: "*",
				Methods: "POST, GET, OPTIONS, DELETE, PUT",
				Headers: "*",
				Credentials: true,
				ExposedHeaders: "*",
				MaxAge: "86400",
			}

			queryAdminR, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, AdminRName),
				false,
				apiDomain,
				"/admin/v1",
				"",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}
			queryMgmtRest, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, MgmtName),
				false,
				apiDomain,
				"/management/v1/",
				"",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}
			queryOAuthv2, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, OauthName),
				false,
				apiDomain,
				"/oauth/v2/",
				"",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}
			queryAuthR, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, AuthRName),
				false,
				apiDomain,
				"/auth/v1/",
				"",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}
			queryAuthorize, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, AuthorizeName),
				false,
				accountsDomain,
				"/oauth/v2/authorize",
				"",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}
			queryEndsession, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, EndsessionName),
				false,
				accountsDomain,
				"/oauth/v2/endsession",
				"",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}
			queryIssuer, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, IssuerName),
				false,
				issuerDomain,
				"/.well-known/openid-configuration",
				"/oauth/v2/.well-known/openid-configuration",
				httpUrl,
				30000,
				30000,
				cors,
			)
			if err != nil {
				return nil, err
			}

			queriers := []operator.QueryFunc{
				operator.ResourceQueryToZitadelQuery(queryAdminR),
				operator.ResourceQueryToZitadelQuery(queryMgmtRest),
				operator.ResourceQueryToZitadelQuery(queryOAuthv2),
				operator.ResourceQueryToZitadelQuery(queryAuthR),
				operator.ResourceQueryToZitadelQuery(queryAuthorize),
				operator.ResourceQueryToZitadelQuery(queryEndsession),
				operator.ResourceQueryToZitadelQuery(queryIssuer),
			}

			return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
		},
		operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
		nil
}


@@ -0,0 +1,451 @@
package http

import (
	"testing"

	"github.com/caos/orbos/mntr"
	kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
	"github.com/caos/orbos/pkg/labels"
	"github.com/caos/orbos/pkg/labels/mocklabels"
	"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
	"github.com/golang/mock/gomock"
	"github.com/stretchr/testify/assert"
	apixv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// SetReturnResourceVersion stubs the mocked client so that fetching the named
// CRD resource returns an object carrying the given resourceVersion.
func SetReturnResourceVersion(
	k8sClient *kubernetesmock.MockClientInt,
	group,
	version,
	kind,
	namespace,
	name string,
	resourceVersion string,
) {
	ret := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"metadata": map[string]interface{}{
				"resourceVersion": resourceVersion,
			},
		},
	}
	k8sClient.EXPECT().GetNamespacedCRDResource(group, version, kind, namespace, name).Return(ret, nil)
}

// TestHttp_Adapt ensures that an empty DNS configuration still yields all
// seven HTTP mappings, each with an empty host.
func TestHttp_Adapt(t *testing.T) {
	monitor := mntr.Monitor{}
	namespace := "test"
	url := "url"
	dns := &configuration.DNS{
		Domain: "",
		TlsSecret: "",
		Subdomains: &configuration.Subdomains{
			Accounts: "",
			API: "",
			Console: "",
			Issuer: "",
		},
	}

	k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
	k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)

	group := "getambassador.io"
	version := "v2"
	kind := "Mapping"
	cors := map[string]interface{}{
		"origins": "*",
		"methods": "POST, GET, OPTIONS, DELETE, PUT",
		"headers": "*",
		"credentials": true,
		"exposed_headers": "*",
		"max_age": "86400",
	}
	componentLabels := mocklabels.Component

	endSessionName := labels.MustForName(componentLabels, EndsessionName)
	endsession := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(endSessionName),
				"name": endSessionName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/oauth/v2/endsession",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, EndsessionName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, EndsessionName, endsession).Times(1)

	issuerName := labels.MustForName(componentLabels, IssuerName)
	issuer := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(issuerName),
				"name": issuerName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/.well-known/openid-configuration",
				"rewrite": "/oauth/v2/.well-known/openid-configuration",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, IssuerName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, IssuerName, issuer).Times(1)

	authorizeName := labels.MustForName(componentLabels, AuthorizeName)
	authorize := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(authorizeName),
				"name": authorizeName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/oauth/v2/authorize",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AuthorizeName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AuthorizeName, authorize).Times(1)

	oauthName := labels.MustForName(componentLabels, OauthName)
	oauth := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(oauthName),
				"name": oauthName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/oauth/v2/",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, OauthName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, OauthName, oauth).Times(1)

	mgmtName := labels.MustForName(componentLabels, MgmtName)
	mgmt := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(mgmtName),
				"name": mgmtName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/management/v1/",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, MgmtName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, MgmtName, mgmt).Times(1)

	adminRName := labels.MustForName(componentLabels, AdminRName)
	adminR := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(adminRName),
				"name": adminRName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/admin/v1",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AdminRName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AdminRName, adminR).Times(1)

	authRName := labels.MustForName(componentLabels, AuthRName)
	authR := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(authRName),
				"name": authRName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/auth/v1/",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AuthRName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AuthRName, authR).Times(1)

	query, _, err := AdaptFunc(monitor, componentLabels, namespace, url, dns)
	assert.NoError(t, err)
	queried := map[string]interface{}{}
	ensure, err := query(k8sClient, queried)
	assert.NoError(t, err)
	assert.NoError(t, ensure(k8sClient))
}

// TestHttp_Adapt2 covers the same flow with a fully populated DNS
// configuration, expecting the accounts, issuer and api hosts on each mapping.
func TestHttp_Adapt2(t *testing.T) {
	monitor := mntr.Monitor{}
	namespace := "test"
	url := "url"
	dns := &configuration.DNS{
		Domain: "domain",
		TlsSecret: "tls",
		Subdomains: &configuration.Subdomains{
			Accounts: "accounts",
			API: "api",
			Console: "console",
			Issuer: "issuer",
		},
	}

	k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
	k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)

	group := "getambassador.io"
	version := "v2"
	kind := "Mapping"
	cors := map[string]interface{}{
		"origins": "*",
		"methods": "POST, GET, OPTIONS, DELETE, PUT",
		"headers": "*",
		"credentials": true,
		"exposed_headers": "*",
		"max_age": "86400",
	}
	componentLabels := mocklabels.Component

	endsessionName := labels.MustForName(componentLabels, EndsessionName)
	endsession := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(endsessionName),
				"name": endsessionName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "accounts.domain",
				"prefix": "/oauth/v2/endsession",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, EndsessionName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, EndsessionName, endsession).Times(1)

	issuerName := labels.MustForName(componentLabels, IssuerName)
	issuer := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(issuerName),
				"name": issuerName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "issuer.domain",
				"prefix": "/.well-known/openid-configuration",
				"rewrite": "/oauth/v2/.well-known/openid-configuration",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, IssuerName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, IssuerName, issuer).Times(1)

	authorizeName := labels.MustForName(componentLabels, AuthorizeName)
	authorize := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(authorizeName),
				"name": authorizeName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "accounts.domain",
				"prefix": "/oauth/v2/authorize",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AuthorizeName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AuthorizeName, authorize).Times(1)

	oauthName := labels.MustForName(componentLabels, OauthName)
	oauth := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(oauthName),
				"name": oauthName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "api.domain",
				"prefix": "/oauth/v2/",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, OauthName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, OauthName, oauth).Times(1)

	mgmtName := labels.MustForName(componentLabels, MgmtName)
	mgmt := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(mgmtName),
				"name": mgmtName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "api.domain",
				"prefix": "/management/v1/",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, MgmtName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, MgmtName, mgmt).Times(1)

	adminRName := labels.MustForName(componentLabels, AdminRName)
	adminR := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(adminRName),
				"name": adminRName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "api.domain",
				"prefix": "/admin/v1",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AdminRName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AdminRName, adminR).Times(1)

	authRName := labels.MustForName(componentLabels, AuthRName)
	authR := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(authRName),
				"name": authRName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "api.domain",
				"prefix": "/auth/v1/",
				"rewrite": "",
				"service": url,
				"timeout_ms": 30000,
				"cors": cors,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AuthRName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AuthRName, authR).Times(1)

	query, _, err := AdaptFunc(monitor, componentLabels, namespace, url, dns)
	assert.NoError(t, err)
	queried := map[string]interface{}{}
	ensure, err := query(k8sClient, queried)
	assert.NoError(t, err)
	assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,95 @@
package ui

import (
	"github.com/caos/orbos/mntr"
	"github.com/caos/orbos/pkg/kubernetes"
	"github.com/caos/orbos/pkg/kubernetes/resources/ambassador/mapping"
	"github.com/caos/orbos/pkg/labels"
	"github.com/caos/zitadel/operator"
	"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
)

const (
	ConsoleName = "console-v1"
	AccountsName = "accounts-v1"
)

// AdaptFunc returns the query and destroy functions for the Ambassador
// mappings that route the console and accounts (login) UIs.
func AdaptFunc(
	monitor mntr.Monitor,
	componentLabels *labels.Component,
	namespace string,
	uiURL string,
	dns *configuration.DNS,
) (
	operator.QueryFunc,
	operator.DestroyFunc,
	error,
) {
	internalMonitor := monitor.WithField("part", "ui")

	destroyAcc, err := mapping.AdaptFuncToDestroy(namespace, AccountsName)
	if err != nil {
		return nil, nil, err
	}
	destroyConsole, err := mapping.AdaptFuncToDestroy(namespace, ConsoleName)
	if err != nil {
		return nil, nil, err
	}

	destroyers := []operator.DestroyFunc{
		operator.ResourceDestroyToZitadelDestroy(destroyAcc),
		operator.ResourceDestroyToZitadelDestroy(destroyConsole),
	}

	return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
			// Without the Mapping CRD there is nothing to ensure.
			crd, err := k8sClient.CheckCRD("mappings.getambassador.io")
			if crd == nil || err != nil {
				return func(k8sClient kubernetes.ClientInt) error { return nil }, nil
			}

			accountsDomain := dns.Subdomains.Accounts + "." + dns.Domain
			consoleDomain := dns.Subdomains.Console + "." + dns.Domain

			queryConsole, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, ConsoleName),
				false,
				consoleDomain,
				"/",
				"/console/",
				uiURL,
				0,
				0,
				nil,
			)
			if err != nil {
				return nil, err
			}
			queryAcc, err := mapping.AdaptFuncToEnsure(
				namespace,
				labels.MustForName(componentLabels, AccountsName),
				false,
				accountsDomain,
				"/",
				"/login/",
				uiURL,
				30000,
				30000,
				nil,
			)
			if err != nil {
				return nil, err
			}

			queriers := []operator.QueryFunc{
				operator.ResourceQueryToZitadelQuery(queryConsole),
				operator.ResourceQueryToZitadelQuery(queryAcc),
			}

			return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
		},
		operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
		nil
}


@@ -0,0 +1,203 @@
package ui

import (
	"testing"

	"github.com/caos/orbos/mntr"
	kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
	"github.com/caos/orbos/pkg/labels"
	"github.com/caos/orbos/pkg/labels/mocklabels"
	"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration"
	"github.com/golang/mock/gomock"
	"github.com/stretchr/testify/assert"
	apixv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// SetReturnResourceVersion stubs the mocked client so that fetching the named
// CRD resource returns an object carrying the given resourceVersion.
func SetReturnResourceVersion(
	k8sClient *kubernetesmock.MockClientInt,
	group,
	version,
	kind,
	namespace,
	name string,
	resourceVersion string,
) {
	ret := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"metadata": map[string]interface{}{
				"resourceVersion": resourceVersion,
			},
		},
	}
	k8sClient.EXPECT().GetNamespacedCRDResource(group, version, kind, namespace, name).Return(ret, nil)
}

// SetCheckCRD expects exactly one check for the Mapping CRD and reports it as present.
func SetCheckCRD(k8sClient *kubernetesmock.MockClientInt) {
	k8sClient.EXPECT().CheckCRD("mappings.getambassador.io").Times(1).Return(&apixv1beta1.CustomResourceDefinition{}, nil)
}

// SetMappingsEmpty expects the accounts and console mappings to be applied
// with empty hosts, as produced by an empty DNS configuration.
func SetMappingsEmpty(
	k8sClient *kubernetesmock.MockClientInt,
	namespace string,
	accountsLabels *labels.Name,
	consoleLabels *labels.Name,
	url string,
) {
	group := "getambassador.io"
	version := "v2"
	kind := "Mapping"

	accounts := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(accountsLabels),
				"name": accountsLabels.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": ".",
				"prefix": "/",
				"rewrite": "/login/",
				"service": url,
				"timeout_ms": 30000,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, accountsLabels.Name(), "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, accountsLabels.Name(), accounts).Times(1)

	console := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(consoleLabels),
				"name": consoleLabels.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"host": ".",
				"prefix": "/",
				"rewrite": "/console/",
				"service": url,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, consoleLabels.Name(), "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, consoleLabels.Name(), console).Times(1)
}

// TestUi_Adapt ensures both UI mappings are applied for an empty DNS configuration.
func TestUi_Adapt(t *testing.T) {
	monitor := mntr.Monitor{}
	namespace := "test"
	uiURL := "url"
	dns := &configuration.DNS{
		Domain: "",
		TlsSecret: "",
		Subdomains: &configuration.Subdomains{
			Accounts: "",
			API: "",
			Console: "",
			Issuer: "",
		},
	}

	k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
	componentLabels := mocklabels.Component
	SetCheckCRD(k8sClient)
	SetMappingsEmpty(
		k8sClient,
		namespace,
		labels.MustForName(componentLabels, AccountsName),
		labels.MustForName(componentLabels, ConsoleName),
		uiURL,
	)

	query, _, err := AdaptFunc(monitor, componentLabels, namespace, uiURL, dns)
	assert.NoError(t, err)
	queried := map[string]interface{}{}
	ensure, err := query(k8sClient, queried)
	assert.NoError(t, err)
	assert.NoError(t, ensure(k8sClient))
}

// TestUi_Adapt2 covers the same flow with a fully populated DNS configuration,
// expecting the accounts and console hostnames on the applied mappings.
func TestUi_Adapt2(t *testing.T) {
	monitor := mntr.Monitor{}
	namespace := "test"
	uiURL := "url"
	dns := &configuration.DNS{
		Domain: "domain",
		TlsSecret: "tls",
		Subdomains: &configuration.Subdomains{
			Accounts: "accounts",
			API: "api",
			Console: "console",
			Issuer: "issuer",
		},
	}

	k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
	SetCheckCRD(k8sClient)

	group := "getambassador.io"
	version := "v2"
	kind := "Mapping"
	componentLabels := mocklabels.Component

	accountsName := labels.MustForName(componentLabels, AccountsName)
	accounts := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(accountsName),
				"name": accountsName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"connect_timeout_ms": 30000,
				"host": "accounts.domain",
				"prefix": "/",
				"rewrite": "/login/",
				"service": uiURL,
				"timeout_ms": 30000,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, AccountsName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, AccountsName, accounts).Times(1)

	consoleName := labels.MustForName(componentLabels, ConsoleName)
	console := &unstructured.Unstructured{
		Object: map[string]interface{}{
			"apiVersion": group + "/" + version,
			"kind": kind,
			"metadata": map[string]interface{}{
				"labels": labels.MustK8sMap(consoleName),
				"name": consoleName.Name(),
				"namespace": namespace,
			},
			"spec": map[string]interface{}{
				"host": "console.domain",
				"prefix": "/",
				"rewrite": "/console/",
				"service": uiURL,
			},
		},
	}
	SetReturnResourceVersion(k8sClient, group, version, kind, namespace, ConsoleName, "")
	k8sClient.EXPECT().ApplyNamespacedCRDResource(group, version, kind, namespace, ConsoleName, console).Times(1)

	query, _, err := AdaptFunc(monitor, componentLabels, namespace, uiURL, dns)
	assert.NoError(t, err)
	queried := map[string]interface{}{}
	ensure, err := query(k8sClient, queried)
	assert.NoError(t, err)
	assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,182 @@
package configuration
import (
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/configmap"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/configuration/users"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/database"
)
type ConsoleEnv struct {
AuthServiceURL string `json:"authServiceUrl"`
MgmtServiceURL string `json:"mgmtServiceUrl"`
Issuer string `json:"issuer"`
ClientID string `json:"clientid"`
}
const (
googleServiceAccountJSONPath = "google-serviceaccount-key.json"
zitadelKeysPath = "zitadel-keys.yaml"
timeout time.Duration = 60
)
func AdaptFunc(
monitor mntr.Monitor,
componentLabels *labels.Component,
namespace string,
desired *Configuration,
cmName string,
certPath string,
secretName string,
secretPath string,
consoleCMName string,
secretVarsName string,
secretPasswordName string,
necessaryUsers map[string]string,
getClientID func() string,
dbClient database.ClientInt,
) (
operator.QueryFunc,
operator.DestroyFunc,
func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) map[string]string,
error,
) {
internalMonitor := monitor.WithField("component", "configuration")
literalsSecret := literalsSecret(desired, googleServiceAccountJSONPath, zitadelKeysPath)
literalsSecretVars := literalsSecretVars(desired)
destroyCM, err := configmap.AdaptFuncToDestroy(namespace, cmName)
if err != nil {
return nil, nil, nil, err
}
destroyS, err := secret.AdaptFuncToDestroy(namespace, secretName)
if err != nil {
return nil, nil, nil, err
}
destroyCCM, err := configmap.AdaptFuncToDestroy(namespace, consoleCMName)
if err != nil {
return nil, nil, nil, err
}
destroySV, err := secret.AdaptFuncToDestroy(namespace, secretVarsName)
if err != nil {
return nil, nil, nil, err
}
destroySP, err := secret.AdaptFuncToDestroy(namespace, secretPasswordName)
if err != nil {
return nil, nil, nil, err
}
_, destroyUser, err := users.AdaptFunc(internalMonitor, necessaryUsers, dbClient)
if err != nil {
return nil, nil, nil, err
}
destroyers := []operator.DestroyFunc{
destroyUser,
operator.ResourceDestroyToZitadelDestroy(destroyS),
operator.ResourceDestroyToZitadelDestroy(destroyCM),
operator.ResourceDestroyToZitadelDestroy(destroyCCM),
operator.ResourceDestroyToZitadelDestroy(destroySV),
operator.ResourceDestroyToZitadelDestroy(destroySP),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
queryUser, _, err := users.AdaptFunc(internalMonitor, necessaryUsers, dbClient)
if err != nil {
return nil, err
}
queryS, err := secret.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, secretName), literalsSecret)
if err != nil {
return nil, err
}
querySV, err := secret.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, secretVarsName), literalsSecretVars)
if err != nil {
return nil, err
}
querySP, err := secret.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, secretPasswordName), necessaryUsers)
if err != nil {
return nil, err
}
queryCCM, err := configmap.AdaptFuncToEnsure(
namespace,
consoleCMName,
labels.MustForNameK8SMap(componentLabels, consoleCMName),
literalsConsoleCM(
getClientID(),
desired.DNS,
k8sClient,
namespace,
consoleCMName,
),
)
if err != nil {
return nil, err
}
queryCM, err := configmap.AdaptFuncToEnsure(
namespace,
cmName,
labels.MustForNameK8SMap(componentLabels, cmName),
literalsConfigMap(
desired,
necessaryUsers,
certPath,
secretPath,
googleServiceAccountJSONPath,
zitadelKeysPath,
queried,
),
)
if err != nil {
return nil, err
}
queriers := []operator.QueryFunc{
queryUser,
operator.ResourceQueryToZitadelQuery(queryS),
operator.ResourceQueryToZitadelQuery(queryCCM),
operator.ResourceQueryToZitadelQuery(querySV),
operator.ResourceQueryToZitadelQuery(querySP),
operator.ResourceQueryToZitadelQuery(queryCM),
}
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) map[string]string {
return map[string]string{
secretName: getHash(literalsSecret),
secretVarsName: getHash(literalsSecretVars),
secretPasswordName: getHash(necessaryUsers),
cmName: getHash(
literalsConfigMap(
desired,
necessaryUsers,
certPath,
secretPath,
googleServiceAccountJSONPath,
zitadelKeysPath,
queried,
),
),
consoleCMName: getHash(
literalsConsoleCM(
getClientID(),
desired.DNS,
k8sClient,
namespace,
consoleCMName,
),
),
}
},
nil
}


@@ -0,0 +1,325 @@
package configuration
import (
"errors"
"testing"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/labels/mocklabels"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/database"
databasemock "github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/database/mock"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func SetConfigMap(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
cmName string,
labels map[string]string,
queried map[string]interface{},
desired *Configuration,
users map[string]string,
certPath, secretPath, googleServiceAccountJSONPath, zitadelKeysPath string,
) {
k8sClient.EXPECT().ApplyConfigmap(&corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: cmName,
Labels: labels,
},
Data: literalsConfigMap(desired, users, certPath, secretPath, googleServiceAccountJSONPath, zitadelKeysPath, queried),
})
}
func SetSecretVars(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
secretVarsName string,
labels map[string]string,
desired *Configuration,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: secretVarsName,
Labels: labels,
},
Type: "Opaque",
StringData: literalsSecretVars(desired),
}).Times(1)
}
func SetConsoleCM(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
consoleCMName string,
labels map[string]string,
getClientID func() string,
desired *Configuration,
) {
k8sClient.EXPECT().GetConfigMap(namespace, consoleCMName).Times(2).Return(nil, errors.New("Not Found"))
consoleCM := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: consoleCMName,
Labels: labels,
},
Data: literalsConsoleCM(getClientID(), desired.DNS, k8sClient, namespace, consoleCMName),
}
k8sClient.EXPECT().ApplyConfigmap(consoleCM).Times(1)
}
func SetSecrets(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
secretName string,
labels map[string]string,
desired *Configuration,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: secretName,
Labels: labels,
},
Type: "Opaque",
StringData: literalsSecret(desired, googleServiceAccountJSONPath, zitadelKeysPath),
}).Times(1)
}
func SetSecretPasswords(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
secretPasswordName string,
labels map[string]string,
users map[string]string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: secretPasswordName,
Labels: labels,
},
Type: "Opaque",
StringData: users,
}).Times(1)
}
func TestConfiguration_Adapt(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
dbClient := databasemock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{Fields: map[string]interface{}{"component": "configuration"}}
namespace := "test"
cmName := "cm"
secretName := "secret"
consoleCMName := "console"
secretVarsName := "vars"
secretPasswordName := "passwords"
getClientID := func() string { return "test" }
certPath := "test"
secretPath := "test"
users := map[string]string{
"migration": "migration",
"management": "management",
"auth": "auth",
"authz": "authz",
"adminapi": "adminapi",
"notification": "notification",
"eventstore": "eventstore",
}
componentLabels := mocklabels.Component
queried := map[string]interface{}{}
database.SetDatabaseInQueried(queried, &database.Current{
Host: "host",
Port: "port",
Users: []string{},
})
for user := range users {
dbClient.EXPECT().AddUser(monitor, user, k8sClient).Times(1)
}
SetConfigMap(
k8sClient,
namespace,
cmName,
labels.MustForNameK8SMap(componentLabels, cmName),
queried,
desiredEmpty,
users,
certPath,
secretPath,
googleServiceAccountJSONPath,
zitadelKeysPath)
SetSecretVars(
k8sClient,
namespace,
secretVarsName,
labels.MustForNameK8SMap(componentLabels, secretVarsName),
desiredEmpty,
)
SetConsoleCM(
k8sClient,
namespace,
consoleCMName,
labels.MustForNameK8SMap(componentLabels, consoleCMName),
getClientID,
desiredEmpty,
)
SetSecrets(
k8sClient,
namespace,
secretName,
labels.MustForNameK8SMap(componentLabels, secretName),
desiredEmpty,
)
SetSecretPasswords(
k8sClient,
namespace,
secretPasswordName,
labels.MustForNameK8SMap(componentLabels, secretPasswordName),
users,
)
query, _, _, err := AdaptFunc(
monitor,
componentLabels,
namespace,
desiredEmpty,
cmName,
certPath,
secretName,
secretPath,
consoleCMName,
secretVarsName,
secretPasswordName,
users,
getClientID,
dbClient,
)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(k8sClient))
}
func TestConfiguration_AdaptFull(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
dbClient := databasemock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{Fields: map[string]interface{}{"component": "configuration"}}
namespace := "test2"
cmName := "cm2"
secretName := "secret2"
consoleCMName := "console2"
secretVarsName := "vars2"
secretPasswordName := "passwords2"
getClientID := func() string { return "test2" }
certPath := "test2"
secretPath := "test2"
users := map[string]string{
"migration": "migration",
"management": "management",
"auth": "auth",
"authz": "authz",
"adminapi": "adminapi",
"notification": "notification",
"eventstore": "eventstore",
}
componentLabels := mocklabels.Component
queried := map[string]interface{}{}
database.SetDatabaseInQueried(queried, &database.Current{
Host: "host2",
Port: "port2",
Users: []string{},
})
for user := range users {
dbClient.EXPECT().AddUser(monitor, user, k8sClient).Times(1)
}
SetConfigMap(
k8sClient,
namespace,
cmName,
labels.MustForNameK8SMap(componentLabels, cmName),
queried,
desiredFull,
users,
certPath,
secretPath,
googleServiceAccountJSONPath,
zitadelKeysPath)
SetSecretVars(
k8sClient,
namespace,
secretVarsName,
labels.MustForNameK8SMap(componentLabels, secretVarsName),
desiredFull,
)
SetConsoleCM(
k8sClient,
namespace,
consoleCMName,
labels.MustForNameK8SMap(componentLabels, consoleCMName),
getClientID,
desiredFull,
)
SetSecrets(
k8sClient,
namespace,
secretName,
labels.MustForNameK8SMap(componentLabels, secretName),
desiredFull,
)
SetSecretPasswords(
k8sClient,
namespace,
secretPasswordName,
labels.MustForNameK8SMap(componentLabels, secretPasswordName),
users,
)
query, _, _, err := AdaptFunc(
monitor,
componentLabels,
namespace,
desiredFull,
cmName,
certPath,
secretName,
secretPath,
consoleCMName,
secretVarsName,
secretPasswordName,
users,
getClientID,
dbClient,
)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,83 @@
package configuration
import "github.com/caos/orbos/pkg/secret"
type Configuration struct {
Tracing *Tracing `yaml:"tracing,omitempty"`
Cache *Cache `yaml:"cache,omitempty"`
Secrets *Secrets `yaml:"secrets,omitempty"`
Notifications *Notifications `yaml:"notifications,omitempty"`
Passwords *Passwords `yaml:"passwords,omitempty"`
DebugMode bool `yaml:"debugMode"`
LogLevel string `yaml:"logLevel"`
DNS *DNS `yaml:"dns"`
ClusterDNS string `yaml:"clusterdns"`
}
type DNS struct {
Domain string `yaml:"domain"`
TlsSecret string `yaml:"tlsSecret"`
Subdomains *Subdomains `yaml:"subdomains"`
}
type Subdomains struct {
Accounts string `yaml:"accounts"`
API string `yaml:"api"`
Console string `yaml:"console"`
Issuer string `yaml:"issuer"`
}
type Passwords struct {
Migration *secret.Secret `yaml:"migration"`
Management *secret.Secret `yaml:"management"`
Auth *secret.Secret `yaml:"auth"`
Authz *secret.Secret `yaml:"authz"`
Adminapi *secret.Secret `yaml:"adminapi"`
Notification *secret.Secret `yaml:"notification"`
Eventstore *secret.Secret `yaml:"eventstore"`
}
type Secrets struct {
Keys *secret.Secret `yaml:"keys,omitempty"`
UserVerificationID string `yaml:"userVerificationID,omitempty"`
OTPVerificationID string `yaml:"otpVerificationID,omitempty"`
OIDCKeysID string `yaml:"oidcKeysID,omitempty"`
CookieID string `yaml:"cookieID,omitempty"`
CSRFID string `yaml:"csrfID,omitempty"`
DomainVerificationID string `yaml:"domainVerificationID,omitempty"`
IDPConfigVerificationID string `yaml:"idpConfigVerificationID,omitempty"`
}
type Notifications struct {
GoogleChatURL *secret.Secret `yaml:"googleChatURL,omitempty"`
Email *Email `yaml:"email,omitempty"`
Twilio *Twilio `yaml:"twilio,omitempty"`
}
type Tracing struct {
ServiceAccountJSON *secret.Secret `yaml:"serviceAccountJSON,omitempty"`
ProjectID string `yaml:"projectID,omitempty"`
Fraction string `yaml:"fraction,omitempty"`
Type string `yaml:"type,omitempty"`
}
type Twilio struct {
SenderName string `yaml:"senderName,omitempty"`
AuthToken *secret.Secret `yaml:"authToken,omitempty"`
SID *secret.Secret `yaml:"sid,omitempty"`
}
type Email struct {
SMTPHost string `yaml:"smtpHost,omitempty"`
SMTPUser string `yaml:"smtpUser,omitempty"`
SenderAddress string `yaml:"senderAddress,omitempty"`
SenderName string `yaml:"senderName,omitempty"`
TLS bool `yaml:"tls,omitempty"`
AppKey *secret.Secret `yaml:"appKey,omitempty"`
}
type Cache struct {
MaxAge string `yaml:"maxAge,omitempty"`
SharedMaxAge string `yaml:"sharedMaxAge,omitempty"`
ShortMaxAge string `yaml:"shortMaxAge,omitempty"`
ShortSharedMaxAge string `yaml:"shortSharedMaxAge,omitempty"`
}


@@ -0,0 +1,16 @@
package configuration
import (
"crypto/sha512"
"encoding/base64"
"encoding/json"
)
func getHash(dataMap map[string]string) string {
data, err := json.Marshal(dataMap)
if err != nil {
return ""
}
h := sha512.New()
// Write the marshaled data into the hash; h.Sum(data) would only
// append the digest of an empty hash state to data.
h.Write(data)
return base64.URLEncoding.EncodeToString(h.Sum(nil))
}


@@ -0,0 +1,183 @@
package configuration
import (
"encoding/json"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/database"
"strconv"
"strings"
)
const (
jsonName = "environment.json"
)
func literalsConfigMap(
desired *Configuration,
users map[string]string,
certPath, secretPath, googleServiceAccountJSONPath, zitadelKeysPath string,
queried map[string]interface{},
) map[string]string {
tls := "FALSE"
if desired != nil && desired.Notifications != nil && desired.Notifications.Email != nil && desired.Notifications.Email.TLS {
tls = "TRUE"
}
literalsConfigMap := map[string]string{
"GOOGLE_APPLICATION_CREDENTIALS": secretPath + "/" + googleServiceAccountJSONPath,
"ZITADEL_KEY_PATH": secretPath + "/" + zitadelKeysPath,
"ZITADEL_LOG_LEVEL": "info",
"DEBUG_MODE": strconv.FormatBool(desired.DebugMode),
"SMTP_TLS": tls,
"CAOS_OIDC_DEV": "true",
"CR_SSL_MODE": "require",
"CR_ROOT_CERT": certPath + "/ca.crt",
}
if users != nil {
for user := range users {
literalsConfigMap["CR_"+strings.ToUpper(user)+"_CERT"] = certPath + "/client." + user + ".crt"
literalsConfigMap["CR_"+strings.ToUpper(user)+"_KEY"] = certPath + "/client." + user + ".key"
}
}
if desired != nil {
if desired.Tracing != nil {
literalsConfigMap["ZITADEL_TRACING_PROJECT_ID"] = desired.Tracing.ProjectID
literalsConfigMap["ZITADEL_TRACING_FRACTION"] = desired.Tracing.Fraction
literalsConfigMap["ZITADEL_TRACING_TYPE"] = desired.Tracing.Type
}
if desired.Secrets != nil {
literalsConfigMap["ZITADEL_USER_VERIFICATION_KEY"] = desired.Secrets.UserVerificationID
literalsConfigMap["ZITADEL_OTP_VERIFICATION_KEY"] = desired.Secrets.OTPVerificationID
literalsConfigMap["ZITADEL_OIDC_KEYS_ID"] = desired.Secrets.OIDCKeysID
literalsConfigMap["ZITADEL_COOKIE_KEY"] = desired.Secrets.CookieID
literalsConfigMap["ZITADEL_CSRF_KEY"] = desired.Secrets.CSRFID
literalsConfigMap["ZITADEL_DOMAIN_VERIFICATION_KEY"] = desired.Secrets.DomainVerificationID
literalsConfigMap["ZITADEL_IDP_CONFIG_VERIFICATION_KEY"] = desired.Secrets.IDPConfigVerificationID
}
if desired.Notifications != nil {
if desired.Notifications.Twilio != nil {
literalsConfigMap["TWILIO_SENDER_NAME"] = desired.Notifications.Twilio.SenderName
}
if desired.Notifications.Email != nil {
literalsConfigMap["SMTP_HOST"] = desired.Notifications.Email.SMTPHost
literalsConfigMap["SMTP_USER"] = desired.Notifications.Email.SMTPUser
literalsConfigMap["EMAIL_SENDER_ADDRESS"] = desired.Notifications.Email.SenderAddress
literalsConfigMap["EMAIL_SENDER_NAME"] = desired.Notifications.Email.SenderName
}
}
if desired.Cache != nil {
literalsConfigMap["ZITADEL_CACHE_MAXAGE"] = desired.Cache.MaxAge
literalsConfigMap["ZITADEL_CACHE_SHARED_MAXAGE"] = desired.Cache.SharedMaxAge
literalsConfigMap["ZITADEL_SHORT_CACHE_MAXAGE"] = desired.Cache.ShortMaxAge
literalsConfigMap["ZITADEL_SHORT_CACHE_SHARED_MAXAGE"] = desired.Cache.ShortSharedMaxAge
}
if desired.LogLevel != "" {
literalsConfigMap["ZITADEL_LOG_LEVEL"] = desired.LogLevel
}
if desired.DNS != nil {
defaultDomain := desired.DNS.Domain
accountsDomain := desired.DNS.Subdomains.Accounts + "." + defaultDomain
accounts := "https://" + accountsDomain
issuer := "https://" + desired.DNS.Subdomains.Issuer + "." + defaultDomain
oauth := "https://" + desired.DNS.Subdomains.API + "." + defaultDomain + "/oauth/v2"
authorize := "https://" + desired.DNS.Subdomains.Accounts + "." + defaultDomain + "/oauth/v2"
console := "https://" + desired.DNS.Subdomains.Console + "." + defaultDomain
literalsConfigMap["ZITADEL_ISSUER"] = issuer
literalsConfigMap["ZITADEL_ACCOUNTS"] = accounts
literalsConfigMap["ZITADEL_OAUTH"] = oauth
literalsConfigMap["ZITADEL_AUTHORIZE"] = authorize
literalsConfigMap["ZITADEL_CONSOLE"] = console
literalsConfigMap["ZITADEL_ACCOUNTS_DOMAIN"] = accountsDomain
literalsConfigMap["ZITADEL_COOKIE_DOMAIN"] = accountsDomain
literalsConfigMap["ZITADEL_DEFAULT_DOMAIN"] = defaultDomain
}
}
db, err := database.GetDatabaseInQueried(queried)
if err == nil {
literalsConfigMap["ZITADEL_EVENTSTORE_HOST"] = db.Host
literalsConfigMap["ZITADEL_EVENTSTORE_PORT"] = db.Port
}
return literalsConfigMap
}
func literalsSecret(desired *Configuration, googleServiceAccountJSONPath, zitadelKeysPath string) map[string]string {
literalsSecret := map[string]string{}
if desired != nil {
if desired.Tracing != nil && desired.Tracing.ServiceAccountJSON != nil {
literalsSecret[googleServiceAccountJSONPath] = desired.Tracing.ServiceAccountJSON.Value
}
if desired.Secrets != nil && desired.Secrets.Keys != nil {
literalsSecret[zitadelKeysPath] = desired.Secrets.Keys.Value
}
}
return literalsSecret
}
func literalsSecretVars(desired *Configuration) map[string]string {
literalsSecretVars := map[string]string{}
if desired != nil {
if desired.Notifications != nil {
if desired.Notifications.Email != nil && desired.Notifications.Email.AppKey != nil {
literalsSecretVars["ZITADEL_EMAILAPPKEY"] = desired.Notifications.Email.AppKey.Value
}
if desired.Notifications.GoogleChatURL != nil {
literalsSecretVars["ZITADEL_GOOGLE_CHAT_URL"] = desired.Notifications.GoogleChatURL.Value
}
if desired.Notifications.Twilio != nil {
if desired.Notifications.Twilio.AuthToken != nil {
literalsSecretVars["ZITADEL_TWILIO_AUTH_TOKEN"] = desired.Notifications.Twilio.AuthToken.Value
}
if desired.Notifications.Twilio.SID != nil {
literalsSecretVars["ZITADEL_TWILIO_SID"] = desired.Notifications.Twilio.SID.Value
}
}
}
}
return literalsSecretVars
}
func literalsConsoleCM(
clientID string,
dns *DNS,
k8sClient kubernetes.ClientInt,
namespace string,
cmName string,
) map[string]string {
literalsConsoleCM := map[string]string{}
consoleEnv := ConsoleEnv{
ClientID: clientID,
}
cm, err := k8sClient.GetConfigMap(namespace, cmName)
// only reuse the old ConfigMap when one was found
if cm != nil && err == nil {
jsonData, ok := cm.Data[jsonName]
if ok {
literalsData := map[string]string{}
err := json.Unmarshal([]byte(jsonData), &literalsData)
if err == nil {
oldClientID, ok := literalsData["clientid"]
// only fall back to the old clientID when no new one is provided
if ok && consoleEnv.ClientID == "" {
consoleEnv.ClientID = oldClientID
}
}
}
}
consoleEnv.Issuer = "https://" + dns.Subdomains.Issuer + "." + dns.Domain
consoleEnv.AuthServiceURL = "https://" + dns.Subdomains.API + "." + dns.Domain
consoleEnv.MgmtServiceURL = "https://" + dns.Subdomains.API + "." + dns.Domain
data, err := json.Marshal(consoleEnv)
if err != nil {
return map[string]string{}
}
literalsConsoleCM[jsonName] = string(data)
return literalsConsoleCM
}


@@ -0,0 +1,425 @@
package configuration
import (
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/zitadel/operator/zitadel/kinds/iam/zitadel/database"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
var (
desiredEmpty = &Configuration{
Tracing: &Tracing{
ServiceAccountJSON: &secret.Secret{Value: ""},
ProjectID: "",
Fraction: "",
Type: "",
},
Secrets: &Secrets{
Keys: &secret.Secret{Value: ""},
UserVerificationID: "",
OTPVerificationID: "",
OIDCKeysID: "",
CookieID: "",
CSRFID: "",
DomainVerificationID: "",
IDPConfigVerificationID: "",
},
Notifications: &Notifications{
GoogleChatURL: &secret.Secret{Value: ""},
Email: &Email{
SMTPHost: "",
SMTPUser: "",
SenderAddress: "",
SenderName: "",
TLS: false,
AppKey: &secret.Secret{Value: ""},
},
Twilio: &Twilio{
SenderName: "",
AuthToken: &secret.Secret{Value: ""},
SID: &secret.Secret{Value: ""},
},
},
Passwords: &Passwords{
Migration: &secret.Secret{Value: ""},
Management: &secret.Secret{Value: ""},
Auth: &secret.Secret{Value: ""},
Authz: &secret.Secret{Value: ""},
Adminapi: &secret.Secret{Value: ""},
Notification: &secret.Secret{Value: ""},
Eventstore: &secret.Secret{Value: ""},
},
DebugMode: false,
LogLevel: "info",
DNS: &DNS{
Domain: "",
TlsSecret: "",
Subdomains: &Subdomains{
Accounts: "",
API: "",
Console: "",
Issuer: "",
},
},
ClusterDNS: "",
}
desiredFull = &Configuration{
Tracing: &Tracing{
ServiceAccountJSON: &secret.Secret{Value: "sajson"},
ProjectID: "projectid",
Fraction: "fraction",
Type: "type",
},
Secrets: &Secrets{
Keys: &secret.Secret{Value: "keys"},
UserVerificationID: "userid",
OTPVerificationID: "otpid",
OIDCKeysID: "oidcid",
CookieID: "cookieid",
CSRFID: "csrfid",
DomainVerificationID: "domainid",
IDPConfigVerificationID: "idpid",
},
Notifications: &Notifications{
GoogleChatURL: &secret.Secret{Value: "chat"},
Email: &Email{
SMTPHost: "smtphost",
SMTPUser: "smtpuser",
SenderAddress: "sender",
SenderName: "sendername",
TLS: true,
AppKey: &secret.Secret{Value: "appkey"},
},
Twilio: &Twilio{
SenderName: "sendername",
AuthToken: &secret.Secret{Value: "authtoken"},
SID: &secret.Secret{Value: "sid"},
},
},
Passwords: &Passwords{
Migration: &secret.Secret{Value: "migration"},
Management: &secret.Secret{Value: "management"},
Auth: &secret.Secret{Value: "auth"},
Authz: &secret.Secret{Value: "authz"},
Adminapi: &secret.Secret{Value: "adminapi"},
Notification: &secret.Secret{Value: "notification"},
Eventstore: &secret.Secret{Value: "eventstore"},
},
DebugMode: true,
LogLevel: "debug",
DNS: &DNS{
Domain: "domain",
TlsSecret: "tls",
Subdomains: &Subdomains{
Accounts: "accounts",
API: "api",
Console: "console",
Issuer: "issuer",
},
},
ClusterDNS: "cluster",
}
)
func TestConfiguration_LiteralsConfigMap(t *testing.T) {
certPath := "test"
secretPath := "test"
googleSA := "test"
zitadelKeyPath := "test"
users := map[string]string{
"migration": "migration",
"management": "management",
"auth": "auth",
"authz": "authz",
"adminapi": "adminapi",
"notification": "notification",
"eventstore": "eventstore",
}
queried := map[string]interface{}{}
database.SetDatabaseInQueried(queried, &database.Current{
Host: "test",
Port: "test",
Users: []string{"test"},
})
equals := map[string]string{
"ZITADEL_LOG_LEVEL": "info",
"CR_NOTIFICATION_KEY": "test/client.notification.key",
"CR_AUTHZ_KEY": "test/client.authz.key",
"ZITADEL_OTP_VERIFICATION_KEY": "",
"ZITADEL_COOKIE_KEY": "",
"SMTP_USER": "",
"EMAIL_SENDER_NAME": "",
"ZITADEL_COOKIE_DOMAIN": ".",
"ZITADEL_EVENTSTORE_HOST": "test",
"CR_ADMINAPI_CERT": "test/client.adminapi.crt",
"ZITADEL_IDP_CONFIG_VERIFICATION_KEY": "",
"ZITADEL_ACCOUNTS": "https://.",
"ZITADEL_OAUTH": "https://./oauth/v2",
"ZITADEL_EVENTSTORE_PORT": "test",
"SMTP_TLS": "FALSE",
"CR_ROOT_CERT": "test/ca.crt",
"CR_NOTIFICATION_CERT": "test/client.notification.crt",
"CR_EVENTSTORE_CERT": "test/client.eventstore.crt",
"ZITADEL_USER_VERIFICATION_KEY": "",
"ZITADEL_DEFAULT_DOMAIN": "",
"CR_SSL_MODE": "require",
"ZITADEL_KEY_PATH": "test/test",
"CR_MANAGEMENT_CERT": "test/client.management.crt",
"CR_AUTH_KEY": "test/client.auth.key",
"CR_AUTHZ_CERT": "test/client.authz.crt",
"CR_ADMINAPI_KEY": "test/client.adminapi.key",
"ZITADEL_TRACING_PROJECT_ID": "",
"ZITADEL_DOMAIN_VERIFICATION_KEY": "",
"CR_EVENTSTORE_KEY": "test/client.eventstore.key",
"ZITADEL_CSRF_KEY": "",
"TWILIO_SENDER_NAME": "",
"EMAIL_SENDER_ADDRESS": "",
"ZITADEL_ISSUER": "https://.",
"ZITADEL_CONSOLE": "https://.",
"ZITADEL_ACCOUNTS_DOMAIN": ".",
"GOOGLE_APPLICATION_CREDENTIALS": "test/test",
"CR_MIGRATION_KEY": "test/client.migration.key",
"ZITADEL_TRACING_FRACTION": "",
"SMTP_HOST": "",
"CAOS_OIDC_DEV": "true",
"DEBUG_MODE": "false",
"CR_AUTH_CERT": "test/client.auth.crt",
"ZITADEL_OIDC_KEYS_ID": "",
"CR_MIGRATION_CERT": "test/client.migration.crt",
"CR_MANAGEMENT_KEY": "test/client.management.key",
"ZITADEL_TRACING_TYPE": "",
"ZITADEL_AUTHORIZE": "https://./oauth/v2",
}
literals := literalsConfigMap(desiredEmpty, users, certPath, secretPath, googleSA, zitadelKeyPath, queried)
assert.Equal(t, equals, literals)
}
func TestConfiguration_LiteralsConfigMapFull(t *testing.T) {
certPath := "test"
secretPath := "test"
googleSA := "test"
zitadelKeyPath := "test"
users := map[string]string{
"migration": "migration2",
"management": "management2",
"auth": "auth2",
"authz": "authz2",
"adminapi": "adminapi2",
"notification": "notification2",
"eventstore": "eventstore2",
}
queried := map[string]interface{}{}
database.SetDatabaseInQueried(queried, &database.Current{
Host: "test",
Port: "test",
Users: []string{"test"},
})
equals := map[string]string{
"CAOS_OIDC_DEV": "true",
"CR_ADMINAPI_CERT": "test/client.adminapi.crt",
"CR_ADMINAPI_KEY": "test/client.adminapi.key",
"CR_AUTHZ_CERT": "test/client.authz.crt",
"CR_AUTHZ_KEY": "test/client.authz.key",
"CR_AUTH_CERT": "test/client.auth.crt",
"CR_AUTH_KEY": "test/client.auth.key",
"CR_EVENTSTORE_CERT": "test/client.eventstore.crt",
"CR_EVENTSTORE_KEY": "test/client.eventstore.key",
"CR_MANAGEMENT_CERT": "test/client.management.crt",
"CR_MANAGEMENT_KEY": "test/client.management.key",
"CR_MIGRATION_CERT": "test/client.migration.crt",
"CR_MIGRATION_KEY": "test/client.migration.key",
"CR_NOTIFICATION_CERT": "test/client.notification.crt",
"CR_NOTIFICATION_KEY": "test/client.notification.key",
"CR_ROOT_CERT": "test/ca.crt",
"CR_SSL_MODE": "require",
"DEBUG_MODE": "true",
"EMAIL_SENDER_ADDRESS": "sender",
"EMAIL_SENDER_NAME": "sendername",
"GOOGLE_APPLICATION_CREDENTIALS": "test/test",
"SMTP_HOST": "smtphost",
"SMTP_TLS": "TRUE",
"SMTP_USER": "smtpuser",
"TWILIO_SENDER_NAME": "sendername",
"ZITADEL_ACCOUNTS": "https://accounts.domain",
"ZITADEL_ACCOUNTS_DOMAIN": "accounts.domain",
"ZITADEL_AUTHORIZE": "https://accounts.domain/oauth/v2",
"ZITADEL_CONSOLE": "https://console.domain",
"ZITADEL_COOKIE_DOMAIN": "accounts.domain",
"ZITADEL_COOKIE_KEY": "cookieid",
"ZITADEL_CSRF_KEY": "csrfid",
"ZITADEL_DEFAULT_DOMAIN": "domain",
"ZITADEL_DOMAIN_VERIFICATION_KEY": "domainid",
"ZITADEL_EVENTSTORE_HOST": "test",
"ZITADEL_EVENTSTORE_PORT": "test",
"ZITADEL_IDP_CONFIG_VERIFICATION_KEY": "idpid",
"ZITADEL_ISSUER": "https://issuer.domain",
"ZITADEL_KEY_PATH": "test/test",
"ZITADEL_LOG_LEVEL": "debug",
"ZITADEL_OAUTH": "https://api.domain/oauth/v2",
"ZITADEL_OIDC_KEYS_ID": "oidcid",
"ZITADEL_OTP_VERIFICATION_KEY": "otpid",
"ZITADEL_TRACING_FRACTION": "fraction",
"ZITADEL_TRACING_PROJECT_ID": "projectid",
"ZITADEL_TRACING_TYPE": "type",
"ZITADEL_USER_VERIFICATION_KEY": "userid",
}
literals := literalsConfigMap(desiredFull, users, certPath, secretPath, googleSA, zitadelKeyPath, queried)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsSecrets(t *testing.T) {
googleSA := "sajson"
zitadelKeyPath := "zitadel"
equals := map[string]string{
googleSA: "",
zitadelKeyPath: "",
}
literals := literalsSecret(desiredEmpty, googleSA, zitadelKeyPath)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsSecretsFull(t *testing.T) {
googleSA := "sajson"
zitadelKeyPath := "zitadel"
equals := map[string]string{
googleSA: "sajson",
zitadelKeyPath: "keys",
}
literals := literalsSecret(desiredFull, googleSA, zitadelKeyPath)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsSecretVars(t *testing.T) {
equals := map[string]string{
"ZITADEL_EMAILAPPKEY": "",
"ZITADEL_GOOGLE_CHAT_URL": "",
"ZITADEL_TWILIO_AUTH_TOKEN": "",
"ZITADEL_TWILIO_SID": "",
}
literals := literalsSecretVars(desiredEmpty)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsSecretVarsFull(t *testing.T) {
equals := map[string]string{
"ZITADEL_EMAILAPPKEY": "appkey",
"ZITADEL_GOOGLE_CHAT_URL": "chat",
"ZITADEL_TWILIO_AUTH_TOKEN": "authtoken",
"ZITADEL_TWILIO_SID": "sid",
}
literals := literalsSecretVars(desiredFull)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsConsoleCM(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
clientID := ""
namespace := "test"
cmName := "cm"
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: cmName,
},
Data: map[string]string{"environment.json": "{\"authServiceUrl\":\"https://.\",\"mgmtServiceUrl\":\"https://.\",\"issuer\":\"https://.\",\"clientid\":\"\"}"},
}
equals := map[string]string{"environment.json": "{\"authServiceUrl\":\"https://.\",\"mgmtServiceUrl\":\"https://.\",\"issuer\":\"https://.\",\"clientid\":\"\"}"}
k8sClient.EXPECT().GetConfigMap(namespace, cmName).Times(1).Return(cm, nil)
literals := literalsConsoleCM(clientID, desiredEmpty.DNS, k8sClient, namespace, cmName)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsConsoleCMFull(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
clientID := "test"
namespace := "test"
cmName := "cm"
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: cmName,
},
Data: map[string]string{},
}
equals := map[string]string{
"environment.json": "{\"authServiceUrl\":\"https://api.domain\",\"mgmtServiceUrl\":\"https://api.domain\",\"issuer\":\"https://issuer.domain\",\"clientid\":\"test\"}",
}
k8sClient.EXPECT().GetConfigMap(namespace, cmName).Times(1).Return(cm, nil)
literals := literalsConsoleCM(clientID, desiredFull.DNS, k8sClient, namespace, cmName)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsConsoleCMWithCM(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
clientID := "test"
namespace := "test"
cmName := "cm"
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: cmName,
},
Data: map[string]string{
"environment.json": "{\"authServiceUrl\":\"https://api.domain\",\"mgmtServiceUrl\":\"https://api.domain\",\"issuer\":\"https://issuer.domain\",\"clientid\":\"\"}",
},
}
equals := map[string]string{
"environment.json": "{\"authServiceUrl\":\"https://api.domain\",\"mgmtServiceUrl\":\"https://api.domain\",\"issuer\":\"https://issuer.domain\",\"clientid\":\"test\"}",
}
k8sClient.EXPECT().GetConfigMap(namespace, cmName).Times(1).Return(cm, nil)
literals := literalsConsoleCM(clientID, desiredFull.DNS, k8sClient, namespace, cmName)
assert.EqualValues(t, equals, literals)
}
func TestConfiguration_LiteralsConsoleCMWithCMFull(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
clientID := ""
namespace := "test"
cmName := "cm"
cm := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Namespace: namespace,
Name: cmName,
},
Data: map[string]string{
"environment.json": "{\"authServiceUrl\":\"https://api.domain\",\"mgmtServiceUrl\":\"https://api.domain\",\"issuer\":\"https://issuer.domain\",\"clientid\":\"test\"}",
},
}
equals := map[string]string{
"environment.json": "{\"authServiceUrl\":\"https://api.domain\",\"mgmtServiceUrl\":\"https://api.domain\",\"issuer\":\"https://issuer.domain\",\"clientid\":\"test\"}",
}
k8sClient.EXPECT().GetConfigMap(namespace, cmName).Times(1).Return(cm, nil)
literals := literalsConsoleCM(clientID, desiredFull.DNS, k8sClient, namespace, cmName)
assert.EqualValues(t, equals, literals)
}

Some files were not shown because too many files have changed in this diff.