feat: Merge master (#1260)

* chore(site): dependabot deps (#1148)

* chore(deps): bump highlight.js from 10.4.1 to 10.5.0 in /site (#1143)

Bumps [highlight.js](https://github.com/highlightjs/highlight.js) from 10.4.1 to 10.5.0.
- [Release notes](https://github.com/highlightjs/highlight.js/releases)
- [Changelog](https://github.com/highlightjs/highlight.js/blob/master/CHANGES.md)
- [Commits](https://github.com/highlightjs/highlight.js/compare/10.4.1...10.5.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/plugin-transform-runtime in /site (#1144)

Bumps [@babel/plugin-transform-runtime](https://github.com/babel/babel/tree/HEAD/packages/babel-plugin-transform-runtime) from 7.12.1 to 7.12.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.10/packages/babel-plugin-transform-runtime)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump sirv from 1.0.7 to 1.0.10 in /site (#1145)

Bumps [sirv](https://github.com/lukeed/sirv) from 1.0.7 to 1.0.10.
- [Release notes](https://github.com/lukeed/sirv/releases)
- [Commits](https://github.com/lukeed/sirv/compare/v1.0.7...v1.0.10)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump rollup from 2.34.0 to 2.35.1 in /site (#1142)

Bumps [rollup](https://github.com/rollup/rollup) from 2.34.0 to 2.35.1.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v2.34.0...v2.35.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-node-resolve in /site (#1141)

Bumps [@rollup/plugin-node-resolve](https://github.com/rollup/plugins) from 10.0.0 to 11.0.1.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/node-resolve-v10.0.0...commonjs-v11.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump marked from 1.2.5 to 1.2.7 in /site (#1140)

Bumps [marked](https://github.com/markedjs/marked) from 1.2.5 to 1.2.7.
- [Release notes](https://github.com/markedjs/marked/releases)
- [Changelog](https://github.com/markedjs/marked/blob/master/release.config.js)
- [Commits](https://github.com/markedjs/marked/compare/v1.2.5...v1.2.7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/core from 7.12.9 to 7.12.10 in /site (#1139)

Bumps [@babel/core](https://github.com/babel/babel/tree/HEAD/packages/babel-core) from 7.12.9 to 7.12.10.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.10/packages/babel-core)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump rollup-plugin-svelte from 6.1.1 to 7.0.0 in /site (#1138)

Bumps [rollup-plugin-svelte](https://github.com/sveltejs/rollup-plugin-svelte) from 6.1.1 to 7.0.0.
- [Release notes](https://github.com/sveltejs/rollup-plugin-svelte/releases)
- [Changelog](https://github.com/sveltejs/rollup-plugin-svelte/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sveltejs/rollup-plugin-svelte/compare/v6.1.1...v7.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/preset-env from 7.12.1 to 7.12.11 in /site (#1137)

Bumps [@babel/preset-env](https://github.com/babel/babel/tree/HEAD/packages/babel-preset-env) from 7.12.1 to 7.12.11.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.11/packages/babel-preset-env)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* downgrade svelte plugin

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(console): dependabot deps (#1147)

* chore(deps-dev): bump @types/node from 14.14.13 to 14.14.19 in /console (#1146)

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.13 to 14.14.19.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump ts-protoc-gen from 0.13.0 to 0.14.0 in /console (#1129)

Bumps [ts-protoc-gen](https://github.com/improbable-eng/ts-protoc-gen) from 0.13.0 to 0.14.0.
- [Release notes](https://github.com/improbable-eng/ts-protoc-gen/releases)
- [Changelog](https://github.com/improbable-eng/ts-protoc-gen/blob/master/CHANGELOG.md)
- [Commits](https://github.com/improbable-eng/ts-protoc-gen/compare/0.13.0...0.14.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/language-service in /console (#1128)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.0.4 to 11.0.5.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.0.5/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.0.4 to 11.0.5 in /console (#1127)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.0.4 to 11.0.5.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.0.4...v11.0.5)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1126)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1100.4 to 0.1100.5.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* audit

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* feat: e-mail templates (#1158)

* View definition added

* Get templates and texts from the database.

* Fill in texts in templates

* Fill in texts in templates

* Client API added

* Weekly backup

* Weekly backup

* Daily backup

* Weekly backup

* Tests added

* Corrections from merge branch

* Fixes from pull request review

* chore(console): dependencies (#1189)

* chore(deps-dev): bump @angular/language-service in /console (#1187)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.0.5 to 11.0.9.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.0.9/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump google-proto-files from 2.3.0 to 2.4.0 in /console (#1186)

Bumps [google-proto-files](https://github.com/googleapis/nodejs-proto-files) from 2.3.0 to 2.4.0.
- [Release notes](https://github.com/googleapis/nodejs-proto-files/releases)
- [Changelog](https://github.com/googleapis/nodejs-proto-files/blob/master/CHANGELOG.md)
- [Commits](https://github.com/googleapis/nodejs-proto-files/compare/v2.3.0...v2.4.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @types/node from 14.14.19 to 14.14.21 in /console (#1185)

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.19 to 14.14.21.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.0.5 to 11.0.7 in /console (#1184)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.0.5 to 11.0.7.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.0.5...v11.0.7)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump karma from 5.2.3 to 6.0.0 in /console (#1183)

Bumps [karma](https://github.com/karma-runner/karma) from 5.2.3 to 6.0.0.
- [Release notes](https://github.com/karma-runner/karma/releases)
- [Changelog](https://github.com/karma-runner/karma/blob/master/CHANGELOG.md)
- [Commits](https://github.com/karma-runner/karma/compare/v5.2.3...v6.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1182)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1100.5 to 0.1100.7.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix(console): trigger unauthenticated dialog only once (#1170)

* fix: trigger dialog once

* remove log

* typed trigger

* chore(console): dependencies (#1205)

* chore(deps-dev): bump stylelint from 13.8.0 to 13.9.0 in /console (#1204)

Bumps [stylelint](https://github.com/stylelint/stylelint) from 13.8.0 to 13.9.0.
- [Release notes](https://github.com/stylelint/stylelint/releases)
- [Changelog](https://github.com/stylelint/stylelint/blob/master/CHANGELOG.md)
- [Commits](https://github.com/stylelint/stylelint/compare/13.8.0...13.9.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/language-service in /console (#1203)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.0.9 to 11.1.0.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.1.0/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump karma from 6.0.0 to 6.0.1 in /console (#1202)

Bumps [karma](https://github.com/karma-runner/karma) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/karma-runner/karma/releases)
- [Changelog](https://github.com/karma-runner/karma/blob/master/CHANGELOG.md)
- [Commits](https://github.com/karma-runner/karma/compare/v6.0.0...v6.0.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.0.7 to 11.1.1 in /console (#1201)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.0.7 to 11.1.1.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.0.7...v11.1.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @types/jasmine from 3.6.2 to 3.6.3 in /console (#1200)

Bumps [@types/jasmine](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/jasmine) from 3.6.2 to 3.6.3.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/jasmine)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* chore(deps-dev): bump @types/node from 14.14.21 to 14.14.22 in /console (#1199)

Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 14.14.21 to 14.14.22.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1198)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1100.7 to 0.1101.1.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* chore(deps): bump angularx-qrcode from 10.0.11 to 11.0.0 in /console (#1197)

Bumps [angularx-qrcode](https://github.com/cordobo/angularx-qrcode) from 10.0.11 to 11.0.0.
- [Release notes](https://github.com/cordobo/angularx-qrcode/releases)
- [Commits](https://github.com/cordobo/angularx-qrcode/compare/10.0.11...11.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix package lock

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: handle sequence correctly in subscription (#1209)

* fix: correct master after merges again (#1230)

* chore(docs): correct `iss` claim of jwt profile (#1229)

* chore(docs): correct `iss` claim of jwt profile

* fix: correct master after merges again (#1230)

* feat(login): new palette based styles (#1149)

* chore(deps-dev): bump rollup from 2.33.2 to 2.34.0 in /site (#1040)

Bumps [rollup](https://github.com/rollup/rollup) from 2.33.2 to 2.34.0.
- [Release notes](https://github.com/rollup/rollup/releases)
- [Changelog](https://github.com/rollup/rollup/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rollup/rollup/compare/v2.33.2...v2.34.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump svelte-i18n from 3.2.5 to 3.3.0 in /site (#1039)

Bumps [svelte-i18n](https://github.com/kaisermann/svelte-i18n) from 3.2.5 to 3.3.0.
- [Release notes](https://github.com/kaisermann/svelte-i18n/releases)
- [Changelog](https://github.com/kaisermann/svelte-i18n/blob/main/CHANGELOG.md)
- [Commits](https://github.com/kaisermann/svelte-i18n/compare/v3.2.5...v3.3.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-url from 5.0.1 to 6.0.0 in /site (#1038)

Bumps [@rollup/plugin-url](https://github.com/rollup/plugins) from 5.0.1 to 6.0.0.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/url-v5.0.1...url-v6.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump svelte from 3.29.7 to 3.30.1 in /site (#1037)

Bumps [svelte](https://github.com/sveltejs/svelte) from 3.29.7 to 3.30.1.
- [Release notes](https://github.com/sveltejs/svelte/releases)
- [Changelog](https://github.com/sveltejs/svelte/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sveltejs/svelte/compare/v3.29.7...v3.30.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps): bump marked from 1.2.4 to 1.2.5 in /site (#1036)

Bumps [marked](https://github.com/markedjs/marked) from 1.2.4 to 1.2.5.
- [Release notes](https://github.com/markedjs/marked/releases)
- [Changelog](https://github.com/markedjs/marked/blob/master/release.config.js)
- [Commits](https://github.com/markedjs/marked/compare/v1.2.4...v1.2.5)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/core from 7.12.3 to 7.12.9 in /site (#1035)

Bumps [@babel/core](https://github.com/babel/babel/tree/HEAD/packages/babel-core) from 7.12.3 to 7.12.9.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.9/packages/babel-core)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump rollup-plugin-svelte from 6.1.1 to 7.0.0 in /site (#1034)

Bumps [rollup-plugin-svelte](https://github.com/sveltejs/rollup-plugin-svelte) from 6.1.1 to 7.0.0.
- [Release notes](https://github.com/sveltejs/rollup-plugin-svelte/releases)
- [Changelog](https://github.com/sveltejs/rollup-plugin-svelte/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sveltejs/rollup-plugin-svelte/compare/v6.1.1...v7.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-commonjs in /site (#1033)

Bumps [@rollup/plugin-commonjs](https://github.com/rollup/plugins) from 15.1.0 to 17.0.0.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/commonjs-v15.1.0...commonjs-v17.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @rollup/plugin-node-resolve in /site (#1032)

Bumps [@rollup/plugin-node-resolve](https://github.com/rollup/plugins) from 10.0.0 to 11.0.0.
- [Release notes](https://github.com/rollup/plugins/releases)
- [Commits](https://github.com/rollup/plugins/compare/node-resolve-v10.0.0...commonjs-v11.0.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @babel/preset-env from 7.12.1 to 7.12.7 in /site (#1031)

Bumps [@babel/preset-env](https://github.com/babel/babel/tree/HEAD/packages/babel-preset-env) from 7.12.1 to 7.12.7.
- [Release notes](https://github.com/babel/babel/releases)
- [Changelog](https://github.com/babel/babel/blob/main/CHANGELOG.md)
- [Commits](https://github.com/babel/babel/commits/v7.12.7/packages/babel-preset-env)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* go

* bundle files, lgn-color, legacy theme

* remove old references

* light dark context, button styles, zitadel brand

* button theme, edit templates

* typography theme mixins

* input styles, container, extend light dark palette

* footer, palette, container

* container, label, assets, header

* action container, input, typography label, adapt button theme

* a and footer styles, adapt palette

* user log profile, resourcetempurl

* postinstall again

* wrochage

* rm local grpc

* button elevation, helper for components

* radio

* radio button mixins, bundle

* qr code styles, secret clipboard, icon pack

* stroked buttons, icon buttons, header action, typography

* fix password policy styles

* account selection

* account selection, lgn avatar

* mocks

* template fixes, animations scss

* checkbox, register temp

* checkbox appr

* fix checkbox, remove input interference

* select theme

* avatar script, user selection, password policy validation fix

* fix formfield state for register and change pwd

* footer, main style, qr code fix, mfa type fix, account sel, checkbox

* footer tos, user select

* reverse buttons for initial submit action

* theme script, themed error messages, header img source

* content wrapper, i18n, mobile

* emptyline

* idp mixins, fix unstyled html

* register container

* register layout, list themes, policy theme, register org

* massive asset cleanup

* fix source path, add missing icon, fix complexity refs, prefix

* remove material icons, unused assets, fix icon font

* move icon pack

* avatar, contrast theme, error fix

* zitadel css map

* revert go mod

* fix mfa verify actions

* add idp styles

* fix google colors, idp styles

* fix: bugs

* fix register options, google

* fix script, mobile layout

* precompile font selection

* go mod tidy

* assets and cleanup

* input suffix, fix alignment, actions, add progress bar themes

* progress bar mixins, layout fixes

* remove test from loginname

* cleanup comments, scripts

* clear comments

* fix external back button

* fix mfa alignment

* fix actions layout, on dom change listener for suffix

* free tier change, success label

* fix: button font line-height

* remove tabindex

* remove comment

* remove comment

* Update internal/ui/login/handler/password_handler.go

Co-authored-by: Livio Amstutz <livio.a@gmail.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Maximilian Peintner <csaq7175@uibk.ac.at>
Co-authored-by: Livio Amstutz <livio.a@gmail.com>

* chore(console): dependencies (#1233)

* chore(deps-dev): bump @angular-devkit/build-angular in /console (#1214)

Bumps [@angular-devkit/build-angular](https://github.com/angular/angular-cli) from 0.1101.1 to 0.1101.2.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/commits)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump karma from 6.0.1 to 6.0.3 in /console (#1215)

Bumps [karma](https://github.com/karma-runner/karma) from 6.0.1 to 6.0.3.
- [Release notes](https://github.com/karma-runner/karma/releases)
- [Changelog](https://github.com/karma-runner/karma/blob/master/CHANGELOG.md)
- [Commits](https://github.com/karma-runner/karma/compare/v6.0.1...v6.0.3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/language-service in /console (#1216)

Bumps [@angular/language-service](https://github.com/angular/angular/tree/HEAD/packages/language-service) from 11.1.0 to 11.1.1.
- [Release notes](https://github.com/angular/angular/releases)
- [Changelog](https://github.com/angular/angular/blob/master/CHANGELOG.md)
- [Commits](https://github.com/angular/angular/commits/11.1.1/packages/language-service)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* chore(deps-dev): bump @angular/cli from 11.1.1 to 11.1.2 in /console (#1217)

Bumps [@angular/cli](https://github.com/angular/angular-cli) from 11.1.1 to 11.1.2.
- [Release notes](https://github.com/angular/angular-cli/releases)
- [Commits](https://github.com/angular/angular-cli/compare/v11.1.1...v11.1.2)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Max Peintner <max@caos.ch>

* lock

* site deps

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* fix: get email texts with default language (#1238)

* fix(login): mail verification (#1237)

* fix: mail verification

* not block, stroked

* fix: issues of new login ui (#1241)

* fix: i18n of register

* fix: autofocus

* feat(operator): zitadel and database operator (#1208)

* feat(operator): add base for zitadel operator

* fix(operator): changed pipeline to release operator

* fix(operator): fmt with only one parameter

* fix(operator): corrected workflow job name

* fix(zitadelctl): added restore and backuplist command

* fix(zitadelctl): scale for restore

* chore(container): use scratch for deploy container

* fix(zitadelctl): limit image to scratch

* fix(migration): added migration scripts for newer version

* fix(operator): changed handling of kubeconfig in operator logic

* fix(operator): changed handling of secrets in operator logic

* fix(operator): use new version of zitadel

* fix(operator): added path for migrations

* fix(operator): delete doublets of migration scripts

* fix(operator): delete subpaths and integrate logic into init container

* fix(operator): corrected path in dockerfile for local migrations

* fix(operator): added migrations for cockroachdb-secure

* fix(operator): delete logic for ambassador module

* fix(operator): added read and write secret commands

* fix(operator): correct and align operator pipeline with zitadel pipeline

* fix(operator): correct yaml error in operator pipeline

* fix(operator): correct action name in operator pipeline

* fix(operator): correct case-sensitive filename in operator pipeline

* fix(operator): upload artifacts from buildx output

* fix(operator): corrected attribute spelling error

* fix(operator): combined jobs for operator binary and image

* fix(operator): added missing comma in operator pipeline

* fix(operator): added codecov for operator image

* fix(operator): added codecov for operator image

* fix(testing): code changes for testing and several unit-tests (#1009)

* fix(operator): usage of interface of kubernetes client for testing and several unit-tests

* fix(operator): several unit-tests

* fix(operator): several unit-tests

* fix(operator): changed order for the operator logic

* fix(operator): added version of zitadelctl from semantic release

* fix(operator): corrected function call with version of zitadelctl

* fix(operator): corrected function call with version of zitadelctl

* fix(operator): add check output to operator release pipeline

* fix(operator): set --short length everywhere to 12

* fix(operator): zitadel setup in job instead of exec with several unit tests

* fix(operator): fixes to combine newest zitadel and testing branch

* fix(operator): corrected path in Dockerfile

* fix(operator): fixed unit-test that was ignored during changes

* fix(operator): fixed unit-test that was ignored during changes

* fix(operator): corrected Dockerfile to correctly use env variable

* fix(operator): quickfix takeoff deployment

* fix(operator): corrected the clusterrolename in the applied artifacts

* fix: update secure migrations

* fix(operator): migrations (#1057)

* fix(operator): copied migrations from orbos repository

* fix(operator): newest migrations

* chore: use cockroach-secure

* fix: rename migration

* fix: remove insecure cockroach migrations

Co-authored-by: Stefan Benz <stefan@caos.ch>

* fix: finalize labels

* fix(operator): cli logging concurrent and fix deployment of operator during restore

* fix: finalize labels and cli commands

* fix: restore

* chore: cockroachdb is always secure

* chore: use orbos consistent-labels latest commit

* test: make tests compatible with new labels

* fix: default to sa token for start command

* fix: use cockroachdb v12.02

* fix: don't delete flyway user

* test: fix migration test

* fix: use correct table qualifiers

* fix: don't alter sequence ownership

* fix: upgrade flyway

* fix: change ownership of all dbs and tables to admin user

* fix: change defaultdb user

* fix: treat clientid status codes >= 400 as errors

* fix: reconcile specified ZITADEL version, not binary version

* fix: add ca-certs

* fix: use latest orbos code

* fix: use orbos with fixed race condition

* fix: use latest ORBOS code

* fix: use latest ORBOS code

* fix: make migration and scaling around restoring work

* fix(operator): move zitadel operator

* chore(migrations): include owner change migration

* feat(db): add code base for database operator

* fix(db): change used image registry for database operator

* fix(db): generated mock

* fix(db): add accidentally ignored file

* fix(db): add cockroachdb backup image to pipeline

* fix(db): correct pipeline and image versions

* fix(db): correct version of used orbos

* fix(db): correct database import

* fix(db): go mod tidy

* fix(db): use new version for orbos

* fix(migrations): include migrations into zitadelctl binary (#1211)

* fix(db): use statik to integrate migrations into binary

* fix(migrations): corrections unit tests and pipeline for integrated migrations into zitadelctl binary

* fix(migrations): correction in dockerfile for pipeline build

* fix(migrations): correction in dockerfile for pipeline build

* fix(migrations): dockerfile changes for cache optimization

* fix(database): correct used part-of label in database operator

* fix(database): correct used selectable label in zitadel operator

* fix(operator): correct labels for user secrets in zitadel operator

* fix(operator): correct labels for service test in zitadel operator

* fix: don't enable database features for user operations (#1227)

* fix: don't enable database features for user operations

* fix: omit database feature for connection info adapter

* fix: use latest orbos version

* fix: update ORBOS (#1240)

Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: Elio Bischof <eliobischof@gmail.com>

* Merge branch 'new-eventstore' into cascades

# Conflicts:
#	internal/auth/repository/auth_request.go
#	internal/auth/repository/eventsourcing/eventstore/auth_request.go
#	internal/management/repository/eventsourcing/eventstore/user_grant.go
#	internal/management/repository/user_grant.go
#	internal/ui/login/handler/external_login_handler.go
#	internal/ui/login/handler/external_register_handler.go
#	internal/ui/login/handler/init_password_handler.go
#	internal/ui/login/handler/register_handler.go
#	internal/user/repository/view/model/notify_user.go
#	internal/v2/command/org_policy_login.go
#	internal/v2/command/project.go
#	internal/v2/command/user.go
#	internal/v2/command/user_human.go
#	internal/v2/command/user_human_externalidp.go
#	internal/v2/command/user_human_init.go
#	internal/v2/command/user_human_password.go
#	internal/v2/command/user_human_webauthn.go
#	internal/v2/domain/next_step.go
#	internal/v2/domain/policy_login.go
#	internal/v2/domain/request.go

* chore: add local migrate_local.go again (#1261)

Co-authored-by: Max Peintner <max@caos.ch>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Michael Waeger <49439088+michaelulrichwaeger@users.noreply.github.com>
Co-authored-by: Livio Amstutz <livio.a@gmail.com>
Co-authored-by: Maximilian Peintner <csaq7175@uibk.ac.at>
Co-authored-by: Stefan Benz <46600784+stebenz@users.noreply.github.com>
Co-authored-by: Florian Forster <florian@caos.ch>
Co-authored-by: Elio Bischof <eliobischof@gmail.com>
This commit is contained in:
Fabi, 2021-02-08 16:48:41 +01:00, committed by GitHub
parent 320679467b, commit db11cf1da3
646 changed files with 34637 additions and 6507 deletions


@@ -0,0 +1,71 @@
package backups

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
func GetQueryAndDestroyFuncs(
monitor mntr.Monitor,
desiredTree *tree.Tree,
currentTree *tree.Tree,
name string,
namespace string,
componentLabels *labels.Component,
checkDBReady operator.EnsureFunc,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
version string,
features []string,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
switch desiredTree.Common.Kind {
case "databases.caos.ch/BucketBackup":
return bucket.AdaptFunc(
name,
namespace,
labels.MustForComponent(
labels.MustReplaceAPI(
labels.GetAPIFromComponent(componentLabels),
"BucketBackup",
desiredTree.Common.Version,
),
"backup"),
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(monitor, desiredTree, currentTree)
default:
return nil, nil, nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
func GetBackupList(
monitor mntr.Monitor,
name string,
desiredTree *tree.Tree,
) (
[]string,
error,
) {
switch desiredTree.Common.Kind {
case "databases.caos.ch/BucketBackup":
return bucket.BackupList()(monitor, name, desiredTree)
default:
return nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
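Both functions in this file dispatch on the `Common.Kind` string of the desired tree and fail for unknown kinds. A minimal, self-contained sketch of that dispatch pattern (the `Tree`, `Common`, and adapter types here are simplified stand-ins, not the actual ORBOS API):

```go
package main

import (
	"errors"
	"fmt"
)

// Common and Tree are simplified stand-ins for the ORBOS tree types.
type Common struct {
	Kind    string
	Version string
}

type Tree struct {
	Common *Common
}

// adaptFunc mirrors the shape of the closure returned by bucket.AdaptFunc.
type adaptFunc func(desired *Tree) (string, error)

// getQueryFunc dispatches on the kind string, as GetQueryAndDestroyFuncs does,
// and returns an error for kinds it does not know.
func getQueryFunc(desired *Tree, adapters map[string]adaptFunc) (string, error) {
	adapt, ok := adapters[desired.Common.Kind]
	if !ok {
		return "", errors.New("unknown database kind " + desired.Common.Kind)
	}
	return adapt(desired)
}

func main() {
	adapters := map[string]adaptFunc{
		"databases.caos.ch/BucketBackup": func(desired *Tree) (string, error) {
			return "bucket-backup-" + desired.Common.Version, nil
		},
	}
	out, err := getQueryFunc(
		&Tree{Common: &Common{Kind: "databases.caos.ch/BucketBackup", Version: "v0"}},
		adapters,
	)
	fmt.Println(out, err)
}
```

Using a map instead of a `switch` would also make new backup kinds registrable without editing the dispatch function, though the original code uses a `switch` with a single case.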


@@ -0,0 +1,230 @@
package bucket

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/orbos/pkg/labels"
secretpkg "github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
coreDB "github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
const (
secretName = "backup-serviceaccountjson"
secretKey = "serviceaccountjson"
)
func AdaptFunc(
name string,
namespace string,
componentLabels *labels.Component,
checkDBReady operator.EnsureFunc,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
version string,
features []string,
) operator.AdaptFunc {
return func(monitor mntr.Monitor, desired *tree.Tree, current *tree.Tree) (queryFunc operator.QueryFunc, destroyFunc operator.DestroyFunc, secrets map[string]*secretpkg.Secret, err error) {
internalMonitor := monitor.WithField("component", "backup")
desiredKind, err := ParseDesiredV0(desired)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
internalMonitor.Verbose()
}
destroyS, err := secret.AdaptFuncToDestroy(namespace, secretName)
if err != nil {
return nil, nil, nil, err
}
queryS, err := secret.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, secretName), map[string]string{secretKey: desiredKind.Spec.ServiceAccountJSON.Value})
if err != nil {
return nil, nil, nil, err
}
_, destroyB, err := backup.AdaptFunc(
internalMonitor,
name,
namespace,
componentLabels,
[]string{},
checkDBReady,
desiredKind.Spec.Bucket,
desiredKind.Spec.Cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
if err != nil {
return nil, nil, nil, err
}
_, destroyR, err := restore.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
[]string{},
desiredKind.Spec.Bucket,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, nil, nil, err
}
_, destroyC, err := clean.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
[]string{},
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, nil, nil, err
}
destroyers := make([]operator.DestroyFunc, 0)
for _, feature := range features {
switch feature {
case backup.Normal, backup.Instant:
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyS),
destroyB,
)
case clean.Instant:
destroyers = append(destroyers,
destroyC,
)
case restore.Instant:
destroyers = append(destroyers,
destroyR,
)
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
currentDB, err := coreDB.ParseQueriedForDatabase(queried)
if err != nil {
return nil, err
}
databases, err := currentDB.GetListDatabasesFunc()(k8sClient)
if err != nil {
databases = []string{}
}
queryB, _, err := backup.AdaptFunc(
internalMonitor,
name,
namespace,
componentLabels,
databases,
checkDBReady,
desiredKind.Spec.Bucket,
desiredKind.Spec.Cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
if err != nil {
return nil, err
}
queryR, _, err := restore.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
databases,
desiredKind.Spec.Bucket,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, err
}
queryC, _, err := clean.AdaptFunc(
monitor,
name,
namespace,
componentLabels,
databases,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
if err != nil {
return nil, err
}
queriers := make([]operator.QueryFunc, 0)
if len(databases) > 0 {
for _, feature := range features {
switch feature {
case backup.Normal, backup.Instant:
queriers = append(queriers,
operator.ResourceQueryToZitadelQuery(queryS),
queryB,
)
case clean.Instant:
queriers = append(queriers,
queryC,
)
case restore.Instant:
queriers = append(queriers,
queryR,
)
}
}
}
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
getSecretsMap(desiredKind),
nil
}
}
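`AdaptFunc` above assembles its destroyers and queriers by looping over the requested feature strings and appending the matching steps. A trimmed, runnable sketch of that assembly pattern (the string values for the clean and restore features are placeholders here; only `backup.Normal` = "backup" and `backup.Instant` = "instantbackup" are confirmed by the constants later in this diff):

```go
package main

import "fmt"

// Feature flags mirroring backup.Normal, backup.Instant, clean.Instant and
// restore.Instant. The last two values are assumed for this sketch.
const (
	backupNormal   = "backup"
	backupInstant  = "instantbackup"
	cleanInstant   = "clean"
	restoreInstant = "restore"
)

// step is a stand-in for operator.QueryFunc / operator.DestroyFunc.
type step string

// assemble picks the steps to run based on the requested features,
// mirroring the switch statements in bucket.AdaptFunc: the backup
// features also pull in the secret step, unknown features add nothing.
func assemble(features []string) []step {
	steps := make([]step, 0)
	for _, feature := range features {
		switch feature {
		case backupNormal, backupInstant:
			steps = append(steps, "ensure-secret", "run-backup")
		case cleanInstant:
			steps = append(steps, "run-clean")
		case restoreInstant:
			steps = append(steps, "run-restore")
		}
	}
	return steps
}

func main() {
	fmt.Println(assemble([]string{"backup", "restore"}))
}
```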


@@ -0,0 +1,372 @@
package bucket

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
"testing"
)
func TestBucket_Secrets(t *testing.T) {
masterkey := "testMk"
features := []string{backup.Normal}
saJson := "testSA"
bucketName := "testBucket2"
cron := "testCron2"
monitor := mntr.Monitor{}
namespace := "testNs2"
kindVersion := "v0"
kind := "BucketBackup"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", kindVersion), "testComponent")
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/" + kind,
Version: kindVersion,
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
allSecrets := map[string]string{
"serviceaccountjson": saJson,
}
_, _, secrets, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
for key, value := range allSecrets {
assert.Contains(t, secrets, key)
assert.Equal(t, value, secrets[key].Value)
}
}
func TestBucket_AdaptBackup(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{backup.Normal}
saJson := "testSA"
bucketName := "testBucket2"
cron := "testCron2"
monitor := mntr.Monitor{}
namespace := "testNs2"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetBackup(client, namespace, k8sLabels, saJson)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(client))
}
func TestBucket_AdaptInstantBackup(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{backup.Instant}
bucketName := "testBucket1"
cron := "testCron"
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
saJson := "testSA"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetInstantBackup(client, namespace, backupName, k8sLabels, saJson)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NotNil(t, ensure)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBucket_AdaptRestore(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{restore.Instant}
bucketName := "testBucket1"
cron := "testCron"
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
saJson := "testSA"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetRestore(client, namespace, backupName, k8sLabels, saJson)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NotNil(t, ensure)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBucket_AdaptClean(t *testing.T) {
masterkey := "testMk"
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{clean.Instant}
bucketName := "testBucket1"
cron := "testCron"
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "BucketBackup", "v0"), "testComponent")
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
saJson := "testSA"
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
SetClean(client, namespace, backupName)
query, _, _, err := AdaptFunc(
backupName,
namespace,
componentLabels,
checkDBReady,
timestamp,
nodeselector,
tolerations,
version,
features,
)(
monitor,
desired,
&tree.Tree{},
)
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := SetQueriedForDatabases(databases)
ensure, err := query(client, queried)
assert.NotNil(t, ensure)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,136 @@
package backup

import (
"github.com/caos/zitadel/operator"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/cronjob"
"github.com/caos/orbos/pkg/kubernetes/resources/job"
"github.com/caos/orbos/pkg/labels"
corev1 "k8s.io/api/core/v1"
)
const (
defaultMode int32 = 256
certPath = "/cockroach/cockroach-certs"
secretPath = "/secrets/sa.json"
backupPath = "/cockroach"
backupNameEnv = "BACKUP_NAME"
cronJobNamePrefix = "backup-"
internalSecretName = "client-certs"
image = "ghcr.io/caos/zitadel-crbackup"
rootSecretName = "cockroachdb.client.root"
timeout = 60 * time.Second // a bare "60" would be 60 nanoseconds, since time.Duration counts nanoseconds
Normal = "backup"
Instant = "instantbackup"
)
func AdaptFunc(
monitor mntr.Monitor,
backupName string,
namespace string,
componentLabels *labels.Component,
databases []string,
checkDBReady operator.EnsureFunc,
bucketName string,
cron string,
secretName string,
secretKey string,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
features []string,
version string,
) (
queryFunc operator.QueryFunc,
destroyFunc operator.DestroyFunc,
err error,
) {
command := getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
)
jobSpecDef := getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
command,
)
destroyers := []operator.DestroyFunc{}
queriers := []operator.QueryFunc{}
cronJobDef := getCronJob(
namespace,
labels.MustForName(componentLabels, GetJobName(backupName)),
cron,
jobSpecDef,
)
destroyCJ, err := cronjob.AdaptFuncToDestroy(cronJobDef.Namespace, cronJobDef.Name)
if err != nil {
return nil, nil, err
}
queryCJ, err := cronjob.AdaptFuncToEnsure(cronJobDef)
if err != nil {
return nil, nil, err
}
jobDef := getJob(
namespace,
labels.MustForName(componentLabels, cronJobNamePrefix+backupName),
jobSpecDef,
)
destroyJ, err := job.AdaptFuncToDestroy(jobDef.Namespace, jobDef.Name)
if err != nil {
return nil, nil, err
}
queryJ, err := job.AdaptFuncToEnsure(jobDef)
if err != nil {
return nil, nil, err
}
for _, feature := range features {
switch feature {
case Normal:
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyCJ),
)
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryCJ),
)
case Instant:
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyJ),
)
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryJ),
operator.EnsureFuncToQueryFunc(getCleanupFunc(monitor, jobDef.Namespace, jobDef.Name)),
)
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
func GetJobName(backupName string) string {
return cronJobNamePrefix + backupName
}


@@ -0,0 +1,307 @@
package backup

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"testing"
)
func TestBackup_AdaptInstantBackup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Instant}
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
bucketName := "testBucket"
cron := "testCron"
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_AdaptInstantBackup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Instant}
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb2"}
bucketName := "testBucket2"
cron := "testCron2"
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_AdaptBackup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Normal}
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
bucketName := "testBucket"
cron := "testCron"
timestamp := "test"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getCronJob(
namespace,
nameLabels,
cron,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyCronJob(jobDef).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_AdaptBackup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
features := []string{Normal}
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb2"}
bucketName := "testBucket2"
cron := "testCron2"
timestamp := "test2"
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getCronJob(
namespace,
nameLabels,
cron,
getJobSpecDef(
nodeselector,
tolerations,
secretName,
secretKey,
backupName,
version,
getBackupCommand(
timestamp,
databases,
bucketName,
backupName,
),
),
)
client.EXPECT().ApplyCronJob(jobDef).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
checkDBReady,
bucketName,
cron,
secretName,
secretKey,
timestamp,
nodeselector,
tolerations,
features,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,25 @@
package backup

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func getCleanupFunc(monitor mntr.Monitor, namespace string, name string) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("waiting for backup to be completed")
if err := k8sClient.WaitUntilJobCompleted(namespace, name, timeout); err != nil {
monitor.Error(errors.Wrap(err, "error while waiting for backup to be completed"))
return err
}
monitor.Info("backup is completed, starting cleanup")
if err := k8sClient.DeleteJob(namespace, name); err != nil {
monitor.Error(errors.Wrap(err, "error while trying to cleanup backup"))
return err
}
monitor.Info("cleanup of backup job is completed")
return nil
}
}


@@ -0,0 +1,40 @@
package backup

import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/golang/mock/gomock"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Cleanup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test"
namespace := "testNs"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}
func TestBackup_Cleanup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test2"
namespace := "testNs2"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}


@@ -0,0 +1,33 @@
package backup

import "strings"
func getBackupCommand(
timestamp string,
databases []string,
bucketName string,
backupName string,
) string {
backupCommands := make([]string, 0)
if timestamp != "" {
backupCommands = append(backupCommands, "export "+backupNameEnv+"="+timestamp)
} else {
backupCommands = append(backupCommands, "export "+backupNameEnv+"=$(date +%Y-%m-%dT%H:%M:%SZ)")
}
for _, database := range databases {
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/backup.sh",
backupName,
bucketName,
database,
backupPath,
secretPath,
certPath,
"${" + backupNameEnv + "}",
}, " "))
}
return strings.Join(backupCommands, " && ")
}
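`getBackupCommand` builds one shell string: export a backup name (either the given timestamp or the current UTC time), then chain one `backup.sh` invocation per database with `&&`. A trimmed, runnable copy of the same builder, with the path constants inlined from the backup package above:

```go
package main

import (
	"fmt"
	"strings"
)

// Constants copied from the backup package above.
const (
	backupNameEnv = "BACKUP_NAME"
	backupPath    = "/cockroach"
	secretPath    = "/secrets/sa.json"
	certPath      = "/cockroach/cockroach-certs"
)

// getBackupCommand exports a backup name, then runs one backup.sh
// invocation per database, all chained with &&.
func getBackupCommand(timestamp string, databases []string, bucketName, backupName string) string {
	cmds := make([]string, 0, len(databases)+1)
	if timestamp != "" {
		cmds = append(cmds, "export "+backupNameEnv+"="+timestamp)
	} else {
		cmds = append(cmds, "export "+backupNameEnv+"=$(date +%Y-%m-%dT%H:%M:%SZ)")
	}
	for _, db := range databases {
		cmds = append(cmds, strings.Join([]string{
			"/scripts/backup.sh", backupName, bucketName, db,
			backupPath, secretPath, certPath, "${" + backupNameEnv + "}",
		}, " "))
	}
	return strings.Join(cmds, " && ")
}

func main() {
	fmt.Println(getBackupCommand("2021-02-08", []string{"zitadel"}, "myBucket", "nightly"))
}
```

With no databases, only the `export` line is emitted, which is why the closure in `bucket/adapt.go` above can tolerate an empty database list.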


@@ -0,0 +1,53 @@
package backup

import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Command1(t *testing.T) {
timestamp := ""
databases := []string{}
bucketName := "test"
backupName := "test"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=$(date +%Y-%m-%dT%H:%M:%SZ)"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command2(t *testing.T) {
timestamp := "test"
databases := []string{}
bucketName := "test"
backupName := "test"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=test"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command3(t *testing.T) {
timestamp := ""
databases := []string{"testDb"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=$(date +%Y-%m-%dT%H:%M:%SZ) && /scripts/backup.sh testBackup testBucket testDb " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "}"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command4(t *testing.T) {
timestamp := "test"
databases := []string{"test1", "test2", "test3"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getBackupCommand(timestamp, databases, bucketName, backupName)
equals := "export " + backupNameEnv + "=test && " +
"/scripts/backup.sh testBackup testBucket test1 " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "} && " +
"/scripts/backup.sh testBackup testBucket test2 " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "} && " +
"/scripts/backup.sh testBackup testBucket test3 " + backupPath + " " + secretPath + " " + certPath + " ${" + backupNameEnv + "}"
assert.Equal(t, equals, cmd)
}


@@ -0,0 +1,101 @@
package backup

import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
batchv1 "k8s.io/api/batch/v1"
"k8s.io/api/batch/v1beta1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func getCronJob(
namespace string,
nameLabels *labels.Name,
cron string,
jobSpecDef batchv1.JobSpec,
) *v1beta1.CronJob {
return &v1beta1.CronJob{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: v1beta1.CronJobSpec{
Schedule: cron,
ConcurrencyPolicy: v1beta1.ForbidConcurrent,
JobTemplate: v1beta1.JobTemplateSpec{
Spec: jobSpecDef,
},
},
}
}
func getJob(
namespace string,
nameLabels *labels.Name,
jobSpecDef batchv1.JobSpec,
) *batchv1.Job {
return &batchv1.Job{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: jobSpecDef,
}
}
func getJobSpecDef(
nodeselector map[string]string,
tolerations []corev1.Toleration,
secretName string,
secretKey string,
backupName string,
version string,
command string,
) batchv1.JobSpec {
return batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: backupName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
}
}


@@ -0,0 +1,123 @@
package backup
import (
"github.com/caos/zitadel/operator/helpers"
"github.com/stretchr/testify/assert"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
"testing"
)
func TestBackup_JobSpec1(t *testing.T) {
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
command := "test"
secretKey := "testKey"
secretName := "testSecretName"
equals := batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: backupName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
}
assert.Equal(t, equals, getJobSpecDef(nodeselector, tolerations, secretName, secretKey, backupName, version, command))
}
func TestBackup_JobSpec2(t *testing.T) {
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
command := "test2"
secretKey := "testKey2"
secretName := "testSecretName2"
equals := batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: backupName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
}
assert.Equal(t, equals, getJobSpecDef(nodeselector, tolerations, secretName, secretKey, backupName, version, command))
}


@@ -0,0 +1,86 @@
package clean
import (
"github.com/caos/zitadel/operator"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/job"
"github.com/caos/orbos/pkg/labels"
corev1 "k8s.io/api/core/v1"
)
const (
Instant = "clean"
defaultMode = int32(256)
certPath = "/cockroach/cockroach-certs"
secretPath = "/secrets/sa.json"
internalSecretName = "client-certs"
image = "ghcr.io/caos/zitadel-crbackup"
rootSecretName = "cockroachdb.client.root"
jobPrefix = "backup-"
jobSuffix = "-clean"
timeout time.Duration = 60
)
func AdaptFunc(
monitor mntr.Monitor,
backupName string,
namespace string,
componentLabels *labels.Component,
databases []string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
checkDBReady operator.EnsureFunc,
secretName string,
secretKey string,
version string,
) (
queryFunc operator.QueryFunc,
destroyFunc operator.DestroyFunc,
err error,
) {
command := getCommand(databases)
jobDef := getJob(
namespace,
labels.MustForName(componentLabels, GetJobName(backupName)),
nodeselector,
tolerations,
secretName,
secretKey,
version,
command)
destroyJ, err := job.AdaptFuncToDestroy(jobDef.Namespace, jobDef.Name)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyJ),
}
queryJ, err := job.AdaptFuncToEnsure(jobDef)
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryJ),
operator.EnsureFuncToQueryFunc(getCleanupFunc(monitor, jobDef.Namespace, jobDef.Name)),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
func GetJobName(backupName string) string {
return jobPrefix + backupName + jobSuffix
}


@@ -0,0 +1,134 @@
package clean
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"testing"
)
func TestBackup_Adapt1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
backupName := "testName"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
databases,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_Adapt2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb1", "testDb2"}
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
backupName := "testName2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
databases,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,29 @@
package clean
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func getCleanupFunc(
monitor mntr.Monitor,
namespace string,
jobName string,
) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("waiting for clean to be completed")
if err := k8sClient.WaitUntilJobCompleted(namespace, jobName, timeout); err != nil {
monitor.Error(errors.Wrap(err, "error while waiting for clean to be completed"))
return err
}
monitor.Info("clean is completed, cleanup")
if err := k8sClient.DeleteJob(namespace, jobName); err != nil {
monitor.Error(errors.Wrap(err, "error while trying to cleanup clean"))
return err
}
monitor.Info("clean cleanup is completed")
return nil
}
}


@@ -0,0 +1,40 @@
package clean
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/golang/mock/gomock"
"github.com/pkg/errors"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Cleanup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test"
namespace := "testNs"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}
func TestBackup_Cleanup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test2"
namespace := "testNs2"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}


@@ -0,0 +1,32 @@
package clean
import "strings"
func getCommand(
databases []string,
) string {
backupCommands := make([]string, 0)
for _, database := range databases {
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/clean-db.sh",
certPath,
database,
}, " "))
}
for _, database := range databases {
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/clean-user.sh",
certPath,
database,
}, " "))
}
backupCommands = append(backupCommands,
strings.Join([]string{
"/scripts/clean-migration.sh",
certPath,
}, " "))
return strings.Join(backupCommands, " && ")
}


@@ -0,0 +1,35 @@
package clean
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestClean_Command1(t *testing.T) {
databases := []string{}
cmd := getCommand(databases)
equals := "/scripts/clean-migration.sh " + certPath
assert.Equal(t, equals, cmd)
}
func TestClean_Command2(t *testing.T) {
databases := []string{"test"}
cmd := getCommand(databases)
equals := "/scripts/clean-db.sh " + certPath + " test && /scripts/clean-user.sh " + certPath + " test && /scripts/clean-migration.sh " + certPath
assert.Equal(t, equals, cmd)
}
func TestClean_Command3(t *testing.T) {
databases := []string{"test1", "test2", "test3"}
cmd := getCommand(databases)
equals := "/scripts/clean-db.sh " + certPath + " test1 && /scripts/clean-db.sh " + certPath + " test2 && /scripts/clean-db.sh " + certPath + " test3 && " +
"/scripts/clean-user.sh " + certPath + " test1 && /scripts/clean-user.sh " + certPath + " test2 && /scripts/clean-user.sh " + certPath + " test3 && " +
"/scripts/clean-migration.sh " + certPath
assert.Equal(t, equals, cmd)
}


@@ -0,0 +1,73 @@
package clean
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func getJob(
namespace string,
nameLabels *labels.Name,
nodeselector map[string]string,
tolerations []corev1.Toleration,
secretName string,
secretKey string,
version string,
command string,
) *batchv1.Job {
return &batchv1.Job{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
NodeSelector: nodeselector,
Tolerations: tolerations,
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{{
Name: nameLabels.Name(),
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
}


@@ -0,0 +1,164 @@
package clean
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
"github.com/stretchr/testify/assert"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestBackup_Job1(t *testing.T) {
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
version := "testVersion"
command := "test"
secretKey := "testKey"
secretName := "testSecretName"
jobName := "testJob"
namespace := "testNs"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testOpVersion",
"caos.ch/apiversion": "testVersion",
"caos.ch/kind": "testKind"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testOpVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}
func TestBackup_Job2(t *testing.T) {
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
version := "testVersion2"
command := "test2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := "testJob2"
namespace := "testNs2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testOpVersion2",
"caos.ch/apiversion": "testVersion2",
"caos.ch/kind": "testKind2"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testOpVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}


@@ -0,0 +1,42 @@
package bucket
import (
secret2 "github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec *Spec
}
type Spec struct {
Verbose bool
Cron string `yaml:"cron,omitempty"`
Bucket string `yaml:"bucket,omitempty"`
ServiceAccountJSON *secret2.Secret `yaml:"serviceAccountJSON,omitempty"`
}
func (s *Spec) IsZero() bool {
return (s.ServiceAccountJSON == nil || s.ServiceAccountJSON.IsZero()) &&
!s.Verbose &&
s.Cron == "" &&
s.Bucket == ""
}
func ParseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{
Common: desiredTree.Common,
Spec: &Spec{},
}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
return desiredKind, nil
}


@@ -0,0 +1,129 @@
package bucket
import (
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/stretchr/testify/assert"
"gopkg.in/yaml.v3"
"testing"
)
const (
masterkey = "testMk"
cron = "testCron"
bucketName = "testBucket"
saJson = "testSa"
yamlFile = `kind: databases.caos.ch/BucketBackup
version: v0
spec:
    verbose: true
    cron: testCron
    bucket: testBucket
    serviceAccountJSON:
        encryption: AES256
        encoding: Base64
        value: luyAqtopzwLcaIhJj7KhWmbUsA7cQg==
`
yamlFileWithoutSecret = `kind: databases.caos.ch/BucketBackup
version: v0
spec:
    verbose: true
    cron: testCron
    bucket: testBucket
`
yamlEmpty = `kind: databases.caos.ch/BucketBackup
version: v0`
)
var (
desired = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
ServiceAccountJSON: &secret.Secret{
Value: saJson,
Encryption: "AES256",
Encoding: "Base64",
},
},
}
desiredWithoutSecret = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: true,
Cron: cron,
Bucket: bucketName,
},
}
desiredEmpty = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &Spec{
Verbose: false,
Cron: "",
Bucket: "",
ServiceAccountJSON: &secret.Secret{
Value: "",
},
},
}
desiredNil = DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
}
)
func marshalYaml(t *testing.T, masterkey string, struc *DesiredV0) []byte {
secret.Masterkey = masterkey
data, err := yaml.Marshal(struc)
assert.NoError(t, err)
return data
}
func unmarshalYaml(t *testing.T, masterkey string, yamlFile []byte) *tree.Tree {
secret.Masterkey = masterkey
desiredTree := &tree.Tree{}
assert.NoError(t, yaml.Unmarshal(yamlFile, desiredTree))
return desiredTree
}
func getDesiredTree(t *testing.T, masterkey string, desired *DesiredV0) *tree.Tree {
return unmarshalYaml(t, masterkey, marshalYaml(t, masterkey, desired))
}
func TestBucket_DesiredParse(t *testing.T) {
assert.Equal(t, yamlFileWithoutSecret, string(marshalYaml(t, masterkey, &desiredWithoutSecret)))
desiredTree := unmarshalYaml(t, masterkey, []byte(yamlFile))
desiredKind, err := ParseDesiredV0(desiredTree)
assert.NoError(t, err)
assert.Equal(t, &desired, desiredKind)
}
func TestBucket_DesiredNotZero(t *testing.T) {
desiredTree := unmarshalYaml(t, masterkey, []byte(yamlFile))
desiredKind, err := ParseDesiredV0(desiredTree)
assert.NoError(t, err)
assert.False(t, desiredKind.Spec.IsZero())
}
func TestBucket_DesiredZero(t *testing.T) {
desiredTree := unmarshalYaml(t, masterkey, []byte(yamlEmpty))
desiredKind, err := ParseDesiredV0(desiredTree)
assert.NoError(t, err)
assert.True(t, desiredKind.Spec.IsZero())
}


@@ -0,0 +1,63 @@
package bucket
import (
"cloud.google.com/go/storage"
"context"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups/core"
"github.com/pkg/errors"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"strings"
)
func BackupList() core.BackupListFunc {
return func(monitor mntr.Monitor, name string, desired *tree.Tree) ([]string, error) {
desiredKind, err := ParseDesiredV0(desired)
if err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
monitor.Verbose()
}
if desiredKind.Spec.ServiceAccountJSON == nil {
return nil, errors.New("serviceAccountJSON not provided")
}
return listFilesWithFilter(desiredKind.Spec.ServiceAccountJSON.Value, desiredKind.Spec.Bucket, name)
}
}
func listFilesWithFilter(serviceAccountJSON string, bucketName, name string) ([]string, error) {
ctx := context.Background()
client, err := storage.NewClient(ctx, option.WithCredentialsJSON([]byte(serviceAccountJSON)))
if err != nil {
return nil, err
}
bkt := client.Bucket(bucketName)
names := make([]string, 0)
it := bkt.Objects(ctx, &storage.Query{Prefix: name + "/"})
for {
attrs, err := it.Next()
if err == iterator.Done {
break
}
if err != nil {
return nil, err
}
parts := strings.Split(attrs.Name, "/")
if len(parts) < 2 {
continue
}
backupName := parts[1]
found := false
for _, existing := range names {
if existing == backupName {
found = true
break
}
}
if !found {
names = append(names, backupName)
}
}
return names, nil
}


@@ -0,0 +1,97 @@
package bucket
import (
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/golang/mock/gomock"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime/schema"
)
func SetQueriedForDatabases(databases []string) map[string]interface{} {
queried := map[string]interface{}{}
core.SetQueriedForDatabaseDBList(queried, databases)
return queried
}
func SetInstantBackup(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
backupName string,
labels map[string]string,
saJson string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespace,
Labels: labels,
},
StringData: map[string]string{secretKey: saJson},
Type: "Opaque",
}).Times(1).Return(nil)
k8sClient.EXPECT().ApplyJob(gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().GetJob(namespace, backup.GetJobName(backupName)).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, backup.GetJobName(backupName)))
k8sClient.EXPECT().WaitUntilJobCompleted(namespace, backup.GetJobName(backupName), gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().DeleteJob(namespace, backup.GetJobName(backupName)).Times(1).Return(nil)
}
func SetBackup(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
labels map[string]string,
saJson string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespace,
Labels: labels,
},
StringData: map[string]string{secretKey: saJson},
Type: "Opaque",
}).Times(1).Return(nil)
k8sClient.EXPECT().ApplyCronJob(gomock.Any()).Times(1).Return(nil)
}
func SetClean(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
backupName string,
) {
k8sClient.EXPECT().ApplyJob(gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().GetJob(namespace, clean.GetJobName(backupName)).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, clean.GetJobName(backupName)))
k8sClient.EXPECT().WaitUntilJobCompleted(namespace, clean.GetJobName(backupName), gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().DeleteJob(namespace, clean.GetJobName(backupName)).Times(1).Return(nil)
}
func SetRestore(
k8sClient *kubernetesmock.MockClientInt,
namespace string,
backupName string,
labels map[string]string,
saJson string,
) {
k8sClient.EXPECT().ApplySecret(&corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: namespace,
Labels: labels,
},
StringData: map[string]string{secretKey: saJson},
Type: "Opaque",
}).Times(1).Return(nil)
k8sClient.EXPECT().ApplyJob(gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().GetJob(namespace, restore.GetJobName(backupName)).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, restore.GetJobName(backupName)))
k8sClient.EXPECT().WaitUntilJobCompleted(namespace, restore.GetJobName(backupName), gomock.Any()).Times(1).Return(nil)
k8sClient.EXPECT().DeleteJob(namespace, restore.GetJobName(backupName)).Times(1).Return(nil)
}


@@ -0,0 +1,95 @@
package restore
import (
"github.com/caos/zitadel/operator"
"time"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/job"
"github.com/caos/orbos/pkg/labels"
corev1 "k8s.io/api/core/v1"
)
const (
Instant = "restore"
defaultMode = int32(256)
certPath = "/cockroach/cockroach-certs"
secretPath = "/secrets/sa.json"
jobPrefix = "backup-"
jobSuffix = "-restore"
image = "ghcr.io/caos/zitadel-crbackup"
internalSecretName = "client-certs"
rootSecretName = "cockroachdb.client.root"
timeout time.Duration = 60
)
func AdaptFunc(
monitor mntr.Monitor,
backupName string,
namespace string,
componentLabels *labels.Component,
databases []string,
bucketName string,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
checkDBReady operator.EnsureFunc,
secretName string,
secretKey string,
version string,
) (
queryFunc operator.QueryFunc,
destroyFunc operator.DestroyFunc,
err error,
) {
jobName := GetJobName(backupName)
command := getCommand(
timestamp,
databases,
bucketName,
backupName,
)
jobdef := getJob(
namespace,
labels.MustForName(componentLabels, GetJobName(backupName)),
nodeselector,
tolerations,
secretName,
secretKey,
version,
command)
destroyJ, err := job.AdaptFuncToDestroy(namespace, jobName)
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyJ),
}
queryJ, err := job.AdaptFuncToEnsure(jobdef)
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
operator.EnsureFuncToQueryFunc(checkDBReady),
operator.ResourceQueryToZitadelQuery(queryJ),
operator.EnsureFuncToQueryFunc(getCleanupFunc(monitor, jobdef.Namespace, jobdef.Name)),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
func GetJobName(backupName string) string {
return jobPrefix + backupName + jobSuffix
}


@@ -0,0 +1,148 @@
package restore
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
macherrs "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime/schema"
"testing"
)
func TestBackup_Adapt1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
databases := []string{"testDb"}
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
timestamp := "testTs"
backupName := "testName2"
bucketName := "testBucket2"
version := "testVersion"
secretKey := "testKey"
secretName := "testSecretName"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
timestamp,
databases,
bucketName,
backupName,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
bucketName,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}
func TestBackup_Adapt2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
databases := []string{"testDb1", "testDb2"}
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
timestamp := "testTs"
backupName := "testName2"
bucketName := "testBucket2"
version := "testVersion2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := GetJobName(backupName)
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
return nil
}
jobDef := getJob(
namespace,
nameLabels,
nodeselector,
tolerations,
secretName,
secretKey,
version,
getCommand(
timestamp,
databases,
bucketName,
backupName,
),
)
client.EXPECT().ApplyJob(jobDef).Times(1).Return(nil)
client.EXPECT().GetJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil, macherrs.NewNotFound(schema.GroupResource{Group: "batch", Resource: "jobs"}, jobName))
client.EXPECT().WaitUntilJobCompleted(jobDef.Namespace, jobDef.Name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(jobDef.Namespace, jobDef.Name).Times(1).Return(nil)
query, _, err := AdaptFunc(
monitor,
backupName,
namespace,
componentLabels,
databases,
bucketName,
timestamp,
nodeselector,
tolerations,
checkDBReady,
secretName,
secretKey,
version,
)
assert.NoError(t, err)
queried := map[string]interface{}{}
ensure, err := query(client, queried)
assert.NoError(t, err)
assert.NoError(t, ensure(client))
}


@@ -0,0 +1,25 @@
package restore
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func getCleanupFunc(monitor mntr.Monitor, namespace, jobName string) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) error {
monitor.Info("waiting for restore to be completed")
if err := k8sClient.WaitUntilJobCompleted(namespace, jobName, timeout); err != nil {
monitor.Error(errors.Wrap(err, "error while waiting for restore to be completed"))
return err
}
monitor.Info("restore is completed, cleanup")
if err := k8sClient.DeleteJob(namespace, jobName); err != nil {
monitor.Error(errors.Wrap(err, "error while trying to cleanup restore"))
return err
}
monitor.Info("restore cleanup is completed")
return nil
}
}


@@ -0,0 +1,40 @@
package restore
import (
"errors"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Cleanup1(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test"
namespace := "testNs"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}
func TestBackup_Cleanup2(t *testing.T) {
client := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
name := "test2"
namespace := "testNs2"
cleanupFunc := getCleanupFunc(monitor, namespace, name)
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(nil)
client.EXPECT().DeleteJob(namespace, name).Times(1)
assert.NoError(t, cleanupFunc(client))
client.EXPECT().WaitUntilJobCompleted(namespace, name, timeout).Times(1).Return(errors.New("fail"))
assert.Error(t, cleanupFunc(client))
}


@@ -0,0 +1,28 @@
package restore
import "strings"
func getCommand(
timestamp string,
databases []string,
bucketName string,
backupName string,
) string {
backupCommands := make([]string, 0)
for _, database := range databases {
parts := []string{
"/scripts/restore.sh",
bucketName,
backupName,
}
// skip an empty timestamp so the command contains no empty argument
if timestamp != "" {
parts = append(parts, timestamp)
}
parts = append(parts, database, secretPath, certPath)
backupCommands = append(backupCommands, strings.Join(parts, " "))
}
return strings.Join(backupCommands, " && ")
}


@@ -0,0 +1,50 @@
package restore
import (
"github.com/stretchr/testify/assert"
"testing"
)
func TestBackup_Command1(t *testing.T) {
timestamp := ""
databases := []string{}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := ""
assert.Equal(t, equals, cmd)
}
func TestBackup_Command2(t *testing.T) {
timestamp := ""
databases := []string{"testDb"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := "/scripts/restore.sh testBucket testBackup testDb /secrets/sa.json /cockroach/cockroach-certs"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command3(t *testing.T) {
timestamp := "test"
databases := []string{"testDb"}
bucketName := "testBucket"
backupName := "testBackup"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := "/scripts/restore.sh testBucket testBackup test testDb /secrets/sa.json /cockroach/cockroach-certs"
assert.Equal(t, equals, cmd)
}
func TestBackup_Command4(t *testing.T) {
timestamp := ""
databases := []string{}
bucketName := "test"
backupName := "test"
cmd := getCommand(timestamp, databases, bucketName, backupName)
equals := ""
assert.Equal(t, equals, cmd)
}
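
The tests above pin down the command contract: an empty database list yields an empty string, an empty timestamp is omitted rather than left as a blank argument, and per-database invocations are chained with `&&` so the job fails fast. A stdlib-only sketch of that contract (the two path constants mirror the values asserted in the tests):

```go
package main

import (
	"fmt"
	"strings"
)

// Paths matching the values asserted in the tests above.
const (
	secretPath = "/secrets/sa.json"
	certPath   = "/cockroach/cockroach-certs"
)

// getCommand builds one restore invocation per database and chains them
// with " && " so a failing restore aborts the remaining ones.
func getCommand(timestamp string, databases []string, bucketName, backupName string) string {
	cmds := make([]string, 0, len(databases))
	for _, db := range databases {
		parts := []string{"/scripts/restore.sh", bucketName, backupName}
		if timestamp != "" { // omit empty timestamps entirely
			parts = append(parts, timestamp)
		}
		parts = append(parts, db, secretPath, certPath)
		cmds = append(cmds, strings.Join(parts, " "))
	}
	return strings.Join(cmds, " && ")
}

func main() {
	// Prints the string asserted in TestBackup_Command3.
	fmt.Println(getCommand("test", []string{"testDb"}, "testBucket", "testBackup"))
}
```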


@@ -0,0 +1,72 @@
package restore
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func getJob(
namespace string,
nameLabels *labels.Name,
nodeselector map[string]string,
tolerations []corev1.Toleration,
secretName string,
secretKey string,
version string,
command string,
) *batchv1.Job {
return &batchv1.Job{
ObjectMeta: v1.ObjectMeta{
Name: nameLabels.Name(),
Namespace: namespace,
Labels: labels.MustK8sMap(nameLabels),
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
NodeSelector: nodeselector,
Tolerations: tolerations,
RestartPolicy: corev1.RestartPolicyNever,
Containers: []corev1.Container{{
Name: nameLabels.Name(),
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
}


@@ -0,0 +1,163 @@
package restore
import (
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
"github.com/stretchr/testify/assert"
batchv1 "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestBackup_Job1(t *testing.T) {
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{
{Key: "testKey", Operator: "testOp"}}
version := "testVersion"
command := "test"
secretKey := "testKey"
secretName := "testSecretName"
jobName := "testJob"
namespace := "testNs"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testOpVersion",
"caos.ch/apiversion": "testVersion",
"caos.ch/kind": "testKind"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testOpVersion"), "testKind", "testVersion"), "testComponent")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}
func TestBackup_Job2(t *testing.T) {
nodeselector := map[string]string{"test2": "test2"}
tolerations := []corev1.Toleration{
{Key: "testKey2", Operator: "testOp2"}}
version := "testVersion2"
command := "test2"
secretKey := "testKey2"
secretName := "testSecretName2"
jobName := "testJob2"
namespace := "testNs2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": jobName,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testOpVersion2",
"caos.ch/apiversion": "testVersion2",
"caos.ch/kind": "testKind2"}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testOpVersion2"), "testKind2", "testVersion2"), "testComponent2")
nameLabels := labels.MustForName(componentLabels, jobName)
equals :=
&batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: jobName,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: batchv1.JobSpec{
Template: corev1.PodTemplateSpec{
Spec: corev1.PodSpec{
RestartPolicy: corev1.RestartPolicyNever,
NodeSelector: nodeselector,
Tolerations: tolerations,
Containers: []corev1.Container{{
Name: jobName,
Image: image + ":" + version,
Command: []string{
"/bin/bash",
"-c",
command,
},
VolumeMounts: []corev1.VolumeMount{{
Name: internalSecretName,
MountPath: certPath,
}, {
Name: secretKey,
SubPath: secretKey,
MountPath: secretPath,
}},
ImagePullPolicy: corev1.PullAlways,
}},
Volumes: []corev1.Volume{{
Name: internalSecretName,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecretName,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: secretKey,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: secretName,
},
},
}},
},
},
},
}
assert.Equal(t, equals, getJob(namespace, nameLabels, nodeselector, tolerations, secretName, secretKey, version, command))
}


@@ -0,0 +1,19 @@
package bucket
import (
"github.com/caos/orbos/pkg/secret"
)
func getSecretsMap(desiredKind *DesiredV0) map[string]*secret.Secret {
secrets := make(map[string]*secret.Secret)
if desiredKind.Spec == nil {
desiredKind.Spec = &Spec{}
}
if desiredKind.Spec.ServiceAccountJSON == nil {
desiredKind.Spec.ServiceAccountJSON = &secret.Secret{}
}
secrets["serviceaccountjson"] = desiredKind.Spec.ServiceAccountJSON
return secrets
}
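
`getSecretsMap` normalizes nil fields before exposing them, so callers can always write through the returned pointers and the changes flow back into the desired state. A stdlib sketch of this nil-safe lazy-init pattern — `Secret`, `Spec`, and `Desired` here are simplified stand-ins for the operator's types:

```go
package main

import "fmt"

// Stand-ins for secret.Secret and the bucket desired state.
type Secret struct{ Value string }
type Spec struct{ ServiceAccountJSON *Secret }
type Desired struct{ Spec *Spec }

// getSecretsMap lazily initializes nil fields so the returned map never
// hands out nil pointers; writes through the map reach the desired state.
func getSecretsMap(d *Desired) map[string]*Secret {
	if d.Spec == nil {
		d.Spec = &Spec{}
	}
	if d.Spec.ServiceAccountJSON == nil {
		d.Spec.ServiceAccountJSON = &Secret{}
	}
	return map[string]*Secret{"serviceaccountjson": d.Spec.ServiceAccountJSON}
}

func main() {
	d := &Desired{} // fully nil desired state, as in TestBucket_getSecretsNil
	secrets := getSecretsMap(d)
	secrets["serviceaccountjson"].Value = "sa"
	fmt.Println(d.Spec.ServiceAccountJSON.Value) // the write reached d
}
```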


@@ -0,0 +1,22 @@
package bucket
import (
"github.com/caos/orbos/pkg/secret"
"github.com/stretchr/testify/assert"
"testing"
)
func TestBucket_getSecretsFull(t *testing.T) {
secrets := getSecretsMap(&desired)
assert.Equal(t, desired.Spec.ServiceAccountJSON, secrets["serviceaccountjson"])
}
func TestBucket_getSecretsEmpty(t *testing.T) {
secrets := getSecretsMap(&desiredWithoutSecret)
assert.Equal(t, &secret.Secret{}, secrets["serviceaccountjson"])
}
func TestBucket_getSecretsNil(t *testing.T) {
secrets := getSecretsMap(&desiredNil)
assert.Equal(t, &secret.Secret{}, secrets["serviceaccountjson"])
}


@@ -0,0 +1,8 @@
package core
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
)
type BackupListFunc func(monitor mntr.Monitor, name string, desired *tree.Tree) ([]string, error)


@@ -0,0 +1,64 @@
package core
import (
"crypto/rsa"
"errors"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
)
const queriedName = "database"
type DatabaseCurrent interface {
GetURL() string
GetPort() string
GetReadyQuery() operator.EnsureFunc
GetCertificateKey() *rsa.PrivateKey
SetCertificateKey(*rsa.PrivateKey)
GetCertificate() []byte
SetCertificate([]byte)
GetAddUserFunc() func(user string) (operator.QueryFunc, error)
GetDeleteUserFunc() func(user string) (operator.DestroyFunc, error)
GetListUsersFunc() func(k8sClient kubernetes.ClientInt) ([]string, error)
GetListDatabasesFunc() func(k8sClient kubernetes.ClientInt) ([]string, error)
}
func ParseQueriedForDatabase(queried map[string]interface{}) (DatabaseCurrent, error) {
queriedDB, ok := queried[queriedName]
if !ok {
return nil, errors.New("no current state for database found")
}
currentDBTree, ok := queriedDB.(*tree.Tree)
if !ok {
return nil, errors.New("current state does not fulfill interface")
}
currentDB, ok := currentDBTree.Parsed.(DatabaseCurrent)
if !ok {
return nil, errors.New("current state does not fulfill interface")
}
return currentDB, nil
}
func SetQueriedForDatabase(queried map[string]interface{}, databaseCurrent *tree.Tree) {
queried[queriedName] = databaseCurrent
}
func SetQueriedForDatabaseDBList(queried map[string]interface{}, databases []string) {
currentDBList := &CurrentDBList{
Common: &tree.Common{
Kind: "DBList",
Version: "V0",
},
Current: &DatabaseCurrentDBList{
Databases: databases,
},
}
currentDB := &tree.Tree{
Parsed: currentDBList,
}
SetQueriedForDatabase(queried, currentDB)
}
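
The `queried` map above is the operators' shared blackboard: producers store a `*tree.Tree` under a well-known key, and consumers recover the typed state through checked assertions, surfacing an error instead of panicking on a shape mismatch. A self-contained sketch of the pattern — `Tree`, `DBState`, and `dbList` are simplified stand-ins for the caos types:

```go
package main

import (
	"errors"
	"fmt"
)

const queriedName = "database"

// DBState stands in for the DatabaseCurrent interface.
type DBState interface{ Databases() []string }

// Tree stands in for tree.Tree: a container with a parsed payload.
type Tree struct{ Parsed interface{} }

type dbList struct{ dbs []string }

func (d *dbList) Databases() []string { return d.dbs }

func setQueried(queried map[string]interface{}, t *Tree) {
	queried[queriedName] = t
}

// parseQueried recovers the typed state with two checked type assertions,
// mirroring ParseQueriedForDatabase above.
func parseQueried(queried map[string]interface{}) (DBState, error) {
	v, ok := queried[queriedName]
	if !ok {
		return nil, errors.New("no current state for database found")
	}
	t, ok := v.(*Tree)
	if !ok {
		return nil, errors.New("current state does not fulfill interface")
	}
	s, ok := t.Parsed.(DBState)
	if !ok {
		return nil, errors.New("current state does not fulfill interface")
	}
	return s, nil
}

func main() {
	queried := map[string]interface{}{}
	setQueried(queried, &Tree{Parsed: &dbList{dbs: []string{"adminapi", "management"}}})
	s, err := parseQueried(queried)
	fmt.Println(err, s.Databases())
}
```

Because the map holds `interface{}`, the two-step assertion is what keeps a malformed entry from crashing the reconcile loop.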


@@ -0,0 +1,65 @@
package core
import (
"crypto/rsa"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
)
var current DatabaseCurrent = &CurrentDBList{}
type CurrentDBList struct {
Common *tree.Common `yaml:",inline"`
Current *DatabaseCurrentDBList
}
type DatabaseCurrentDBList struct {
Databases []string
}
func (c *CurrentDBList) GetURL() string {
return ""
}
func (c *CurrentDBList) GetPort() string {
return ""
}
func (c *CurrentDBList) GetReadyQuery() operator.EnsureFunc {
return nil
}
func (c *CurrentDBList) GetCertificateKey() *rsa.PrivateKey {
return nil
}
func (c *CurrentDBList) SetCertificateKey(key *rsa.PrivateKey) {}
func (c *CurrentDBList) GetCertificate() []byte {
return nil
}
func (c *CurrentDBList) SetCertificate(cert []byte) {}
func (c *CurrentDBList) GetListDatabasesFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return func(k8sClient kubernetes.ClientInt) ([]string, error) {
return c.Current.Databases, nil
}
}
func (c *CurrentDBList) GetListUsersFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return nil
}
func (c *CurrentDBList) GetAddUserFunc() func(user string) (operator.QueryFunc, error) {
return nil
}
func (c *CurrentDBList) GetDeleteUserFunc() func(user string) (operator.DestroyFunc, error) {
return nil
}


@@ -0,0 +1,3 @@
package core
//go:generate mockgen -source current.go -package coremock -destination mock/current.mock.go github.com/caos/internal/operator/database/kinds/databases/core DatabaseCurrent


@@ -0,0 +1,186 @@
// Code generated by MockGen. DO NOT EDIT.
// Source: current.go
// Package coremock is a generated GoMock package.
package coremock
import (
rsa "crypto/rsa"
kubernetes "github.com/caos/orbos/pkg/kubernetes"
operator "github.com/caos/zitadel/operator"
gomock "github.com/golang/mock/gomock"
reflect "reflect"
)
// MockDatabaseCurrent is a mock of DatabaseCurrent interface
type MockDatabaseCurrent struct {
ctrl *gomock.Controller
recorder *MockDatabaseCurrentMockRecorder
}
// MockDatabaseCurrentMockRecorder is the mock recorder for MockDatabaseCurrent
type MockDatabaseCurrentMockRecorder struct {
mock *MockDatabaseCurrent
}
// NewMockDatabaseCurrent creates a new mock instance
func NewMockDatabaseCurrent(ctrl *gomock.Controller) *MockDatabaseCurrent {
mock := &MockDatabaseCurrent{ctrl: ctrl}
mock.recorder = &MockDatabaseCurrentMockRecorder{mock}
return mock
}
// EXPECT returns an object that allows the caller to indicate expected use
func (m *MockDatabaseCurrent) EXPECT() *MockDatabaseCurrentMockRecorder {
return m.recorder
}
// GetURL mocks base method
func (m *MockDatabaseCurrent) GetURL() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetURL")
ret0, _ := ret[0].(string)
return ret0
}
// GetURL indicates an expected call of GetURL
func (mr *MockDatabaseCurrentMockRecorder) GetURL() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetURL", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetURL))
}
// GetPort mocks base method
func (m *MockDatabaseCurrent) GetPort() string {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetPort")
ret0, _ := ret[0].(string)
return ret0
}
// GetPort indicates an expected call of GetPort
func (mr *MockDatabaseCurrentMockRecorder) GetPort() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetPort", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetPort))
}
// GetReadyQuery mocks base method
func (m *MockDatabaseCurrent) GetReadyQuery() operator.EnsureFunc {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetReadyQuery")
ret0, _ := ret[0].(operator.EnsureFunc)
return ret0
}
// GetReadyQuery indicates an expected call of GetReadyQuery
func (mr *MockDatabaseCurrentMockRecorder) GetReadyQuery() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetReadyQuery", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetReadyQuery))
}
// GetCertificateKey mocks base method
func (m *MockDatabaseCurrent) GetCertificateKey() *rsa.PrivateKey {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCertificateKey")
ret0, _ := ret[0].(*rsa.PrivateKey)
return ret0
}
// GetCertificateKey indicates an expected call of GetCertificateKey
func (mr *MockDatabaseCurrentMockRecorder) GetCertificateKey() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCertificateKey", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetCertificateKey))
}
// SetCertificateKey mocks base method
func (m *MockDatabaseCurrent) SetCertificateKey(arg0 *rsa.PrivateKey) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetCertificateKey", arg0)
}
// SetCertificateKey indicates an expected call of SetCertificateKey
func (mr *MockDatabaseCurrentMockRecorder) SetCertificateKey(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetCertificateKey", reflect.TypeOf((*MockDatabaseCurrent)(nil).SetCertificateKey), arg0)
}
// GetCertificate mocks base method
func (m *MockDatabaseCurrent) GetCertificate() []byte {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetCertificate")
ret0, _ := ret[0].([]byte)
return ret0
}
// GetCertificate indicates an expected call of GetCertificate
func (mr *MockDatabaseCurrentMockRecorder) GetCertificate() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetCertificate", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetCertificate))
}
// SetCertificate mocks base method
func (m *MockDatabaseCurrent) SetCertificate(arg0 []byte) {
m.ctrl.T.Helper()
m.ctrl.Call(m, "SetCertificate", arg0)
}
// SetCertificate indicates an expected call of SetCertificate
func (mr *MockDatabaseCurrentMockRecorder) SetCertificate(arg0 interface{}) *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "SetCertificate", reflect.TypeOf((*MockDatabaseCurrent)(nil).SetCertificate), arg0)
}
// GetAddUserFunc mocks base method
func (m *MockDatabaseCurrent) GetAddUserFunc() func(string) (operator.QueryFunc, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetAddUserFunc")
ret0, _ := ret[0].(func(string) (operator.QueryFunc, error))
return ret0
}
// GetAddUserFunc indicates an expected call of GetAddUserFunc
func (mr *MockDatabaseCurrentMockRecorder) GetAddUserFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetAddUserFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetAddUserFunc))
}
// GetDeleteUserFunc mocks base method
func (m *MockDatabaseCurrent) GetDeleteUserFunc() func(string) (operator.DestroyFunc, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetDeleteUserFunc")
ret0, _ := ret[0].(func(string) (operator.DestroyFunc, error))
return ret0
}
// GetDeleteUserFunc indicates an expected call of GetDeleteUserFunc
func (mr *MockDatabaseCurrentMockRecorder) GetDeleteUserFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetDeleteUserFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetDeleteUserFunc))
}
// GetListUsersFunc mocks base method
func (m *MockDatabaseCurrent) GetListUsersFunc() func(kubernetes.ClientInt) ([]string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetListUsersFunc")
ret0, _ := ret[0].(func(kubernetes.ClientInt) ([]string, error))
return ret0
}
// GetListUsersFunc indicates an expected call of GetListUsersFunc
func (mr *MockDatabaseCurrentMockRecorder) GetListUsersFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetListUsersFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetListUsersFunc))
}
// GetListDatabasesFunc mocks base method
func (m *MockDatabaseCurrent) GetListDatabasesFunc() func(kubernetes.ClientInt) ([]string, error) {
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "GetListDatabasesFunc")
ret0, _ := ret[0].(func(kubernetes.ClientInt) ([]string, error))
return ret0
}
// GetListDatabasesFunc indicates an expected call of GetListDatabasesFunc
func (mr *MockDatabaseCurrentMockRecorder) GetListDatabasesFunc() *gomock.Call {
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "GetListDatabasesFunc", reflect.TypeOf((*MockDatabaseCurrent)(nil).GetListDatabasesFunc))
}


@@ -0,0 +1,68 @@
package databases
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/managed"
"github.com/caos/zitadel/operator/database/kinds/databases/provided"
"github.com/pkg/errors"
core "k8s.io/api/core/v1"
)
const (
component = "database"
)
func ComponentSelector() *labels.Selector {
return labels.OpenComponentSelector("ZITADEL", component)
}
func GetQueryAndDestroyFuncs(
monitor mntr.Monitor,
desiredTree *tree.Tree,
currentTree *tree.Tree,
namespace string,
apiLabels *labels.API,
timestamp string,
nodeselector map[string]string,
tolerations []core.Toleration,
version string,
features []string,
) (
query operator.QueryFunc,
destroy operator.DestroyFunc,
secrets map[string]*secret.Secret,
err error,
) {
componentLabels := labels.MustForComponent(apiLabels, component)
internalMonitor := monitor.WithField("component", component)
switch desiredTree.Common.Kind {
case "databases.caos.ch/CockroachDB":
return managed.AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(internalMonitor, desiredTree, currentTree)
case "databases.caos.ch/ProvidedDatabase":
return provided.AdaptFunc()(internalMonitor, desiredTree, currentTree)
default:
return nil, nil, nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
func GetBackupList(
monitor mntr.Monitor,
desiredTree *tree.Tree,
) (
[]string,
error,
) {
switch desiredTree.Common.Kind {
case "databases.caos.ch/CockroachDB":
return managed.BackupList()(monitor, desiredTree)
case "databases.caos.ch/ProvidedDatabase":
return nil, errors.Errorf("no backups supported for database kind %s", desiredTree.Common.Kind)
default:
return nil, errors.Errorf("unknown database kind %s", desiredTree.Common.Kind)
}
}
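
Both functions above dispatch on the desired tree's `Common.Kind` string: known kinds route to their adapter, anything else surfaces an `unknown database kind` error. A stdlib sketch of that dispatch shape — the adapter closures are placeholders, not the real `managed`/`provided` AdaptFuncs:

```go
package main

import "fmt"

type adaptFunc func() (string, error)

// getAdapter mirrors the switch on desiredTree.Common.Kind above:
// each known kind maps to its adapter, unknown kinds return an error.
func getAdapter(kind string) (adaptFunc, error) {
	switch kind {
	case "databases.caos.ch/CockroachDB":
		return func() (string, error) { return "managed", nil }, nil
	case "databases.caos.ch/ProvidedDatabase":
		return func() (string, error) { return "provided", nil }, nil
	default:
		return nil, fmt.Errorf("unknown database kind %s", kind)
	}
}

func main() {
	a, _ := getAdapter("databases.caos.ch/CockroachDB")
	name, _ := a()
	fmt.Println(name) // managed
	_, err := getAdapter("databases.caos.ch/Unknown")
	fmt.Println(err)
}
```

Because the kind is compared as a literal string, a single misspelling (like the `ProvidedDatabse` typo fixed above) silently falls through to the default branch — the tests only catch it if they exercise each case.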


@@ -0,0 +1,256 @@
package managed
import (
"github.com/caos/zitadel/operator"
"strconv"
"strings"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate"
corev1 "k8s.io/api/core/v1"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/pdb"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/rbac"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/services"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/statefulset"
"github.com/pkg/errors"
)
const (
SfsName = "cockroachdb"
pdbName = SfsName + "-budget"
serviceAccountName = SfsName
PublicServiceName = SfsName + "-public"
privateServiceName = SfsName
cockroachPort = int32(26257)
cockroachHTTPPort = int32(8080)
image = "cockroachdb/cockroach:v20.2.3"
)
func AdaptFunc(
componentLabels *labels.Component,
namespace string,
timestamp string,
nodeselector map[string]string,
tolerations []corev1.Toleration,
version string,
features []string,
) func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
return func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
internalMonitor := monitor.WithField("kind", "cockroachdb")
allSecrets := map[string]*secret.Secret{}
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
internalMonitor.Verbose()
}
var (
isFeatureDatabase bool
isFeatureRestore bool
)
for _, feature := range features {
switch feature {
case "database":
isFeatureDatabase = true
case "restore":
isFeatureRestore = true
}
}
queryCert, destroyCert, addUser, deleteUser, listUsers, err := certificate.AdaptFunc(internalMonitor, namespace, componentLabels, desiredKind.Spec.ClusterDns, isFeatureDatabase)
if err != nil {
return nil, nil, nil, err
}
addRoot, err := addUser("root")
if err != nil {
return nil, nil, nil, err
}
destroyRoot, err := deleteUser("root")
if err != nil {
return nil, nil, nil, err
}
queryRBAC, destroyRBAC, err := rbac.AdaptFunc(internalMonitor, namespace, labels.MustForName(componentLabels, serviceAccountName))
if err != nil {
return nil, nil, nil, err
}
cockroachNameLabels := labels.MustForName(componentLabels, SfsName)
cockroachSelector := labels.DeriveNameSelector(cockroachNameLabels, false)
cockroachSelectable := labels.AsSelectable(cockroachNameLabels)
querySFS, destroySFS, ensureInit, checkDBReady, listDatabases, err := statefulset.AdaptFunc(
internalMonitor,
cockroachSelectable,
cockroachSelector,
desiredKind.Spec.Force,
namespace,
image,
serviceAccountName,
desiredKind.Spec.ReplicaCount,
desiredKind.Spec.StorageCapacity,
cockroachPort,
cockroachHTTPPort,
desiredKind.Spec.StorageClass,
desiredKind.Spec.NodeSelector,
desiredKind.Spec.Tolerations,
desiredKind.Spec.Resources,
)
if err != nil {
return nil, nil, nil, err
}
queryS, destroyS, err := services.AdaptFunc(
internalMonitor,
namespace,
labels.MustForName(componentLabels, PublicServiceName),
labels.MustForName(componentLabels, privateServiceName),
cockroachSelector,
cockroachPort,
cockroachHTTPPort,
)
//externalName := "cockroachdb-public." + namespaceStr + ".svc.cluster.local"
//queryES, destroyES, err := service.AdaptFunc("cockroachdb-public", "default", labels, []service.Port{}, "ExternalName", map[string]string{}, false, "", externalName)
//if err != nil {
// return nil, nil, err
//}
queryPDB, err := pdb.AdaptFuncToEnsure(namespace, labels.MustForName(componentLabels, pdbName), cockroachSelector, "1")
if err != nil {
return nil, nil, nil, err
}
destroyPDB, err := pdb.AdaptFuncToDestroy(namespace, pdbName)
if err != nil {
return nil, nil, nil, err
}
currentDB := &Current{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Current: &CurrentDB{
CA: &certificate.Current{},
},
}
current.Parsed = currentDB
queriers := make([]operator.QueryFunc, 0)
if isFeatureDatabase {
queriers = append(queriers,
queryRBAC,
queryCert,
addRoot,
operator.ResourceQueryToZitadelQuery(querySFS),
operator.ResourceQueryToZitadelQuery(queryPDB),
queryS,
operator.EnsureFuncToQueryFunc(ensureInit),
)
}
destroyers := make([]operator.DestroyFunc, 0)
if isFeatureDatabase {
destroyers = append(destroyers,
operator.ResourceDestroyToZitadelDestroy(destroyPDB),
destroyS,
operator.ResourceDestroyToZitadelDestroy(destroySFS),
destroyRBAC,
destroyCert,
destroyRoot,
)
}
if desiredKind.Spec.Backups != nil {
oneBackup := false
for backupName := range desiredKind.Spec.Backups {
if timestamp != "" && strings.HasPrefix(timestamp, backupName) {
oneBackup = true
}
}
for backupName, desiredBackup := range desiredKind.Spec.Backups {
currentBackup := &tree.Tree{}
if timestamp == "" || !oneBackup || (timestamp != "" && strings.HasPrefix(timestamp, backupName)) {
queryB, destroyB, secrets, err := backups.GetQueryAndDestroyFuncs(
internalMonitor,
desiredBackup,
currentBackup,
backupName,
namespace,
componentLabels,
checkDBReady,
strings.TrimPrefix(timestamp, backupName+"."),
nodeselector,
tolerations,
version,
features,
)
if err != nil {
return nil, nil, nil, err
}
secret.AppendSecrets(backupName, allSecrets, secrets)
destroyers = append(destroyers, destroyB)
queriers = append(queriers, queryB)
}
}
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
if !isFeatureRestore {
queriedCurrentDB, err := core.ParseQueriedForDatabase(queried)
if err != nil || queriedCurrentDB == nil {
// TODO: query system state
currentDB.Current.Port = strconv.Itoa(int(cockroachPort))
currentDB.Current.URL = PublicServiceName
currentDB.Current.ReadyFunc = checkDBReady
currentDB.Current.AddUserFunc = addUser
currentDB.Current.DeleteUserFunc = deleteUser
currentDB.Current.ListUsersFunc = listUsers
currentDB.Current.ListDatabasesFunc = listDatabases
core.SetQueriedForDatabase(queried, current)
internalMonitor.Info("set current state of managed database")
}
}
ensure, err := operator.QueriersToEnsureFunc(internalMonitor, true, queriers, k8sClient, queried)
return ensure, err
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
allSecrets,
nil
}
}


@@ -0,0 +1,177 @@
package managed
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/backup"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/clean"
"github.com/caos/zitadel/operator/database/kinds/backups/bucket/restore"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
"testing"
"time"
)
func getTreeWithDBAndBackup(t *testing.T, masterkey string, saJson string, backupName string) *tree.Tree {
bucketDesired := getDesiredTree(t, masterkey, &bucket.DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/BucketBackup",
Version: "v0",
},
Spec: &bucket.Spec{
Verbose: true,
Cron: "testCron",
Bucket: "testBucket",
ServiceAccountJSON: &secret.Secret{
Value: saJson,
},
},
})
bucketDesiredKind, err := bucket.ParseDesiredV0(bucketDesired)
assert.NoError(t, err)
bucketDesired.Parsed = bucketDesiredKind
return getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Spec: Spec{
Verbose: false,
ReplicaCount: 1,
StorageCapacity: "368Gi",
StorageClass: "testSC",
NodeSelector: map[string]string{},
ClusterDns: "testDns",
Backups: map[string]*tree.Tree{backupName: bucketDesired},
},
})
}
func TestManaged_AdaptBucketBackup(t *testing.T) {
monitor := mntr.Monitor{}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
labels := map[string]string{
"app.kubernetes.io/component": "backup",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
backupName := "testBucket"
saJson := "testSA"
masterkey := "testMk"
desired := getTreeWithDBAndBackup(t, masterkey, saJson, backupName)
features := []string{backup.Normal}
bucket.SetBackup(k8sClient, namespace, labels, saJson)
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := bucket.SetQueriedForDatabases(databases)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestManaged_AdaptBucketInstantBackup(t *testing.T) {
monitor := mntr.Monitor{}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
labels := map[string]string{
"app.kubernetes.io/component": "backup",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
masterkey := "testMk"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
saJson := "testSA"
backupName := "testBucket"
features := []string{backup.Instant}
bucket.SetInstantBackup(k8sClient, namespace, backupName, labels, saJson)
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
desired := getTreeWithDBAndBackup(t, masterkey, saJson, backupName)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := bucket.SetQueriedForDatabases(databases)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestManaged_AdaptBucketCleanAndRestore(t *testing.T) {
monitor := mntr.Monitor{}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
labels := map[string]string{
"app.kubernetes.io/component": "backup",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "backup-serviceaccountjson",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "BucketBackup",
}
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
masterkey := "testMk"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
saJson := "testSA"
backupName := "testBucket"
features := []string{restore.Instant, clean.Instant}
bucket.SetRestore(k8sClient, namespace, backupName, labels, saJson)
bucket.SetClean(k8sClient, namespace, backupName)
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60)).Times(2)
desired := getTreeWithDBAndBackup(t, masterkey, saJson, backupName)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
databases := []string{"test1", "test2"}
queried := bucket.SetQueriedForDatabases(databases)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,255 @@
package managed
import (
"gopkg.in/yaml.v3"
"testing"
"time"
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
policy "k8s.io/api/policy/v1beta1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
)
func getDesiredTree(t *testing.T, masterkey string, desired interface{}) *tree.Tree {
secret.Masterkey = masterkey
desiredTree := &tree.Tree{}
data, err := yaml.Marshal(desired)
assert.NoError(t, err)
assert.NoError(t, yaml.Unmarshal(data, desiredTree))
return desiredTree
}
func TestManaged_Adapt1(t *testing.T) {
monitor := mntr.Monitor{}
nodeLabels := map[string]string{
"app.kubernetes.io/component": "database",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
cockroachLabels := map[string]string{
"app.kubernetes.io/component": "database",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb-budget",
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "testKind",
}
cockroachSelectorLabels := map[string]string{
"app.kubernetes.io/component": "database",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "database")
namespace := "testNs"
timestamp := "testTs"
nodeselector := map[string]string{"test": "test"}
tolerations := []corev1.Toleration{}
version := "testVersion"
features := []string{"database"}
masterkey := "testMk"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
queried := map[string]interface{}{}
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Spec: Spec{
Verbose: false,
ReplicaCount: 1,
StorageCapacity: "368Gi",
StorageClass: "testSC",
NodeSelector: map[string]string{},
ClusterDns: "testDns",
},
})
unav := intstr.FromInt(1)
k8sClient.EXPECT().ApplyPodDisruptionBudget(&policy.PodDisruptionBudget{
ObjectMeta: metav1.ObjectMeta{
Name: "cockroachdb-budget",
Namespace: namespace,
Labels: cockroachLabels,
},
Spec: policy.PodDisruptionBudgetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: cockroachSelectorLabels,
},
MaxUnavailable: &unav,
},
})
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ApplyService(gomock.Any()).Times(3)
k8sClient.EXPECT().ApplyServiceAccount(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRoleBinding(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRoleBinding(gomock.Any()).Times(1)
//statefulset
k8sClient.EXPECT().ApplyStatefulSet(gomock.Any(), gomock.Any()).Times(1)
//running for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, false, time.Duration(60))
//not ready for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(1))
//ready after setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
//client
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
//node
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestManaged_Adapt2(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
timestamp := "testTs"
nodeLabels := map[string]string{
"app.kubernetes.io/component": "database2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
cockroachLabels := map[string]string{
"app.kubernetes.io/component": "database2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": "cockroachdb-budget",
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v1",
"caos.ch/kind": "testKind2",
}
cockroachSelectorLabels := map[string]string{
"app.kubernetes.io/component": "database2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": "cockroachdb",
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "testKind2", "v1"), "database2")
nodeselector := map[string]string{"test2": "test2"}
var tolerations []corev1.Toleration
version := "testVersion2"
features := []string{"database"}
masterkey := "testMk2"
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
queried := map[string]interface{}{}
desired := getDesiredTree(t, masterkey, &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/CockroachDB",
Version: "v0",
},
Spec: Spec{
Verbose: false,
ReplicaCount: 1,
StorageCapacity: "368Gi",
StorageClass: "testSC",
NodeSelector: map[string]string{},
ClusterDns: "testDns",
},
})
unav := intstr.FromInt(1)
k8sClient.EXPECT().ApplyPodDisruptionBudget(&policy.PodDisruptionBudget{
ObjectMeta: metav1.ObjectMeta{
Name: "cockroachdb-budget",
Namespace: namespace,
Labels: cockroachLabels,
},
Spec: policy.PodDisruptionBudgetSpec{
Selector: &metav1.LabelSelector{
MatchLabels: cockroachSelectorLabels,
},
MaxUnavailable: &unav,
},
})
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ApplyService(gomock.Any()).Times(3)
k8sClient.EXPECT().ApplyServiceAccount(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRole(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyRoleBinding(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplyClusterRoleBinding(gomock.Any()).Times(1)
//statefulset
k8sClient.EXPECT().ApplyStatefulSet(gomock.Any(), gomock.Any()).Times(1)
//running for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, false, time.Duration(60))
//not ready for setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(1))
//ready after setup
k8sClient.EXPECT().WaitUntilStatefulsetIsReady(namespace, SfsName, true, true, time.Duration(60))
//client
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
//node
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1)
query, _, _, err := AdaptFunc(componentLabels, namespace, timestamp, nodeselector, tolerations, version, features)(monitor, desired, &tree.Tree{})
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,91 @@
package certificate
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/client"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/node"
)
var (
nodeSecret = "cockroachdb.node"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
componentLabels *labels.Component,
clusterDns string,
generateNodeIfNotExists bool,
) (
operator.QueryFunc,
operator.DestroyFunc,
func(user string) (operator.QueryFunc, error),
func(user string) (operator.DestroyFunc, error),
func(k8sClient kubernetes.ClientInt) ([]string, error),
error,
) {
cMonitor := monitor.WithField("type", "certificates")
queryNode, destroyNode, err := node.AdaptFunc(
cMonitor,
namespace,
labels.MustForName(componentLabels, nodeSecret),
clusterDns,
generateNodeIfNotExists,
)
if err != nil {
return nil, nil, nil, nil, nil, err
}
queriers := []operator.QueryFunc{
queryNode,
}
destroyers := []operator.DestroyFunc{
destroyNode,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(cMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(cMonitor, destroyers),
func(user string) (operator.QueryFunc, error) {
query, _, err := client.AdaptFunc(
cMonitor,
namespace,
componentLabels,
)
if err != nil {
return nil, err
}
queryClient := query(user)
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
_, err := queryNode(k8sClient, queried)
if err != nil {
return nil, err
}
return queryClient(k8sClient, queried)
}, nil
},
func(user string) (operator.DestroyFunc, error) {
_, destroy, err := client.AdaptFunc(
cMonitor,
namespace,
componentLabels,
)
if err != nil {
return nil, err
}
return destroy(user), nil
},
func(k8sClient kubernetes.ClientInt) ([]string, error) {
return client.QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
},
nil
}


@@ -0,0 +1,305 @@
package certificate
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
const (
caPem = `-----BEGIN CERTIFICATE-----
MIIE9TCCAt2gAwIBAgICB+MwDQYJKoZIhvcNAQELBQAwKzESMBAGA1UEChMJQ29j
a3JvYWNoMRUwEwYDVQQDEwxDb2Nrcm9hY2ggQ0EwHhcNMjAxMjAxMTAyMjA0WhcN
MzAxMjAxMTAyMjA0WjArMRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENv
Y2tyb2FjaCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOja5IXJ
GUY9sFgyvWkav+O74gzcv8y69uJzSOx0piOP+sfpZWVeGEjqO4JgdcS5NPMrT5Tb
aJ52CjiOdlHVTyR87i5JmOIvA2qA4dWQmSbX7AQ8r9ptYDe9xMn+qFegcbr4YxCz
K9mmbDZUhlLO7cz3QV6nvRxGFWbzffo8BXZnOUCAOyOHrbnPpLumnfZlL5BckdtY
pS7jAlUpKSMBTK4AHcmrouFsNKHqlUopYXeJFdg9g1F0DuCnVP9x7+XcUW8dAVut
Q7Jswy+++GAXs6mPVsYFLXUSYNyW+Bfl/jKwx0XTQx/6iyNpK0XtAzjBZFjXwmot
0mODkqnfE3BB4lXxZ5knomAQEGSScUhCUb9upbF4uJJF27xr/kIkwtWxMGpCXds0
IxI+wNRCenhfFZEIQCzri0zn6WdN8b/gbv1BErNcccYwolPUv1oUgYbzowbQ5O2D
aLQPqO1VAZiZHLxb787bRywpCl33VZ1ptMHi2ogKjcsh4DQ9SsRj+rU/Tk5lyk7G
FHteyHcq12TGpz9/CQYMacl8yeRRHfNO3Rq0jFTYeD4+ZdVBPKeuTHyXGzy2T157
pgqMFzwqxlNYPzpuz7xsZBExJzCtcomB8fMlsCJnxV/kuMTTsWsPrRc7hsqzBCq3
plfYT8a7EBCVmJmDu+6mMVh+A9M2zZCqtV57AgMBAAGjIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQDRmkBYDk/F
lYTgMaL7yhRAowDR8mtOl06Oba/MCDDsmBvfOVIqY9Sa9sJtb78Uvecv9WAE4qpb
PNSaIvNmujmGQrBxtGwQdeHZvtXQ1lKAGBM+ukfz/NFU+6rwOdTuLgS5jTaiM4r1
uMoIG6S5z/brWGHAw8Qv2Fq05dSLnevmPWK903V/RnwmCKT9as5Qpfkaie2PNory
euxVGeGolxzgbSps3ZJlDSSymQBl10iJYYdGIsgcw5FHcCdpS4AutdpQbIFuCGk2
CHTcTTa4KMaVfRm/gKm1gh8UnuVDLBKQS8+1CHFYUnih8ozBNUyBo5r5L4BHJK/n
f0gnrYqaxtqPAyHkfo0PRc50HlAQRS/gW8ESv2PQmcSWlDggEzXt5MqFIYOfEXWE
gtC7Ct1P7gzIRxolYVsNSgBR62sJM1PUa39E5v0QJsmuM51txHuw/PGqkkBlEwx9
4u3FFXD9Pslg/g1l50Ek5O3/TWMta4Q+NagDDaOdkb4LRQr522ySgGIinBw00Xr+
Ics2L42rP1LQyaE7+BljDEwmGaT7mrl8QGZR9uv/9mbtk2+q7wAOQzcMSFFACfwx
KdKENmK0GpQZ6HaeDZO7wxGiAyFKhTNpd39XNOEhaKpWLSUxnZTLGlMIzvSyMGCj
PV3xm+Xg1Xvki7f/qC1djysMcaokoJzNdg==
-----END CERTIFICATE-----
`
caPrivPem = `-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA2Owc3IOljq3hkKO9q6jT1kFZp/+Y9dv/lTIbsMhV7gPrn2WS
Os2KQphcrB9DvGoZAxUh1aD8MUO7USIMhFodQEq7vfycDT16jrTnc1fSDDcfk3Ra
DvqZGcBkj0lc6w5LK3FNl2y6rA/26teGbyNaVXJfW01dKKy4E2or2+1+tUamyjiU
cvPzNWEoPPJT5HcadLTxZr/SKDvDXFyej4nKT9+j1pKmaqQrIrL4KXKq75LhU+kH
L4TG+kJ35isiv5OTpd2jG6ssz0i+ZEtX4hjIM6eCnZaiFT//33nSL3zl2LUjmyou
ks2FDuX9m90mg8UtcrpA/eVlwyg8nJ/d7/Yn3ZjuHVOgzE8YuQXauJ16HuLcFgEW
WQg1uwhOhrc5b0YJndZbJ2Re4qfaBoC5fODRXoUPvqG9k9kNp98xtclNCIyW8EpA
Su8QmK42ksOJox+OQamHzatrGppgIK77TY3ZFcBHETHClabgsfjv/1GNXXAexJI4
Cjntnou+yoc+LUj87WJfD3ERGgm0nDfnE7uZ7kccO5Lj/oajeO+Q+QJaLZ4I2jz+
a05k16naGGT29AnM+iwBqqoTODr4Z7905niZc6+fEOPml1V7wSuJs2eE4jOa3ixX
5tnruw74rN82Zfrkg6kWPOEfBBXzSotRiHv+BAV2tFbnC55ItnHn1ZE68V0CAwEA
AQKCAgEAssWsZ4PLXpIo8qYve5hQtSP4er7oVb8wnMnGDmSchOMQPbZc1D9uscGV
pnjBvzcFVAgHcWMSVJuIda4E+NK3hrPQlBvqk/LV3WRz1xhKUKzhRgm+6tdWc+We
OoRwonuOMchX9PKzyXgCu7pR3agaG499zOYuX4Yw0jdO3BqXsVf/v2rv1Oj9yEFB
AzGHOCN8VzCEPnTaAzR1pdnjB1K8vCUIhp8nrX2M2zT51lbdT0ISl6/VrzDTN46t
97AXHCHIrgrCENx6un4uAsQhMoHQBNoJiEyLWc371zYzpdVeK8HlDUyvQ2dDQGsF
Hn4c7r4C3aloRJbYzgSMJ1yNcOTCJpciQsq1VmCQFOHfbum2ejquXJ7BbeRGeHdM
145epckL+hbECTCpSs0Z5t90NdfoJspvr+3sOEt6h3DMUGjvobrf/s3KiRY5hHdc
x86Htx3QgWNCG6a+ga7h7s3t+4ZtoPPWn+vlAoxD/2eCzsDgE/XLDmC2T4yS8UOb
LIb4UN7wl2sNM8zhm4BfoiKfjGu4dPZIlsPP+ZKRby1O4ogHHoPREBTH1VSEplVM
fA/KSITV+rUfO3T/qXIFZ4/Wa5YZoULiMCOJOWNgXQzWvTf7Qr31LhhfXd+uIw30
LDtjdkpT43zlKxRosQFLiV7q3fVbvKPVQfxzBz7M1Gl74IllpmECggEBAOnAimKl
w/9mcMkEd2L2ZzDI+57W8DU6grUUILZfjJG+uvOiMmIA5M1wzsVHil3ihZthrJ5A
UUAKG97JgwjpIAliAzgtPZY2AK/bJz7Hht1EwewrV+RyHM+dMv6em2OA+XUjFSPX
VsecFDaUQebkypw236KdA4jgRG6hr4xObdXgTN6RFCv1mI1EijZ/YbKgPgTJaPI4
b2N5QokYFygUCwRxKIt7/Z4hQs9LbdW680NcXtPRPnS1SmwYJbi7wTX+o5f6nfNV
YvojborjXwNrZe0p+FfaEuD44wf6kNlRGfcKXoaAncXV/M5/oXf1dktKP630eq9g
0MAKFYJ6MAUheakCggEBAO2Rgfy/tEydBuP9BIfC84BXk8EwnRJ0ZfmXv77HFj3F
VT5QNU97I5gZJOFkL7OO8bL0/WljlqeBEht0DmHNmOt5Pyew4LRLpfeN6rClZgRN
V4wqKXjoZAIYa9khQcwwFNER1RdI+PkuusJtrvY6CbwwG9LbBq2NR4C1YSgaQnhV
NqdXK5dwrYEky6lI31sDD4BYeiJVKlkkNCQAVOC+04Mrsa9F0NG7TKXzji5hU8l5
x8squjvJ6vmobhmsRTL1LMpafUrt5pHL9jcWIZYxJJo9mB0zxJKcsLI8IOg2QPoj
tQ395FZ2YtjNzZa1CYeUOUiaQu+uvztfE36AdW/vUpUCggEAMV7bW66LURw/4hUx
ahOFBAbPLmNTZMqw5LIVnq9br0TLk73ESnLJ4KJc6coMbXv0oDbnEJ2hC5eW/10s
cetbOuAasfjMMzfAuWPeTCI0V/O3ybv12mhHsYoQRTsWstOA3L7GLkXDLHHIyyZR
LQVRzeDBJ0Vmg7hqe7tmqom+JRg05CVcT1SWHfBGCPCqn+G8d6Jaqh5FWIs6BF60
NWDWWt/TonJTxNxdkg7qaeQMkUOnO7HMMTZBO8d14Ci3zEG2J9llFwoH17E4Hdmc
Lcq3QnpE27lRl3a57Ot9QIkipMzp3hq4OBrURIEsh3uuuoQ6IvGqH/Sg4o6+sEpC
bjL90QKCAQBDc/0kdooK9sruEPkoUwIwfq1FPThb9RC/PYcD9CMshssdVkjMuHny
xbDjDj89DGk0FrudINm11b/+a4Vp36Z7tYFpE5+5kYEeOP1aCpxcvFkPQyljWxiK
P8TfccHs5/oBIr8OTXnjxpDgg6QZ5YC+HirIQ8gxntuef+GGMW6OHCPYf7ew2B1r
fbcV6csBXG0aVATZmrTbepwTXMS8y3Hi3JUm3vvbkQLCW9US9i+EFT/VP9yA/WPq
Xxhj0bYUMej1y5unmsTMwMy392Cx9GIgKTz3jatStYq2ELyHMmBgpaLSxjP/GL4Y
MNce42hBRqS9KI+43jUN9oDiejbeAWXBAoIBACZKnS6EppNDSdos0f32J5TxEUxv
lF9AuVXEAPDR/m3+PlSRzrlf7uowdHTfsemHMSduL8weLNhduprwz4TmW9Fo6fSF
UePLNbXcMX3omAk+AKKOiexLG0fCXGW2zr4nHZJzTbK2+La3yLeUcoPu1puNpLiq
LVj2bH3zKWVRA9/ovuN6V5w18ojjdqOw4bw5qXcdZhoWLxI8Q9Oqua24f/dnRpuI
I8mRtPQ3+vuOKbTT+/80eAUpSfEKwAg1Mjgury9q1/B4Ib6hAGzpJuXxG7xQjnsJ
EFcN1kvdg5WGK41+fYMdexPaLamjhDGN0e1vxJfAukWIAsBMwp8wfEWZvzA=
-----END RSA PRIVATE KEY-----
`
)
func TestCertificate_AdaptWithCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
dbCurrent.EXPECT().SetCertificate(ca).Times(1)
dbCurrent.EXPECT().SetCertificateKey(caPriv).Times(1)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, _, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptWithoutCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1).Return(nil)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, _, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptAlreadyExisting(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
caCertKey := "ca.crt"
caPrivKeyKey := "ca.key"
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{},
Data: map[string][]byte{
caCertKey: []byte(caPem),
caPrivKeyKey: []byte(caPrivPem),
},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, _, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptCreateUser(t *testing.T) {
monitor := mntr.Monitor{}
clusterDns := "testDns"
namespace := "testNs"
user := "test"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "cockroachdb.node",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
caCertKey := "ca.crt"
caPrivKeyKey := "ca.key"
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{},
Data: map[string][]byte{
caCertKey: []byte(caPem),
caPrivKeyKey: []byte(caPrivPem),
},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any())
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
_, _, createUser, _, _, err := AdaptFunc(monitor, namespace, componentLabels, clusterDns, true)
assert.NoError(t, err)
query, err := createUser(user)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,111 @@
package certificates
import (
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"math/big"
"net"
"time"
)
func NewCA() (*rsa.PrivateKey, []byte, error) {
ca := &x509.Certificate{
SerialNumber: big.NewInt(2019),
Subject: pkix.Name{
Organization: []string{"Cockroach"},
CommonName: "Cockroach CA",
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(10, 0, 0),
IsCA: true,
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment | x509.KeyUsageCertSign,
BasicConstraintsValid: true,
}
caPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
caBytes, err := x509.CreateCertificate(rand.Reader, ca, ca, &caPrivKey.PublicKey, caPrivKey)
if err != nil {
return nil, nil, err
}
return caPrivKey, caBytes, nil
}
func NewClient(caPrivKey *rsa.PrivateKey, ca []byte, user string) (*rsa.PrivateKey, []byte, error) {
cert := &x509.Certificate{
SerialNumber: big.NewInt(1658),
Subject: pkix.Name{
Organization: []string{"Cockroach"},
CommonName: user,
},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(10, 0, 0),
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
}
certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
caCert, err := x509.ParseCertificate(ca)
if err != nil {
return nil, nil, err
}
certBytes, err := x509.CreateCertificate(rand.Reader, cert, caCert, &certPrivKey.PublicKey, caPrivKey)
if err != nil {
return nil, nil, err
}
return certPrivKey, certBytes, nil
}
func NewNode(caPrivKey *rsa.PrivateKey, ca []byte, namespace string, clusterDns string) (*rsa.PrivateKey, []byte, error) {
cert := &x509.Certificate{
SerialNumber: big.NewInt(1658),
Subject: pkix.Name{
Organization: []string{"Cockroach"},
CommonName: "node",
},
IPAddresses: []net.IP{net.IPv4(127, 0, 0, 1)},
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(10, 0, 0),
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
DNSNames: []string{
"localhost",
"cockroachdb-public",
"cockroachdb-public.default",
"cockroachdb-public." + namespace,
"cockroachdb-public." + namespace + ".svc." + clusterDns,
"*.cockroachdb",
"*.cockroachdb." + namespace,
"*.cockroachdb." + namespace + ".svc." + clusterDns,
},
}
certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
if err != nil {
return nil, nil, err
}
caCert, err := x509.ParseCertificate(ca)
if err != nil {
return nil, nil, err
}
certBytes, err := x509.CreateCertificate(rand.Reader, cert, caCert, &certPrivKey.PublicKey, caPrivKey)
if err != nil {
return nil, nil, err
}
return certPrivKey, certBytes, nil
}


@@ -0,0 +1,54 @@
package certificates
import (
"crypto/x509"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/stretchr/testify/assert"
"testing"
)
func TestCertificates_CAE(t *testing.T) {
priv, rootCa, err := NewCA()
assert.NoError(t, err)
assert.NotNil(t, priv)
pemCa, err := pem.EncodeCertificate(rootCa)
assert.NoError(t, err)
pemKey, err := pem.EncodeKey(priv)
assert.NoError(t, err)
assert.NotNil(t, pemCa)
assert.NotNil(t, pemKey)
_, err = x509.ParseCertificate(rootCa)
assert.NoError(t, err)
}
func TestCertificates_CA(t *testing.T) {
_, rootCa, err := NewCA()
assert.NoError(t, err)
_, err = x509.ParseCertificate(rootCa)
assert.NoError(t, err)
}
func TestCertificates_Chain(t *testing.T) {
rootKey, rootCert, err := NewCA()
assert.NoError(t, err)
rootPem, err := pem.EncodeCertificate(rootCert)
assert.NoError(t, err)
roots := x509.NewCertPool()
ok := roots.AppendCertsFromPEM(rootPem)
assert.True(t, ok)
_, clientCert, err := NewClient(rootKey, rootCert, "test")
assert.NoError(t, err)
cert, err := x509.ParseCertificate(clientCert)
assert.NoError(t, err)
opts := x509.VerifyOptions{
Roots: roots,
KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
}
_, err = cert.Verify(opts)
assert.NoError(t, err)
}


@@ -0,0 +1,101 @@
package client
import (
"errors"
"github.com/caos/zitadel/operator"
"strings"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/certificates"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
)
const (
clientSecretPrefix = "cockroachdb.client."
caCertKey = "ca.crt"
clientCertKeyPrefix = "client."
clientCertKeySuffix = ".crt"
clientPrivKeyKeyPrefix = "client."
clientPrivKeyKeySuffix = ".key"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
componentLabels *labels.Component,
) (
func(client string) operator.QueryFunc,
func(client string) operator.DestroyFunc,
error,
) {
return func(client string) operator.QueryFunc {
clientSecret := clientSecretPrefix + client
nameLabels := labels.MustForName(componentLabels, strings.ReplaceAll(clientSecret, "_", "-"))
clientCertKey := clientCertKeyPrefix + client + clientCertKeySuffix
clientPrivKeyKey := clientPrivKeyKeyPrefix + client + clientPrivKeyKeySuffix
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
queriers := make([]operator.QueryFunc, 0)
currentDB, err := core.ParseQueriedForDatabase(queried)
if err != nil {
return nil, err
}
caCert := currentDB.GetCertificate()
caKey := currentDB.GetCertificateKey()
if caKey == nil || len(caCert) == 0 {
return nil, errors.New("no ca-certificate found")
}
clientPrivKey, clientCert, err := certificates.NewClient(caKey, caCert, client)
if err != nil {
return nil, err
}
pemClientPrivKey, err := pem.EncodeKey(clientPrivKey)
if err != nil {
return nil, err
}
pemClientCert, err := pem.EncodeCertificate(clientCert)
if err != nil {
return nil, err
}
pemCaCert, err := pem.EncodeCertificate(caCert)
if err != nil {
return nil, err
}
clientSecretData := map[string]string{
caCertKey: string(pemCaCert),
clientPrivKeyKey: string(pemClientPrivKey),
clientCertKey: string(pemClientCert),
}
queryClientSecret, err := secret.AdaptFuncToEnsure(namespace, labels.AsSelectable(nameLabels), clientSecretData)
if err != nil {
return nil, err
}
queriers = append(queriers, operator.ResourceQueryToZitadelQuery(queryClientSecret))
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
}
}, func(client string) operator.DestroyFunc {
clientSecret := clientSecretPrefix + client
destroy, err := secret.AdaptFuncToDestroy(namespace, clientSecret)
if err != nil {
return nil
}
return operator.ResourceDestroyToZitadelDestroy(destroy)
},
nil
}


@@ -0,0 +1,169 @@
package client
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"testing"
)
const (
caPem = `-----BEGIN CERTIFICATE-----
MIIE9TCCAt2gAwIBAgICB+MwDQYJKoZIhvcNAQELBQAwKzESMBAGA1UEChMJQ29j
a3JvYWNoMRUwEwYDVQQDEwxDb2Nrcm9hY2ggQ0EwHhcNMjAxMjAxMTAyMjA0WhcN
MzAxMjAxMTAyMjA0WjArMRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENv
Y2tyb2FjaCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOja5IXJ
GUY9sFgyvWkav+O74gzcv8y69uJzSOx0piOP+sfpZWVeGEjqO4JgdcS5NPMrT5Tb
aJ52CjiOdlHVTyR87i5JmOIvA2qA4dWQmSbX7AQ8r9ptYDe9xMn+qFegcbr4YxCz
K9mmbDZUhlLO7cz3QV6nvRxGFWbzffo8BXZnOUCAOyOHrbnPpLumnfZlL5BckdtY
pS7jAlUpKSMBTK4AHcmrouFsNKHqlUopYXeJFdg9g1F0DuCnVP9x7+XcUW8dAVut
Q7Jswy+++GAXs6mPVsYFLXUSYNyW+Bfl/jKwx0XTQx/6iyNpK0XtAzjBZFjXwmot
0mODkqnfE3BB4lXxZ5knomAQEGSScUhCUb9upbF4uJJF27xr/kIkwtWxMGpCXds0
IxI+wNRCenhfFZEIQCzri0zn6WdN8b/gbv1BErNcccYwolPUv1oUgYbzowbQ5O2D
aLQPqO1VAZiZHLxb787bRywpCl33VZ1ptMHi2ogKjcsh4DQ9SsRj+rU/Tk5lyk7G
FHteyHcq12TGpz9/CQYMacl8yeRRHfNO3Rq0jFTYeD4+ZdVBPKeuTHyXGzy2T157
pgqMFzwqxlNYPzpuz7xsZBExJzCtcomB8fMlsCJnxV/kuMTTsWsPrRc7hsqzBCq3
plfYT8a7EBCVmJmDu+6mMVh+A9M2zZCqtV57AgMBAAGjIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQDRmkBYDk/F
lYTgMaL7yhRAowDR8mtOl06Oba/MCDDsmBvfOVIqY9Sa9sJtb78Uvecv9WAE4qpb
PNSaIvNmujmGQrBxtGwQdeHZvtXQ1lKAGBM+ukfz/NFU+6rwOdTuLgS5jTaiM4r1
uMoIG6S5z/brWGHAw8Qv2Fq05dSLnevmPWK903V/RnwmCKT9as5Qpfkaie2PNory
euxVGeGolxzgbSps3ZJlDSSymQBl10iJYYdGIsgcw5FHcCdpS4AutdpQbIFuCGk2
CHTcTTa4KMaVfRm/gKm1gh8UnuVDLBKQS8+1CHFYUnih8ozBNUyBo5r5L4BHJK/n
f0gnrYqaxtqPAyHkfo0PRc50HlAQRS/gW8ESv2PQmcSWlDggEzXt5MqFIYOfEXWE
gtC7Ct1P7gzIRxolYVsNSgBR62sJM1PUa39E5v0QJsmuM51txHuw/PGqkkBlEwx9
4u3FFXD9Pslg/g1l50Ek5O3/TWMta4Q+NagDDaOdkb4LRQr522ySgGIinBw00Xr+
Ics2L42rP1LQyaE7+BljDEwmGaT7mrl8QGZR9uv/9mbtk2+q7wAOQzcMSFFACfwx
KdKENmK0GpQZ6HaeDZO7wxGiAyFKhTNpd39XNOEhaKpWLSUxnZTLGlMIzvSyMGCj
PV3xm+Xg1Xvki7f/qC1djysMcaokoJzNdg==
-----END CERTIFICATE-----
`
caPrivPem = `-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA2Owc3IOljq3hkKO9q6jT1kFZp/+Y9dv/lTIbsMhV7gPrn2WS
Os2KQphcrB9DvGoZAxUh1aD8MUO7USIMhFodQEq7vfycDT16jrTnc1fSDDcfk3Ra
DvqZGcBkj0lc6w5LK3FNl2y6rA/26teGbyNaVXJfW01dKKy4E2or2+1+tUamyjiU
cvPzNWEoPPJT5HcadLTxZr/SKDvDXFyej4nKT9+j1pKmaqQrIrL4KXKq75LhU+kH
L4TG+kJ35isiv5OTpd2jG6ssz0i+ZEtX4hjIM6eCnZaiFT//33nSL3zl2LUjmyou
ks2FDuX9m90mg8UtcrpA/eVlwyg8nJ/d7/Yn3ZjuHVOgzE8YuQXauJ16HuLcFgEW
WQg1uwhOhrc5b0YJndZbJ2Re4qfaBoC5fODRXoUPvqG9k9kNp98xtclNCIyW8EpA
Su8QmK42ksOJox+OQamHzatrGppgIK77TY3ZFcBHETHClabgsfjv/1GNXXAexJI4
Cjntnou+yoc+LUj87WJfD3ERGgm0nDfnE7uZ7kccO5Lj/oajeO+Q+QJaLZ4I2jz+
a05k16naGGT29AnM+iwBqqoTODr4Z7905niZc6+fEOPml1V7wSuJs2eE4jOa3ixX
5tnruw74rN82Zfrkg6kWPOEfBBXzSotRiHv+BAV2tFbnC55ItnHn1ZE68V0CAwEA
AQKCAgEAssWsZ4PLXpIo8qYve5hQtSP4er7oVb8wnMnGDmSchOMQPbZc1D9uscGV
pnjBvzcFVAgHcWMSVJuIda4E+NK3hrPQlBvqk/LV3WRz1xhKUKzhRgm+6tdWc+We
OoRwonuOMchX9PKzyXgCu7pR3agaG499zOYuX4Yw0jdO3BqXsVf/v2rv1Oj9yEFB
AzGHOCN8VzCEPnTaAzR1pdnjB1K8vCUIhp8nrX2M2zT51lbdT0ISl6/VrzDTN46t
97AXHCHIrgrCENx6un4uAsQhMoHQBNoJiEyLWc371zYzpdVeK8HlDUyvQ2dDQGsF
Hn4c7r4C3aloRJbYzgSMJ1yNcOTCJpciQsq1VmCQFOHfbum2ejquXJ7BbeRGeHdM
145epckL+hbECTCpSs0Z5t90NdfoJspvr+3sOEt6h3DMUGjvobrf/s3KiRY5hHdc
x86Htx3QgWNCG6a+ga7h7s3t+4ZtoPPWn+vlAoxD/2eCzsDgE/XLDmC2T4yS8UOb
LIb4UN7wl2sNM8zhm4BfoiKfjGu4dPZIlsPP+ZKRby1O4ogHHoPREBTH1VSEplVM
fA/KSITV+rUfO3T/qXIFZ4/Wa5YZoULiMCOJOWNgXQzWvTf7Qr31LhhfXd+uIw30
LDtjdkpT43zlKxRosQFLiV7q3fVbvKPVQfxzBz7M1Gl74IllpmECggEBAOnAimKl
w/9mcMkEd2L2ZzDI+57W8DU6grUUILZfjJG+uvOiMmIA5M1wzsVHil3ihZthrJ5A
UUAKG97JgwjpIAliAzgtPZY2AK/bJz7Hht1EwewrV+RyHM+dMv6em2OA+XUjFSPX
VsecFDaUQebkypw236KdA4jgRG6hr4xObdXgTN6RFCv1mI1EijZ/YbKgPgTJaPI4
b2N5QokYFygUCwRxKIt7/Z4hQs9LbdW680NcXtPRPnS1SmwYJbi7wTX+o5f6nfNV
YvojborjXwNrZe0p+FfaEuD44wf6kNlRGfcKXoaAncXV/M5/oXf1dktKP630eq9g
0MAKFYJ6MAUheakCggEBAO2Rgfy/tEydBuP9BIfC84BXk8EwnRJ0ZfmXv77HFj3F
VT5QNU97I5gZJOFkL7OO8bL0/WljlqeBEht0DmHNmOt5Pyew4LRLpfeN6rClZgRN
V4wqKXjoZAIYa9khQcwwFNER1RdI+PkuusJtrvY6CbwwG9LbBq2NR4C1YSgaQnhV
NqdXK5dwrYEky6lI31sDD4BYeiJVKlkkNCQAVOC+04Mrsa9F0NG7TKXzji5hU8l5
x8squjvJ6vmobhmsRTL1LMpafUrt5pHL9jcWIZYxJJo9mB0zxJKcsLI8IOg2QPoj
tQ395FZ2YtjNzZa1CYeUOUiaQu+uvztfE36AdW/vUpUCggEAMV7bW66LURw/4hUx
ahOFBAbPLmNTZMqw5LIVnq9br0TLk73ESnLJ4KJc6coMbXv0oDbnEJ2hC5eW/10s
cetbOuAasfjMMzfAuWPeTCI0V/O3ybv12mhHsYoQRTsWstOA3L7GLkXDLHHIyyZR
LQVRzeDBJ0Vmg7hqe7tmqom+JRg05CVcT1SWHfBGCPCqn+G8d6Jaqh5FWIs6BF60
NWDWWt/TonJTxNxdkg7qaeQMkUOnO7HMMTZBO8d14Ci3zEG2J9llFwoH17E4Hdmc
Lcq3QnpE27lRl3a57Ot9QIkipMzp3hq4OBrURIEsh3uuuoQ6IvGqH/Sg4o6+sEpC
bjL90QKCAQBDc/0kdooK9sruEPkoUwIwfq1FPThb9RC/PYcD9CMshssdVkjMuHny
xbDjDj89DGk0FrudINm11b/+a4Vp36Z7tYFpE5+5kYEeOP1aCpxcvFkPQyljWxiK
P8TfccHs5/oBIr8OTXnjxpDgg6QZ5YC+HirIQ8gxntuef+GGMW6OHCPYf7ew2B1r
fbcV6csBXG0aVATZmrTbepwTXMS8y3Hi3JUm3vvbkQLCW9US9i+EFT/VP9yA/WPq
Xxhj0bYUMej1y5unmsTMwMy392Cx9GIgKTz3jatStYq2ELyHMmBgpaLSxjP/GL4Y
MNce42hBRqS9KI+43jUN9oDiejbeAWXBAoIBACZKnS6EppNDSdos0f32J5TxEUxv
lF9AuVXEAPDR/m3+PlSRzrlf7uowdHTfsemHMSduL8weLNhduprwz4TmW9Fo6fSF
UePLNbXcMX3omAk+AKKOiexLG0fCXGW2zr4nHZJzTbK2+La3yLeUcoPu1puNpLiq
LVj2bH3zKWVRA9/ovuN6V5w18ojjdqOw4bw5qXcdZhoWLxI8Q9Oqua24f/dnRpuI
I8mRtPQ3+vuOKbTT+/80eAUpSfEKwAg1Mjgury9q1/B4Ib6hAGzpJuXxG7xQjnsJ
EFcN1kvdg5WGK41+fYMdexPaLamjhDGN0e1vxJfAukWIAsBMwp8wfEWZvzA=
-----END RSA PRIVATE KEY-----
`
)
func TestClient_Adapt1(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
user := "test"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
k8sClient.EXPECT().ApplySecret(gomock.Any())
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
createUser, _, err := AdaptFunc(monitor, namespace, componentLabels)
assert.NoError(t, err)
query := createUser(user)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestClient_Adapt2(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs2"
user := "test2"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb")
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
k8sClient.EXPECT().ApplySecret(gomock.Any())
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
createUser, _, err := AdaptFunc(monitor, namespace, componentLabels)
assert.NoError(t, err)
query := createUser(user)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,32 @@
package client
import (
"strings"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/kubernetes"
)
func QueryCertificates(
namespace string,
selector *labels.Selector,
k8sClient kubernetes.ClientInt,
) (
[]string,
error,
) {
list, err := k8sClient.ListSecrets(namespace, labels.MustK8sMap(selector))
if err != nil {
return nil, err
}
certs := []string{}
for _, secret := range list.Items {
if strings.HasPrefix(secret.Name, clientSecretPrefix) {
certs = append(certs, strings.TrimPrefix(secret.Name, clientSecretPrefix))
}
}
return certs, nil
}


@@ -0,0 +1,95 @@
package client
import (
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestClient_Query0(t *testing.T) {
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "testComponent")
clientLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ListSecrets(namespace, clientLabels).Times(1).Return(secretList, nil)
users, err := QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
assert.NoError(t, err)
assert.Equal(t, users, []string{})
}
func TestClient_Query(t *testing.T) {
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "testComponent")
clientLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{
Name: clientSecretPrefix + "test",
},
Data: map[string][]byte{},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, clientLabels).Times(1).Return(secretList, nil)
users, err := QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
assert.NoError(t, err)
assert.Contains(t, users, "test")
}
func TestClient_Query2(t *testing.T) {
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "testKind", "v0"), "testComponent")
clientLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{
Name: clientSecretPrefix + "test1",
},
Data: map[string][]byte{},
Type: "Opaque",
}, {
ObjectMeta: metav1.ObjectMeta{
Name: clientSecretPrefix + "test2",
},
Data: map[string][]byte{},
Type: "Opaque",
}},
}
k8sClient.EXPECT().ListSecrets(namespace, clientLabels).Times(1).Return(secretList, nil)
users, err := QueryCertificates(namespace, labels.DeriveComponentSelector(componentLabels, false), k8sClient)
assert.NoError(t, err)
assert.Equal(t, users, []string{"test1", "test2"})
}


@@ -0,0 +1,10 @@
package certificate
import (
"crypto/rsa"
)
type Current struct {
CertificateKey *rsa.PrivateKey
Certificate []byte
}


@@ -0,0 +1,150 @@
package node
import (
"crypto/rsa"
"errors"
"reflect"

"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/secret"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/certificates"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
)
const (
caCertKey = "ca.crt"
caPrivKeyKey = "ca.key"
nodeCertKey = "node.crt"
nodePrivKeyKey = "node.key"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
nameLabels *labels.Name,
clusterDns string,
generateIfNotExists bool,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
caPrivKey := new(rsa.PrivateKey)
caCert := make([]byte, 0)
nodeSecretSelector := labels.MustK8sMap(labels.DeriveNameSelector(nameLabels, false))
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
queriers := make([]operator.QueryFunc, 0)
currentDB, err := core.ParseQueriedForDatabase(queried)
if err != nil {
return nil, err
}
allNodeSecrets, err := k8sClient.ListSecrets(namespace, nodeSecretSelector)
if err != nil {
return nil, err
}
if len(allNodeSecrets.Items) == 0 {
if !generateIfNotExists {
return nil, errors.New("node secret not found")
}
emptyCert := true
emptyKey := true
if currentCaCert := currentDB.GetCertificate(); currentCaCert != nil && len(currentCaCert) != 0 {
emptyCert = false
caCert = currentCaCert
}
if currentCaCertKey := currentDB.GetCertificateKey(); currentCaCertKey != nil && !reflect.DeepEqual(currentCaCertKey, &rsa.PrivateKey{}) {
emptyKey = false
caPrivKey = currentCaCertKey
}
if emptyCert || emptyKey {
caPrivKeyInternal, caCertInternal, err := certificates.NewCA()
if err != nil {
return nil, err
}
caPrivKey = caPrivKeyInternal
caCert = caCertInternal
nodePrivKey, nodeCert, err := certificates.NewNode(caPrivKey, caCert, namespace, clusterDns)
if err != nil {
return nil, err
}
pemNodePrivKey, err := pem.EncodeKey(nodePrivKey)
if err != nil {
return nil, err
}
pemCaPrivKey, err := pem.EncodeKey(caPrivKey)
if err != nil {
return nil, err
}
pemCaCert, err := pem.EncodeCertificate(caCert)
if err != nil {
return nil, err
}
pemNodeCert, err := pem.EncodeCertificate(nodeCert)
if err != nil {
return nil, err
}
nodeSecretData := map[string]string{
caPrivKeyKey: string(pemCaPrivKey),
caCertKey: string(pemCaCert),
nodePrivKeyKey: string(pemNodePrivKey),
nodeCertKey: string(pemNodeCert),
}
queryNodeSecret, err := secret.AdaptFuncToEnsure(namespace, labels.AsSelectable(nameLabels), nodeSecretData)
if err != nil {
return nil, err
}
queriers = append(queriers, operator.ResourceQueryToZitadelQuery(queryNodeSecret))
}
} else {
key, err := pem.DecodeKey(allNodeSecrets.Items[0].Data[caPrivKeyKey])
if err != nil {
return nil, err
}
caPrivKey = key
cert, err := pem.DecodeCertificate(allNodeSecrets.Items[0].Data[caCertKey])
if err != nil {
return nil, err
}
caCert = cert
}
currentDB.SetCertificate(caCert)
currentDB.SetCertificateKey(caPrivKey)
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
}, func(k8sClient kubernetes.ClientInt) error {
allNodeSecrets, err := k8sClient.ListSecrets(namespace, nodeSecretSelector)
if err != nil {
return err
}
for _, deleteSecret := range allNodeSecrets.Items {
destroyer, err := secret.AdaptFuncToDestroy(namespace, deleteSecret.Name)
if err != nil {
return err
}
if err := destroyer(k8sClient); err != nil {
return err
}
}
return nil
}, nil
}


@@ -0,0 +1,245 @@
package node
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases/core"
coremock "github.com/caos/zitadel/operator/database/kinds/databases/core/mock"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate/pem"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
const (
caPem = `-----BEGIN CERTIFICATE-----
MIIE9TCCAt2gAwIBAgICB+MwDQYJKoZIhvcNAQELBQAwKzESMBAGA1UEChMJQ29j
a3JvYWNoMRUwEwYDVQQDEwxDb2Nrcm9hY2ggQ0EwHhcNMjAxMjAxMTAyMjA0WhcN
MzAxMjAxMTAyMjA0WjArMRIwEAYDVQQKEwlDb2Nrcm9hY2gxFTATBgNVBAMTDENv
Y2tyb2FjaCBDQTCCAiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAOja5IXJ
GUY9sFgyvWkav+O74gzcv8y69uJzSOx0piOP+sfpZWVeGEjqO4JgdcS5NPMrT5Tb
aJ52CjiOdlHVTyR87i5JmOIvA2qA4dWQmSbX7AQ8r9ptYDe9xMn+qFegcbr4YxCz
K9mmbDZUhlLO7cz3QV6nvRxGFWbzffo8BXZnOUCAOyOHrbnPpLumnfZlL5BckdtY
pS7jAlUpKSMBTK4AHcmrouFsNKHqlUopYXeJFdg9g1F0DuCnVP9x7+XcUW8dAVut
Q7Jswy+++GAXs6mPVsYFLXUSYNyW+Bfl/jKwx0XTQx/6iyNpK0XtAzjBZFjXwmot
0mODkqnfE3BB4lXxZ5knomAQEGSScUhCUb9upbF4uJJF27xr/kIkwtWxMGpCXds0
IxI+wNRCenhfFZEIQCzri0zn6WdN8b/gbv1BErNcccYwolPUv1oUgYbzowbQ5O2D
aLQPqO1VAZiZHLxb787bRywpCl33VZ1ptMHi2ogKjcsh4DQ9SsRj+rU/Tk5lyk7G
FHteyHcq12TGpz9/CQYMacl8yeRRHfNO3Rq0jFTYeD4+ZdVBPKeuTHyXGzy2T157
pgqMFzwqxlNYPzpuz7xsZBExJzCtcomB8fMlsCJnxV/kuMTTsWsPrRc7hsqzBCq3
plfYT8a7EBCVmJmDu+6mMVh+A9M2zZCqtV57AgMBAAGjIzAhMA4GA1UdDwEB/wQE
AwICpDAPBgNVHRMBAf8EBTADAQH/MA0GCSqGSIb3DQEBCwUAA4ICAQDRmkBYDk/F
lYTgMaL7yhRAowDR8mtOl06Oba/MCDDsmBvfOVIqY9Sa9sJtb78Uvecv9WAE4qpb
PNSaIvNmujmGQrBxtGwQdeHZvtXQ1lKAGBM+ukfz/NFU+6rwOdTuLgS5jTaiM4r1
uMoIG6S5z/brWGHAw8Qv2Fq05dSLnevmPWK903V/RnwmCKT9as5Qpfkaie2PNory
euxVGeGolxzgbSps3ZJlDSSymQBl10iJYYdGIsgcw5FHcCdpS4AutdpQbIFuCGk2
CHTcTTa4KMaVfRm/gKm1gh8UnuVDLBKQS8+1CHFYUnih8ozBNUyBo5r5L4BHJK/n
f0gnrYqaxtqPAyHkfo0PRc50HlAQRS/gW8ESv2PQmcSWlDggEzXt5MqFIYOfEXWE
gtC7Ct1P7gzIRxolYVsNSgBR62sJM1PUa39E5v0QJsmuM51txHuw/PGqkkBlEwx9
4u3FFXD9Pslg/g1l50Ek5O3/TWMta4Q+NagDDaOdkb4LRQr522ySgGIinBw00Xr+
Ics2L42rP1LQyaE7+BljDEwmGaT7mrl8QGZR9uv/9mbtk2+q7wAOQzcMSFFACfwx
KdKENmK0GpQZ6HaeDZO7wxGiAyFKhTNpd39XNOEhaKpWLSUxnZTLGlMIzvSyMGCj
PV3xm+Xg1Xvki7f/qC1djysMcaokoJzNdg==
-----END CERTIFICATE-----
`
caPrivPem = `-----BEGIN RSA PRIVATE KEY-----
MIIJKAIBAAKCAgEA2Owc3IOljq3hkKO9q6jT1kFZp/+Y9dv/lTIbsMhV7gPrn2WS
Os2KQphcrB9DvGoZAxUh1aD8MUO7USIMhFodQEq7vfycDT16jrTnc1fSDDcfk3Ra
DvqZGcBkj0lc6w5LK3FNl2y6rA/26teGbyNaVXJfW01dKKy4E2or2+1+tUamyjiU
cvPzNWEoPPJT5HcadLTxZr/SKDvDXFyej4nKT9+j1pKmaqQrIrL4KXKq75LhU+kH
L4TG+kJ35isiv5OTpd2jG6ssz0i+ZEtX4hjIM6eCnZaiFT//33nSL3zl2LUjmyou
ks2FDuX9m90mg8UtcrpA/eVlwyg8nJ/d7/Yn3ZjuHVOgzE8YuQXauJ16HuLcFgEW
WQg1uwhOhrc5b0YJndZbJ2Re4qfaBoC5fODRXoUPvqG9k9kNp98xtclNCIyW8EpA
Su8QmK42ksOJox+OQamHzatrGppgIK77TY3ZFcBHETHClabgsfjv/1GNXXAexJI4
Cjntnou+yoc+LUj87WJfD3ERGgm0nDfnE7uZ7kccO5Lj/oajeO+Q+QJaLZ4I2jz+
a05k16naGGT29AnM+iwBqqoTODr4Z7905niZc6+fEOPml1V7wSuJs2eE4jOa3ixX
5tnruw74rN82Zfrkg6kWPOEfBBXzSotRiHv+BAV2tFbnC55ItnHn1ZE68V0CAwEA
AQKCAgEAssWsZ4PLXpIo8qYve5hQtSP4er7oVb8wnMnGDmSchOMQPbZc1D9uscGV
pnjBvzcFVAgHcWMSVJuIda4E+NK3hrPQlBvqk/LV3WRz1xhKUKzhRgm+6tdWc+We
OoRwonuOMchX9PKzyXgCu7pR3agaG499zOYuX4Yw0jdO3BqXsVf/v2rv1Oj9yEFB
AzGHOCN8VzCEPnTaAzR1pdnjB1K8vCUIhp8nrX2M2zT51lbdT0ISl6/VrzDTN46t
97AXHCHIrgrCENx6un4uAsQhMoHQBNoJiEyLWc371zYzpdVeK8HlDUyvQ2dDQGsF
Hn4c7r4C3aloRJbYzgSMJ1yNcOTCJpciQsq1VmCQFOHfbum2ejquXJ7BbeRGeHdM
145epckL+hbECTCpSs0Z5t90NdfoJspvr+3sOEt6h3DMUGjvobrf/s3KiRY5hHdc
x86Htx3QgWNCG6a+ga7h7s3t+4ZtoPPWn+vlAoxD/2eCzsDgE/XLDmC2T4yS8UOb
LIb4UN7wl2sNM8zhm4BfoiKfjGu4dPZIlsPP+ZKRby1O4ogHHoPREBTH1VSEplVM
fA/KSITV+rUfO3T/qXIFZ4/Wa5YZoULiMCOJOWNgXQzWvTf7Qr31LhhfXd+uIw30
LDtjdkpT43zlKxRosQFLiV7q3fVbvKPVQfxzBz7M1Gl74IllpmECggEBAOnAimKl
w/9mcMkEd2L2ZzDI+57W8DU6grUUILZfjJG+uvOiMmIA5M1wzsVHil3ihZthrJ5A
UUAKG97JgwjpIAliAzgtPZY2AK/bJz7Hht1EwewrV+RyHM+dMv6em2OA+XUjFSPX
VsecFDaUQebkypw236KdA4jgRG6hr4xObdXgTN6RFCv1mI1EijZ/YbKgPgTJaPI4
b2N5QokYFygUCwRxKIt7/Z4hQs9LbdW680NcXtPRPnS1SmwYJbi7wTX+o5f6nfNV
YvojborjXwNrZe0p+FfaEuD44wf6kNlRGfcKXoaAncXV/M5/oXf1dktKP630eq9g
0MAKFYJ6MAUheakCggEBAO2Rgfy/tEydBuP9BIfC84BXk8EwnRJ0ZfmXv77HFj3F
VT5QNU97I5gZJOFkL7OO8bL0/WljlqeBEht0DmHNmOt5Pyew4LRLpfeN6rClZgRN
V4wqKXjoZAIYa9khQcwwFNER1RdI+PkuusJtrvY6CbwwG9LbBq2NR4C1YSgaQnhV
NqdXK5dwrYEky6lI31sDD4BYeiJVKlkkNCQAVOC+04Mrsa9F0NG7TKXzji5hU8l5
x8squjvJ6vmobhmsRTL1LMpafUrt5pHL9jcWIZYxJJo9mB0zxJKcsLI8IOg2QPoj
tQ395FZ2YtjNzZa1CYeUOUiaQu+uvztfE36AdW/vUpUCggEAMV7bW66LURw/4hUx
ahOFBAbPLmNTZMqw5LIVnq9br0TLk73ESnLJ4KJc6coMbXv0oDbnEJ2hC5eW/10s
cetbOuAasfjMMzfAuWPeTCI0V/O3ybv12mhHsYoQRTsWstOA3L7GLkXDLHHIyyZR
LQVRzeDBJ0Vmg7hqe7tmqom+JRg05CVcT1SWHfBGCPCqn+G8d6Jaqh5FWIs6BF60
NWDWWt/TonJTxNxdkg7qaeQMkUOnO7HMMTZBO8d14Ci3zEG2J9llFwoH17E4Hdmc
Lcq3QnpE27lRl3a57Ot9QIkipMzp3hq4OBrURIEsh3uuuoQ6IvGqH/Sg4o6+sEpC
bjL90QKCAQBDc/0kdooK9sruEPkoUwIwfq1FPThb9RC/PYcD9CMshssdVkjMuHny
xbDjDj89DGk0FrudINm11b/+a4Vp36Z7tYFpE5+5kYEeOP1aCpxcvFkPQyljWxiK
P8TfccHs5/oBIr8OTXnjxpDgg6QZ5YC+HirIQ8gxntuef+GGMW6OHCPYf7ew2B1r
fbcV6csBXG0aVATZmrTbepwTXMS8y3Hi3JUm3vvbkQLCW9US9i+EFT/VP9yA/WPq
Xxhj0bYUMej1y5unmsTMwMy392Cx9GIgKTz3jatStYq2ELyHMmBgpaLSxjP/GL4Y
MNce42hBRqS9KI+43jUN9oDiejbeAWXBAoIBACZKnS6EppNDSdos0f32J5TxEUxv
lF9AuVXEAPDR/m3+PlSRzrlf7uowdHTfsemHMSduL8weLNhduprwz4TmW9Fo6fSF
UePLNbXcMX3omAk+AKKOiexLG0fCXGW2zr4nHZJzTbK2+La3yLeUcoPu1puNpLiq
LVj2bH3zKWVRA9/ovuN6V5w18ojjdqOw4bw5qXcdZhoWLxI8Q9Oqua24f/dnRpuI
I8mRtPQ3+vuOKbTT+/80eAUpSfEKwAg1Mjgury9q1/B4Ib6hAGzpJuXxG7xQjnsJ
EFcN1kvdg5WGK41+fYMdexPaLamjhDGN0e1vxJfAukWIAsBMwp8wfEWZvzA=
-----END RSA PRIVATE KEY-----
`
)
func TestNode_AdaptWithCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb"), "testNode")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "testNode",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
ca, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPriv, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(ca)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(caPriv)
dbCurrent.EXPECT().SetCertificate(ca).Times(1)
dbCurrent.EXPECT().SetCertificateKey(caPriv).Times(1)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, err := AdaptFunc(monitor, namespace, nameLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptWithoutCA(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb"), "testNode")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "testNode",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{},
}
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
dbCurrent.EXPECT().GetCertificate().Times(1).Return(nil)
dbCurrent.EXPECT().GetCertificateKey().Times(1).Return(nil)
dbCurrent.EXPECT().SetCertificate(gomock.Any()).Times(1)
dbCurrent.EXPECT().SetCertificateKey(gomock.Any()).Times(1)
k8sClient.EXPECT().ApplySecret(gomock.Any()).Times(1).Return(nil)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, err := AdaptFunc(monitor, namespace, nameLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestNode_AdaptAlreadyExisting(t *testing.T) {
monitor := mntr.Monitor{}
namespace := "testNs"
clusterDns := "testDns"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "cockroachdb"), "testNode")
nodeLabels := map[string]string{
"app.kubernetes.io/component": "cockroachdb",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": "testNode",
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
dbCurrent := coremock.NewMockDatabaseCurrent(gomock.NewController(t))
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
secretList := &corev1.SecretList{
Items: []corev1.Secret{{
ObjectMeta: metav1.ObjectMeta{},
Data: map[string][]byte{
caCertKey: []byte(caPem),
caPrivKeyKey: []byte(caPrivPem),
},
Type: "Opaque",
}},
}
caCert, err := pem.DecodeCertificate([]byte(caPem))
assert.NoError(t, err)
caPrivKey, err := pem.DecodeKey([]byte(caPrivPem))
assert.NoError(t, err)
dbCurrent.EXPECT().SetCertificate(caCert).Times(1)
dbCurrent.EXPECT().SetCertificateKey(caPrivKey).Times(1)
k8sClient.EXPECT().ListSecrets(namespace, nodeLabels).Times(1).Return(secretList, nil)
queried := map[string]interface{}{}
current := &tree.Tree{
Parsed: dbCurrent,
}
core.SetQueriedForDatabase(queried, current)
query, _, err := AdaptFunc(monitor, namespace, nameLabels, clusterDns, true)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,48 @@
package pem
import (
"bytes"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
)
func EncodeCertificate(data []byte) ([]byte, error) {
certPem := new(bytes.Buffer)
if err := pem.Encode(certPem, &pem.Block{
Type: "CERTIFICATE",
Bytes: data,
}); err != nil {
return nil, err
}
return certPem.Bytes(), nil
}
func EncodeKey(key *rsa.PrivateKey) ([]byte, error) {
keyPem := new(bytes.Buffer)
if err := pem.Encode(keyPem, &pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: x509.MarshalPKCS1PrivateKey(key),
}); err != nil {
return nil, err
}
return keyPem.Bytes(), nil
}
func DecodeKey(data []byte) (*rsa.PrivateKey, error) {
block, _ := pem.Decode(data)
if block == nil || block.Type != "RSA PRIVATE KEY" {
return nil, errors.New("failed to decode PEM block containing private key")
}
return x509.ParsePKCS1PrivateKey(block.Bytes)
}
func DecodeCertificate(data []byte) ([]byte, error) {
block, _ := pem.Decode(data)
if block == nil || block.Type != "CERTIFICATE" {
return nil, errors.New("failed to decode PEM block containing certificate")
}
return block.Bytes, nil
}


@@ -0,0 +1,73 @@
package managed
import (
"crypto/rsa"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate"
)
type Current struct {
Common *tree.Common `yaml:",inline"`
Current *CurrentDB
}
type CurrentDB struct {
URL string
Port string
ReadyFunc operator.EnsureFunc
CA *certificate.Current
AddUserFunc func(user string) (operator.QueryFunc, error)
DeleteUserFunc func(user string) (operator.DestroyFunc, error)
ListUsersFunc func(k8sClient kubernetes.ClientInt) ([]string, error)
ListDatabasesFunc func(k8sClient kubernetes.ClientInt) ([]string, error)
}
func (c *Current) GetURL() string {
return c.Current.URL
}
func (c *Current) GetPort() string {
return c.Current.Port
}
func (c *Current) GetReadyQuery() operator.EnsureFunc {
return c.Current.ReadyFunc
}
func (c *Current) GetCA() *certificate.Current {
return c.Current.CA
}
func (c *Current) GetCertificateKey() *rsa.PrivateKey {
return c.Current.CA.CertificateKey
}
func (c *Current) SetCertificateKey(key *rsa.PrivateKey) {
c.Current.CA.CertificateKey = key
}
func (c *Current) GetCertificate() []byte {
return c.Current.CA.Certificate
}
func (c *Current) SetCertificate(cert []byte) {
c.Current.CA.Certificate = cert
}
func (c *Current) GetListDatabasesFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return c.Current.ListDatabasesFunc
}
func (c *Current) GetListUsersFunc() func(k8sClient kubernetes.ClientInt) ([]string, error) {
return c.Current.ListUsersFunc
}
func (c *Current) GetAddUserFunc() func(user string) (operator.QueryFunc, error) {
return c.Current.AddUserFunc
}
func (c *Current) GetDeleteUserFunc() func(user string) (operator.DestroyFunc, error) {
return c.Current.DeleteUserFunc
}


@@ -0,0 +1,50 @@
package database
import (
"fmt"
"github.com/caos/zitadel/operator"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
deployName string,
containerName string,
certsDir string,
userName string,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
cmdSql := fmt.Sprintf("cockroach sql --certs-dir=%s", certsDir)
createSql := fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", userName)
deleteSql := fmt.Sprintf("DROP DATABASE IF EXISTS %s", userName)
ensureDatabase := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, createSql))
}
destroyDatabase := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, deleteSql))
}
queriers := []operator.QueryFunc{
operator.EnsureFuncToQueryFunc(ensureDatabase),
}
destroyers := []operator.DestroyFunc{
destroyDatabase,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}


@@ -0,0 +1,39 @@
package managed
import (
"github.com/caos/orbos/pkg/kubernetes/k8s"
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec Spec
}
type Spec struct {
Verbose bool
Force bool `yaml:"force,omitempty"`
ReplicaCount int `yaml:"replicaCount,omitempty"`
StorageCapacity string `yaml:"storageCapacity,omitempty"`
StorageClass string `yaml:"storageClass,omitempty"`
NodeSelector map[string]string `yaml:"nodeSelector,omitempty"`
Tolerations []corev1.Toleration `yaml:"tolerations,omitempty"`
ClusterDns string `yaml:"clusterDNS,omitempty"`
Backups map[string]*tree.Tree `yaml:"backups,omitempty"`
Resources *k8s.Resources `yaml:"resources,omitempty"`
}
func parseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{
Common: desiredTree.Common,
Spec: Spec{},
}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
return desiredKind, nil
}


@@ -0,0 +1,36 @@
package managed
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/backups"
"github.com/pkg/errors"
)
func BackupList() func(monitor mntr.Monitor, desired *tree.Tree) ([]string, error) {
return func(monitor mntr.Monitor, desired *tree.Tree) ([]string, error) {
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
if !monitor.IsVerbose() && desiredKind.Spec.Verbose {
monitor.Verbose()
}
backuplists := make([]string, 0)
if desiredKind.Spec.Backups != nil {
for name, def := range desiredKind.Spec.Backups {
backuplist, err := backups.GetBackupList(monitor, name, def)
if err != nil {
return nil, err
}
for _, backup := range backuplist {
backuplists = append(backuplists, name+"."+backup)
}
}
}
return backuplists, nil
}
}


@@ -0,0 +1,106 @@
package rbac
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/clusterrole"
"github.com/caos/orbos/pkg/kubernetes/resources/clusterrolebinding"
"github.com/caos/orbos/pkg/kubernetes/resources/role"
"github.com/caos/orbos/pkg/kubernetes/resources/rolebinding"
"github.com/caos/orbos/pkg/kubernetes/resources/serviceaccount"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
nameLabels *labels.Name,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("component", "rbac")
serviceAccountLabels := nameLabels
roleLabels := nameLabels
clusterRoleLabels := nameLabels
destroySA, err := serviceaccount.AdaptFuncToDestroy(namespace, serviceAccountLabels.Name())
if err != nil {
return nil, nil, err
}
destroyR, err := role.AdaptFuncToDestroy(namespace, roleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyCR, err := clusterrole.AdaptFuncToDestroy(clusterRoleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyRB, err := rolebinding.AdaptFuncToDestroy(namespace, roleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyCRB, err := clusterrolebinding.AdaptFuncToDestroy(roleLabels.Name())
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyR),
operator.ResourceDestroyToZitadelDestroy(destroyCR),
operator.ResourceDestroyToZitadelDestroy(destroyRB),
operator.ResourceDestroyToZitadelDestroy(destroyCRB),
operator.ResourceDestroyToZitadelDestroy(destroySA),
}
querySA, err := serviceaccount.AdaptFuncToEnsure(namespace, serviceAccountLabels)
if err != nil {
return nil, nil, err
}
queryR, err := role.AdaptFuncToEnsure(namespace, roleLabels, []string{""}, []string{"secrets"}, []string{"create", "get"})
if err != nil {
return nil, nil, err
}
queryCR, err := clusterrole.AdaptFuncToEnsure(clusterRoleLabels, []string{"certificates.k8s.io"}, []string{"certificatesigningrequests"}, []string{"create", "get", "watch"})
if err != nil {
return nil, nil, err
}
subjects := []rolebinding.Subject{{Kind: "ServiceAccount", Name: serviceAccountLabels.Name(), Namespace: namespace}}
queryRB, err := rolebinding.AdaptFuncToEnsure(namespace, roleLabels, subjects, roleLabels.Name())
if err != nil {
return nil, nil, err
}
subjectsCRB := []clusterrolebinding.Subject{{Kind: "ServiceAccount", Name: serviceAccountLabels.Name(), Namespace: namespace}}
queryCRB, err := clusterrolebinding.AdaptFuncToEnsure(roleLabels, subjectsCRB, roleLabels.Name())
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
//serviceaccount
operator.ResourceQueryToZitadelQuery(querySA),
//rbac
operator.ResourceQueryToZitadelQuery(queryR),
operator.ResourceQueryToZitadelQuery(queryCR),
operator.ResourceQueryToZitadelQuery(queryRB),
operator.ResourceQueryToZitadelQuery(queryCRB),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}
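Note the ordering in the destroyers slice above: the roles and bindings are destroyed before the service account they refer to. A sketch of the sequential destroy composition, again with simplified stand-ins for the `operator` types:

```go
package main

import "fmt"

type DestroyFunc func() error

// destroyersToDestroyFunc mirrors the idea behind
// operator.DestroyersToDestroyFunc: run each destroyer in the listed order
// and stop at the first error, so later entries (like the service account)
// survive a partial failure of earlier ones.
func destroyersToDestroyFunc(destroyers []DestroyFunc) DestroyFunc {
	return func() error {
		for _, d := range destroyers {
			if err := d(); err != nil {
				return err
			}
		}
		return nil
	}
}

func main() {
	order := []string{}
	mk := func(name string) DestroyFunc {
		return func() error { order = append(order, name); return nil }
	}
	destroy := destroyersToDestroyFunc([]DestroyFunc{
		mk("rolebinding"), mk("role"), mk("serviceaccount"),
	})
	if err := destroy(); err != nil {
		panic(err)
	}
	fmt.Println(order)
}
```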


@@ -0,0 +1,207 @@
package rbac
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
rbacv1 "k8s.io/api/rbac/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
)
func TestRbac_Adapt1(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
name := "testName"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "testComponent"), name)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyServiceAccount(&corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
}})
k8sClient.EXPECT().ApplyRole(&rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{"create", "get"},
},
},
})
k8sClient.EXPECT().ApplyClusterRole(&rbacv1.ClusterRole{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{"certificates.k8s.io"},
Resources: []string{"certificatesigningrequests"},
Verbs: []string{"create", "get", "watch"},
},
},
})
k8sClient.EXPECT().ApplyRoleBinding(&rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
Name: name,
Kind: "Role",
APIGroup: "rbac.authorization.k8s.io",
},
})
k8sClient.EXPECT().ApplyClusterRoleBinding(&rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Name: name,
Kind: "ClusterRole",
},
})
query, _, err := AdaptFunc(monitor, namespace, nameLabels)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestRbac_Adapt2(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
name := "testName2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "cockroachdb", "v0"), "testComponent2"), name)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyServiceAccount(&corev1.ServiceAccount{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
}})
k8sClient.EXPECT().ApplyRole(&rbacv1.Role{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{""},
Resources: []string{"secrets"},
Verbs: []string{"create", "get"},
},
},
})
k8sClient.EXPECT().ApplyClusterRole(&rbacv1.ClusterRole{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Rules: []rbacv1.PolicyRule{
{
APIGroups: []string{"certificates.k8s.io"},
Resources: []string{"certificatesigningrequests"},
Verbs: []string{"create", "get", "watch"},
},
},
})
k8sClient.EXPECT().ApplyRoleBinding(&rbacv1.RoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
Name: name,
Kind: "Role",
APIGroup: "rbac.authorization.k8s.io",
},
})
k8sClient.EXPECT().ApplyClusterRoleBinding(&rbacv1.ClusterRoleBinding{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Labels: k8sLabels,
},
Subjects: []rbacv1.Subject{{
Kind: "ServiceAccount",
Name: name,
Namespace: namespace,
}},
RoleRef: rbacv1.RoleRef{
APIGroup: "rbac.authorization.k8s.io",
Name: name,
Kind: "ClusterRole",
},
})
query, _, err := AdaptFunc(monitor, namespace, nameLabels)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,79 @@
package services
import (
"github.com/caos/zitadel/operator"
"strconv"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/service"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
publicServiceNameLabels *labels.Name,
privateServiceNameLabels *labels.Name,
cockroachSelector *labels.Selector,
cockroachPort int32,
cockroachHTTPPort int32,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
internalMonitor := monitor.WithField("type", "services")
publicServiceSelectable := labels.AsSelectable(publicServiceNameLabels)
destroySPD, err := service.AdaptFuncToDestroy("default", publicServiceSelectable.Name())
if err != nil {
return nil, nil, err
}
destroySP, err := service.AdaptFuncToDestroy(namespace, publicServiceSelectable.Name())
if err != nil {
return nil, nil, err
}
destroyS, err := service.AdaptFuncToDestroy(namespace, privateServiceNameLabels.Name())
if err != nil {
return nil, nil, err
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroySPD),
operator.ResourceDestroyToZitadelDestroy(destroySP),
operator.ResourceDestroyToZitadelDestroy(destroyS),
}
ports := []service.Port{
{Port: 26257, TargetPort: strconv.Itoa(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: strconv.Itoa(int(cockroachHTTPPort)), Name: "http"},
}
querySPD, err := service.AdaptFuncToEnsure("default", publicServiceSelectable, ports, "", cockroachSelector, false, "", "")
if err != nil {
return nil, nil, err
}
querySP, err := service.AdaptFuncToEnsure(namespace, publicServiceSelectable, ports, "", cockroachSelector, false, "", "")
if err != nil {
return nil, nil, err
}
queryS, err := service.AdaptFuncToEnsure(namespace, privateServiceNameLabels, ports, "", cockroachSelector, true, "None", "")
if err != nil {
return nil, nil, err
}
queriers := []operator.QueryFunc{
operator.ResourceQueryToZitadelQuery(querySPD),
operator.ResourceQueryToZitadelQuery(querySP),
operator.ResourceQueryToZitadelQuery(queryS),
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(internalMonitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(internalMonitor, destroyers),
nil
}
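All three services expose the fixed cluster ports 26257 (grpc) and 8080 (http), while the target ports come from the parameters via `strconv.Itoa`; the private service is additionally headless (`ClusterIP: "None"` with not-ready addresses published) so StatefulSet pods get stable DNS names. The port mapping sketched with a stand-in for the `service.Port` type:

```go
package main

import (
	"fmt"
	"strconv"
)

// Port is a simplified stand-in for the service package's Port type.
type Port struct {
	Port       int32
	TargetPort string
	Name       string
}

// cockroachPorts mirrors the ports slice above: the service ports are fixed,
// the target ports point at whatever container ports were passed in.
func cockroachPorts(dbPort, httpPort int32) []Port {
	return []Port{
		{Port: 26257, TargetPort: strconv.Itoa(int(dbPort)), Name: "grpc"},
		{Port: 8080, TargetPort: strconv.Itoa(int(httpPort)), Name: "http"},
	}
}

func main() {
	for _, p := range cockroachPorts(26257, 8080) {
		fmt.Printf("%s %d->%s\n", p.Name, p.Port, p.TargetPort)
	}
}
```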


@@ -0,0 +1,218 @@
package services
import (
"github.com/caos/orbos/mntr"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"testing"
)
func TestService_Adapt1(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "testComponent")
name := "testSvc"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(componentLabels, name)
publicName := "testPublic"
k8sPublicLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": publicName,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
publicNameLabels := labels.MustForName(componentLabels, publicName)
cdbName := "testCdbName"
k8sCdbLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": cdbName,
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
cdbNameLabels := labels.MustForName(componentLabels, cdbName)
cockroachPort := int32(25267)
cockroachHttpPort := int32(8080)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: namespace,
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: "default",
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: true,
ClusterIP: "None",
},
})
query, _, err := AdaptFunc(monitor, namespace, publicNameLabels, nameLabels, labels.DeriveNameSelector(cdbNameLabels, false), cockroachPort, cockroachHttpPort)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}
func TestService_Adapt2(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
componentLabels := labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "cockroachdb", "v0"), "testComponent2")
name := "testSvc2"
k8sLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
}
nameLabels := labels.MustForName(componentLabels, name)
publicName := "testPublic2"
k8sPublicLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": publicName,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
publicNameLabels := labels.MustForName(componentLabels, publicName)
cdbName := "testCdbName2"
k8sCdbLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": cdbName,
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
cdbNameLabels := labels.MustForName(componentLabels, cdbName)
cockroachPort := int32(23)
cockroachHttpPort := int32(24)
queried := map[string]interface{}{}
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: namespace,
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: publicName,
Namespace: "default",
Labels: k8sPublicLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: false,
},
})
k8sClient.EXPECT().ApplyService(&corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sLabels,
},
Spec: corev1.ServiceSpec{
Ports: []corev1.ServicePort{
{Port: 26257, TargetPort: intstr.FromInt(int(cockroachPort)), Name: "grpc"},
{Port: 8080, TargetPort: intstr.FromInt(int(cockroachHttpPort)), Name: "http"},
},
Selector: k8sCdbLabels,
PublishNotReadyAddresses: true,
ClusterIP: "None",
},
})
query, _, err := AdaptFunc(monitor, namespace, publicNameLabels, nameLabels, labels.DeriveNameSelector(cdbNameLabels, false), cockroachPort, cockroachHttpPort)
assert.NoError(t, err)
ensure, err := query(k8sClient, queried)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,352 @@
package statefulset
import (
"fmt"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/helpers"
"k8s.io/apimachinery/pkg/util/intstr"
"sort"
"strings"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/k8s"
"github.com/caos/orbos/pkg/kubernetes/resources"
"github.com/caos/orbos/pkg/kubernetes/resources/statefulset"
"github.com/pkg/errors"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
const (
certPath = "/cockroach/cockroach-certs"
clientCertPath = "/cockroach/cockroach-client-certs"
datadirPath = "/cockroach/cockroach-data"
datadirInternal = "datadir"
certsInternal = "certs"
clientCertsInternal = "client-certs"
defaultMode = int32(256) // 0400 octal: cert files readable by the owner only
nodeSecret = "cockroachdb.node"
rootSecret = "cockroachdb.client.root"
)
type Affinity struct {
key string
value string
}
type Affinitys []metav1.LabelSelectorRequirement
func (a Affinitys) Len() int { return len(a) }
func (a Affinitys) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
func (a Affinitys) Less(i, j int) bool { return a[i].Key < a[j].Key }
func AdaptFunc(
monitor mntr.Monitor,
sfsSelectable *labels.Selectable,
podSelector *labels.Selector,
force bool,
namespace string,
image string,
serviceAccountName string,
replicaCount int,
storageCapacity string,
dbPort int32,
httpPort int32,
storageClass string,
nodeSelector map[string]string,
tolerations []corev1.Toleration,
resourcesSFS *k8s.Resources,
) (
resources.QueryFunc,
resources.DestroyFunc,
operator.EnsureFunc,
operator.EnsureFunc,
func(k8sClient kubernetes.ClientInt) ([]string, error),
error,
) {
internalMonitor := monitor.WithField("component", "statefulset")
quantity, err := resource.ParseQuantity(storageCapacity)
if err != nil {
return nil, nil, nil, nil, nil, err
}
name := sfsSelectable.Name()
k8sSelectable := labels.MustK8sMap(sfsSelectable)
statefulsetDef := &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sSelectable,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: helpers.PointerInt32(int32(replicaCount)),
Selector: &metav1.LabelSelector{
MatchLabels: labels.MustK8sMap(podSelector),
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: k8sSelectable,
},
Spec: corev1.PodSpec{
NodeSelector: nodeSelector,
Tolerations: tolerations,
ServiceAccountName: serviceAccountName,
Affinity: getAffinity(k8sSelectable),
Containers: []corev1.Container{{
Name: name,
Image: image,
ImagePullPolicy: "IfNotPresent",
Ports: []corev1.ContainerPort{
{ContainerPort: dbPort, Name: "grpc"},
{ContainerPort: httpPort, Name: "http"},
},
LivenessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 30,
PeriodSeconds: 5,
},
ReadinessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health?ready=1",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 10,
PeriodSeconds: 5,
FailureThreshold: 2,
},
VolumeMounts: []corev1.VolumeMount{{
Name: datadirInternal,
MountPath: datadirPath,
}, {
Name: certsInternal,
MountPath: certPath,
}, {
Name: clientCertsInternal,
MountPath: clientCertPath,
}},
Env: []corev1.EnvVar{{
Name: "COCKROACH_CHANNEL",
Value: "kubernetes-multiregion",
}},
Command: []string{
"/bin/bash",
"-ecx",
getJoinExec(
namespace,
name,
int(dbPort),
replicaCount,
),
},
Resources: getResources(resourcesSFS),
}},
Volumes: []corev1.Volume{{
Name: datadirInternal,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: datadirInternal,
},
},
}, {
Name: certsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: nodeSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: clientCertsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}},
},
},
PodManagementPolicy: appsv1.ParallelPodManagement,
UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
Type: "RollingUpdate",
},
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
ObjectMeta: metav1.ObjectMeta{
Name: datadirInternal,
},
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{
corev1.ReadWriteOnce,
},
Resources: corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"storage": quantity,
},
},
StorageClassName: &storageClass,
},
}},
},
}
query, err := statefulset.AdaptFuncToEnsure(statefulsetDef, force)
if err != nil {
return nil, nil, nil, nil, nil, err
}
destroy, err := statefulset.AdaptFuncToDestroy(namespace, name)
if err != nil {
return nil, nil, nil, nil, nil, err
}
wrapedQuery, wrapedDestroy, err := resources.WrapFuncs(internalMonitor, query, destroy)
if err != nil {
return nil, nil, nil, nil, nil, err
}
checkDBRunning := func(k8sClient kubernetes.ClientInt) error {
internalMonitor.Info("waiting for statefulset to be running")
if err := k8sClient.WaitUntilStatefulsetIsReady(namespace, name, true, false, 60); err != nil {
internalMonitor.Error(errors.Wrap(err, "error while waiting for statefulset to be running"))
return err
}
internalMonitor.Info("statefulset is running")
return nil
}
checkDBNotReady := func(k8sClient kubernetes.ClientInt) error {
internalMonitor.Info("checking for statefulset to not be ready")
if err := k8sClient.WaitUntilStatefulsetIsReady(namespace, name, true, true, 1); err != nil {
internalMonitor.Info("statefulset is not ready")
return nil
}
internalMonitor.Info("statefulset is ready")
return errors.New("statefulset is ready")
}
ensureInit := func(k8sClient kubernetes.ClientInt) error {
if err := checkDBRunning(k8sClient); err != nil {
return err
}
// if the statefulset is already ready, the cluster is already initialized
if err := checkDBNotReady(k8sClient); err != nil {
return nil
}
command := "/cockroach/cockroach init --certs-dir=" + clientCertPath + " --host=" + name + "-0." + name
if err := k8sClient.ExecInPod(namespace, name+"-0", name, command); err != nil {
return err
}
return nil
}
checkDBReady := func(k8sClient kubernetes.ClientInt) error {
internalMonitor.Info("waiting for statefulset to be ready")
if err := k8sClient.WaitUntilStatefulsetIsReady(namespace, name, true, true, 60); err != nil {
internalMonitor.Error(errors.Wrap(err, "error while waiting for statefulset to be ready"))
return err
}
internalMonitor.Info("statefulset is ready")
return nil
}
getAllDBs := func(k8sClient kubernetes.ClientInt) ([]string, error) {
if err := checkDBRunning(k8sClient); err != nil {
return nil, err
}
if err := checkDBReady(k8sClient); err != nil {
return nil, err
}
command := "/cockroach/cockroach sql --certs-dir=" + clientCertPath + " --host=" + name + "-0." + name + " -e 'SHOW DATABASES;'"
databasesStr, err := k8sClient.ExecInPodWithOutput(namespace, name+"-0", name, command)
if err != nil {
return nil, err
}
databases := strings.Split(databasesStr, "\n")
if len(databases) < 2 {
return nil, errors.New("unexpected SHOW DATABASES output")
}
// drop the header row and the empty line after the trailing newline
dbAndOwners := databases[1 : len(databases)-1]
dbs := []string{}
for _, dbAndOwner := range dbAndOwners {
parts := strings.Split(dbAndOwner, "\t")
if len(parts) > 1 && parts[1] != "node" {
dbs = append(dbs, parts[0])
}
}
return dbs, nil
}
return wrapedQuery, wrapedDestroy, ensureInit, checkDBReady, getAllDBs, err
}
func getJoinExec(namespace string, name string, dbPort int, replicaCount int) string {
joinList := make([]string, 0)
for i := 0; i < replicaCount; i++ {
joinList = append(joinList, fmt.Sprintf("%s-%d.%s.%s:%d", name, i, name, namespace, dbPort))
}
joinListStr := strings.Join(joinList, ",")
locality := "zone=" + namespace
return "exec /cockroach/cockroach start --logtostderr --certs-dir " + certPath + " --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join " + joinListStr + " --locality " + locality + " --cache 25% --max-sql-memory 25%"
}
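getJoinExec enumerates the stable per-pod DNS names that the headless service gives a StatefulSet: pod i of StatefulSet `name` is reachable as `name-i.name.namespace:port`. The join-list construction in isolation:

```go
package main

import (
	"fmt"
	"strings"
)

// joinAddrs mirrors the loop in getJoinExec: one entry per expected replica,
// using the StatefulSet pod naming scheme name-i.name.namespace:port.
func joinAddrs(name, namespace string, port, replicas int) string {
	list := make([]string, 0, replicas)
	for i := 0; i < replicas; i++ {
		list = append(list, fmt.Sprintf("%s-%d.%s.%s:%d", name, i, name, namespace, port))
	}
	return strings.Join(list, ",")
}

func main() {
	fmt.Println(joinAddrs("cockroachdb", "caos-zitadel", 26257, 3))
}
```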
func getResources(resourcesSFS *k8s.Resources) corev1.ResourceRequirements {
internalResources := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
}
if resourcesSFS != nil {
internalResources = corev1.ResourceRequirements{}
if resourcesSFS.Requests != nil {
internalResources.Requests = resourcesSFS.Requests
}
if resourcesSFS.Limits != nil {
internalResources.Limits = resourcesSFS.Limits
}
}
return internalResources
}
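Note the defaulting behavior of getResources: the built-in requests and limits apply only when no resources are configured at all; a partially configured `k8s.Resources` replaces the defaults entirely, so an unset Limits stays empty rather than falling back. A sketch of that logic with string maps standing in for `corev1.ResourceList`:

```go
package main

import "fmt"

// ResourceList stands in for corev1.ResourceList.
type ResourceList map[string]string

type Resources struct {
	Requests ResourceList
	Limits   ResourceList
}

// pickResources mirrors getResources: nil input gets full defaults; non-nil
// input wipes the defaults and copies only the fields that were set.
func pickResources(in *Resources) Resources {
	out := Resources{
		Requests: ResourceList{"cpu": "100m", "memory": "512Mi"},
		Limits:   ResourceList{"cpu": "100m", "memory": "512Mi"},
	}
	if in != nil {
		out = Resources{}
		if in.Requests != nil {
			out.Requests = in.Requests
		}
		if in.Limits != nil {
			out.Limits = in.Limits
		}
	}
	return out
}

func main() {
	partial := pickResources(&Resources{Requests: ResourceList{"cpu": "200m"}})
	fmt.Println(partial.Requests["cpu"], partial.Limits == nil)
}
```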
func getAffinity(labels map[string]string) *corev1.Affinity {
affinity := Affinitys{}
for k, v := range labels {
affinity = append(affinity, metav1.LabelSelectorRequirement{
Key: k,
Operator: metav1.LabelSelectorOpIn,
Values: []string{
v,
}})
}
sort.Sort(affinity)
return &corev1.Affinity{
PodAntiAffinity: &corev1.PodAntiAffinity{
RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
LabelSelector: &metav1.LabelSelector{
MatchExpressions: affinity,
},
TopologyKey: "kubernetes.io/hostname",
}},
},
}
}
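getAffinity sorts the anti-affinity requirements by key because Go map iteration order is random; without the sort, each reconciliation would emit a differently ordered pod spec and could trigger spurious updates. The deterministic-ordering step in isolation:

```go
package main

import (
	"fmt"
	"sort"
)

// requirement stands in for metav1.LabelSelectorRequirement.
type requirement struct {
	Key    string
	Values []string
}

// sortedRequirements mirrors getAffinity: collect one requirement per label
// and sort by key so the resulting spec is stable across reconciliations.
func sortedRequirements(labels map[string]string) []requirement {
	reqs := make([]requirement, 0, len(labels))
	for k, v := range labels {
		reqs = append(reqs, requirement{Key: k, Values: []string{v}})
	}
	sort.Slice(reqs, func(i, j int) bool { return reqs[i].Key < reqs[j].Key })
	return reqs
}

func main() {
	reqs := sortedRequirements(map[string]string{"b": "2", "a": "1"})
	fmt.Println(reqs[0].Key, reqs[1].Key)
}
```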


@@ -0,0 +1,506 @@
package statefulset
import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes/k8s"
kubernetesmock "github.com/caos/orbos/pkg/kubernetes/mock"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/helpers"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/intstr"
"testing"
)
func TestStatefulset_JoinExec0(t *testing.T) {
namespace := "testNs"
name := "test"
dbPort := 26257
replicaCount := 0
equals := "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join --locality zone=testNs --cache 25% --max-sql-memory 25%"
assert.Equal(t, equals, getJoinExec(namespace, name, dbPort, replicaCount))
}
func TestStatefulset_JoinExec1(t *testing.T) {
namespace := "testNs2"
name := "test2"
dbPort := 26257
replicaCount := 1
equals := "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join test2-0.test2.testNs2:26257 --locality zone=testNs2 --cache 25% --max-sql-memory 25%"
assert.Equal(t, equals, getJoinExec(namespace, name, dbPort, replicaCount))
}
func TestStatefulset_JoinExec2(t *testing.T) {
namespace := "testNs"
name := "test"
dbPort := 23
replicaCount := 2
equals := "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join test-0.test.testNs:23,test-1.test.testNs:23 --locality zone=testNs --cache 25% --max-sql-memory 25%"
assert.Equal(t, equals, getJoinExec(namespace, name, dbPort, replicaCount))
}
func TestStatefulset_Resources0(t *testing.T) {
equals := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("100m"),
"memory": resource.MustParse("512Mi"),
},
}
assert.Equal(t, equals, getResources(nil))
}
func TestStatefulset_Resources1(t *testing.T) {
res := &k8s.Resources{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("200m"),
"memory": resource.MustParse("600Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("500m"),
"memory": resource.MustParse("126Mi"),
},
}
equals := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("200m"),
"memory": resource.MustParse("600Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("500m"),
"memory": resource.MustParse("126Mi"),
},
}
assert.Equal(t, equals, getResources(res))
}
func TestStatefulset_Resources2(t *testing.T) {
res := &k8s.Resources{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("300m"),
"memory": resource.MustParse("670Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("600m"),
"memory": resource.MustParse("256Mi"),
},
}
equals := corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"cpu": resource.MustParse("300m"),
"memory": resource.MustParse("670Mi"),
},
Limits: corev1.ResourceList{
"cpu": resource.MustParse("600m"),
"memory": resource.MustParse("256Mi"),
},
}
assert.Equal(t, equals, getResources(res))
}
func TestStatefulset_Adapt1(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs"
name := "test"
image := "cockroach"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd", "testOp", "testVersion"), "cockroachdb", "v0"), "testComponent"), name)
k8sSelectableLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"app.kubernetes.io/version": "testVersion",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
k8sSelectorLabels := map[string]string{
"app.kubernetes.io/component": "testComponent",
"app.kubernetes.io/managed-by": "testOp",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd",
"orbos.ch/selectable": "yes",
}
selector := labels.DeriveNameSelector(nameLabels, false)
selectable := labels.AsSelectable(nameLabels)
serviceAccountName := "testSA"
replicaCount := 1
storageCapacity := "20Gi"
dbPort := int32(26257)
httpPort := int32(8080)
storageClass := "testSC"
nodeSelector := map[string]string{}
tolerations := []corev1.Toleration{}
resourcesSFS := &k8s.Resources{}
quantity, err := resource.ParseQuantity(storageCapacity)
assert.NoError(t, err)
sfs := &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sSelectableLabels,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: helpers.PointerInt32(int32(replicaCount)),
Selector: &metav1.LabelSelector{
MatchLabels: k8sSelectorLabels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: k8sSelectableLabels,
},
Spec: corev1.PodSpec{
NodeSelector: nodeSelector,
Tolerations: tolerations,
ServiceAccountName: serviceAccountName,
Affinity: getAffinity(k8sSelectableLabels),
Containers: []corev1.Container{{
Name: name,
Image: image,
ImagePullPolicy: "IfNotPresent",
Ports: []corev1.ContainerPort{
{ContainerPort: dbPort, Name: "grpc"},
{ContainerPort: httpPort, Name: "http"},
},
LivenessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 30,
PeriodSeconds: 5,
},
ReadinessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health?ready=1",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 10,
PeriodSeconds: 5,
FailureThreshold: 2,
},
VolumeMounts: []corev1.VolumeMount{{
Name: datadirInternal,
MountPath: datadirPath,
}, {
Name: certsInternal,
MountPath: certPath,
}, {
Name: clientCertsInternal,
MountPath: clientCertPath,
}},
Env: []corev1.EnvVar{{
Name: "COCKROACH_CHANNEL",
Value: "kubernetes-multiregion",
}},
Command: []string{
"/bin/bash",
"-ecx",
getJoinExec(
namespace,
name,
int(dbPort),
replicaCount,
),
},
Resources: getResources(resourcesSFS),
}},
Volumes: []corev1.Volume{{
Name: datadirInternal,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: datadirInternal,
},
},
}, {
Name: certsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: nodeSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: clientCertsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}},
},
},
PodManagementPolicy: appsv1.PodManagementPolicyType("Parallel"),
UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
Type: "RollingUpdate",
},
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
ObjectMeta: metav1.ObjectMeta{
Name: datadirInternal,
},
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{
corev1.PersistentVolumeAccessMode("ReadWriteOnce"),
},
Resources: corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"storage": quantity,
},
},
StorageClassName: &storageClass,
},
}},
},
}
k8sClient.EXPECT().ApplyStatefulSet(sfs, false)
query, _, _, _, _, err := AdaptFunc(
monitor,
selectable,
selector,
false,
namespace,
image,
serviceAccountName,
replicaCount,
storageCapacity,
dbPort,
httpPort,
storageClass,
nodeSelector,
tolerations,
resourcesSFS,
)
assert.NoError(t, err)
ensure, err := query(k8sClient)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}

func TestStatefulset_Adapt2(t *testing.T) {
k8sClient := kubernetesmock.NewMockClientInt(gomock.NewController(t))
monitor := mntr.Monitor{}
namespace := "testNs2"
name := "test2"
image := "cockroach2"
nameLabels := labels.MustForName(labels.MustForComponent(labels.MustForAPI(labels.MustForOperator("testProd2", "testOp2", "testVersion2"), "cockroachdb", "v0"), "testComponent2"), name)
k8sSelectableLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"app.kubernetes.io/version": "testVersion2",
"caos.ch/apiversion": "v0",
"caos.ch/kind": "cockroachdb",
"orbos.ch/selectable": "yes",
}
k8sSelectorLabels := map[string]string{
"app.kubernetes.io/component": "testComponent2",
"app.kubernetes.io/managed-by": "testOp2",
"app.kubernetes.io/name": name,
"app.kubernetes.io/part-of": "testProd2",
"orbos.ch/selectable": "yes",
}
selector := labels.DeriveNameSelector(nameLabels, false)
selectable := labels.AsSelectable(nameLabels)
serviceAccountName := "testSA2"
replicaCount := 2
storageCapacity := "40Gi"
dbPort := int32(23)
httpPort := int32(24)
storageClass := "testSC2"
nodeSelector := map[string]string{}
tolerations := []corev1.Toleration{}
resourcesSFS := &k8s.Resources{}
quantity, err := resource.ParseQuantity(storageCapacity)
assert.NoError(t, err)
sfs := &appsv1.StatefulSet{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: namespace,
Labels: k8sSelectableLabels,
},
Spec: appsv1.StatefulSetSpec{
ServiceName: name,
Replicas: helpers.PointerInt32(int32(replicaCount)),
Selector: &metav1.LabelSelector{
MatchLabels: k8sSelectorLabels,
},
Template: corev1.PodTemplateSpec{
ObjectMeta: metav1.ObjectMeta{
Labels: k8sSelectableLabels,
},
Spec: corev1.PodSpec{
NodeSelector: nodeSelector,
Tolerations: tolerations,
ServiceAccountName: serviceAccountName,
Affinity: getAffinity(k8sSelectableLabels),
Containers: []corev1.Container{{
Name: name,
Image: image,
ImagePullPolicy: "IfNotPresent",
Ports: []corev1.ContainerPort{
{ContainerPort: dbPort, Name: "grpc"},
{ContainerPort: httpPort, Name: "http"},
},
LivenessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 30,
PeriodSeconds: 5,
},
ReadinessProbe: &corev1.Probe{
Handler: corev1.Handler{
HTTPGet: &corev1.HTTPGetAction{
Path: "/health?ready=1",
Port: intstr.Parse("http"),
Scheme: "HTTPS",
},
},
InitialDelaySeconds: 10,
PeriodSeconds: 5,
FailureThreshold: 2,
},
VolumeMounts: []corev1.VolumeMount{{
Name: datadirInternal,
MountPath: datadirPath,
}, {
Name: certsInternal,
MountPath: certPath,
}, {
Name: clientCertsInternal,
MountPath: clientCertPath,
}},
Env: []corev1.EnvVar{{
Name: "COCKROACH_CHANNEL",
Value: "kubernetes-multiregion",
}},
Command: []string{
"/bin/bash",
"-ecx",
getJoinExec(
namespace,
name,
int(dbPort),
replicaCount,
),
},
Resources: getResources(resourcesSFS),
}},
Volumes: []corev1.Volume{{
Name: datadirInternal,
VolumeSource: corev1.VolumeSource{
PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
ClaimName: datadirInternal,
},
},
}, {
Name: certsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: nodeSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}, {
Name: clientCertsInternal,
VolumeSource: corev1.VolumeSource{
Secret: &corev1.SecretVolumeSource{
SecretName: rootSecret,
DefaultMode: helpers.PointerInt32(defaultMode),
},
},
}},
},
},
PodManagementPolicy: appsv1.PodManagementPolicyType("Parallel"),
UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
Type: "RollingUpdate",
},
VolumeClaimTemplates: []corev1.PersistentVolumeClaim{{
ObjectMeta: metav1.ObjectMeta{
Name: datadirInternal,
},
Spec: corev1.PersistentVolumeClaimSpec{
AccessModes: []corev1.PersistentVolumeAccessMode{
corev1.PersistentVolumeAccessMode("ReadWriteOnce"),
},
Resources: corev1.ResourceRequirements{
Requests: corev1.ResourceList{
"storage": quantity,
},
},
StorageClassName: &storageClass,
},
}},
},
}
k8sClient.EXPECT().ApplyStatefulSet(sfs, false)
query, _, _, _, _, err := AdaptFunc(
monitor,
selectable,
selector,
false,
namespace,
image,
serviceAccountName,
replicaCount,
storageCapacity,
dbPort,
httpPort,
storageClass,
nodeSelector,
tolerations,
resourcesSFS,
)
assert.NoError(t, err)
ensure, err := query(k8sClient)
assert.NoError(t, err)
assert.NotNil(t, ensure)
assert.NoError(t, ensure(k8sClient))
}


@@ -0,0 +1,72 @@
package user

import (
"fmt"
"github.com/caos/zitadel/operator"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/zitadel/operator/database/kinds/databases/managed/certificate"
)
func AdaptFunc(
monitor mntr.Monitor,
namespace string,
deployName string,
containerName string,
certsDir string,
userName string,
password string,
componentLabels *labels.Component,
) (
operator.QueryFunc,
operator.DestroyFunc,
error,
) {
cmdSql := fmt.Sprintf("cockroach sql --certs-dir=%s", certsDir)
createSql := fmt.Sprintf("CREATE USER IF NOT EXISTS %s ", userName)
if password != "" {
createSql = fmt.Sprintf("%s WITH PASSWORD '%s'", createSql, password)
}
deleteSql := fmt.Sprintf("DROP USER IF EXISTS %s", userName)
_, _, addUserFunc, deleteUserFunc, _, err := certificate.AdaptFunc(monitor, namespace, componentLabels, "", false)
if err != nil {
return nil, nil, err
}
addUser, err := addUserFunc(userName)
if err != nil {
return nil, nil, err
}
ensureUser := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, createSql))
}
deleteUser, err := deleteUserFunc(userName)
if err != nil {
return nil, nil, err
}
destroyUser := func(k8sClient kubernetes.ClientInt) error {
return k8sClient.ExecInPodOfDeployment(namespace, deployName, containerName, fmt.Sprintf("%s -e '%s;'", cmdSql, deleteSql))
}
queriers := []operator.QueryFunc{
addUser,
operator.EnsureFuncToQueryFunc(ensureUser),
}
destroyers := []operator.DestroyFunc{
destroyUser,
deleteUser,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
return operator.QueriersToEnsureFunc(monitor, false, queriers, k8sClient, queried)
},
operator.DestroyersToDestroyFunc(monitor, destroyers),
nil
}
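`AdaptFunc` above folds its `queriers` slice into a single ensure step via `operator.QueriersToEnsureFunc`. A minimal, self-contained sketch of that composition pattern, using local stand-in types rather than the actual operator package:

```go
package main

import "fmt"

// Minimal stand-ins for the operator package's function types (assumed shapes).
type EnsureFunc func(client string) error
type QueryFunc func(client string, queried map[string]interface{}) (EnsureFunc, error)

// queriersToEnsureFunc runs every querier, collects the ensure steps, and
// returns one EnsureFunc that executes them in declaration order — the
// fold behind operator.QueriersToEnsureFunc.
func queriersToEnsureFunc(queriers []QueryFunc, client string, queried map[string]interface{}) (EnsureFunc, error) {
	ensurers := make([]EnsureFunc, 0, len(queriers))
	for _, q := range queriers {
		e, err := q(client, queried)
		if err != nil {
			return nil, err
		}
		ensurers = append(ensurers, e)
	}
	return func(client string) error {
		for _, e := range ensurers {
			if err := e(client); err != nil {
				return err
			}
		}
		return nil
	}, nil
}

func main() {
	var order []string
	step := func(name string) QueryFunc {
		return func(client string, _ map[string]interface{}) (EnsureFunc, error) {
			return func(client string) error {
				order = append(order, name)
				return nil
			}, nil
		}
	}
	ensure, err := queriersToEnsureFunc([]QueryFunc{step("addUser"), step("ensureUser")}, "k8s", nil)
	if err != nil {
		panic(err)
	}
	if err := ensure("k8s"); err != nil {
		panic(err)
	}
	fmt.Println(order) // steps ran in declaration order
}
```

Splitting query (plan) from ensure (apply) lets the caller collect all plans before mutating the cluster, which is why `ensureUser` is wrapped with an `EnsureFuncToQueryFunc`-style adapter above.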


@@ -0,0 +1,59 @@
package provided

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
"github.com/pkg/errors"
)
func AdaptFunc() func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
return func(
monitor mntr.Monitor,
desired *tree.Tree,
current *tree.Tree,
) (
operator.QueryFunc,
operator.DestroyFunc,
map[string]*secret.Secret,
error,
) {
desiredKind, err := parseDesiredV0(desired)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
desired.Parsed = desiredKind
currentDB := &Current{
Common: &tree.Common{
Kind: "databases.caos.ch/ProvidedDatabase",
Version: "v0",
},
}
current.Parsed = currentDB
return func(k8sClient kubernetes.ClientInt, _ map[string]interface{}) (operator.EnsureFunc, error) {
currentDB.Current.URL = desiredKind.Spec.URL
currentDB.Current.Port = desiredKind.Spec.Port
return func(k8sClient kubernetes.ClientInt) error {
return nil
}, nil
}, func(k8sClient kubernetes.ClientInt) error {
return nil
},
map[string]*secret.Secret{},
nil
}
}
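The query closure above copies the desired spec into `currentDB` only when it is executed, then hands back a no-op ensure step. A stripped-down sketch of that closure-capture pattern, with hypothetical local types in place of the tree structures:

```go
package main

import "fmt"

// Minimal stand-ins for the desired spec and shared current state.
type spec struct{ URL, Port string }
type current struct{ URL, Port string }

// adapt mirrors the provided-database pattern: the returned query closure
// writes the desired values into the shared current state before returning
// a no-op ensure step.
func adapt(desired spec, cur *current) func() func() error {
	return func() func() error {
		cur.URL = desired.URL
		cur.Port = desired.Port
		return func() error { return nil }
	}
}

func main() {
	cur := &current{}
	query := adapt(spec{URL: "db.example.internal", Port: "26257"}, cur)
	// Current state is only populated once the query actually runs.
	ensure := query()
	if err := ensure(); err != nil {
		panic(err)
	}
	fmt.Println(cur.URL, cur.Port)
}
```

Deferring the copy into the closure means `GetURL`/`GetPort` reflect the last reconciled state, not merely the parsed desired state.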


@@ -0,0 +1,21 @@
package provided

import (
"github.com/caos/orbos/pkg/tree"
)
type Current struct {
Common *tree.Common `yaml:",inline"`
Current struct {
URL string
Port string
}
}
func (c *Current) GetURL() string {
return c.Current.URL
}
func (c *Current) GetPort() string {
return c.Current.Port
}


@@ -0,0 +1,32 @@
package provided

import (
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec Spec
}
type Spec struct {
Verbose bool
Namespace string
URL string
Port string
Users []string
}
func parseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{
Common: desiredTree.Common,
Spec: Spec{},
}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
return desiredKind, nil
}
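`parseDesiredV0` decodes the raw tree node into a typed struct and wraps any decode failure. The same decode-and-wrap shape can be sketched self-contained, using `encoding/json` as a stand-in for the YAML decoder behind `tree.Original.Decode` (an assumption for illustration only):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Spec and DesiredV0 mirror the shapes above; field names are illustrative.
type Spec struct {
	Verbose bool   `json:"verbose"`
	URL     string `json:"url"`
	Port    string `json:"port"`
}

type DesiredV0 struct {
	Spec Spec `json:"spec"`
}

// parseDesired decodes raw bytes into the typed desired state, wrapping
// failures with context — the same pattern as parseDesiredV0.
func parseDesired(raw []byte) (*DesiredV0, error) {
	desired := &DesiredV0{}
	if err := json.Unmarshal(raw, desired); err != nil {
		return nil, fmt.Errorf("parsing desired state failed: %w", err)
	}
	return desired, nil
}

func main() {
	raw := []byte(`{"spec":{"verbose":true,"url":"db.internal","port":"5432"}}`)
	desired, err := parseDesired(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(desired.Spec.URL, desired.Spec.Port)
}
```

Keeping the typed struct separate from the generic tree lets callers stash the parsed form back onto `desiredTree.Parsed`, as the operator code does.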


@@ -0,0 +1,110 @@
package orb

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/kubernetes/resources/namespace"
"github.com/caos/orbos/pkg/labels"
"github.com/caos/orbos/pkg/secret"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/orbos/pkg/treelabels"
"github.com/caos/zitadel/operator"
"github.com/caos/zitadel/operator/database/kinds/databases"
"github.com/pkg/errors"
)
const (
NamespaceStr = "caos-zitadel"
)
func OperatorSelector() *labels.Selector {
return labels.OpenOperatorSelector("ZITADEL", "database.caos.ch")
}
func AdaptFunc(timestamp string, binaryVersion *string, features ...string) operator.AdaptFunc {
return func(monitor mntr.Monitor, orbDesiredTree *tree.Tree, currentTree *tree.Tree) (queryFunc operator.QueryFunc, destroyFunc operator.DestroyFunc, secrets map[string]*secret.Secret, err error) {
defer func() {
err = errors.Wrapf(err, "building %s failed", orbDesiredTree.Common.Kind)
}()
orbMonitor := monitor.WithField("kind", "orb")
desiredKind, err := parseDesiredV0(orbDesiredTree)
if err != nil {
return nil, nil, nil, errors.Wrap(err, "parsing desired state failed")
}
orbDesiredTree.Parsed = desiredKind
if desiredKind.Spec.Verbose && !orbMonitor.IsVerbose() {
orbMonitor = orbMonitor.Verbose()
}
queryNS, err := namespace.AdaptFuncToEnsure(NamespaceStr)
if err != nil {
return nil, nil, nil, err
}
destroyNS, err := namespace.AdaptFuncToDestroy(NamespaceStr)
if err != nil {
return nil, nil, nil, err
}
databaseCurrent := &tree.Tree{}
operatorLabels := mustDatabaseOperator(binaryVersion)
queryDB, destroyDB, secrets, err := databases.GetQueryAndDestroyFuncs(
orbMonitor,
desiredKind.Database,
databaseCurrent,
NamespaceStr,
treelabels.MustForAPI(desiredKind.Database, operatorLabels),
timestamp,
desiredKind.Spec.NodeSelector,
desiredKind.Spec.Tolerations,
desiredKind.Spec.Version,
features,
)
if err != nil {
return nil, nil, nil, err
}
queriers := []operator.QueryFunc{
operator.ResourceQueryToZitadelQuery(queryNS),
queryDB,
}
if desiredKind.Spec.SelfReconciling {
queriers = append(queriers,
operator.EnsureFuncToQueryFunc(Reconcile(monitor, orbDesiredTree)),
)
}
destroyers := []operator.DestroyFunc{
operator.ResourceDestroyToZitadelDestroy(destroyNS),
destroyDB,
}
currentTree.Parsed = &DesiredV0{
Common: &tree.Common{
Kind: "databases.caos.ch/Orb",
Version: "v0",
},
Database: databaseCurrent,
}
return func(k8sClient kubernetes.ClientInt, queried map[string]interface{}) (operator.EnsureFunc, error) {
if queried == nil {
queried = map[string]interface{}{}
}
monitor.WithField("queriers", len(queriers)).Info("Querying")
return operator.QueriersToEnsureFunc(monitor, true, queriers, k8sClient, queried)
},
func(k8sClient kubernetes.ClientInt) error {
monitor.WithField("destroyers", len(destroyers)).Info("Destroy")
return operator.DestroyersToDestroyFunc(monitor, destroyers)(k8sClient)
},
secrets,
nil
}
}


@@ -0,0 +1,24 @@
package orb

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator/database/kinds/databases"
"github.com/pkg/errors"
)
func BackupListFunc() func(monitor mntr.Monitor, desiredTree *tree.Tree) (strings []string, err error) {
return func(monitor mntr.Monitor, desiredTree *tree.Tree) (strings []string, err error) {
desiredKind, err := parseDesiredV0(desiredTree)
if err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desiredTree.Parsed = desiredKind
if desiredKind.Spec.Verbose && !monitor.IsVerbose() {
monitor = monitor.Verbose()
}
return databases.GetBackupList(monitor, desiredKind.Database)
}
}


@@ -0,0 +1,33 @@
package orb

import (
"github.com/caos/orbos/pkg/tree"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
)
type DesiredV0 struct {
Common *tree.Common `yaml:",inline"`
Spec struct {
Verbose bool
NodeSelector map[string]string `yaml:"nodeSelector,omitempty"`
Tolerations []corev1.Toleration `yaml:"tolerations,omitempty"`
Version string `yaml:"version,omitempty"`
SelfReconciling bool `yaml:"selfReconciling"`
// Use this registry to pull the ZITADEL operator image from
// @default: ghcr.io
CustomImageRegistry string `json:"customImageRegistry,omitempty" yaml:"customImageRegistry,omitempty"`
}
Database *tree.Tree
}
func parseDesiredV0(desiredTree *tree.Tree) (*DesiredV0, error) {
desiredKind := &DesiredV0{Common: desiredTree.Common}
if err := desiredTree.Original.Decode(desiredKind); err != nil {
return nil, errors.Wrap(err, "parsing desired state failed")
}
desiredKind.Common.Version = "v0"
return desiredKind, nil
}


@@ -0,0 +1,13 @@
package orb

import "github.com/caos/orbos/pkg/labels"
func mustDatabaseOperator(binaryVersion *string) *labels.Operator {
version := "unknown"
if binaryVersion != nil {
version = *binaryVersion
}
return labels.MustForOperator("ZITADEL", "database.caos.ch", version)
}


@@ -0,0 +1,48 @@
package orb

import (
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/orbos/pkg/treelabels"
"github.com/caos/zitadel/operator"
zitadelKubernetes "github.com/caos/zitadel/pkg/kubernetes"
"github.com/pkg/errors"
)
func Reconcile(monitor mntr.Monitor, desiredTree *tree.Tree) operator.EnsureFunc {
return func(k8sClient kubernetes.ClientInt) (err error) {
defer func() {
err = errors.Wrapf(err, "building %s failed", desiredTree.Common.Kind)
}()
desiredKind, err := parseDesiredV0(desiredTree)
if err != nil {
return errors.Wrap(err, "parsing desired state failed")
}
desiredTree.Parsed = desiredKind
recMonitor := monitor.WithField("version", desiredKind.Spec.Version)
if desiredKind.Spec.Version == "" {
err := errors.New("no version set in database.yml")
monitor.Error(err)
return err
}
imageRegistry := desiredKind.Spec.CustomImageRegistry
if imageRegistry == "" {
imageRegistry = "ghcr.io"
}
if err := zitadelKubernetes.EnsureDatabaseArtifacts(monitor, treelabels.MustForAPI(desiredTree, mustDatabaseOperator(&desiredKind.Spec.Version)), k8sClient, desiredKind.Spec.Version, desiredKind.Spec.NodeSelector, desiredKind.Spec.Tolerations, imageRegistry); err != nil {
recMonitor.Error(errors.Wrap(err, "Failed to deploy database-operator into k8s-cluster"))
return err
}
recMonitor.Info("Applied database-operator")
return nil
}
}


@@ -0,0 +1,45 @@
package database

import (
"errors"
"github.com/caos/orbos/mntr"
"github.com/caos/orbos/pkg/git"
"github.com/caos/orbos/pkg/kubernetes"
"github.com/caos/orbos/pkg/tree"
"github.com/caos/zitadel/operator"
)
func Takeoff(monitor mntr.Monitor, gitClient *git.Client, adapt operator.AdaptFunc, k8sClient *kubernetes.Client) func() {
return func() {
internalMonitor := monitor.WithField("operator", "database")
internalMonitor.Info("Takeoff")
treeDesired, err := operator.Parse(gitClient, "database.yml")
if err != nil {
monitor.Error(err)
return
}
treeCurrent := &tree.Tree{}
if !k8sClient.Available() {
internalMonitor.Error(errors.New("kubeclient is not available"))
return
}
query, _, _, err := adapt(internalMonitor, treeDesired, treeCurrent)
if err != nil {
internalMonitor.Error(err)
return
}
ensure, err := query(k8sClient, map[string]interface{}{})
if err != nil {
internalMonitor.Error(err)
return
}
if err := ensure(k8sClient); err != nil {
internalMonitor.Error(err)
return
}
}
}