zitadel/internal/api/grpc/server/connect_middleware/execution_interceptor.go

package connect_middleware
import (
"context"
"encoding/json"
"connectrpc.com/connect"
"google.golang.org/protobuf/encoding/protojson"
"google.golang.org/protobuf/proto"
"github.com/zitadel/zitadel/internal/api/authz"
"github.com/zitadel/zitadel/internal/crypto"
"github.com/zitadel/zitadel/internal/execution"
target_domain "github.com/zitadel/zitadel/internal/execution/target"
"github.com/zitadel/zitadel/internal/telemetry/tracing"
)
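
// ExecutionHandler returns a connect unary interceptor that runs the
// execution targets configured for the called procedure: request targets are
// invoked before the wrapped handler and response targets after it. The
// encryption algorithm is used when calling the targets (for example, to
// decrypt their signing keys).
//
// Minimal wiring sketch (the generated service handler, server and mux names
// are illustrative assumptions, not taken from this file):
//
//	interceptor := connect_middleware.ExecutionHandler(encryptionAlg)
//	path, handler := userconnect.NewUserServiceHandler(
//		userServer,
//		connect.WithInterceptors(interceptor),
//	)
//	mux.Handle(path, handler)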
func ExecutionHandler(alg crypto.EncryptionAlgorithm) connect.UnaryInterceptorFunc {
	return func(handler connect.UnaryFunc) connect.UnaryFunc {
		return func(ctx context.Context, req connect.AnyRequest) (_ connect.AnyResponse, err error) {
			requestTargets := execution.QueryExecutionTargetsForRequest(ctx, req.Spec().Procedure)
			handledReq, err := executeTargetsForRequest(ctx, requestTargets, req.Spec().Procedure, req, alg)
			if err != nil {
				return nil, err
			}
			response, err := handler(ctx, handledReq)
			if err != nil {
				return nil, err
			}
			responseTargets := execution.QueryExecutionTargetsForResponse(ctx, req.Spec().Procedure)
			return executeTargetsForResponse(ctx, responseTargets, req.Spec().Procedure, handledReq, response, alg)
		}
	}
}
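
// executeTargetsForRequest calls the configured request execution targets
// with the request message and the call context (instance, org, project and
// user ID). Target responses are written back into the request message (see
// ContextInfoRequest.SetHTTPResponseBody), so targets can manipulate the
// request before the handler runs. With no targets, the request is returned
// unchanged.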
func executeTargetsForRequest(ctx context.Context, targets []target_domain.Target, fullMethod string, req connect.AnyRequest, alg crypto.EncryptionAlgorithm) (_ connect.AnyRequest, err error) {
	ctx, span := tracing.NewSpan(ctx)
	defer func() { span.EndWithError(err) }()

	// if no targets are found, return without any calls
	if len(targets) == 0 {
		return req, nil
	}

	ctxData := authz.GetCtxData(ctx)
	info := &ContextInfoRequest{
		FullMethod: fullMethod,
		InstanceID: authz.GetInstance(ctx).InstanceID(),
		ProjectID:  ctxData.ProjectID,
		OrgID:      ctxData.OrgID,
		UserID:     ctxData.UserID,
		Request:    Message{req.Any().(proto.Message)},
	}
	_, err = execution.CallTargets(ctx, targets, info, alg)
	if err != nil {
		return nil, err
	}
	return req, nil
}
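
// executeTargetsForResponse calls the configured response execution targets
// with the request, the handler's response and the call context. Target
// responses are written back into the response message (see
// ContextInfoResponse.SetHTTPResponseBody), so targets can manipulate the
// response before it is returned to the client. With no targets, the
// response is returned unchanged.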
func executeTargetsForResponse(ctx context.Context, targets []target_domain.Target, fullMethod string, req connect.AnyRequest, resp connect.AnyResponse, alg crypto.EncryptionAlgorithm) (_ connect.AnyResponse, err error) {
	ctx, span := tracing.NewSpan(ctx)
	defer func() { span.EndWithError(err) }()

	// if no targets are found, return without any calls
	if len(targets) == 0 {
		return resp, nil
	}

	ctxData := authz.GetCtxData(ctx)
	info := &ContextInfoResponse{
		FullMethod: fullMethod,
		InstanceID: authz.GetInstance(ctx).InstanceID(),
		ProjectID:  ctxData.ProjectID,
		OrgID:      ctxData.OrgID,
		UserID:     ctxData.UserID,
		Request:    Message{req.Any().(proto.Message)},
		Response:   Message{resp.Any().(proto.Message)},
	}
	_, err = execution.CallTargets(ctx, targets, info, alg)
	if err != nil {
		return nil, err
	}
	return resp, nil
}
var _ execution.ContextInfo = &ContextInfoRequest{}
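// ContextInfoRequest is the JSON payload sent to request execution targets;
// it carries the call context and the request message. A target's HTTP
// response body is unmarshalled back into Request.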
type ContextInfoRequest struct {
	FullMethod string  `json:"fullMethod,omitempty"`
	InstanceID string  `json:"instanceID,omitempty"`
	OrgID      string  `json:"orgID,omitempty"`
	ProjectID  string  `json:"projectID,omitempty"`
	UserID     string  `json:"userID,omitempty"`
	Request    Message `json:"request,omitempty"`
}
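
// Message wraps a proto.Message so that it is marshalled and unmarshalled
// with protojson rather than encoding/json.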
type Message struct {
	proto.Message
}
func (r *Message) MarshalJSON() ([]byte, error) {
	data, err := protojson.Marshal(r.Message)
	if err != nil {
		return nil, err
	}
	return data, nil
}
func (r *Message) UnmarshalJSON(data []byte) error {
	return protojson.Unmarshal(data, r.Message)
}
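
// GetHTTPRequestBody marshals the whole context information as the body sent
// to an execution target; if marshalling fails, it returns nil.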
func (c *ContextInfoRequest) GetHTTPRequestBody() []byte {
	data, err := json.Marshal(c)
	if err != nil {
		return nil
	}
	return data
}
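
// SetHTTPResponseBody unmarshals a target's response body into the wrapped
// request message, letting targets modify the request in place.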
func (c *ContextInfoRequest) SetHTTPResponseBody(resp []byte) error {
	return json.Unmarshal(resp, &c.Request)
}
func (c *ContextInfoRequest) GetContent() interface{} {
	return c.Request.Message
}
var _ execution.ContextInfo = &ContextInfoResponse{}
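// ContextInfoResponse is the JSON payload sent to response execution targets;
// it carries the call context, the request and the response message. A
// target's HTTP response body is unmarshalled back into Response.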
type ContextInfoResponse struct {
	FullMethod string  `json:"fullMethod,omitempty"`
	InstanceID string  `json:"instanceID,omitempty"`
	OrgID      string  `json:"orgID,omitempty"`
	ProjectID  string  `json:"projectID,omitempty"`
	UserID     string  `json:"userID,omitempty"`
	Request    Message `json:"request,omitempty"`
	Response   Message `json:"response,omitempty"`
}
func (c *ContextInfoResponse) GetHTTPRequestBody() []byte {
	data, err := json.Marshal(c)
	if err != nil {
		return nil
	}
	return data
}
func (c *ContextInfoResponse) SetHTTPResponseBody(resp []byte) error {
	return json.Unmarshal(resp, &c.Response)
}
func (c *ContextInfoResponse) GetContent() interface{} {
	return c.Response.Message
}