Remote otel collector configuration #934
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files
@@ Coverage Diff @@
## main #934 +/- ##
==========================================
- Coverage 69.44% 67.35% -2.09%
==========================================
Files 30 30
Lines 3040 3128 +88
==========================================
- Hits 2111 2107 -4
- Misses 763 852 +89
- Partials 166 169 +3
Force-pushed fd99270 to e483818
@@ -86,7 +89,10 @@ func main() {
	"Enable metrics collection for all Policy Servers and the Kubewarden Controller")
flag.BoolVar(&enableTracing, "enable-tracing", false,
	"Enable tracing collection for all Policy Servers")
flag.StringVar(&openTelemetryEndpoint, "opentelemetry-endpoint", "127.0.0.1:4317", "The OpenTelemetry connection endpoint")
This is a breaking change. But considering that we are doing a major change in how we integrate with OpenTelemetry, I think it is the right time and worth it. It will simplify the configuration and avoid confusion.
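For context, a minimal sketch of what the flag-less setup could look like, assuming the controller relies on the OpenTelemetry SDK's standard OTEL_EXPORTER_OTLP_* environment variables (endpoint, certificate, headers) instead of the removed --opentelemetry-endpoint flag. Function names below are illustrative, not the exact code in this PR.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initTracing builds an OTLP gRPC exporter without a hard-coded endpoint flag.
// The exporter honors the standard OTEL_EXPORTER_OTLP_* environment variables
// (endpoint, CA certificate, headers, ...), so no CLI flag is needed.
func initTracing(ctx context.Context) (*sdktrace.TracerProvider, error) {
	exporter, err := otlptracegrpc.New(ctx) // endpoint comes from OTEL_EXPORTER_OTLP_ENDPOINT
	if err != nil {
		return nil, err
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	otel.SetTracerProvider(tp)
	return tp, nil
}

func main() {
	ctx := context.Background()
	tp, err := initTracing(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = tp.Shutdown(ctx) }()
}
```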
Force-pushed 5e75bba to cebdb5b
It looks good. I left some minor comments, some of which are not mandatory.
Please address the linter warnings before merging the PR.
if index := envVarsContainVariable(admissionContainer.Env, envVar); index >= 0 {
	admissionContainer.Env[index] = envvar
} else {
	admissionContainer.Env = append(admissionContainer.Env, envvar)
}
This is a generic reflection. I've double-checked the code: we create the deployment definition by adding some default env variables, then we append the ones provided by the user via the .spec.Env
list, and we keep adding more env variables to the list if needed, as we are doing here.
Kubernetes uses a list, not a hash, to store these env variables, so there's a chance of the user defining variables we need to control. Should we change the code to ensure our env variables take precedence over the ones provided by the user? A sketch of that idea follows.
Maybe this is something to be done in another PR.
What do you think, @kubewarden/kubewarden-developers?
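To make the question concrete, here is a rough sketch of how controller-managed variables could be forced to win: the controller's values are upserted after the user's .spec.Env entries have been appended. The helper name and structure are illustrative, not the controller's actual code.

```go
import corev1 "k8s.io/api/core/v1"

// ensureEnvVars upserts the controller-managed variables into the container's
// env list after the user-provided ones have been appended, so the controller's
// values always take precedence. Hypothetical helper, for illustration only.
func ensureEnvVars(container *corev1.Container, managed []corev1.EnvVar) {
	for _, want := range managed {
		replaced := false
		for i, have := range container.Env {
			if have.Name == want.Name {
				container.Env[i] = want // overwrite whatever the user set
				replaced = true
				break
			}
		}
		if !replaced {
			container.Env = append(container.Env, want)
		}
	}
}
```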
Yes, I was thinking about that as well. Another option is to add some validation in the webhook to prevent users from deploying a policy server with a configuration that can cause issues. That way we can at least tell them why the controller cannot deploy the policy server, instead of allowing them to deploy it and only notice afterwards that something is not working as expected.
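As a sketch of that validation idea, assuming a reserved-name check in the admission webhook: the variable names and helper below are hypothetical, not the controller's real list or API.

```go
import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// reservedEnvVars lists the variables the controller itself manages; the
// entries here are examples, not the actual list used by the controller.
var reservedEnvVars = map[string]bool{
	"OTEL_EXPORTER_OTLP_CERTIFICATE": true,
	"OTEL_EXPORTER_OTLP_ENDPOINT":    true,
}

// validateUserEnv rejects a PolicyServer spec that tries to set env variables
// the controller needs to control, so the user learns about the conflict at
// admission time instead of after a broken deployment.
func validateUserEnv(userEnv []corev1.EnvVar) error {
	for _, env := range userEnv {
		if reservedEnvVars[env.Name] {
			return fmt.Errorf("environment variable %q is managed by the controller and cannot be set in .spec.Env", env.Name)
		}
	}
	return nil
}
```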
certificatePath := filepath.Dir(os.Getenv("OTEL_EXPORTER_OTLP_CERTIFICATE"))
if otelCertificateSecret != "" {
	policyServerDeployment.Spec.Template.Spec.Volumes = append(policyServerDeployment.Spec.Template.Spec.Volumes, corev1.Volume{
Does this need to be idempotent? (Same for otelClientCertificateSecret.)
If the values change, it seems to me that we append again instead of removing the old ones?
This is idempotent, given we always start from an empty Deployment and add all the data defined inside the PolicyServer spec, plus all the values we deem required (see the sketch below).
@kubewarden/kubewarden-developers, am I right?
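A condensed sketch of why that pattern stays idempotent: the desired Deployment is rebuilt from an empty object on every reconcile, so appends such as the OTel certificate volume never accumulate across reconciliations. The types and names below are illustrative, not the reconciler's exact code.

```go
import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

// policyServerSpec stands in for the real PolicyServer spec; only the field
// relevant to this discussion is shown.
type policyServerSpec struct {
	Env []corev1.EnvVar
}

// desiredDeployment rebuilds the Deployment from scratch on every reconcile,
// so nothing from a previous pass survives to be appended to twice.
func desiredDeployment(ps *policyServerSpec, otelCertificateSecret string) appsv1.Deployment {
	deployment := appsv1.Deployment{} // always start from an empty object

	// 1. copy everything the user defined in the PolicyServer spec
	deployment.Spec.Template.Spec.Containers = []corev1.Container{{
		Name: "policy-server",
		Env:  ps.Env,
	}}

	// 2. add the values the controller requires, e.g. the OTel CA certificate
	if otelCertificateSecret != "" {
		deployment.Spec.Template.Spec.Volumes = append(deployment.Spec.Template.Spec.Volumes, corev1.Volume{
			Name: "otel-certificate", // hypothetical volume name
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: otelCertificateSecret},
			},
		})
	}
	return deployment
}
```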
I think you are right, I consider this resolved.
I think this looks good. Please wait for everybody's approval before merging.
LGTM!
Updates the collector to allow communication with a remote OpenTelemetry collector. Also updates the policy server reconciler to replicate the same configuration in the policy server deployment, enabling it to send data to the same remote OTel collector. Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
Merging. Addressing the code coverage is tough; we will track this technical debt and address it after the 1.20 release.
Description
Updates the collector to allow communication with a remote OpenTelemetry collector. It also updates the policy server reconciler to replicate the same configuration in the policy server deployment, enabling it to send data to the same remote OTel collector.
Fix #933
Tests
Warning
This PR was tested together with these Helm chart changes.
Create a simple cluster with required dependencies:
I've tested these changes with this OTel collector configuration:
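For reference, a minimal collector configuration for this kind of test could look like the sketch below: it only accepts OTLP over gRPC and prints whatever it receives. This is an illustrative example, not the exact file used for the test.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
```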