Update dependencies #602

Merged: 7 commits, Mar 18, 2024

Conversation


@yeazelm (Contributor) commented Mar 14, 2024

Description of changes:
Updates dependencies to resolve several Dependabot PRs and moves code to implementations compatible with the new opentelemetry APIs.

actix-web-opentelemetry to 0.17
k8s-openapi to 0.21
kube to 0.85
opentelemetry to 0.22
opentelemetry_sdk to 0.22
opentelemetry-prometheus to 0.15

Used https://kube.rs/upgrading/ to ensure the dependencies work together.

Testing done:

Ran the integration tests and validated that Prometheus can still see the upgrade metrics.

$ cargo run --bin integ integration-test --cluster-name brupop27 --region us-west-2 --bottlerocket-version 1.19.0  --nodegroup-name brupop
    Finished dev [unoptimized + debuginfo] target(s) in 0.16s
     Running `target/debug/integ integration-test --cluster-name brupop27 --region us-west-2 --bottlerocket-version 1.19.0 --nodegroup-name brupop`
[2024-03-16T19:00:57Z INFO  aws_config::meta::region] load_region; provider=Some(Region("us-west-2"))
[2024-03-16T19:00:57Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:00:57Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(58.948µs))
[2024-03-16T19:00:58Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:00:58Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(94.601µs))
[2024-03-16T19:00:58Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:00:58Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(110.054µs))
[2024-03-16T19:00:59Z INFO  integ] decoding and writing kubeconfig ...
2024-03-16 19:01:00 [✔]  saved kubeconfig as "/tmp/brupop27-us-west-2/kubeconfig.yaml"
[2024-03-16T19:01:00Z INFO  integ] kubeconfig has been written and store at "/tmp/brupop27-us-west-2/kubeconfig.yaml"
[2024-03-16T19:01:00Z INFO  integ] Creating EC2 instances via nodegroup ...
[2024-03-16T19:01:00Z INFO  aws_config::meta::region] load_region; provider=Some(Region("us-west-2"))
[2024-03-16T19:01:00Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:01:00Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(42.823µs))
[2024-03-16T19:01:00Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:01:00Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(87.247µs))
[2024-03-16T19:01:00Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:01:00Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(85.953µs))
[2024-03-16T19:01:01Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:01:01Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(218.437µs))
[2024-03-16T19:04:08Z INFO  integ] EC2 instances/nodegroup have been created
[2024-03-16T19:04:08Z INFO  integ] creating pods(statefulset pods, stateless pods, and pods with PodDisruptionBudgets) ...

service/nginx created
statefulset.apps/web-test created
deployment.apps/nginx-test created
poddisruptionbudget.policy/pod-disruption-budget-test created
[2024-03-16T19:04:09Z INFO  integ] Running cert-manager on existing EKS cluster ...
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
[2024-03-16T19:05:42Z INFO  integ] Running brupop on existing EKS cluster ...
namespace/brupop-bottlerocket-aws created
customresourcedefinition.apiextensions.k8s.io/bottlerocketshadows.brupop.bottlerocket.aws created
serviceaccount/brupop-agent-service-account created
serviceaccount/brupop-apiserver-service-account created
serviceaccount/brupop-controller-service-account created
clusterrole.rbac.authorization.k8s.io/brupop-agent-role created
clusterrole.rbac.authorization.k8s.io/brupop-apiserver-role created
clusterrole.rbac.authorization.k8s.io/brupop-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/brupop-agent-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/brupop-apiserver-auth-delegator-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/brupop-apiserver-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/brupop-controller-role-binding created
service/brupop-apiserver created
service/brupop-controller-server created
daemonset.apps/brupop-agent created
deployment.apps/brupop-apiserver created
deployment.apps/brupop-controller-deployment created
certificate.cert-manager.io/brupop-apiserver-client-certificate created
certificate.cert-manager.io/brupop-apiserver-certificate created
certificate.cert-manager.io/brupop-selfsigned-ca created
issuer.cert-manager.io/brupop-root-certificate-issuer created
issuer.cert-manager.io/selfsigned-issuer created
priorityclass.scheduling.k8s.io/brupop-controller-high-priority created


$ cargo run --bin integ monitor --cluster-name brupop27 --region us-west-2

[2024-03-18T17:49:44Z INFO  integ] monitoring brupop
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: StagedAndPerformedUpdate
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: StagedAndPerformedUpdate
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: StagedAndPerformedUpdate
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: StagedAndPerformedUpdate
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: StagedAndPerformedUpdate
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.0"       current_state: StagedAndPerformedUpdate
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Not ready] keep monitoring!
brs: "brs-ip-192-168-136-95.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-147-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-159-14.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
[Complete]: All nodes have been successfully updated to latest version!


$ cargo run --bin integ monitor --cluster-name brupop27 --region us-west-2
    Finished dev [unoptimized + debuginfo] target(s) in 0.17s
     Running `target/debug/integ monitor --cluster-name brupop27 --region us-west-2`
[2024-03-16T19:41:09Z INFO  aws_config::meta::region] load_region; provider=Some(Region("us-west-2"))
[2024-03-16T19:41:10Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:41:10Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(52.064µs))
[2024-03-16T19:41:10Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:41:10Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(97.123µs))
[2024-03-16T19:41:11Z INFO  tracing::span] lazy_load_credentials;
[2024-03-16T19:41:11Z INFO  aws_credential_types::cache::lazy_caching] credentials cache miss occurred; added new AWS credentials (took Ok(117.511µs))
[2024-03-16T19:41:13Z INFO  integ] monitoring brupop
brs: "brs-ip-192-168-149-125.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-150-232.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle
brs: "brs-ip-192-168-152-25.us-west-2.compute.internal"      current_version: "1.19.2"       current_state: Idle

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

Update to k8s-openapi 0.21 and kube 0.88
The docstrings are not valid, so remove a / to resolve warnings.
@gthao313 self-requested a review March 14, 2024 20:14
controller/src/main.rs (review thread, outdated, resolved)
apiserver/Cargo.toml (review thread, resolved)
@@ -240,10 +234,10 @@ pub async fn run_server<T: 'static + BottlerocketShadowClient>(
.exclude(CRD_CONVERT_ENDPOINT),
)
.wrap(RequestTracing::new())
-        .wrap(request_metrics.clone())
+        .wrap(RequestMetrics::default())
Contributor:
What is the implication here if we use default metrics instead of metrics created from a named Meter? Do we still have a way to tell if the metric is emitted by "apiserver"?

yeazelm (Contributor, Author):

I looked around for this, and the docs no longer reference this way of naming global metrics. It might still exist somewhere in the library, but aside from the name, the code was a straight copy of the example docs: https://docs.rs/opentelemetry-prometheus/0.11.0/opentelemetry_prometheus/. I verified that the Prometheus functionality still seems to work, but I don't have any context on whether there was more to this namespacing than that.

yeazelm (Contributor, Author):

We might be able to set the global provider:

let provider = SdkMeterProvider::builder().build();
let meter = provider.meter("apiserver");
global::set_meter_provider(provider.clone());

But it's unclear whether that will actually take effect, since we are setting the meter provider while the meter namespace is on the meter, not the provider. I can take a bit more time to research this and see if I can find a solid answer. We might also consider using .with_resource(Resource::new([KeyValue::new("service.name", "my_app")])) as described in https://github.com/OutThereLabs/actix-web-opentelemetry/blob/main/examples/server.rs. I'll have to see if I can pull the logs to confirm which approach solves this for us.
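
For reference, a minimal sketch of what installing the global provider with a Prometheus reader could look like, assuming the opentelemetry 0.22 / opentelemetry_sdk 0.22 / opentelemetry-prometheus 0.15 APIs this PR moves to (the function name and wiring are illustrative, not the merged code):

```rust
use opentelemetry::global;
use opentelemetry::metrics::MeterProvider as _;
use opentelemetry_sdk::metrics::SdkMeterProvider;
use prometheus::Registry;

// Sketch: build a meter provider backed by a Prometheus reader and install it
// globally. Note that the "apiserver" name lives on the meter, not on the
// provider, which is exactly the open question above.
fn init_metrics() -> Result<Registry, Box<dyn std::error::Error>> {
    let registry = Registry::new();
    let exporter = opentelemetry_prometheus::exporter()
        .with_registry(registry.clone())
        .build()?;
    let provider = SdkMeterProvider::builder().with_reader(exporter).build();
    global::set_meter_provider(provider.clone());
    let _meter = provider.meter("apiserver");
    Ok(registry)
}
```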

Contributor:

Replied in a new comment. I think we can pass this provider to run_server and use the meter_provider to create a meter in the run_server function, like we did in the old commit.

yeazelm (Contributor, Author):

I found how to add apiserver back into the tracing, so we should be good. FWIW, the logs don't really have the namespacing I was worried about:

kubectl --kubeconfig /tmp/brupop27-us-west-2/kubeconfig.yaml logs brupop-apiserver-6bf69bf794-vnhwl -n brupop-bottlerocket-aws
....
  2024-03-16T19:13:26.354337Z  INFO models::node::drain: Pod brupop-apiserver-6bf69bf794-xqjgs deleted.
    at models/src/node/drain.rs:287
    in models::node::drain::wait_for_deletion
    in models::node::drain::drain_node with node_name: "ip-192-168-152-25.us-west-2.compute.internal"
    in models::node::client::drain_node with selector: BottlerocketShadowSelector { node_name: "ip-192-168-152-25.us-west-2.compute.internal", node_uid: "fe061dba-ee1d-4e21-aa03-5d20c8c33e16" }
    in apiserver::telemetry::HTTP request with http.method: POST, http.route: /bottlerocket-node-resource/cordon-and-drain, http.flavor: 2.0, http.scheme: https, http.host: brupop-apiserver.brupop-bottlerocket-aws.svc.cluster.local, http.client_ip: 192.168.137.65, http.user_agent: , http.target: /bottlerocket-node-resource/cordon-and-drain, otel.name: HTTP POST /bottlerocket-node-resource/cordon-and-drain, otel.kind: "server", request_id: 1879d381-bba7-48c7-ad49-6ff6cd286f1f, node_name: "ip-192-168-152-25.us-west-2.compute.internal"

  2024-03-16T19:13:26.363742Z  INFO models::node::drain: Pod nginx-test-67cb89c578-v7f49 deleted.
    at models/src/node/drain.rs:287
    in models::node::drain::wait_for_deletion
    in models::node::drain::drain_node with node_name: "ip-192-168-152-25.us-west-2.compute.internal"
    in models::node::client::drain_node with selector: BottlerocketShadowSelector { node_name: "ip-192-168-152-25.us-west-2.compute.internal", node_uid: "fe061dba-ee1d-4e21-aa03-5d20c8c33e16" }
    in apiserver::telemetry::HTTP request with http.method: POST, http.route: /bottlerocket-node-resource/cordon-and-drain, http.flavor: 2.0, http.scheme: https, http.host: brupop-apiserver.brupop-bottlerocket-aws.svc.cluster.local, http.client_ip: 192.168.137.65, http.user_agent: , http.target: /bottlerocket-node-resource/cordon-and-drain, otel.name: HTTP POST /bottlerocket-node-resource/cordon-and-drain, otel.kind: "server", request_id: 1879d381-bba7-48c7-ad49-6ff6cd286f1f, node_name: "ip-192-168-152-25.us-west-2.compute.internal"

  2024-03-16T19:13:26.364038Z  INFO models::node::drain: Pod brupop-controller-deployment-b5f58c996-twvq6 deleted.
    at models/src/node/drain.rs:287
    in models::node::drain::wait_for_deletion
    in models::node::drain::drain_node with node_name: "ip-192-168-152-25.us-west-2.compute.internal"
    in models::node::client::drain_node with selector: BottlerocketShadowSelector { node_name: "ip-192-168-152-25.us-west-2.compute.internal", node_uid: "fe061dba-ee1d-4e21-aa03-5d20c8c33e16" }
    in apiserver::telemetry::HTTP request with http.method: POST, http.route: /bottlerocket-node-resource/cordon-and-drain, http.flavor: 2.0, http.scheme: https, http.host: brupop-apiserver.brupop-bottlerocket-aws.svc.cluster.local, http.client_ip: 192.168.137.65, http.user_agent: , http.target: /bottlerocket-node-resource/cordon-and-drain, otel.name: HTTP POST /bottlerocket-node-resource/cordon-and-drain, otel.kind: "server", request_id: 1879d381-bba7-48c7-ad49-6ff6cd286f1f, node_name: "ip-192-168-152-25.us-west-2.compute.internal"

  2024-03-16T19:13:26.404277Z  INFO models::node::drain: Pod cert-manager-cainjector-8699cf859b-hnswk deleted.
    at models/src/node/drain.rs:287
    in models::node::drain::wait_for_deletion
    in models::node::drain::drain_node with node_name: "ip-192-168-152-25.us-west-2.compute.internal"
    in models::node::client::drain_node with selector: BottlerocketShadowSelector { node_name: "ip-192-168-152-25.us-west-2.compute.internal", node_uid: "fe061dba-ee1d-4e21-aa03-5d20c8c33e16" }
    in apiserver::telemetry::HTTP request with http.method: POST, http.route: /bottlerocket-node-resource/cordon-and-drain, http.flavor: 2.0, http.scheme: https, http.host: brupop-apiserver.brupop-bottlerocket-aws.svc.cluster.local, http.client_ip: 192.168.137.65, http.user_agent: , http.target: /bottlerocket-node-resource/cordon-and-drain, otel.name: HTTP POST /bottlerocket-node-resource/cordon-and-drain, otel.kind: "server", request_id: 1879d381-bba7-48c7-ad49-6ff6cd286f1f, node_name: "ip-192-168-152-25.us-west-2.compute.internal"

controller/src/main.rs (review thread, outdated, resolved)
@gthao313 (Member) left a comment:

This looks good to me!

May I see the test results? :) Thanks!

This moves to opentelemetry 0.22 with related changes in prometheus and
actix-web-opentelemetry. Some of the logic around how one should
configure opentelemetry changed and these changes bring it up to the
latest recommendations.
@@ -111,7 +111,7 @@ pub struct APIServerSettings<T: BottlerocketShadowClient> {
pub async fn run_server<T: 'static + BottlerocketShadowClient>(
settings: APIServerSettings<T>,
k8s_client: kube::Client,
-    prometheus_exporter: opentelemetry_prometheus::PrometheusExporter,
+    prometheus_registry: prometheus::Registry,
Contributor:

Regarding the named meter, have we considered passing the SdkMeterProvider in as a parameter here, and creating a meter in the function, like:

let apiserver_meter = provider.meter("apiserver");

// Set up metrics request builder
let request_metrics = RequestMetricsBuilder::new().build(apiserver_meter);
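
A self-contained sketch of that idea, where configure_metrics is a hypothetical stand-in for run_server and the middleware construction is elided since the builder API differs across actix-web-opentelemetry versions:

```rust
use opentelemetry::metrics::MeterProvider as _;
use opentelemetry_sdk::metrics::SdkMeterProvider;

// Hypothetical stand-in for run_server: accept the provider as a parameter
// and derive the named meter inside the server setup.
fn configure_metrics(meter_provider: &SdkMeterProvider) {
    // Naming the meter "apiserver" keeps the instrumentation scope
    // identifiable in the exported metrics.
    let apiserver_meter = meter_provider.meter("apiserver");
    // ...build the request-metrics middleware from `apiserver_meter` here...
    let _ = apiserver_meter;
}
```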

@ytsssun (Contributor) commented Mar 15, 2024:

An alternative would be to directly call global::meter_provider to get the GlobalMeterProvider for the application, and create the meter from that global provider. We called global::set_meter_provider here.

Referring to their official doc for global::meter_provider.
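
A short sketch of that alternative, assuming the global provider has already been installed with global::set_meter_provider during startup:

```rust
use opentelemetry::global;
use opentelemetry::metrics::{Meter, MeterProvider as _};

// Sketch: fetch a named meter from the globally registered provider.
fn apiserver_meter() -> Meter {
    // global::meter("apiserver") is the equivalent shorthand.
    global::meter_provider().meter("apiserver")
}
```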

@yeazelm (Contributor, Author) commented Mar 16, 2024

Updated the commits to include apiserver by attaching the resource to the provider with .with_resource(Resource::new([KeyValue::new("service.name", "apiserver")])), and added the tokio feature to opentelemetry_sdk. I also cleaned up a few more warnings that were bugging me and re-ran the integration tests.
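
For reference, a sketch of what attaching that resource could look like under the same version assumptions as above (not the verbatim merged code):

```rust
use opentelemetry::{global, KeyValue};
use opentelemetry_sdk::{metrics::SdkMeterProvider, Resource};

// Sketch: attach service.name = "apiserver" as a Resource so exported metrics
// are attributable to the apiserver, then install the provider globally.
fn init_provider() -> SdkMeterProvider {
    let provider = SdkMeterProvider::builder()
        // In the real server a Prometheus reader is also attached here.
        .with_resource(Resource::new([KeyValue::new("service.name", "apiserver")]))
        .build();
    global::set_meter_provider(provider.clone());
    provider
}
```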

@gthao313 self-requested a review March 18, 2024 18:01
@yeazelm merged commit 128b168 into bottlerocket-os:develop Mar 18, 2024
2 checks passed