PWX-37601: Using informer cache's event handling instead of native k8s watchers for pod monitoring and extender metrics. #1795
Conversation
lgtm. One nit and one clarification. Please fix the build failures.
pkg/cache/cache.go
```diff
@@ -36,6 +38,9 @@ type SharedInformerCache interface {

	// ListTransformedPods lists the all the Pods from the cache after applying TransformFunc
	ListTransformedPods() (*corev1.PodList, error)

	// WatchPods watches registers the pod event handler with the informer cache
```
nit: "watches registers"? I think we need to remove "watches" or reword it.
modified
```go
log.Warnf("Shared informer cache has not been initialized, using watch for pod monitor.")
if err := core.Instance().WatchPods("", fn, metav1.ListOptions{}); err != nil {
	log.Errorf("failed to watch pods for health monitoring due to: %v", err)
	return err
}
```
Wondering in what scenario we would ever enter this branch? I don't think we ever will. It's still ok to keep it as is.
Agreed, it's mostly not required, but let's keep it as a failsafe.
lgtm
Codecov Report
Attention: Patch coverage is

```
@@            Coverage Diff             @@
##           master    #1795      +/-   ##
==========================================
- Coverage   38.15%   38.09%   -0.06%
==========================================
  Files          51       51
  Lines       11443    11470      +27
==========================================
+ Hits         4366     4370       +4
- Misses       6688     6707      +19
- Partials      389      393       +4
```
PWX-37601: Using informer cache's event handling instead of native k8s watchers for pod monitoring and extender metrics. (#1795)
* PWX-37601: Using informer cache's event handling instead of native k8s watchers for pod monitoring and extender metrics.
* Generated the mock files.
Signed-off-by: Diptiranjan
What type of PR is this?
What this PR does / why we need it:
Reduces load on the kube-apiserver by no longer registering watchers for pod updates; the informer cache's event handling is used instead.
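The core idea behind the change can be sketched with a minimal, self-contained Go example. This is NOT the stork implementation; `Pod`, `podHandler`, and `sharedInformerCache` are hypothetical stand-ins that mimic the shape of client-go's `ResourceEventHandlerFuncs` pattern, showing why fanning one event stream out to registered handlers is cheaper than each consumer opening its own watch against the kube-apiserver.

```go
package main

import "fmt"

// Pod is a minimal stand-in for corev1.Pod (hypothetical, for illustration only).
type Pod struct{ Name string }

// podHandler mirrors the shape of client-go's ResourceEventHandlerFuncs:
// a bundle of optional callbacks for add/update/delete events.
type podHandler struct {
	OnAdd    func(pod *Pod)
	OnUpdate func(oldPod, newPod *Pod)
	OnDelete func(pod *Pod)
}

// sharedInformerCache fans a single event stream out to every registered
// handler, so multiple consumers (e.g. a pod health monitor and extender
// metrics) share one underlying watch instead of opening one each.
type sharedInformerCache struct {
	handlers []podHandler
}

// WatchPods registers a pod event handler with the informer cache.
func (c *sharedInformerCache) WatchPods(h podHandler) {
	c.handlers = append(c.handlers, h)
}

// notifyAdd dispatches a single add event to all registered handlers.
func (c *sharedInformerCache) notifyAdd(p *Pod) {
	for _, h := range c.handlers {
		if h.OnAdd != nil {
			h.OnAdd(p)
		}
	}
}

func main() {
	cache := &sharedInformerCache{}
	// Two consumers register handlers against the same cache; neither
	// opens its own watch with the kube-apiserver.
	cache.WatchPods(podHandler{OnAdd: func(p *Pod) { fmt.Println("monitor saw", p.Name) }})
	cache.WatchPods(podHandler{OnAdd: func(p *Pod) { fmt.Println("metrics saw", p.Name) }})
	cache.notifyAdd(&Pod{Name: "px-pod-1"})
}
```

In the real change, the equivalent of `WatchPods` hangs off the `SharedInformerCache` interface and the event source is the client-go informer, but the fan-out design choice is the same: one cached stream, many handlers.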
Does this PR change a user-facing CRD or CLI?:
no
Is a release note needed?: