Describe what should be investigated or refactored
We are watching for changes to the Kubernetes Service and NeuVector Jobs in order to update network policy. Each change triggers a cascade of events, which likely thrashes the kube-apiserver. If we put the events into a queue, they will be processed one at a time, in the order they arrived, cutting down the load on the API server.
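A minimal sketch of the queueing idea, in plain TypeScript. This is not the Pepr API and not the actual implementation; `EventQueue`, `WatchEvent`, and the handler wiring are hypothetical names used only to illustrate processing watch events one at a time in arrival order:

```typescript
// Hypothetical sketch: a FIFO queue that serializes watch-event handling
// so only one network-policy update runs against the API server at a time.

type WatchEvent<T> = { type: "ADDED" | "MODIFIED" | "DELETED"; object: T };

class EventQueue<T> {
  private queue: WatchEvent<T>[] = [];
  private draining = false;

  constructor(private handler: (evt: WatchEvent<T>) => Promise<void>) {}

  // Enqueue an event and kick off draining if it is not already running.
  enqueue(evt: WatchEvent<T>) {
    this.queue.push(evt);
    void this.drain();
  }

  // Process events strictly one at a time, in the order they came in.
  private async drain() {
    if (this.draining) return;
    this.draining = true;
    while (this.queue.length > 0) {
      const evt = this.queue.shift()!;
      try {
        await this.handler(evt);
      } catch (err) {
        // Log per-event failures so one bad event does not stall the queue.
        console.error("event handling failed", err);
      }
    }
    this.draining = false;
  }
}

// Usage sketch: put the queue in front of the existing watch callback,
// e.g. svcQueue.enqueue(evt) instead of reconciling inline.
const svcQueue = new EventQueue(async evt => {
  // call the existing network-policy reconciliation here
});
```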
Visual Proof
Look at the wild spikes in CPU on the watcher. With ordered processing, CPU usage is throttled and the watcher no longer hits that strange frozen state.
From 24m to 8m
From 32m to 15m
I've seen it drop as low as 3m of CPU
Links to any relevant code
uds-core/src/pepr/operator/index.ts
Line 43 in da3eb5a
uds-core/src/pepr/istio/index.ts
Line 22 in da3eb5a
Additional context
Add any other context or screenshots about the technical debt here.