
namespace filter calculation is horrendously inefficient #2945

Closed
rade opened this issue Nov 18, 2017 · 0 comments · Fixed by #2985
Assignees: rbruggem
Labels: chore (Related to fix/refinement/improvement of end user or new/existing developer functionality), k8s (Pertains to integration with Kubernetes), performance (Excessive resource usage and latency; usually a bug or chore)

Comments

rade commented Nov 18, 2017

The namespace filter calculation in updateKubeFilters trawls through (nearly) all k8s topologies. Instead we should just get the probes to tell us what namespaces exist. That does mean we'd be including empty namespaces in filters in the UI, but that surely is a price worth paying. And arguably less surprising for users.

rade added the chore, k8s and performance labels Nov 18, 2017
rade added a commit that referenced this issue Nov 18, 2017
to include recently added k8s types.

This is all rather inefficient. See #2945.
rbruggem self-assigned this Dec 12, 2017