
Support watching a set of namespaces #767

Closed
hasbro17 opened this issue Nov 20, 2018 · 8 comments · Fixed by #1876
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@hasbro17
Contributor

Feature Request

Support restricting the operator to only watch a set of namespaces.

controller-runtime's manager currently only allows the cache to be restricted to a single namespace or to all namespaces. This forces the permissions model to be either a ClusterRole/ClusterRoleBinding to watch all namespaces, or a Role/RoleBinding to watch a single namespace.
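
For illustration, this is roughly what the restriction looks like today, assuming controller-runtime's manager.Options and its Namespace field (an empty value means all namespaces; there is no way to pass a set):

import (
    "sigs.k8s.io/controller-runtime/pkg/client/config"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func newManager() (manager.Manager, error) {
    cfg, err := config.GetConfig()
    if err != nil {
        return nil, err
    }
    // Namespace scopes the manager's cache to a single namespace;
    // leaving it empty watches the whole cluster. Nothing in between.
    return manager.New(cfg, manager.Options{Namespace: "my-namespace"})
}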

When watching resources across a set of namespaces, however, we can have a ClusterRole with multiple RoleBindings (one per watched namespace, each referring to the operator's service account).
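
As a sketch of that permissions model (the operator and namespace names here are hypothetical), the same ClusterRole gets bound in each watched namespace:

import (
    rbacv1 "k8s.io/api/rbac/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// roleBindingsFor binds one ClusterRole to the operator's service
// account in every namespace the operator should watch.
func roleBindingsFor(namespaces []string) []rbacv1.RoleBinding {
    var bindings []rbacv1.RoleBinding
    for _, ns := range namespaces {
        bindings = append(bindings, rbacv1.RoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "my-operator", Namespace: ns},
            Subjects: []rbacv1.Subject{{
                Kind:      rbacv1.ServiceAccountKind,
                Name:      "my-operator",
                Namespace: "my-operator-ns",
            }},
            RoleRef: rbacv1.RoleRef{
                APIGroup: rbacv1.GroupName,
                Kind:     "ClusterRole",
                Name:     "my-operator",
            },
        })
    }
    return bindings
}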

Proposed fix:
Long term: Follow up on kubernetes-sigs/controller-runtime#124 (comment) upstream and add a MultiListWatcher so that controller-runtime itself supports watching a set of namespaces.

Short term: While it might take some time to land the change upstream, we can have our own cache implementation in the SDK that supports a set of namespaces, and override the manager's GetCache() method so that all dependent objects use it.

e.g.:

import (
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

// OurManager embeds the controller-runtime Manager and overrides
// GetCache to return our multi-namespace cache instead of the default.
type OurManager struct {
    manager.Manager
    cache cache.Cache // our own implementation backed by a set of namespaces
}

func (m *OurManager) GetCache() cache.Cache {
    return m.cache
}
@hasbro17 hasbro17 added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 20, 2018
@ironcladlou

For the use cases where people want to work with multiple namespaces, I'm now wondering if this is more of a documentation issue. Based on a tip from the kubebuilder book, I was able to support multiple namespaces with minimal fuss and no new features or wrappers.

@shawn-hurley
Member

@ironcladlou I think we have discussed this before, but the solution you are using has some severe problems with the default client that we would expect folks to use. The way you are doing things is clever and works for your use case.

I think that this is an issue that we want to solve, either here or upstream.

@ironcladlou

I agree it's still not ideal. My solution does imply an understanding of how to deal with a mixture of scoped caching clients and live clients, of ensuring they share schemes, etc. That's probably not a reasonable expectation for most users.
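
For context, a rough sketch of that kind of setup, assuming controller-runtime's cache and client packages (the namespace name is hypothetical); both sides must be built from the same scheme:

import (
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// newClients pairs a namespace-scoped, cache-backed view with a live
// client; sharing scheme.Scheme keeps the two in agreement.
func newClients(cfg *rest.Config) (cache.Cache, client.Client, error) {
    scopedCache, err := cache.New(cfg, cache.Options{Scheme: scheme.Scheme, Namespace: "team-a"})
    if err != nil {
        return nil, nil, err
    }
    liveClient, err := client.New(cfg, client.Options{Scheme: scheme.Scheme})
    if err != nil {
        return nil, nil, err
    }
    return scopedCache, liveClient, nil
}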

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2019
@hasbro17
Contributor Author

/remove-lifecycle stale

This is coming up with the stable release of controller-runtime v0.2.0 (currently at v0.2.0-alpha.0).

ref: #1388

@openshift-ci-robot openshift-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2019
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 8, 2019
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 8, 2019
@navidsh

navidsh commented Sep 12, 2019

@estroz Is there any doc/guide about how to use this feature now that it is supported?
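
For reference, a rough sketch of how this might be used, assuming the fix is wired up through controller-runtime's cache.MultiNamespacedCacheBuilder (the namespace names are illustrative):

import (
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

func newMultiNamespaceManager() (manager.Manager, error) {
    cfg, err := config.GetConfig()
    if err != nil {
        return nil, err
    }
    // NewCache swaps in a cache that watches only the listed namespaces.
    return manager.New(cfg, manager.Options{
        NewCache: cache.MultiNamespacedCacheBuilder([]string{"ns-one", "ns-two"}),
    })
}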
