Enhancement: Allow re-use of ranges per namespace #50
Comments
Currently, the whereabouts daemonset installs a kubeconfig that the plugin uses to talk to the API server. I think to support this you would need to use the pod's service account, and the pod would need appropriate access granted to it. Is that available to whereabouts (e.g. at CNI ADD/DEL time -- how do you access the pod filesystem)? Otherwise, you may end up introducing cross-namespace access issues.
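For context, a minimal sketch (based on my reading of the whereabouts README) of how the plugin reaches the API server today: the IPAM section of the attachment config points at a kubeconfig that the daemonset installs on every node, so all allocations are made with that single host-level identity rather than with the pod's service account. The attachment name, master interface, and range below are made up for illustration.

```yaml
# Illustrative NetworkAttachmentDefinition using whereabouts IPAM.
# The kubeconfig path refers to a credential installed on the node by the
# whereabouts daemonset -- one cluster-level identity, not the pod's own.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-whereabouts
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.0/24",
        "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/whereabouts.d/whereabouts.kubeconfig"
        }
      }
    }
```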
I recall that in an earlier implementation of the CRD client I used the pod's namespace as the namespace of the range, and granted the whereabouts CNI service account access to IPPools across all namespaces. I opted to limit the RBAC requirements instead. Having distinct namespace-scoped and cluster-scoped modes as brought up by #49 and #51 might help, but that doesn't touch the behavior of where we put the namespaced reservations.
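For reference, a sketch of the two RBAC shapes being weighed here, assuming the default install's `whereabouts` service account in `kube-system`: cluster-wide access looks roughly like the ClusterRole and ClusterRoleBinding below, while a namespace-scoped mode would instead use a Role and RoleBinding in each namespace that holds reservations.

```yaml
# Cluster-wide access to IPPools (what granting access "across all namespaces" implies).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: whereabouts-ippools
rules:
  - apiGroups: ["whereabouts.cni.cncf.io"]
    resources: ["ippools"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: whereabouts-ippools
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: whereabouts-ippools
subjects:
  - kind: ServiceAccount
    name: whereabouts        # assumed service account name from the daemonset install
    namespace: kube-system
```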
I also have another idea - named pools. Basically you could add a name for the pool to the whereabouts config, so reservations for that network land in a pool with that name. IMO it's an even better approach than having this separation only at the namespace level.
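To make the idea concrete, a hypothetical sketch: the `pool_name` key does not exist in whereabouts and is purely illustrative of what a named pool could look like. Two attachments (even on different L2 networks) could share the same range but keep their reservations in separately named pools.

```yaml
# Hypothetical "named pools": two attachments re-use the same range,
# but their reservations are tracked in separate, named pools.
# NOTE: "pool_name" is an illustrative key, not an existing whereabouts option.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-blue
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth0",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.0/24",
        "pool_name": "blue"
      }
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: net-green
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.0/24",
        "pool_name": "green"
      }
    }
```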
I think a feature that would allow using the same IP range on multiple (L2) networks is sensible to have. I like the approach of having a named pool for this.
Currently an `IPPool` will be created in the namespace specified in the Kubernetes config; for example, in the default install this means that the `IPPool` CRs are created in `kube-system`. What if we added an option to make a pool namespaced, so you could re-use a pooled range between two different namespaces?
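A rough sketch of what the proposal could look like, assuming the existing `IPPool` CRD layout (`spec.range`, `spec.allocations`): the same pooled range backs an `IPPool` in each tenant namespace, so allocations are tracked independently per namespace. The namespaces and names here are illustrative, not current behavior.

```yaml
# Illustrative only: today these CRs all land in a single namespace (e.g. kube-system);
# the proposal is to optionally create them in the tenant namespace instead,
# so the same range can be re-used without the reservations colliding.
apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: IPPool
metadata:
  name: 192.168.2.0-24
  namespace: tenant-a
spec:
  range: 192.168.2.0/24
  allocations: {}
---
apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: IPPool
metadata:
  name: 192.168.2.0-24
  namespace: tenant-b
spec:
  range: 192.168.2.0/24
  allocations: {}
```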
I had a recent conversation where this was brought up, I think in the context of multi-tenancy. I'll return to that conversation to see if I can get input on the use-case.