
Enhancement: Allow re-use of ranges per namespace #50

Closed
dougbtv opened this issue Jul 21, 2020 · 5 comments · Fixed by #256

Comments

@dougbtv
Member

dougbtv commented Jul 21, 2020

Currently an IPPool is created in the namespace specified in the Kubernetes config, so in the default install the IPPool CRs end up in kube-system.

What if we added an option to make a pool namespaced, so you could re-use a pooled range between two different namespaces?

This came up in a recent conversation, I think in the context of multi-tenancy. I'll go back to that conversation to see if I can get more input on the use case.

@crandles
Collaborator

crandles commented Jul 23, 2020

A namespace option could be added to the kubernetes section of the CNI config used to configure whereabouts.

https://github.com/dougbtv/whereabouts/blob/158a70d5bca503785ce40cd99748782f3f4c79fa/pkg/types/types.go#L57-L60
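
For illustration only, a minimal sketch of what that option could look like, assuming the kubernetes section is represented by a struct like the one linked above. The existing fields roughly mirror the linked code; the Namespace field and its JSON tag are hypothetical, not part of whereabouts today:

```go
// Sketch only: KubeConfigPath and K8sAPIRoot roughly mirror the linked
// struct; the Namespace field is a hypothetical addition.
type KubernetesConfig struct {
	KubeConfigPath string `json:"kubeconfig,omitempty"`
	K8sAPIRoot     string `json:"k8s_api_root,omitempty"`
	// Namespace would tell whereabouts where to create and read IPPool CRs,
	// instead of always using the namespace implied by the kubeconfig.
	Namespace string `json:"namespace,omitempty"`
}
```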

Currently, the whereabouts daemonset installs a kubeconfig that the whereabouts CNI plugin later uses at CNI ADD/DEL time. To support that, we grant it access via the RBAC definitions in the default install: https://github.com/dougbtv/whereabouts/blob/158a70d5bca503785ce40cd99748782f3f4c79fa/doc/daemonset-install.yaml#L7-L18
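
As a rough sketch of that flow (not the actual whereabouts code): at ADD/DEL time the plugin can build a client from the installed kubeconfig and read IPPool objects in a given namespace. The group/version/resource used below is an assumption about the whereabouts CRD rather than something taken from the linked files:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// listIPPools builds a client from the kubeconfig the daemonset installed
// and lists IPPool CRs in one namespace. Sketch only: the GVR is an
// assumption about the whereabouts CRD.
func listIPPools(kubeconfigPath, namespace string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return err
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	gvr := schema.GroupVersionResource{
		Group:    "whereabouts.cni.cncf.io",
		Version:  "v1alpha1",
		Resource: "ippools",
	}
	pools, err := dyn.Resource(gvr).Namespace(namespace).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pools.Items {
		fmt.Println(p.GetName())
	}
	return nil
}
```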

I think that to support this you would need to use the pod's service account, and the pod would need appropriate access granted to it. Is that available to whereabouts (e.g. at CNI ADD/DEL time -- how do you access the pod's filesystem)? Otherwise, you may end up opening the door to cross-namespace access issues.
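
Part of the pod's identity is already available at ADD/DEL time: the runtime passes K8S_POD_NAMESPACE (and K8S_POD_NAME) to the plugin via CNI_ARGS, so the pod's namespace can be recovered without touching the pod filesystem; the service-account token itself is not passed this way. A minimal sketch using the standard CNI helpers (the K8sArgs struct is a common pattern, not the whereabouts code itself):

```go
package main

import (
	cnitypes "github.com/containernetworking/cni/pkg/types"
)

// K8sArgs mirrors the well-known CNI_ARGS keys kubelet passes to CNI
// plugins; the field names must match the arg keys exactly.
type K8sArgs struct {
	cnitypes.CommonArgs
	K8S_POD_NAME      cnitypes.UnmarshallableString
	K8S_POD_NAMESPACE cnitypes.UnmarshallableString
}

// podNamespaceFromArgs extracts the pod's namespace from the CNI args string
// (e.g. "IgnoreUnknown=1;K8S_POD_NAMESPACE=foo;K8S_POD_NAME=bar").
func podNamespaceFromArgs(args string) (string, error) {
	k8sArgs := &K8sArgs{}
	if err := cnitypes.LoadArgs(args, k8sArgs); err != nil {
		return "", err
	}
	return string(k8sArgs.K8S_POD_NAMESPACE), nil
}
```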

@crandles
Collaborator

crandles commented Jul 23, 2020

I recall that in an earlier implementation of the CRD client I used the pod's namespace as the namespace of the range, and granted the whereabouts CNI service account access to IPPools across all namespaces. In the end I opted to limit the RBAC requirements.

Having distinct namespace-scoped and cluster-scoped modes, as brought up in #49 and #51, might help, but that doesn't change where we put the namespaced reservations.

@jellonek

jellonek commented Jul 5, 2021

I also have another idea: named pools. Basically, you add a networkName field to each stored allocation entry. This would allow the same subnet to be used on separate physical networks, e.g. one set of applications could use a particular subnet on one VLAN or VXLAN, while another set of applications reuses the same subnet on a different medium (e.g. a different plugin type, or the same type with other hardware configuration, such as another VLAN or VXLAN).
Since the L2 networks are not connected, it should be possible to assign the same IP on different networks.

IMO this is an even better approach than having the separation only at the namespace level.
WDYT?
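
To make the named-pools idea concrete, here is a toy sketch (names and layout are illustrative only, not the eventual whereabouts design): reservations are keyed by network name in addition to the IP, so the same address can be allocated once per disconnected L2 network.

```go
package main

import "fmt"

// reservationKey identifies an allocation by network name plus IP, so the
// same address can be handed out independently on disconnected L2 networks.
type reservationKey struct {
	NetworkName string // e.g. the named pool / attachment the pod joins
	IP          string
}

// namedPools is a toy in-memory allocator illustrating the idea only; a
// real implementation would persist reservations in the IPPool CRs.
type namedPools struct {
	reservations map[reservationKey]string // key -> pod identifier
}

func (p *namedPools) reserve(network, ip, pod string) error {
	key := reservationKey{NetworkName: network, IP: ip}
	if owner, taken := p.reservations[key]; taken {
		return fmt.Errorf("%s already reserved on %q by %s", ip, network, owner)
	}
	p.reservations[key] = pod
	return nil
}

func main() {
	pools := &namedPools{reservations: map[reservationKey]string{}}
	// The same IP can be reserved on two different named networks...
	_ = pools.reserve("vlan-100", "10.0.0.10", "app-a")
	_ = pools.reserve("vlan-200", "10.0.0.10", "app-b")
	// ...but not twice on the same network.
	if err := pools.reserve("vlan-100", "10.0.0.10", "app-c"); err != nil {
		fmt.Println(err)
	}
}
```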

@toelke
Contributor

toelke commented Aug 3, 2022

I think a feature that allows using the same IP range on multiple (L2) networks is sensible to have. I like the approach of having a network_name next to the range, and I think it allows more flexibility than namespace-scoped IP ranges.

@maiqueb
Collaborator

maiqueb commented Aug 4, 2022

@toelke thanks for pointing me towards this issue, and also for your PR.

I would welcome a PR with a design proposal for this feature.
You can refer to #237 for a good reference on what we're looking for.

Right now, it seems to me everyone wants to achieve a different set of things.
