Add IPAddressUsage resource type #602
base: main
Conversation
Let us hold
const (
	GroupName = "nsx.vmware.com"
	Version   = "v1alpha3"
Why should it be v1alpha3?
For v1alpha1 and v1alpha2, there are already Local-type APIServices handling the CRD resources.
Since we use the same API Group, if we reused Version v1alpha1 or v1alpha2, we could not make the new APIService work: the APIService added for the aggregated API server is a non-Local-type service, and a single group/version cannot be both types. That is why we use v1alpha3 for the new APIService and resource.
kubectl get apiservices -A
v1alpha1.nsx.vmware.com Local True 6d9h
v1alpha2.nsx.vmware.com Local True 6d9h
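The constraint can be sketched in Go. APIService objects are named `<version>.<group>`, so one group/version pair maps to exactly one APIService, which is either Local (no service reference) or backed by an aggregated server. The struct below is a simplified, illustrative stand-in for the real apiregistration.k8s.io/v1 types, and the service namespace/name are made-up placeholders:

```go
package main

import "fmt"

// ServiceReference points at the Service fronting an aggregated API
// server. A nil reference marks the APIService as Local, i.e. served
// by kube-apiserver itself (the case for CRD-backed groups).
type ServiceReference struct {
	Namespace, Name string
}

// APIService is a simplified stand-in for apiregistration.k8s.io/v1.
type APIService struct {
	Group, Version string
	Service        *ServiceReference // nil => Local
}

// Name returns the APIService object name, <version>.<group>. Because
// this name is unique per cluster, v1alpha1.nsx.vmware.com cannot be
// both a Local (CRD) entry and an aggregated one, hence v1alpha3.
func (a APIService) Name() string {
	return a.Version + "." + a.Group
}

func main() {
	local := APIService{Group: "nsx.vmware.com", Version: "v1alpha1"}
	agg := APIService{
		Group:   "nsx.vmware.com",
		Version: "v1alpha3",
		Service: &ServiceReference{Namespace: "nsx-system", Name: "nsx-agg-apiserver"}, // hypothetical names
	}
	fmt.Println(local.Name(), local.Service == nil) // v1alpha1.nsx.vmware.com true
	fmt.Println(agg.Name(), agg.Service == nil)     // v1alpha3.nsx.vmware.com false
}
```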
I do not think this version trick is the right approach, let us use a different APIGroup. I am thinking about the following:
- Use an APIGroup like vpc.nsx.vmware.com or a name that can indicate it is aggregated API not CRD.
- If we believe we will finally remove or rename CRD APIGroup in future, e.g. to crd.nsx.vmware.com, then we can go your way of using nsx.vmware.com for API aggregation and use a new version.
Agreed, a name indicating it is aggregated API not CRD might be better.
So, how about v1alpha1.apiservice.nsx.vmware.com?
"apiservice.nsx.vmware.com" can be one way.
Or we could change the CRD group to "crd.nsx.vmware.com" in the future, and use "nsx.vmware.com" or "vpc.nsx.vmware.com" for APIService resources.
Let us see if @tnqn has an opinion.
If my understanding is correct, I think "IPBlockUsage" may be a bit confusing as it sounds the usage of an IPBlock and user may expect the resource name to be IPBlock, but it's actually about the IP usages of a single VPC on multiple IPBlocks, the resource name would be VPC name, right? If yes, I feel the API name should be VPCIPAddressUsage, or IPAddressUsage since it's already in VPC API group.
Thanks, Quan. This API is designed to show IP address usage inside one namespace. Currently, there is only one VPC created per namespace, so vpc.nsx.vmware.com would make more sense.
Also, the name IPAddressUsage is more appropriate to align with the NSX API side. Thanks for the advice.
If we do not feel adding "extensions" is ideal, and there is no good way to reuse "nsx.vmware.com" (assuming we will remove/change the CRD API group in the future), I prefer to use "vpc.nsx.vmware.com". Even if we will have TGW, IPBlock, etc., in a sense they are all VPC APIs.
Sure, let's use vpc.nsx.vmware.com for now.
vpc.nsx.vmware.com makes sense to me.
)

const (
	GroupName = "nsx.vmware.com"
Is there no issue sharing the same API Group with CRDs?
Yes, we can still use the same API Group as our CRDs, since we will use a different version. If we used both the same Group and the same Version as a CRD, it would hit the issue.
// IPBlocks contains the used IPAddresses and CIDR.
type IPBlocks struct {
	Path      string `json:"path,omitempty" protobuf:"bytes,1,rep,name=path"`
	Available int    `json:"available,omitempty" protobuf:"bytes,2,rep,name=available"`
There is no "available count" per Block? It does not make sense to have a single available count given each Block has different visibilities. Please monitor NSX API changes.
Yes, that is true. From the NSX IPBlock API, there should only be Path and CIDR fields.
Here, the IPBlocks definition is just an internal data type specific to IPBlockUsage, aligned with the
NSX VPC IPBlockUsage API:
GET /policy/api/v1/orgs/default/projects/project-1/vpcs/vpc-1/ip-address-usage
example_response: |
{
  "ip_blocks": [
    {
      "path": "/infra/ip-blocks/934e516a-733d-47c6-bd82-738b4fa69a9c",
      "available": 1024,
      "allocated_by_vpc": {
        "count": 512,
        "ip-addresses": [
          {
            "address": "10.0.0.1-10.0.0.4",
            "subnet": "/orgs/default/projects/<project-id>/vpcs/vpc_1/subnets/subnet_1"
          }
        ],
        "cidr": "10.0.0.0/16",
        "visibility": "EXTERNAL"
      }
    }
  ]
}
This API retrieves the usage information for IP addresses within a specific VPC. The response tells the user each IPBlock's info inside one VPC, including the available count, which subnet the IP addresses were allocated by, the CIDR, and the visibility.
Ok, I think the confusion is that your struct is named "IPBlocks", but it actually holds a single IPBlock's info. Please use a better name. What is the type name in the NSX API?
I see. It should be IPBlock for a single IPBlock.
In the NSX API, ip_blocks is a list of single ip_block entries. I will rename my code to align with this.
// IPBlocks contains the used IPAddresses and CIDR.
type IPBlocks struct {
	Path string `json:"path,omitempty" protobuf:"bytes,1,rep,name=path"`
Question: is the IPBlock name unique? Eventually, we may expose IPBlock as a K8s resource from the CCI apiserver, so using the K8s IPBlock resource name makes more sense.
Yes, an IPBlock is unique. Currently we only expose ipblockusages as a K8s resource. Do we also want to expose IPBlock as a K8s resource instead of IPBlockUsage?
kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
ipblockusages nsx.vmware.com/v1alpha3 true IPBlockUsage
I do not mean your patch, but eventually we need to expose IPBlocks in the CCI apiserver too, so a Tenant Admin can, for example, view External IPBlocks shared to the Project and create Project IPBlocks.
In such cases, do all IPBlocks, incl. shared External IPBlocks, Project IPBlocks, and Private IPBlocks, have a unique name in the NSX API inside a Project? What would you use for IPBlock resource names in the CCI apiserver? We should use that name here, not the NSX path, which is useless for CCI/Supervisor users.
I see. In the NSX API, we can have the same display name for External IPBlocks and Project IPBlocks, but internally the IPBlock id is different.
GET api/v1/infra/ip-blocks/ip-1 (shared to project Dev_project)
{
  "cidr": "172.16.10.0/24",
  "visibility": "EXTERNAL",
  "available_allocation_size": "256",
  "ip_address_type": "IPV4",
  "id": "ip-1",
  "display_name": "ip-1"
}
GET api/v1/orgs/default/projects/Dev_project/infra/ip-blocks/ip-1
{
  "cidr": "10.22.12.0/23",
  "visibility": "PRIVATE",
  "available_allocation_size": "512",
  "id": "ip-1",
  "display_name": "ip-1"
}
GET api/v1/orgs/default/projects/Dev_project/infra/ip-blocks/ip-2
{
  "cidr": "10.22.12.0/23",
  "visibility": "PRIVATE",
  "available_allocation_size": "512",
  "id": "ip-2",
  "display_name": "ip-1"
}
So, for each IPBlock, the path or id should be unique in the NSX API. I think for CCI we need to think of a way to present all these different types of IPBlock in a unique way.
For this patch specifically, we only want to present private IPBlock usage inside a VPC, so the id should be OK.
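The naming concern can be illustrated with a quick sketch: keying blocks by display_name loses entries, while keying by the NSX path keeps all of them. The sample values are taken from the three GET responses above:

```go
package main

import "fmt"

// block holds the two candidate identifiers from the GET responses
// above: the NSX path and the (possibly duplicated) display_name.
type block struct{ path, displayName string }

// uniqueCount counts distinct blocks under a chosen key function.
func uniqueCount(blocks []block, key func(block) string) int {
	seen := map[string]bool{}
	for _, b := range blocks {
		seen[key(b)] = true
	}
	return len(seen)
}

func main() {
	// All three blocks share display_name "ip-1"; each has a distinct path.
	blocks := []block{
		{"/infra/ip-blocks/ip-1", "ip-1"},
		{"/orgs/default/projects/Dev_project/infra/ip-blocks/ip-1", "ip-1"},
		{"/orgs/default/projects/Dev_project/infra/ip-blocks/ip-2", "ip-1"},
	}
	fmt.Println(uniqueCount(blocks, func(b block) string { return b.displayName })) // 1
	fmt.Println(uniqueCount(blocks, func(b block) string { return b.path }))        // 3
}
```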
// IPBlockUsage
// +k8s:openapi-gen=true
// +genclient:Namespaced
If we expose all Tenant scope resources from CCI apiserver, then they should be cluster scoped. Probably let us hold this change until the plan is clear.
Sure, that makes sense if we expose Tenant-scope resources from the CCI apiserver.
The current design is to expose IPBlockUsage resources from the SV aggregated apiserver; in this case,
we have to make this resource namespace scoped in order to enable CCI namespace users to access it.
I would consider adding the API to CCI first, not as a Namespace-scoped resource in Supervisor.
Agreed. If we can expose all IPBlocks (External, Project, and Private) info from the CCI side, there is no need to duplicate the effort to expose this info in SV any longer.
Sure.
Force-pushed from 976ba5f to 6923f24
Adding a new IPAddressUsage resource type, which is used by an aggregated API server to get the IPAddressUsage from the VPC that is associated with the namespace.
Force-pushed from 6923f24 to 31bbfd2
Adding a new IPAddressUsage resource type, which is used
by an aggregated API server to get the IPAddressUsage from the VPC
that is associated with the namespace.
The namespace-scoped IPAddressUsage resource is put in group vpc.nsx.vmware.com, version v1alpha1, for the aggregated API service.
The final exposed API from the aggregated API service is:
/apis/vpc.nsx.vmware.com/v1alpha1/namespaces/{namespace}/ipaddressusages
The initial test: create an IPAddressUsage resource in the ns-1 namespace,
then query the API with: kubectl get ipaddressusages -n ns-1 -v6 -o json
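For reference, the aggregated-API path that the kubectl test above hits can be assembled like this. A simple sketch; ns-1 is just the example namespace from the test:

```go
package main

import "fmt"

// usagePath builds the aggregated-API URL path for IPAddressUsage
// resources in a given namespace, matching the exposed route
// /apis/vpc.nsx.vmware.com/v1alpha1/namespaces/{namespace}/ipaddressusages.
func usagePath(namespace string) string {
	return fmt.Sprintf("/apis/vpc.nsx.vmware.com/v1alpha1/namespaces/%s/ipaddressusages", namespace)
}

func main() {
	fmt.Println(usagePath("ns-1")) // /apis/vpc.nsx.vmware.com/v1alpha1/namespaces/ns-1/ipaddressusages
}
```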