
Node Slice Fast IPAM #458

Merged: 7 commits into k8snetworkplumbingwg:master on Jul 23, 2024

Conversation

@ivelichkovich (Contributor) commented Apr 17, 2024

What this PR does / why we need it:

Improves performance with node slice mode.

https://docs.google.com/document/d/1YlWfg3Omrk3bf6Ujj-s5wXlP6nYo4PZseA0bS6qmvkk/edit#heading=h.ehhncqtntm3t

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer (optional):

This is a very very rough draft to help guide the design and discussion

Review threads on: cmd/whereabouts.go, pkg/types/types.go, pkg/node-controller/controller.go, pkg/storage/kubernetes/ipam.go
@ivelichkovich mentioned this pull request Apr 17, 2024
@jingczhang commented Apr 18, 2024

The NodeSlice CR is for the user to define and change, so it's hard to match the different runtime needs of each node (not until the user is an AI, I guess).
If we view the current allocation as using a blockSize of 1, we could expose this blockSize for the user to define (e.g. up to 8 or 16), which would greatly reduce lease collisions, and the node slice size would then be based on need.

@ivelichkovich (Contributor, Author) replied:

I'm not sure I fully understand. In the current version, the user can define whatever slice size they need.

@jingczhang commented Apr 22, 2024

Hi @ivelichkovich, sorry for not making my point clear. I meant to suggest not limiting a whereabouts node agent to only one network slice. Here are more details for your review: (1) Limiting a node agent to one network slice effectively removes the need for the "lease lock", since locking will always succeed. (2) We can use the existing "lease lock" workflow for the node agent to acquire access to another network slice (one that is not full) when its primary slice is full. (3) When a new node is added, it can take a free network slice (one not yet assigned to any node).

@ivelichkovich (Contributor, Author) replied:

We discussed this in the maintainers meeting. The lease is still needed because you can run multiple network-attachment-definitions on the same node, and each node can allocate multiple IPs at the same time, with each allocation launching a new whereabouts process.
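
For readers following along, here is an illustrative client-go sketch of the general Lease-based locking pattern being discussed. It is an assumption-laden example, not whereabouts' actual code: concurrent whereabouts invocations serialize on a coordination.k8s.io Lease before reading or modifying shared allocation state, and the lock name used here is a placeholder.

```go
package lockexample

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// withIPAMLock serializes callers on a single Lease object so that multiple
// whereabouts invocations (different NADs, or several pods scheduled to the
// same node at once) do not modify shared allocation state concurrently.
// The Lease name "whereabouts-ipam-lock" is a placeholder for illustration.
func withIPAMLock(ctx context.Context, cs kubernetes.Interface, namespace string, allocate func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "whereabouts-ipam-lock", Namespace: namespace},
		Client:    cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("HOSTNAME"), // one identity per whereabouts process
		},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// lock held: safe to read/update the pool; in a real flow the
			// callback would cancel ctx when finished so the lock is released
			OnStartedLeading: allocate,
			OnStoppedLeading: func() { log.Println("released IPAM lock") },
		},
	})
}
```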

@ivelichkovich (Contributor, Author):

note to self: clean imports

@ivelichkovich changed the title from "(very wip) WIP controller prototype" to "Node Slice Fast IPAM" on May 13, 2024
@ivelichkovich marked this pull request as ready for review on May 13, 2024 at 21:07
By("deleting replicaset with whereabouts net-attach-def")
Expect(clientInfo.DeleteReplicaSet(replicaSet)).To(Succeed())
})

A reviewer asked:

Can these tests be run multi-threaded in parallel, to verify simultaneous requests from several clients?

@ivelichkovich (Contributor, Author) replied:

We could do something like that, but what exactly are we trying to test with it? Whereabouts runs on pod creation, so these tests do launch many concurrent pods.

The reviewer replied:

If I understand correctly, the pod creation here is serial, so the IP requests toward whereabouts may not be fully concurrent and therefore don't test simultaneous requests (at least if the test blocks on pod creation).

@ivelichkovich (Contributor, Author):

So it sets the replicaset to testConfig.MaxReplicas(allPods.Items), which results in many pods launching in parallel. Depending on how many nodes are in the test cluster, this should also lead to multiple pods per node, so this path gets exercised with multiple pods per lease/node pool.
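
As an aside, here is a self-contained client-go sketch of the kind of scale-up the test relies on. The namespace, replicaset name, and replica count below are placeholders rather than values from the e2e suite: raising the replica count makes many pods schedule at once, so their whereabouts allocations run concurrently per node.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// scaleReplicaSet patches an existing replicaset so that many pods are created
// at once, which makes their whereabouts IP allocations run concurrently on
// (and across) nodes.
func scaleReplicaSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas))
	_, err := cs.AppsV1().ReplicaSets(ns).Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	// e.g. scale to 20 replicas; each pod triggers a separate whereabouts invocation
	if err := scaleReplicaSet(context.TODO(), cs, "default", "whereabouts-test-rs", 20); err != nil {
		panic(err)
	}
}
```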

@coveralls commented May 23, 2024

Pull Request Test Coverage Report for Build 9844370844

Details

  • 315 of 590 (53.39%) changed or added relevant lines in 4 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage decreased (-17.3%) to 54.615%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| pkg/config/config.go | 0 | 1 | 0.0% |
| pkg/iphelpers/iphelpers.go | 37 | 45 | 82.22% |
| pkg/storage/kubernetes/ipam.go | 23 | 124 | 18.55% |
| pkg/node-controller/controller.go | 255 | 420 | 60.71% |

Totals Coverage Status
  • Change from base Build 9746321392: -17.3%
  • Covered Lines: 1438
  • Relevant Lines: 2633

💛 - Coveralls

@ivelichkovich (Contributor, Author):

Might be worth marking this feature as experimental in the docs until we've built out more of the phases from the proposal and it has had more bake time and testing.

@dougbtv (Member) left a comment:

Igor -- this is awesome. I just ran through some functional testing, and it's looking great for the cases I tried, which didn't try to push the boundaries of the limitations we noted.

I'm all for moving forward with a merge. Since we've decided on a phased approach, any tailoring that's necessary can follow on later, and because of the approach you took (and thank you for it), I don't think there's a strong risk to other functionality.

Agreed on marking it experimental in the docs... here's a quick attempt at an addition to the README. Feel free to incorporate it, or we can follow on with something:

## Fast IPAM by Using Preallocated Node Slices [Experimental]

**Enhance IPAM performance in large-scale Kubernetes environments by reducing IP allocation contention through node-based IP slicing.**

### Fast IPAM Configuration

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-fast-ipam
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "whereaboutsexample",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "range": "192.168.2.0/24",
        "fast_ipam": true,
        "node_slice size": "/22"
      }
    }'

This setup enables the fast IPAM feature to optimize IP allocation for nodes, improving network performance in clusters with high pod density.

@mlguerrero12 (Collaborator) commented:

Hi @ivelichkovich, I'm about to start reviewing this PR but I wanted to understand the design first. From the proposal, it is not clear to me how the range is divided.

Could you please elaborate on how a range set in the IPAM config is divided between the nodes, assuming the node_slice_size represents a division larger than the current number of nodes? What would be the range in the NodeSlicePool for each node? e.g. range 192.168.1.0/24, node_slice_size /26, 2 nodes.

What happens if the number of nodes increases, e.g. to 6?

What does the new controller do when a node is unreachable?

Thanks.

@ivelichkovich (Contributor, Author) replied:

Hey, so this requires running a controller in the cluster; that controller is responsible for creating and managing the NodeSlicePools (the resource representing node allocations). When nodes are added, it assigns each node to an open "slice". If there are too many nodes, it just skips them, though it could fire an event or something like that. If a node is not reachable, I don't think its slice will be removed, but when the node itself is actually deleted from the cluster, its slice opens up again.
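
To make the arithmetic concrete, here is an illustrative Go sketch (not the controller's actual code in pkg/node-controller or pkg/iphelpers) of how a parent range divides into fixed-size slices. With range 192.168.1.0/24 and node_slice_size /26 you get four slices, so with 2 nodes two slices stay unassigned until more nodes join.

```go
package main

import (
	"fmt"
	"net"
)

// splitIntoSlices carves parentCIDR into consecutive subnets of sliceBits
// (e.g. a /24 split with sliceBits=26 yields four /26 slices). IPv4 only.
func splitIntoSlices(parentCIDR string, sliceBits int) ([]*net.IPNet, error) {
	_, parent, err := net.ParseCIDR(parentCIDR)
	if err != nil {
		return nil, err
	}
	parentBits, addrLen := parent.Mask.Size()
	if sliceBits < parentBits || sliceBits > addrLen {
		return nil, fmt.Errorf("invalid slice size /%d for %s", sliceBits, parentCIDR)
	}
	base := parent.IP.Mask(parent.Mask).To4()
	if base == nil {
		return nil, fmt.Errorf("IPv4 only in this sketch")
	}
	count := 1 << (sliceBits - parentBits)       // number of slices in the range
	step := 1 << (addrLen - sliceBits)           // addresses per slice
	slices := make([]*net.IPNet, 0, count)
	for i := 0; i < count; i++ {
		// add the slice offset to the base address
		v := uint32(base[0])<<24 | uint32(base[1])<<16 | uint32(base[2])<<8 | uint32(base[3])
		v += uint32(i * step)
		ip := net.IP{byte(v >> 24), byte(v >> 16), byte(v >> 8), byte(v)}
		slices = append(slices, &net.IPNet{IP: ip, Mask: net.CIDRMask(sliceBits, addrLen)})
	}
	return slices, nil
}

func main() {
	slices, _ := splitIntoSlices("192.168.1.0/24", 26)
	for i, s := range slices {
		fmt.Printf("slice %d: %s\n", i, s) // 192.168.1.0/26, 192.168.1.64/26, ...
	}
}
```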

@@ -81,6 +83,8 @@ func (ic *IPAMConfig) UnmarshalJSON(data []byte) error {
Datastore string `json:"datastore"`
Addresses []Address `json:"addresses,omitempty"`
IPRanges []RangeConfiguration `json:"ipRanges"`
NodeSliceSize string `json:"node_slice_size"`
Namespace string `json:"namespace"` //TODO: best way to get namespace of the NAD?
@ivelichkovich (Contributor, Author) commented on this diff:

This is the biggest remaining issue in the PR, I think. It's needed to know the namespace of the NAD, which is the same namespace as the NodeSlicePools, so it's used to look up the NodeSlicePools.

Not sure if there's some easy way to discover this value. We could also make NodeSlicePools cluster scoped and not need to worry about it, but that wouldn't be consistent with the rest of the CRDs.

Follow-up from @ivelichkovich:

It's also an API change, so this one is probably worth figuring out even before merging as an experimental feature.

@ivelichkovich later added:

Okay, fixed this to use WHEREABOUTS_NAMESPACE, the same way it's done for IPPools. There are some implications if there are multiple sets of duplicate NADs in different namespaces; maybe these resources should be cluster scoped? Anyway, for this PR this fixes the namespace handling to follow current patterns.
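
For illustration, a minimal sketch of the lookup pattern described here (assumed behavior, not the PR's exact code): the namespace for NodeSlicePool lookups comes from the WHEREABOUTS_NAMESPACE environment variable, the same way it is resolved for IPPools.

```go
package main

import (
	"fmt"
	"os"
)

// nodeSliceNamespace resolves the namespace used to look up NodeSlicePools
// from the WHEREABOUTS_NAMESPACE environment variable, mirroring the IPPool
// pattern mentioned above.
func nodeSliceNamespace() (string, error) {
	ns := os.Getenv("WHEREABOUTS_NAMESPACE")
	if ns == "" {
		// assumed behavior: fail rather than guess, since the NodeSlicePools
		// and the NAD are expected to live in the same namespace
		return "", fmt.Errorf("WHEREABOUTS_NAMESPACE is not set")
	}
	return ns, nil
}

func main() {
	ns, err := nodeSliceNamespace()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("looking up NodeSlicePools in namespace:", ns)
}
```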

@dougbtv (Member) commented Jul 23, 2024

Appreciate all the hard work on this -- huge benefit to the whereabouts community, hugely appreciated.

@dougbtv merged commit f1a7e7a into k8snetworkplumbingwg:master on Jul 23, 2024
10 checks passed