
[feature] Add ability to specify node affinity & toleration using KFP V2 #9682

Open
AlexandreBrown opened this issue Jun 24, 2023 · 20 comments


@AlexandreBrown

AlexandreBrown commented Jun 24, 2023

Feature Area

/area sdk

What feature would you like to see?

A core production use case of KFP involves running CPU and GPU workloads on specific node groups that are more powerful than, and separate from, the node group where Kubeflow itself is installed; these node groups usually have autoscaling enabled as well.
To achieve this, we used to be able to simply specify which component would run on which node using node affinity + tolerations. This is no longer possible in KFP v2, yet I feel such a core feature should be supported.
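For reference, this is roughly how component placement worked with the KFP v1 SDK (a minimal sketch; the image name, node label, and taint key/values are placeholders):

```python
from kfp import dsl
from kubernetes.client import (V1Affinity, V1NodeAffinity, V1NodeSelector,
                               V1NodeSelectorRequirement, V1NodeSelectorTerm,
                               V1Toleration)

def train_op() -> dsl.ContainerOp:
    op = dsl.ContainerOp(name='train', image='my-training-image:latest')
    # Require scheduling onto the dedicated GPU node group.
    op.add_affinity(V1Affinity(node_affinity=V1NodeAffinity(
        required_during_scheduling_ignored_during_execution=V1NodeSelector(
            node_selector_terms=[V1NodeSelectorTerm(match_expressions=[
                V1NodeSelectorRequirement(
                    key='nodegroup', operator='In',
                    values=['gpu-training'])])]))))
    # Tolerate the taint that keeps unrelated pods off that node group.
    op.add_toleration(V1Toleration(
        key='dedicated', operator='Equal',
        value='gpu-training', effect='NoSchedule'))
    return op
```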

What is the use case or pain point?

The existing set_accelerator_type is far from flexible enough to support this use case. Here is a short list of examples showing that set_accelerator_type cannot support production use cases (a sketch of the current API follows the list):

  • It does not work if the GPU is not one of the few (3) supported accelerators: NVIDIA_TESLA_K80, TPU_V3, or cloud-tpus.google.com/v3. Otherwise we must fall back to the generic nvidia.com/gpu, which is not precise and hence defeats the purpose of selecting an accelerator.
  • If you have 2 node groups with the same GPU, but one should be reserved for inference and the other for pipeline execution (e.g. training), there is no way to express that distinction purely via set_accelerator_type('nvidia.com/gpu').
  • This method is only meant for GPUs, but it is common to want to run CPU workloads on specific node groups too, whether for node group isolation (running workloads that won't affect the node group where Kubeflow's core pods run) or to use more powerful CPU node groups for pipelines while Kubeflow stays on cheaper instances.
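For comparison, here is roughly the full extent of placement control in recent versions of the KFP v2 SDK (a minimal sketch; the component body is a placeholder):

```python
from kfp import dsl

@dsl.component
def train():
    pass  # placeholder training step

@dsl.pipeline(name='gpu-pipeline')
def my_pipeline():
    task = train()
    # The only placement-related knobs: an accelerator type and count.
    # There is no way to target a specific node group.
    task.set_accelerator_type('nvidia.com/gpu')
    task.set_accelerator_limit(1)
```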

Is there a workaround currently?

Users can try external tools such as Kyverno to create mutation rules that a webhook applies to add a toleration and/or node affinity/node selector, based on predefined criteria such as a label name and value.
It's still a pain, since it is far more involved than calling .add_node_affinity() and .add_toleration() on a component. In fact, we can't even add a label using the KFP SDK anymore, so matching has to be done on labels that happen to be present (we have no way to explicitly ensure their presence).
Even with Kyverno, some cases are hard or impossible to cover. For instance, suppose you have 2 Kubeflow components whose pods end up with identical labels, but you want one to run on a less expensive GPU node group and only the other on a more powerful GPU node group. Since the pods have the same labels, the only way to specify which node group each should run on is at component definition time (via the KFP SDK), and this is not currently supported in KFP v2.
Given that Kubeflow's main goal is to lower the barrier to running ML on Kubernetes, I believe this workaround goes against that goal and should not be the only available solution. It would be in everyone's best interest for the KFP SDK to add back add_node_affinity() and add_toleration(), so that data scientists/ML specialists can easily specify where each component runs instead of relying on more advanced MLOps solutions that demand ever more Kubernetes knowledge. A sketch of what this could look like follows.
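To make the request concrete, here is a hypothetical sketch of the desired task-level API (add_node_affinity and add_toleration do not exist in the v2 SDK; the method names, arguments, and values are illustrative only):

```python
from kfp import dsl

@dsl.component
def train():
    pass  # placeholder training step

@dsl.pipeline(name='placement-pipeline')
def my_pipeline():
    task = train()
    # Hypothetical: require scheduling onto the training node group.
    task.add_node_affinity(key='nodegroup', operator='In',
                           values=['gpu-training'])
    # Hypothetical: tolerate the taint protecting that node group.
    task.add_toleration(key='dedicated', operator='Equal',
                        value='gpu-training', effect='NoSchedule')
```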

Love this idea? Give it a 👍.

@cjidboon94
Contributor

Additionally, it would be great to have the ability to set requests/limits for custom resources. cpu, memory, and nvidia.com/gpu are obviously staples and cover most resource requests/limits, but being able to use and experiment with other custom resources (e.g. to make GPU sharing between containers possible) is a big plus too. So, in addition to the above, I would like to see add_resource_request() and add_resource_limit() back in the new versions of the KFP SDK.
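For reference, the v1 SDK exposed this on the container spec, roughly like so (a minimal sketch; the extended resource name is a placeholder):

```python
from kfp import dsl

def shared_gpu_op() -> dsl.ContainerOp:
    op = dsl.ContainerOp(name='shared-gpu-step', image='my-image:latest')
    # Request/limit an arbitrary extended Kubernetes resource by name,
    # e.g. a MIG slice or a fractional GPU exposed by a device plugin.
    op.container.add_resource_request('nvidia.com/mig-1g.5gb', '1')
    op.container.add_resource_limit('nvidia.com/mig-1g.5gb', '1')
    return op
```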

@jlyaoyuli
Contributor

Hello @AlexandreBrown, thanks for proposing this. The node selector is already supported: https://www.kubeflow.org/docs/components/pipelines/v2/platform-specific-features/
Node affinity and toleration support is awaiting contributors! A rough node selector example follows.
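For anyone finding this via search, node selector usage through the kfp-kubernetes extension looks roughly like this (a sketch assuming the kfp-kubernetes package is installed; the label key/value are placeholders):

```python
from kfp import dsl, kubernetes

@dsl.component
def train():
    pass  # placeholder step

@dsl.pipeline(name='node-selector-pipeline')
def my_pipeline():
    task = train()
    # Constrain the pod to nodes carrying this label.
    kubernetes.add_node_selector(
        task,
        label_key='cloud.google.com/gke-accelerator',
        label_value='nvidia-tesla-t4')
```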

@AlexandreBrown changed the title [feature] Add ability to specify node selector/node affinity & toleration using KFP V2 → [feature] Add ability to specify node affinity & toleration using KFP V2 Jul 1, 2023
@mcrisafu

Hello @connor-mccarthy, as I have a high need for this feature, I already have an implementation. I would love to contribute it. How should we proceed? Can I open a PR or do we need to do a Design Review first? (CLA is already submitted)

@connor-mccarthy
Member

Hi, @mcrisafu! Thanks for your interest in contributing this.

I think this feature is sufficiently large to be deserving of a design. Please feel free to start there. You can add the doc to this issue. From there, we can decide whether it makes sense to discuss at an upcoming KFP community meeting [Meeting, Agenda].

@mcrisafu

Hi @connor-mccarthy, thank you for your feedback. Here is the requested design doc.

@connor-mccarthy
Member

@Linchin, when you have the chance, could you please take a look at this FR for the KFP BE?

@Linchin
Contributor

Linchin commented Aug 9, 2023

@mcrisafu Thank you for writing the design doc, which includes the general idea. Could you expand it to include more implementation details, ambiguities, potential challenges etc.?

@schrodervictor

I would like to draw some attention to this topic. There is another issue referencing this one (#9768), where the implementation of tolerations and affinity is mentioned as part of a bigger plan. @cjidboon94 and @Linchin have interacted with that one, but I think @connor-mccarthy and @AlexandreBrown have not yet.

We really need this feature and I believe a lot of people do. Node selection, toleration and affinity settings are essential parts of effective pod scheduling in Kubernetes.

I offered to help in that thread and could go ahead and start implementing these features; however, I found this thread, where @mcrisafu mentioned: "...I already have an implementation".

I would love to contribute, but of course there's no point doing duplicate work. Therefore, I ask:

  • Is there anything blocking the progress of this feature?
  • Is the implementation fully done or, if not, how advanced is it?
  • Can we help you in any way to accelerate the process?

We are happy to jump into code review, testing or the implementation itself, should it still be missing some (or several) parts.

@mcrisafu

Thank you, @schrodervictor. I haven't had the time yet to update the document. I also believe that the suggestion from @cjidboon94 in #9768 is much better than my own "implementation."

We have decided not to migrate to KFP v2 just yet, as we have several issues beyond just toleration. We would greatly appreciate having #9768 implemented. Unfortunately, I don't understand the code base (and Go) well enough to do this myself.

From my perspective, it would be better to prioritize pushing the other feature instead of going with my sub-optimal hack. However, if you're still interested in the code, please take a look at this commit.

@Linchin
Contributor

Linchin commented Aug 23, 2023

A related PR: #9913

@pythonking6

This is a critical feature...

@droctothorpe
Contributor

droctothorpe commented Jan 10, 2024

Any updates on this? We really need this in order to migrate users to V2. Also, we'd be down to contribute to the implementation.


This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions bot added the lifecycle/stale label Mar 17, 2024
@strickvl

Commenting so this doesn’t get closed. The feature is still needed.


This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@github-actions bot added the lifecycle/stale label May 17, 2024
@krzysztofkropatwa

Commenting so this doesn’t get closed. The feature is still very needed.

@stale bot removed the lifecycle/stale label May 17, 2024
@strickvl

+1

@rimolive
Member

/lifecycle frozen

@droctothorpe
Contributor

Happy to tackle affinity support, bandwidth permitting.

@rimolive
Member

rimolive commented Jun 4, 2024

@droctothorpe Thank you for volunteering! There is a draft PR covering this issue; I pinged you there so you can team up with the PR author and finish the implementation.

@HumairAK added this to the KFP SDK 2.11 milestone Oct 10, 2024