[feature] Add ability to specify node affinity & toleration using KFP V2 #9682
Comments
Additionally, it would be great to have the ability to set requests/limits for custom resources.
Hello @AlexandreBrown, thanks for proposing this. The node selector is already supported: https://www.kubeflow.org/docs/components/pipelines/v2/platform-specific-features/
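For context on why the already-supported node selector doesn't cover this request: `nodeSelector` in the Kubernetes pod spec only expresses exact key=value label matches, while tolerations and affinity (the subject of this issue) are separate pod spec fields. A minimal sketch in plain Python (the label key/value below are illustrative, not tied to any particular cluster):

```python
# The nodeSelector field (the mechanism already supported in KFP v2)
# only expresses exact key=value label matches. It cannot tolerate
# node taints or express required/preferred affinity rules, which is
# what this issue asks for.
pod_spec = {
    "nodeSelector": {"cloud.google.com/gke-accelerator": "nvidia-tesla-a100"},
}
print(pod_spec["nodeSelector"])
```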
Hello @connor-mccarthy, as I have a high need for this feature, I already have an implementation. I would love to contribute it. How should we proceed? Can I open a PR, or do we need a design review first? (CLA is already submitted)
Hi, @mcrisafu! Thanks for your interest in contributing this. I think this feature is sufficiently large to deserve a design. Please feel free to start there. You can add the doc to this issue. From there, we can decide whether it makes sense to discuss it at an upcoming KFP community meeting [Meeting, Agenda].
Hi @connor-mccarthy, thank you for your feedback. Here is the requested design doc.
@Linchin, when you have the chance, could you please take a look at this FR for the KFP BE?
@mcrisafu Thank you for writing the design doc, which includes the general idea. Could you expand it to include more implementation details, ambiguities, potential challenges, etc.?
I would like to draw some attention to this topic. There is another issue referencing this one (#9768) where the implementation of tolerations and affinity is mentioned as part of a bigger plan. @cjidboon94 and @Linchin interacted with that one, but I think @connor-mccarthy and @AlexandreBrown not yet. We really need this feature and I believe a lot of people do. Node selection, toleration and affinity settings are essential parts of effective pod scheduling in Kubernetes. I offered help in that thread and could go ahead and start implementing these features, however I found this thread where @mcrisafu mentioned: "...I already have an implementation". I would love to contribute, but of course there's no point doing duplicated work. Therefore, I ask:
We are happy to jump into code review, testing, or the implementation itself, should it still be missing some (or several) parts.
Thank you, @schrodervictor. I haven't had the time yet to update the document. I also believe that the suggestion from @cjidboon94 in #9768 is much better than my own "implementation." We have decided not to migrate to KFP v2 just yet, as we have several issues beyond just toleration. We would greatly appreciate having #9768 implemented. Unfortunately, I don't understand the code base (and Go) well enough to do this myself. From my perspective, it would be better to prioritize pushing the other feature instead of going with my sub-optimal hack. However, if you're still interested in the code, please take a look at this commit.
A related PR: #9913
This is a critical feature...
Any updates on this? We really need this in order to migrate users to V2. Also, we'd be happy to contribute to the implementation.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Commenting so this doesn't get closed. The feature is still needed.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Commenting so this doesn't get closed. The feature is still very needed.
+1
/lifecycle frozen
Happy to tackle affinity support, bandwidth permitting.
@droctothorpe Thank you for volunteering! There is a draft PR to cover that issue; I pinged you in there so you can talk to the PR author to team up and finish the implementation.
Feature Area
/area sdk
What feature would you like to see?
A core production use case of KFP involves running CPU and GPU workloads on specific nodegroups that are more powerful than, and separate from, the nodegroup where Kubeflow is installed; these nodegroups usually have autoscaling as well.
To achieve this, we used to be able to simply specify which component would run on which node using node affinity + tolerations. This is no longer possible in KFP v2, yet I feel such a core feature should be supported.
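For reference, the Kubernetes pod spec fields this feature would need to surface are `tolerations` and `affinity.nodeAffinity`. Here is a minimal sketch of those fields as plain Python dicts mirroring the Kubernetes v1 PodSpec schema; the nodegroup label key is an example (it varies by cloud provider), and no KFP API is assumed:

```python
# Sketch of the scheduling-related PodSpec fields that node-affinity /
# toleration support would have to emit for a component's pod. Plain
# dicts mirroring the Kubernetes v1 PodSpec schema; no KFP API here.

def gpu_pod_scheduling(nodegroup: str, taint_key: str) -> dict:
    """Build tolerations + required node affinity for a GPU nodegroup."""
    return {
        "tolerations": [{
            "key": taint_key,          # taint placed on the GPU nodes
            "operator": "Exists",
            "effect": "NoSchedule",
        }],
        "affinity": {
            "nodeAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": {
                    "nodeSelectorTerms": [{
                        "matchExpressions": [{
                            # Example label key; cluster/cloud-specific.
                            "key": "eks.amazonaws.com/nodegroup",
                            "operator": "In",
                            "values": [nodegroup],
                        }],
                    }],
                },
            },
        },
    }

spec = gpu_pod_scheduling("gpu-a100", "nvidia.com/gpu")
print(spec["tolerations"][0]["effect"])  # NoSchedule
```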
What is the use case or pain point?
The existing `set_accelerator_type` is far from flexible enough to support such production use cases. For example, the accelerator must be specified as one of the predefined identifiers such as `NVIDIA_TESLA_K80`, `TPU_V3`, or `cloud-tpus.google.com/v3`; otherwise we must fall back to the generic `set_accelerator_type('nvidia.com/gpu')`, which is not precise, defeating the purpose of selecting an accelerator.
Is there a workaround currently?
Users can try external tools such as Kyverno to create mutating policies, applied by an admission webhook, that add a toleration and/or node affinity/node selector based on predefined criteria such as a label name and value.
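To make the workaround concrete, the mutation such a policy performs boils down to the following logic, sketched here in plain Python rather than Kyverno's actual policy syntax; the matching label key is hypothetical:

```python
# Stand-in for the label-based mutation a Kyverno-style admission
# webhook would apply to KFP pods. The label key used for matching is
# hypothetical; real policies must match on whatever labels the KFP
# backend happens to set, since the SDK cannot add labels itself.

def mutate_pod(pod: dict) -> dict:
    """Add a GPU toleration and node selector to matching pods."""
    labels = pod.get("metadata", {}).get("labels", {})
    if labels.get("pipelines.kubeflow.org/v2_component") == "true":
        spec = pod.setdefault("spec", {})
        spec.setdefault("tolerations", []).append({
            "key": "nvidia.com/gpu",
            "operator": "Exists",
            "effect": "NoSchedule",
        })
        spec.setdefault("nodeSelector", {})["nodegroup"] = "gpu"
    return pod

pod = {
    "metadata": {"labels": {"pipelines.kubeflow.org/v2_component": "true"}},
    "spec": {"containers": []},
}
mutated = mutate_pod(pod)
```

Note the core limitation discussed below: the policy can only key off pod labels, so two components with identical labels cannot be routed to different nodegroups this way.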
It's still a pain, since it is far more involved than being able to call `.add_node_affinity()` and `.add_toleration()` on a component. In fact, we can't even add a label using the KFP SDK anymore, so matching has to be done on labels that happen to be present (we have no way to explicitly ensure their presence).

Also, even with Kyverno, some cases might be hard or impossible to cover. For instance, suppose you have two Kubeflow components that carry the same labels, but you'd like one to run on a less expensive GPU nodegroup and only the other to run on a more powerful GPU nodegroup. Since the pods have the same labels, the only way to specify which nodegroup each should run on is at component definition time (via the KFP SDK), yet this is not currently supported in KFP v2.
Given that Kubeflow's main goal is to lower the barrier to running ML on Kubernetes, I believe this workaround goes against that goal and should not be the only available solution. It would be in everyone's best interest if the KFP SDK added back `add_node_affinity()` and `add_toleration()` so that data scientists/ML specialists can easily specify where to run each component, instead of relying on more advanced MLOps solutions that require ever more Kubernetes knowledge.

Love this idea? Give it a 👍.