CAS: cloudprovider-specific nodegroupset #6907
Conversation
Force-pushed from 84ea188 to 85d96c0 (compare)
/retest

2 similar comments

/retest

/retest

/test pull-cluster-autoscaler-e2e-azure
Force-pushed from 5f9cf96 to 172dcc5 (compare)
@x13n @gjtempleton @MaciekPytel this one is ready for a final review
Apologies for taking so long to review. Looks good, just one minor comment. Feel free to cancel the hold if you disagree.
/lgtm
/approve
/hold
 	schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework"
 )

 // CreateAwsNodeInfoComparator returns a comparator that checks if two nodes should be considered
 // part of the same NodeGroupSet. This is true if they match usual conditions checked by IsCloudProviderNodeInfoSimilar,
 // even if they have different AWS-specific labels.
-func CreateAwsNodeInfoComparator(extraIgnoredLabels []string, ratioOpts config.NodeGroupDifferenceRatios) NodeInfoComparator {
+func CreateAwsNodeInfoComparator(extraIgnoredLabels []string, ratioOpts config.NodeGroupDifferenceRatios) nodegroupset.NodeInfoComparator {
nit: Since aws is already the package name, maybe rename to just CreateNodeInfoComparator to avoid redundancy? Same for azure & gce.
Fair enough. A better way would be to define an interface once, but the value of this cleanup work (IMO) doesn't warrant that type of surgery.
Done!
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jackfrancis, x13n

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Signed-off-by: Jack Francis <jackfrancis@gmail.com>
Force-pushed from 172dcc5 to 4ff4079 (compare)
/hold cancel
Thanks!

/lgtm
This PR seems to have broken the ability to build CA with just one provider. Specifically:

For this reason we've introduced https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/builder as the only import path for cloudproviders and have each provider gated on build tags. This allows using build tags to build CA with just the provider you want (and presumably trust) to mitigate the risk above.

This PR I think breaks this mechanism by introducing a new import path for individual cloudprovider implementations. I think the right solution would be a refactor that allows cloudproviders to register their processors via the cloudprovider/builder mechanism.

note: it is not lost on me that we take a different security stance regarding cloudprovider code and provider-specific processors, but practically speaking the possibility to hide something nasty is vastly different between the two - not least because of the set of approvers needed to add a processor today.
@MaciekPytel I agree 100%, TIL about provider-specific build flows. I'll revert this now. Do we want to add additional test coverage that enumerates the set of provider build tags so that we catch this kind of thing in CI next time?
Yeah, good point. Revert + follow-up refactor of cloudprovider/builder to allow compilation for a specific cloud provider makes sense to me.
And re: CI - that is also reasonable. If we say we support building CA with specific build tags, we should test whether these builds actually work.
Big +1 to all the comments above.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This PR moves the cloudprovider-specific implementations of nodegroupset into cloudprovider-managed code areas so that they can be more easily maintained without involving core cluster-autoscaler maintainers.
This should not have any impact on compilation outcomes; the idea is simply to re-organize code.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: