
azure labels to skip in nodegroupset #6634

Conversation

@gandhipr (Contributor) commented Mar 14, 2024

What type of PR is this?

feature: skip a few Azure-specific labels in nodegroupset

What this PR does / why we need it:

This PR adds a few Azure-specific labels that need to be skipped when checking for nodegroup similarity. This change doesn't include moving the Azure-specific labels under the /azure directory; that chore will be covered in a separate change. The label-skipping idea is sketched below.
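
For illustration, here is a minimal, self-contained sketch of the label-skipping idea. This is not the PR's actual code: the real comparator lives in cluster-autoscaler's nodegroupset package, and the ignored-label keys below are examples rather than the exact set this PR adds.

```go
package main

import "fmt"

// Illustrative Azure label keys to ignore during similarity checks;
// the PR's actual list may differ.
var azureIgnoredLabels = map[string]bool{
	"agentpool":                      true,
	"kubernetes.azure.com/agentpool": true,
}

// labelsSimilar reports whether two label maps match once the ignored
// keys are dropped from both sides.
func labelsSimilar(a, b map[string]string, ignored map[string]bool) bool {
	filter := func(in map[string]string) map[string]string {
		out := make(map[string]string)
		for k, v := range in {
			if !ignored[k] {
				out[k] = v
			}
		}
		return out
	}
	fa, fb := filter(a), filter(b)
	if len(fa) != len(fb) {
		return false
	}
	for k, v := range fa {
		if fb[k] != v {
			return false
		}
	}
	return true
}

func main() {
	n1 := map[string]string{"agentpool": "pool1", "topology.kubernetes.io/zone": "eastus-1"}
	n2 := map[string]string{"agentpool": "pool2", "topology.kubernetes.io/zone": "eastus-1"}
	// Prints true: the differing agentpool values are ignored, so the
	// two node groups are treated as similar for balancing purposes.
	fmt.Println(labelsSimilar(n1, n2, azureIgnoredLabels))
}
```

Without the ignored set, the differing agentpool values would mark the two groups as dissimilar and defeat balanced scale-up across them.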

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Skips Azure-specific node labels that could otherwise mistakenly categorize similar node groups as different.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot added the cncf-cla: yes and size/M labels on Mar 14, 2024
@x13n (Member) commented Mar 15, 2024

Change itself LGTM, though WDYT about moving this code to cluster-autoscaler/cloudprovider/azure/processors? You wouldn't require global CA approval that way.

/assign
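
For context, the suggestion above amounts to something like the layout below: a hedged sketch assuming a pluggable comparator hook in the spirit of cluster-autoscaler's NodeInfoComparator. The package path, type, and function names here are illustrative, not the actual upstream API.

```go
// Hypothetical provider-owned package, per the suggestion above:
// cluster-autoscaler/cloudprovider/azure/processors
package processors

// Comparator mirrors the shape of the core similarity hook: it reports
// whether two node-group label sets should be considered interchangeable
// for balancing. Types are simplified for illustration.
type Comparator func(labels1, labels2 map[string]string) bool

// azureIgnoredLabels would be owned and updated here by Azure
// maintainers, without requiring core cluster-autoscaler approval.
var azureIgnoredLabels = []string{
	"agentpool",                      // illustrative key
	"kubernetes.azure.com/agentpool", // illustrative key
}

// WrapWithAzureIgnoredLabels decorates a generic comparator so the
// Azure-specific labels are stripped before the generic check runs.
func WrapWithAzureIgnoredLabels(generic Comparator) Comparator {
	strip := func(in map[string]string) map[string]string {
		out := make(map[string]string, len(in))
		for k, v := range in {
			out[k] = v
		}
		for _, k := range azureIgnoredLabels {
			delete(out, k)
		}
		return out
	}
	return func(l1, l2 map[string]string) bool {
		return generic(strip(l1), strip(l2))
	}
}
```

Keeping the ignored-label list in a provider-owned package is what would remove the need for global CA approval on future updates.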

@gjtempleton added the area/provider/azure label on Mar 17, 2024
@gandhipr force-pushed the prachigandhi/azure-ignore-nodegroupset branch from cb3c22f to 1df29be on March 19, 2024
@k8s-ci-robot added the approved label on Mar 19, 2024
@tallaxes (Contributor) left a comment

The changes themselves look good (it may be a good idea to normalize the capitalization of the constants), but the code currently does not build, likely as a result of the package move; additional post-move work is needed.
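
On the capitalization aside: the nit is about keeping one naming scheme across the new label-key constants. A tiny illustrative Go example follows; the package, constant names, and values are hypothetical, not the PR's actual identifiers.

```go
// Hypothetical package for illustration only.
package azureconsts

// Mixing capitalization styles in one const block (e.g. an unexported
// aksEngineVersionLabel next to an exported AzureDiskTopologyKey) reads
// poorly; normalizing to one consistent MixedCaps scheme avoids that.
const (
	AKSEngineVersionLabel = "aksEngineVersion"                 // hypothetical
	AzureDiskTopologyKey  = "topology.disk.csi.azure.com/zone" // hypothetical
)
```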

@tallaxes (Contributor)

/approve cancel

@gandhipr force-pushed the prachigandhi/azure-ignore-nodegroupset branch from 3f82ebf to 832017c on April 2, 2024
@k8s-ci-robot removed the approved label on Apr 2, 2024
@tallaxes (Contributor) left a comment

If we are keeping these outside of the provider-specific space temporarily (for change-sequencing reasons) and plan to move them in the near future, let's add a comment to that effect in the description.

@gandhipr (Contributor, Author) commented May 2, 2024

/test ls

@k8s-ci-robot (Contributor)

@gandhipr: The specified target(s) for /test were not found.
The following commands are available to trigger optional jobs:

  • /test pull-cluster-autoscaler-e2e-azure

In response to this:

/test ls

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@gandhipr (Contributor, Author) commented May 2, 2024

/test pull-cluster-autoscaler-e2e-azure

@jackfrancis (Contributor)

/lgtm

@MaciekPytel @gjtempleton @x13n would it make sense to schedule a follow-up item to move all of the cloud-provider-specific stuff in cluster-autoscaler/processors/nodegroupset/ into the provider-specific code surface area so that this maintenance can be performed without having to loop in core cluster-autoscaler approvers?

@k8s-ci-robot added the lgtm label on Jun 5, 2024
@x13n (Member) commented Jun 6, 2024

Yeah, that makes a lot of sense to me. There's no point having cloudprovider-specific processors in core CA dirs.

@jackfrancis (Contributor)

> Yeah, that makes a lot of sense to me. There's no point having cloudprovider-specific processors in core CA dirs.

Cool, once this gets an approval and merges, I'll do that.

@x13n (Member) commented Jun 7, 2024

Sounds good, approving then!

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: gandhipr, rakechill, tallaxes, x13n

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Jun 7, 2024
@k8s-ci-robot merged commit 83db225 into kubernetes:master on Jun 7, 2024
7 checks passed
@comtalyst (Contributor)

/cherry-pick cluster-autoscaler-release-1.30
/cherry-pick cluster-autoscaler-release-1.29
/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.27

@k8s-infra-cherrypick-robot

@comtalyst: new pull request created: #7262

In response to this:

/cherry-pick cluster-autoscaler-release-1.30
/cherry-pick cluster-autoscaler-release-1.29
/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.27

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@comtalyst (Contributor)

/cherry-pick cluster-autoscaler-release-1.29
/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.27

@k8s-infra-cherrypick-robot

@comtalyst: new pull request created: #7263

In response to this:

/cherry-pick cluster-autoscaler-release-1.29
/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.27


@comtalyst (Contributor)

/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.27

@k8s-infra-cherrypick-robot

@comtalyst: new pull request created: #7264

In response to this:

/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.27


@comtalyst (Contributor)

/cherry-pick cluster-autoscaler-release-1.27

@k8s-infra-cherrypick-robot

@comtalyst: new pull request created: #7265

In response to this:

/cherry-pick cluster-autoscaler-release-1.27


Labels
  • approved (indicates a PR has been approved by an approver from all required OWNERS files)
  • area/cluster-autoscaler
  • area/provider/azure (issues or PRs related to the azure provider)
  • cncf-cla: yes (indicates the PR's author has signed the CNCF CLA)
  • lgtm ("Looks good to me"; indicates that a PR is ready to be merged)
  • size/M (denotes a PR that changes 30-99 lines, ignoring generated files)