[VMs pool] Add implementation for VMs pool #6951
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: wenxuan0923

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.

Hi @wenxuan0923. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Overall looking good, but I'm afraid I don't have permission to approve (I think).
Also, I don't have much expertise in interactions with the AKS API on AgentPool, so I'd rely on you to test.
```go
	return nil, err
}

// Use Service Principal
if len(cfg.AADClientID) > 0 && len(cfg.AADClientSecret) > 0 {
```
Does this support UseWorkloadIdentityExtension?
I'm not really sure... but it should be fairly easy to add it later if needed.
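For context, a minimal sketch of how workload identity support could be wired in later, using the azidentity package from the Azure SDK for Go. The helper name and the Config fields referenced here (UseWorkloadIdentityExtension, AADFederatedTokenFile) are assumptions for illustration, not the provider's actual plumbing:

```go
import (
	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// newCredential is a hypothetical helper sketching one possible approach.
func newCredential(cfg *Config) (azcore.TokenCredential, error) {
	// Assumed config fields: UseWorkloadIdentityExtension, AADFederatedTokenFile.
	if cfg.UseWorkloadIdentityExtension {
		// Exchange the projected federated token for an AAD token.
		return azidentity.NewWorkloadIdentityCredential(&azidentity.WorkloadIdentityCredentialOptions{
			ClientID:      cfg.AADClientID,
			TenantID:      cfg.TenantID,
			TokenFilePath: cfg.AADFederatedTokenFile,
		})
	}
	// Fall back to service principal (client ID + secret), as in the snippet above.
	return azidentity.NewClientSecretCredential(cfg.TenantID, cfg.AADClientID, cfg.AADClientSecret, nil)
}
```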
```go
manager       *AzureManager
resourceGroup string // MC_ resource group for nodes
```
I actually prefer resourceGroup to be nodeResourceGroup, but the current convention is used in other places as well. I think I will save this idea for later.
Yeah we want to keep this consistent with other places.
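For illustration only, the rename floated above would look like the sketch below; it is hypothetical, since the PR keeps resourceGroup for consistency with the rest of the provider:

```go
// Hypothetical rename sketch, not part of this PR.
type VMsPool struct {
	manager           *AzureManager
	nodeResourceGroup string // MC_ resource group for nodes
}
```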
```go
}

// scaleUpToCount sets node count for vms agent pool to target value through PUT AP call.
func (agentPool *VMsPool) scaleUpToCount(count int64) error {
```
I generally prefer always-one-use methods like this to be part of the code calling them, rather than a separate method, especially when there is no clear boundary of responsibility. I think that applies to IncreaseSize and scaleUpToCount (and more).
In case you have a similar opinion, I don't think you need to be restricted by the pattern in azure_scale_set.go for this case.
I don't have a preference here; I just thought it might make things a bit easier for maintainers if it follows the same pattern as VMSS. I would like to collect more feedback on this PR and make changes together if needed.
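To make the pattern under discussion concrete, here is a rough sketch of the GET-then-PUT shape that scaleUpToCount implies, using the armcontainerservice AgentPools client. The function name, client plumbing, and field access are assumptions for illustration, not the PR's exact code:

```go
import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice"
)

// scaleUpToCountSketch shows the rough shape only; error handling and the
// Deallocate/GPU cases deferred by this PR are omitted.
func scaleUpToCountSketch(ctx context.Context, client *armcontainerservice.AgentPoolsClient,
	clusterResourceGroup, clusterName, agentPoolName string, count int64) error {
	// Read the current agent pool so the PUT does not clobber other properties.
	resp, err := client.Get(ctx, clusterResourceGroup, clusterName, agentPoolName, nil)
	if err != nil {
		return err
	}

	ap := resp.AgentPool
	if ap.Properties == nil {
		ap.Properties = &armcontainerservice.ManagedClusterAgentPoolProfileProperties{}
	}
	ap.Properties.Count = to.Ptr(int32(count))

	// PUT the updated agent pool and wait for the long-running operation.
	poller, err := client.BeginCreateOrUpdate(ctx, clusterResourceGroup, clusterName, agentPoolName, ap, nil)
	if err != nil {
		return err
	}
	_, err = poller.PollUntilDone(ctx, nil)
	return err
}
```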
What type of PR is this?
/kind feature
What this PR does / why we need it:
Add implementation to support VMs pool autoscaling. Right now, only the basic scenario is covered: it does not include the Deallocate scale-down policy or GPU pools. Those will be handled in later PRs.
To test it with a VMs pool:
1. Create a cluster in prod:
   az aks create -n play-vms -g wenxrg --vm-set-type "VirtualMachines" -l eastus
2. Deploy CAS with the command argument --nodes=1:6:nodepool1 and the required environment variables (note: the cluster name and cluster resource group name are required for VMs pool).

Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: