kubetest: add AKS deployer #16452
Conversation
@alexeldeib is this ready for review?
@feiskyer it's a bit rough still, but enough of the plumbing works to run conformance. If you're open to reviewing as-is, we can work towards an initial merge, and then follow up with a periodic prowjob to run conformance and use that as signal for where to continue iterating?
return fmt.Errorf("failed to read downloaded cluster template file: %v", err) | ||
} | ||
|
||
var model containerservice.ManagedCluster |
I'd like to convert this to raw JSON and use a REST client so we can use any API version, but for now I will do it with a specific type.
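To illustrate the typed-model approach described here, a minimal sketch, assuming the Azure SDK's containerservice package (the exact API-version import path and the helper name are my assumptions, not the PR's code):

```go
package aks

import (
	"encoding/json"
	"fmt"
	"io/ioutil"

	// Assumed API-version path; the deployer may pin a different version.
	"github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2019-06-01/containerservice"
)

// loadClusterModel reads a downloaded cluster template from disk and decodes
// it into the versioned ManagedCluster type. Switching to raw JSON plus a
// REST client would remove the dependency on a specific API version.
func loadClusterModel(path string) (*containerservice.ManagedCluster, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to read downloaded cluster template file: %v", err)
	}
	var model containerservice.ManagedCluster
	if err := json.Unmarshal(data, &model); err != nil {
		return nil, fmt.Errorf("failed to unmarshal cluster template: %v", err)
	}
	return &model, nil
}
```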
@feiskyer please do take a pass on review on this. I think the main things we will want to change will be
/assign @feiskyer
one question and a nit
if err != nil {
	return nil, fmt.Errorf("failed to get azure credentials: %v", err)
}
env, err := azure.EnvironmentFromName(*aksAzureEnv)
Just double-checking: this is still in the aks-engine deployer. I guess this is fine for the purposes of this PR, and the goal is to split these out into a common Azure set, or even just define them locally in aks.go, based on what we need later?
I think I'd like to split the flags and share a common Azure set. Since you'll never invoke with two deployers, I didn't think it was a high-priority fix. Something frustrating is that the flags are mostly used directly from different parts of the code, not passed down, which makes it a slightly more involved change, but nothing crazy.
	*aksResourceGroupName = *aksResourceName
}
if *aksDNSPrefix == "" {
	*aksDNSPrefix = *aksResourceName
Would the DNS prefix be too long, since a UUID is used in the name?
The generated names, of the form kubetest-$UUID, work okay in Azure. If the user provides one, I'd expect them to make sure it validates properly.
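For context, a hypothetical sketch of the kubetest-$UUID naming convention mentioned above (the uuid dependency and helper name are illustrative, not the deployer's actual code):

```go
package aks

import (
	"fmt"

	"github.com/google/uuid"
)

// defaultResourceName generates a name of the form kubetest-$UUID, which the
// deployer also falls back to for the resource group name and DNS prefix when
// the user doesn't provide them.
func defaultResourceName() string {
	return fmt.Sprintf("kubetest-%s", uuid.New().String())
}
```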
*(*model.LinuxProfile.SSH.PublicKeys)[0].KeyData = string(sshPublicKey)
model.ManagedClusterProperties.DNSPrefix = aksDNSPrefix
model.ManagedClusterProperties.ServicePrincipalProfile.ClientID = &a.azureCreds.ClientID
model.ManagedClusterProperties.ServicePrincipalProfile.Secret = &a.azureCreds.ClientSecret
I think it's better to also set the region and Kubernetes version here.
Fixed; currently parsing them from the flags shared with aks-engine.
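For illustration, a minimal sketch of the fields discussed here, continuing the typed-model example above (same containerservice import assumed; the helper and parameter names are assumptions, not the deployer's shared flags):

```go
// setRegionAndVersion is a hypothetical helper showing the review suggestion:
// set the region and Kubernetes version on the model alongside the SSH key,
// DNS prefix, and service-principal fields.
func setRegionAndVersion(model *containerservice.ManagedCluster, location, version string) {
	model.Location = &location
	if model.ManagedClusterProperties == nil {
		model.ManagedClusterProperties = &containerservice.ManagedClusterProperties{}
	}
	model.ManagedClusterProperties.KubernetesVersion = &version
}
```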
if err != nil {
	return fmt.Errorf("failed to get masterInternalIP: %v", err)
}
case "aks":
@nilo19 do we need those env vars for the aks-engine deployer?
Could you fix the Bazel errors and squash the commits?
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alexeldeib, feiskyer
var masterIP, masterInternalIP []byte

switch o.deployment {
case "gce":
Existing kubemark tests don't set --deployment=gce (it's not even a valid value for the deployment flag); we use --deployment=bash with --provider=gce.
This breaks our kubemark tests on GCE.
Apologies for the inconvenience, @mborsz. I started a revert; maybe you could take a look: #16733.
Looking again at the default deployment path, is it safe to assume --deployment=bash only supports the gce provider currently? (i.e., this behavior is correct, barring that I've put "gce" rather than "bash" in the switch)
My understanding is that right now gce is the only supported provider in deployment=bash mode, but I'm not sure we want to rely on that assumption here (what if someday someone adds support for another provider)?
Personally, I would go with:

if deployment == "aks" {
	// use logic specific to aks
} else if deployment == "bash" && provider == "gce" {
	// use logic for gce
} else {
	// fatal error
}
What do you think about this solution?
Makes sense to me. I’m happy to make this change on top of the revert, assuming that’s the sequence you’d prefer.
I wonder if there is some CI automation we can improve to avoid this situation in the future?
This PR splits the single existing deployer into two Azure deployers, one using AKS-Engine and one for AKS. Currently they share a lot of flags, and I haven't made a deep effort to dedupe or shuffle these around, since they're used fairly deeply throughout kubetest. I wanted to open this as-is and start getting feedback/validation before attempting any major cleanup.
I'll open a separate PR in k/k shortly with some of the even-less-polished bash to integrate with kubemark and clusterloader2 for scalability tests.