
create cluster -f ignored managed nodegroups #1861

Closed
fincd-aws opened this issue Feb 27, 2020 · 4 comments

@fincd-aws

What happened?
eksctl create cluster -f file.yaml did not create any managed nodegroups, but it did create the unmanaged nodegroups.
The output contains "0 managed nodegroup stack(s)" even though managedNodegroups are specified in the config file.

Config file (the nodeGroups entries have been snipped; the full config is in a Gist):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
 
metadata:
  name: eksctl-us-west-2-managed
  region: us-west-2
 
vpc:
  id: vpc-0b953326cd7a2f917
  subnets:
    public:
      us-west-2a: 
        id: subnet-0947f8d8495f046d6
      us-west-2c:
        id: subnet-0d4ae0a69651a70f3
    # totally isolated subnets, no NAT
    private:
      us-west-2a:
        id: subnet-0b23ee123b8b67b7a
      us-west-2c:
        id: subnet-0d1a5c002de859352
 
nodeGroups:
  - name: standard-public
    <snipped>
  - name: gpu-public
    <snipped>
 
managedNodegroups:
  - name: managed-standard-public
    ssh:
      allow: true
      publicKeyName: dev
    # scalingConfig: #required in the schema but not known to eksctl v0.13.0 from brew
    #   desiredCapacity: 2
    #   maxSize: 4
    #   minSize: 2
    volumeSize: 200
 
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

output:

$ eksctl create cluster -f eksctl-config-us-west-2-managed.yaml
[?]  eksctl version 0.13.0
[?]  using region us-west-2
[?]  using existing VPC (vpc-0b953326cd7a2f917) and subnets (private:[subnet-0b23ee123b8b67b7a subnet-0d1a5c002de859352] public:[subnet-0947f8d8495f046d6 subnet-0d4ae0a69651a70f3])
[!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
[?]  nodegroup "standard-public" will use "ami-0c13bb9cbfd007e56" [AmazonLinux2/1.14]
[?]  using EC2 key pair "dev"
[?]  nodegroup "gpu-public" will use "ami-0ad9a8dc09680cfc2" [AmazonLinux2/1.14]
[?]  using EC2 key pair "dev"
[?]  using Kubernetes version 1.14
[?]  creating EKS cluster "eksctl-us-west-2-managed" in "us-west-2" region with un-managed nodes
[?]  2 nodegroups (gpu-public, standard-public) were included (based on the include/exclude rules)
[?]  will create a CloudFormation stack for cluster itself and 2 nodegroup stack(s)
[?]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
[?]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=eksctl-us-west-2-managed'
[?]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eksctl-us-west-2-managed" in "us-west-2"
[?]  3 sequential tasks: { create cluster control plane "eksctl-us-west-2-managed", 2 parallel sub-tasks: { create nodegroup "standard-public", create nodegroup "gpu-public" }, update CloudWatch logging configuration }
[?]  building cluster stack "eksctl-eksctl-us-west-2-managed-cluster"
[?]  deploying stack "eksctl-eksctl-us-west-2-managed-cluster"
[?]  building nodegroup stack "eksctl-eksctl-us-west-2-managed-nodegroup-gpu-public"
[?]  building nodegroup stack "eksctl-eksctl-us-west-2-managed-nodegroup-standard-public"
[?]  deploying stack "eksctl-eksctl-us-west-2-managed-nodegroup-standard-public"
[?]  deploying stack "eksctl-eksctl-us-west-2-managed-nodegroup-gpu-public"
[?]  configured CloudWatch logging for cluster "eksctl-us-west-2-managed" in "us-west-2" (enabled types: api, audit, authenticator, controllerManager, scheduler & no types disabled)
[?]  all EKS cluster resources for "eksctl-us-west-2-managed" have been created
[?]  saved kubeconfig as "/Users/user/.kube/config"
[?]  adding identity "arn:aws:iam::0123456789012:role/eksctl-eksctl-us-west-2-managed-nodegr-NodeInstanceRole-N7TOKVD9PRCC" to auth ConfigMap
[?]  nodegroup "standard-public" has 0 node(s)
[?]  waiting for at least 2 node(s) to become ready in "standard-public"
[?]  nodegroup "standard-public" has 2 node(s)
[?]  node "ip-192-168-0-29.us-west-2.compute.internal" is ready
[?]  node "ip-192-168-4-246.us-west-2.compute.internal" is ready
[?]  adding identity "arn:aws:iam::0123456789012:role/eksctl-eksctl-us-west-2-managed-nodegr-NodeInstanceRole-1BRK0Y6Y6ZX48" to auth ConfigMap
[?]  as you are using a GPU optimized instance type you will need to install NVIDIA Kubernetes device plugin.
[?]  	 see the following page for instructions: https://github.com/NVIDIA/k8s-device-plugin
[?]  kubectl command should work with "/Users/user/.kube/config", try 'kubectl get nodes'
[?]  EKS cluster "eksctl-us-west-2-managed" in "us-west-2" region is ready

What you expected to happen?
Both managed and unmanaged nodegroups should have been created, according to the managed nodegroups documentation:

"It's possible to have a cluster with both managed and unmanaged nodegroups."

If you have to pass the '--managed' flag to 'create cluster', that needs to be added to the docs.

How to reproduce it?
Use the above config file, replacing the VPC, subnets, and security groups with your own.

Anything else we need to know?
What OS are you using, are you using a downloaded binary or did you compile eksctl, what type of AWS credentials are you using (i.e. default/named profile, MFA) - please don't include actual credentials though!

Mac OS X 10.14.6 Mojave
eksctl installed with Homebrew

Versions
Please paste in the output of these commands:

$ eksctl version
[?]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.13.0"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-13T18:06:54Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-502bfb", GitCommit:"502bfb383169b124d87848f89e17a04b9fc1f6f0", GitTreeState:"clean", BuildDate:"2020-02-07T01:31:02Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

Logs
Include the output of the command line when running eksctl. If possible, eksctl should be run with debug logs. For example:
eksctl get clusters -v 4
Make sure you redact any sensitive information before posting.
If the output is long, please consider a Gist.

It is hard to re-run this without creating a new cluster. Apparently you can't just re-run eksctl create cluster -f file.yaml to apply new settings to an existing cluster, since I get:

[?]  creating CloudFormation stack "eksctl-eksctl-us-west-2-managed-cluster": AlreadyExistsException: Stack [eksctl-eksctl-us-west-2-managed-cluster] already exists
	status code: 400, request id: a1be5069-5dfa-4c7e-9b03-6da51bdfb27e
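(For reference: nodegroups declared in a config file can usually be added to an already-existing cluster with eksctl create nodegroup rather than by re-running create cluster. A sketch, assuming this eksctl version accepts the same config file and the --include filter for selecting nodegroups by name:)

$ eksctl create nodegroup --config-file=eksctl-config-us-west-2-managed.yaml --include=managed-standard-public
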
@sayboras
Contributor

sayboras commented Mar 1, 2020

There is a typo in the managed node group key: it should be managedNodeGroups instead of managedNodegroups.

However, in my opinion, some schema validation should be done to avoid such occurrences.

@martina-if @marccarre @cPu1 FYI
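
With the corrected key, the managed nodegroup section of the config above would read as follows (a sketch derived from the reporter's config; only the key name changes):

managedNodeGroups:
  - name: managed-standard-public
    ssh:
      allow: true
      publicKeyName: dev
    volumeSize: 200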

@sayboras
Contributor

sayboras commented Mar 1, 2020

A similar issue was reported before: #1602

@martina-if
Contributor

@sayboras thank you. Yeah, I proposed validation in a case-sensitive way before. But lately I have been thinking about versioning of the schema, and if we are too strict in the validation we might make backwards and forwards compatibility quite difficult.

Since this particular case was about the typo, let's close this issue. For reference, the one about stricter validation is #753, but again, I'm not sure we should do that just yet, until we have a versioning strategy in place.

@sayboras
Copy link
Contributor

sayboras commented Mar 2, 2020

Since this particular case was about the typo, let's close this issue. For reference, the one about stricter validation is #753, but again, I'm not sure we should do that just yet, until we have a versioning strategy in place.

@martina-if I saw your comment quite late. I thought this one would be an easy fix, then started working on #1874, but it got bigger than I expected. Anyway, thanks for the clarification.
