feat: Nodegroup as a resource

`eksctl` now allows you to manage any number of nodegroups in addition to the initial one; a combined usage example follows the list of changes below.

Changes:

- `eksctl create nodegroup --cluster CLUSTER_NAME [NODEGROUP_NAME]` is added

  Creates an additional nodegroup.
  The nodegroup name is randomly generated when omitted.

- `eksctl get nodegroup --cluster CLUSTER_NAME` is added

  Lists all nodegroups, including the initial one and any additional ones.

- `eksctl delete nodegroup --cluster CLUSTER_NAME NODEGROUP_NAME` is added

  Deletes a nodegroup by name.

- `eksctl create cluster` has been changed to accept an optional `--nodegroup NODEGROUP_NAME` flag that specifies the name of the initial nodegroup.

- `eksctl delete cluster CLUSTER_NAME` has been changed to recursively delete all the nodegroups including additional ones.

- `eksctl scale nodegroup --cluster CLUSTER_NAME NODEGROUP_NAME` has been changed to accept the target nodegroup name as the second argument.
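
For illustration, the new and changed commands combine into a workflow like the sketch below. The cluster and nodegroup names (`demo`, `ng-initial`, `ng-extra`) are hypothetical placeholders; the flags are the ones introduced or already documented in this commit.

```
# create a cluster, naming the initial nodegroup explicitly
eksctl create cluster --name demo --nodegroup ng-initial

# add a second nodegroup (a name is generated randomly if omitted)
eksctl create nodegroup --cluster demo ng-extra

# list all nodegroups, the initial one and any additional ones
eksctl get nodegroup --cluster demo

# scale the additional nodegroup
eksctl scale nodegroup --cluster demo --nodes 3 ng-extra

# delete the additional nodegroup by name
eksctl delete nodegroup --cluster demo ng-extra

# delete the cluster together with all of its nodegroups
eksctl delete cluster demo
```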

Checklist:

- [x] Code compiles correctly (i.e. `make build`)
- [x] Added tests that cover your change (if possible)
- [x] All tests passing (i.e. `make test`)
- Added/modified documentation as required (such as the README)
- Added yourself to the `humans.txt` file

Acknowledgements:

This is a successor to eksctl-io#281 and eksctl-io#332.

All the original credit goes to Richard Case <richard.case@outlook.com>, who started eksctl-io#281. Thanks a lot, Richard!

Signed-off-by: Yusuke Kuoka <ykuoka@gmail.com>
mumoshu authored and errordeveloper committed Dec 27, 2018
1 parent d7f665d commit 80178a8
Showing 42 changed files with 1,347 additions and 136 deletions.

Makefile: 2 additions & 0 deletions
@@ -46,6 +46,7 @@ lint: ## Run linter over the codebase
ci: test lint ## Target for CI system to invoke to run tests and linting

TEST_CLUSTER ?= integration-test-dev
TEST_NODEGROUP ?= integration-test-dev
.PHONY: integration-test-dev
integration-test-dev: build ## Run the integration tests without cluster teardown. For use when developing integration tests.
@./eksctl utils write-kubeconfig \
@@ -55,6 +56,7 @@ integration-test-dev: build ## Run the integration tests without cluster teardown
$(TEST_ARGS) \
-args \
-eksctl.cluster=$(TEST_CLUSTER) \
-eksctl.nodegroup=$(TEST_NODEGROUP) \
-eksctl.create=false \
-eksctl.delete=false \
-eksctl.kubeconfig=$(HOME)/.kube/eksctl/clusters/$(TEST_CLUSTER)
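
The new `TEST_NODEGROUP` variable can be overridden on the command line just like `TEST_CLUSTER`. A minimal sketch (the names below are placeholders, not the Makefile defaults):

```
# run the development integration tests against an existing cluster and nodegroup,
# overriding the ?= defaults defined in the Makefile
make integration-test-dev TEST_CLUSTER=my-test-cluster TEST_NODEGROUP=my-test-ng
```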

README.md: 29 additions & 3 deletions
@@ -168,12 +168,32 @@ To delete a cluster, run:
eksctl delete cluster --name=<name> [--region=<region>]
```

### Scaling nodegroup
### Managing nodegroups

The initial nodegroup can be scaled by using the `eksctl scale nodegroup` command. For example, to scale to 5 nodes:
You can add one or more nodegroups in addition to the initial nodegroup created along with the cluster.

To create an additional nodegroup, run:

```
eksctl create nodegroup --cluster=<cluster name>
```

To list the details of a single nodegroup, or of all nodegroups, use:

```
eksctl get nodegroup --cluster=<cluster name> [<nodegroup name>]
```

A nodegroup can be scaled by using the `eksctl scale nodegroup` command:

```
eksctl scale nodegroup --name=<name> --nodes=5
eksctl scale nodegroup --cluster=<cluster name> --nodes=<desired count> <nodegroup name>
```

For example, to scale the nodegroup `ng-abcd1234` to 5 nodes:

```
eksctl scale nodegroup --cluster=<cluster name> --nodes=5 ng-abcd1234
```

If the desired number of nodes is greater than the current maximum set on the ASG, the maximum value will be increased to match the requested number of nodes, and likewise for the minimum.
@@ -182,6 +202,12 @@ Scaling a nodegroup works by modifying the nodegroup CloudFormation stack via a

> NOTE: Scaling a nodegroup down/in (i.e. reducing the number of nodes) may result in errors as we rely purely on changes to the ASG. This means that the node(s) being removed/terminated aren't explicitly drained. This may be an area for improvement in the future.

To delete a nodegroup, run:

```
eksctl delete nodegroup --cluster=<cluster name> <nodegroup name>
```

### VPC Networking

By default, `eksctl create cluster` instantiates a dedicated VPC, in order to avoid interference with any existing resources for a

humans.txt: 1 addition & 0 deletions
@@ -25,6 +25,7 @@ Anton Gruebel @gruebel
Bryan Peterson @lazyshot
Josue Abreu @gotjosh
Timothy Mukaibo @mukaibot
Yusuke Kuoka @mumoshu

/* Thanks */


integration/creategetdelete_test.go: 76 additions & 4 deletions
@@ -68,6 +68,7 @@ var _ = Describe("(Integration) Create, Get, Scale & Delete", func() {
})

Describe("when creating a cluster with 1 node", func() {
firstNgName := "ng-0"
It("should not return an error", func() {
if !doCreate {
fmt.Fprintf(GinkgoWriter, "will use existing cluster %s", clusterName)
@@ -83,6 +84,7 @@ var _ = Describe("(Integration) Create, Get, Scale & Delete", func() {
args := []string{"create", "cluster",
"--name", clusterName,
"--tags", "eksctl.cluster.k8s.io/v1alpha1/description=eksctl integration test",
"--nodegroup", firstNgName,
"--node-type", "t2.medium",
"--nodes", "1",
"--region", region,
@@ -108,7 +110,7 @@ var _ = Describe("(Integration) Create, Get, Scale & Delete", func() {

It("should have the required cloudformation stacks", func() {
Expect(awsSession).To(HaveExistingStack(fmt.Sprintf("eksctl-%s-cluster", clusterName)))
Expect(awsSession).To(HaveExistingStack(fmt.Sprintf("eksctl-%s-nodegroup-%d", clusterName, 0)))
Expect(awsSession).To(HaveExistingStack(fmt.Sprintf("eksctl-%s-nodegroup-%s", clusterName, firstNgName)))
})

It("should have created a valid kubectl config file", func() {
@@ -184,12 +186,13 @@ var _ = Describe("(Integration) Create, Get, Scale & Delete", func() {
})
})

Context("and scale the cluster", func() {
Context("and scale the initial nodegroup", func() {
It("should not return an error", func() {
args := []string{"scale", "nodegroup",
"--name", clusterName,
"--cluster", clusterName,
"--region", region,
"--nodes", "2",
firstNgName,
}

command := exec.Command(eksctlPath, args...)
@@ -216,6 +219,75 @@ var _ = Describe("(Integration) Create, Get, Scale & Delete", func() {
})
})

Context("and add the second nodegroup", func() {
It("should not return an error", func() {
if nodegroupName == "" {
nodegroupName = "secondng"
}

args := []string{"create", "nodegroup",
"--cluster", clusterName,
"--region", region,
"--nodes", "1",
nodegroupName,
}

command := exec.Command(eksctlPath, args...)
cmdSession, err := gexec.Start(command, GinkgoWriter, GinkgoWriter)

if err != nil {
Fail(fmt.Sprintf("error starting process: %v", err), 1)
}

cmdSession.Wait(scaleTimeout)
Expect(cmdSession.ExitCode()).Should(Equal(0))
})

It("should make it 3 nodes total", func() {
test, err := newKubeTest()
Expect(err).ShouldNot(HaveOccurred())
defer test.Close()

test.WaitForNodesReady(3, scaleTimeout)

nodes := test.ListNodes(metav1.ListOptions{})

Expect(len(nodes.Items)).To(Equal(3))
})

Context("and delete the second nodegroup", func() {
It("should not return an error", func() {
args := []string{"delete", "nodegroup",
"--cluster", clusterName,
"--region", region,
nodegroupName,
}

command := exec.Command(eksctlPath, args...)
cmdSession, err := gexec.Start(command, GinkgoWriter, GinkgoWriter)

if err != nil {
Fail(fmt.Sprintf("error starting process: %v", err), 1)
}

cmdSession.Wait(deleteTimeout)
Expect(cmdSession.ExitCode()).Should(Equal(0))
})

It("should make it 2 nodes total", func() {
test, err := newKubeTest()
Expect(err).ShouldNot(HaveOccurred())
defer test.Close()

test.WaitForNodesReady(2, scaleTimeout)

nodes := test.ListNodes(metav1.ListOptions{})

Expect(len(nodes.Items)).To(Equal(2))
})
})
})

Context("and deleting the cluster", func() {
It("should not return an error", func() {
if !doDelete {
@@ -255,7 +327,7 @@ var _ = Describe("(Integration) Create, Get, Scale & Delete", func() {
}

Expect(awsSession).ToNot(HaveExistingStack(fmt.Sprintf("eksctl-%s-cluster", clusterName)))
Expect(awsSession).ToNot(HaveExistingStack(fmt.Sprintf("eksctl-%s-nodegroup-%d", clusterName, 0)))
Expect(awsSession).ToNot(HaveExistingStack(fmt.Sprintf("eksctl-%s-nodegroup-ng-%d", clusterName, 0)))
})
})
})

integration/integration_test.go: 2 additions & 0 deletions
@@ -24,6 +24,7 @@ var (

// Flags to help with the development of the integration tests
clusterName string
nodegroupName string
doCreate bool
doDelete bool
kubeconfigPath string
@@ -36,6 +37,7 @@ func init() {

// Flags to help with the development of the integration tests
flag.StringVar(&clusterName, "eksctl.cluster", "", "Cluster name (default: generate one)")
flag.StringVar(&nodegroupName, "eksctl.nodegroup", "", "Nodegroup name (default: generate one)")
flag.BoolVar(&doCreate, "eksctl.create", true, "Skip the creation tests. Useful for debugging the tests")
flag.BoolVar(&doDelete, "eksctl.delete", true, "Skip the cleanup after the tests have run")
flag.StringVar(&kubeconfigPath, "eksctl.kubeconfig", "", "Path to kubeconfig (default: create it a temporary file)")

pkg/ami/auto_resolver_test.go: 6 additions & 6 deletions
@@ -8,7 +8,7 @@ import (
"github.com/stretchr/testify/mock"
. "github.com/weaveworks/eksctl/pkg/ami"
"github.com/weaveworks/eksctl/pkg/eks"
"github.com/weaveworks/eksctl/pkg/testutils"
"github.com/weaveworks/eksctl/pkg/testutils/mockprovider"
)

type returnAmi struct {
@@ -22,7 +22,7 @@ var _ = Describe("AMI Auto Resolution", func() {
Describe("When resolving an AMI to use", func() {

var (
p *testutils.MockProvider
p *mockprovider.MockProvider
err error
region string
version string
@@ -166,8 +166,8 @@ var _ = Describe("AMI Auto Resolution", func() {
})
})

func createProviders() (*eks.ClusterProvider, *testutils.MockProvider) {
p := testutils.NewMockProvider()
func createProviders() (*eks.ClusterProvider, *mockprovider.MockProvider) {
p := mockprovider.NewMockProvider()

c := &eks.ClusterProvider{
Provider: p,
@@ -176,7 +176,7 @@ func createProviders() (*eks.ClusterProvider, *testutils.MockProvider) {
return c, p
}

func addMockDescribeImages(p *testutils.MockProvider, expectedNamePattern string, amiId string, amiState string, createdDate string) {
func addMockDescribeImages(p *mockprovider.MockProvider, expectedNamePattern string, amiId string, amiState string, createdDate string) {
p.MockEC2().On("DescribeImages",
mock.MatchedBy(func(input *ec2.DescribeImagesInput) bool {
for _, filter := range input.Filters {
@@ -200,7 +200,7 @@ func addMockDescribeImages(p *testutils.MockProvider, expectedNamePattern string
}, nil)
}

func addMockDescribeImagesMultiple(p *testutils.MockProvider, expectedNamePattern string, returnAmis []returnAmi) {
func addMockDescribeImagesMultiple(p *mockprovider.MockProvider, expectedNamePattern string, returnAmis []returnAmi) {
images := make([]*ec2.Image, len(returnAmis))
for index, ami := range returnAmis {
images[index] = &ec2.Image{

pkg/az/az_test.go: 4 additions & 5 deletions
@@ -5,21 +5,20 @@ import (

. "github.com/weaveworks/eksctl/pkg/az"
"github.com/weaveworks/eksctl/pkg/eks"
"github.com/weaveworks/eksctl/pkg/testutils"

"github.com/aws/aws-sdk-go/aws"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
"github.com/stretchr/testify/mock"

"github.com/aws/aws-sdk-go/service/ec2"
"github.com/weaveworks/eksctl/pkg/testutils/mockprovider"
)

var _ = Describe("AZ", func() {

Describe("When calling SelectZones", func() {
var (
p *testutils.MockProvider
p *mockprovider.MockProvider
err error
)

@@ -249,8 +248,8 @@ var _ = Describe("AZ", func() {
})
})

func createProviders() (*eks.ClusterProvider, *testutils.MockProvider) {
p := testutils.NewMockProvider()
func createProviders() (*eks.ClusterProvider, *mockprovider.MockProvider) {
p := mockprovider.NewMockProvider()

c := &eks.ClusterProvider{
Provider: p,