
Make CloudFormation easier to compose #132

Merged · 15 commits · Sep 4, 2018
14 changes: 13 additions & 1 deletion Gopkg.lock


5 changes: 5 additions & 0 deletions Gopkg.toml
@@ -76,3 +76,8 @@ required = [
[[constraint]]
branch = "master"
name = "github.com/dlespiau/kube-test-harness"

[[constraint]]
name = "github.com/awslabs/goformation"
source = "https://github.com/errordeveloper/goformation"
revision = "1358de5008ca15a213cc432549977e09cd4d2beb"
15 changes: 12 additions & 3 deletions Makefile
@@ -13,7 +13,9 @@ install-build-deps:
@cd build && dep ensure && ./install.sh

.PHONY: test
test:
test: generate
@git diff --exit-code pkg/nodebootstrap/assets.go > /dev/null || (git diff; exit 1)
@git diff --exit-code ./pkg/eks/mocks > /dev/null || (git diff; exit 1)
@go test -v -covermode=count -coverprofile=coverage.out ./pkg/... ./cmd/...
@test -z $(COVERALLS_TOKEN) || goveralls -coverprofile=coverage.out -service=circle-ci

@@ -26,13 +28,20 @@ integration-test-dev: build
-eksctl.delete=false \
-eksctl.kubeconfig=$(HOME)/.kube/eksctl/clusters/integration-test-dev

create-integration-test-dev-cluster: build
@./eksctl create cluster --name=integration-test-dev --auto-kubeconfig

delete-integration-test-dev-cluster: build
@./eksctl delete cluster --name=integration-test-dev --auto-kubeconfig

.PHONY: integration-test
integration-test: build
@go test -tags integration -v -timeout 21m ./tests/integration/...

.PHONY: generated
.PHONY: generate
generate:
@go generate ./pkg/eks ./pkg/eks/mocks
@chmod g-w ./pkg/nodebootstrap/assets/*
@go generate ./pkg/nodebootstrap ./pkg/eks/mocks

.PHONY: eksctl-build-image
eksctl-build-image:
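The new `generate` target regenerates the mocks and the embedded bootstrap assets via `go generate`, and the `test` target now fails if the committed generated files are stale. As a rough illustration of how a package opts in (the directive below is hypothetical — the real directives live in `pkg/nodebootstrap` and `pkg/eks/mocks`, and the exact tool invocation may differ):

```go
// Package nodebootstrap embeds the node bootstrap assets into the binary.
package nodebootstrap

// Illustrative directive: regenerate assets.go from the assets/ directory
// using go-bindata, which this PR pins in build/Gopkg.toml.
//go:generate go-bindata -pkg nodebootstrap -o assets.go assets/
```

Running `go generate ./pkg/nodebootstrap` then executes each such directive in place, which is what `make generate` (and therefore `make test`) drives.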
35 changes: 16 additions & 19 deletions README.md
@@ -2,7 +2,7 @@

[![Circle CI](https://circleci.com/gh/weaveworks/eksctl/tree/master.svg?style=shield)](https://circleci.com/gh/weaveworks/eksctl/tree/master) [![Coverage Status](https://coveralls.io/repos/github/weaveworks/eksctl/badge.svg?branch=master)](https://coveralls.io/github/weaveworks/eksctl?branch=master)[![Go Report Card](https://goreportcard.com/badge/github.com/weaveworks/eksctl)](https://goreportcard.com/report/github.com/weaveworks/eksctl)

`eksctl` is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, and based on Amazon's official CloudFormation templates.
`eksctl` is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, and uses CloudFormation.

You can create a cluster in minutes with just one command – **`eksctl create cluster`**!

@@ -49,24 +49,21 @@ able to use `kubectl`. You will need to make sure to use the same AWS API creden
Example output:
```
$ eksctl create cluster
2018-06-06T16:40:58+01:00 [ℹ] importing SSH public key "~/.ssh/id_rsa.pub" as "EKS-extravagant-sculpture-1528299658"
2018-06-06T16:40:58+01:00 [ℹ] creating EKS cluster "extravagant-sculpture-1528299658" in "us-west-2" region
2018-06-06T16:40:58+01:00 [ℹ] creating VPC stack "EKS-extravagant-sculpture-1528299658-VPC"
2018-06-06T16:40:58+01:00 [ℹ] creating ServiceRole stack "EKS-extravagant-sculpture-1528299658-ServiceRole"
2018-06-06T16:41:19+01:00 [✔] created ServiceRole stack "EKS-extravagant-sculpture-1528299658-ServiceRole"
2018-06-06T16:42:19+01:00 [✔] created VPC stack "EKS-extravagant-sculpture-1528299658-VPC"
2018-06-06T16:42:19+01:00 [ℹ] creating control plane "extravagant-sculpture-1528299658"
2018-06-06T16:50:41+01:00 [✔] created control plane "extravagant-sculpture-1528299658"
2018-06-06T16:50:41+01:00 [ℹ] creating DefaultNodeGroup stack "EKS-extravagant-sculpture-1528299658-DefaultNodeGroup"
2018-06-06T16:54:22+01:00 [✔] created DefaultNodeGroup stack "EKS-extravagant-sculpture-1528299658-DefaultNodeGroup"
2018-06-06T16:54:22+01:00 [✔] all EKS cluster "extravagant-sculpture-1528299658" resources has been created
2018-06-06T16:54:22+01:00 [ℹ] saved kubeconfig as "~/.kube/config"
2018-06-06T16:54:23+01:00 [ℹ] the cluster has 0 nodes
2018-06-06T16:54:23+01:00 [ℹ] waiting for at least 2 nodes to become ready
2018-06-06T16:54:49+01:00 [ℹ] the cluster has 2 nodes
2018-06-06T16:54:49+01:00 [ℹ] node "ip-192-168-185-142.ec2.internal" is ready
2018-06-06T16:54:49+01:00 [ℹ] node "ip-192-168-221-172.ec2.internal" is ready
2018-06-06T16:54:49+01:00 [ℹ] EKS cluster "extravagant-sculpture-1528299658" is ready in "us-west-2" region
2018-08-06T16:32:59+01:00 [ℹ] setting availability zones to [us-west-2c us-west-2b us-west-2a]
2018-08-06T16:32:59+01:00 [ℹ] creating EKS cluster "adorable-painting-1533569578" in "us-west-2" region
2018-08-06T16:32:59+01:00 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
2018-08-06T16:32:59+01:00 [ℹ] if you encounter any issues, check CloudFormation console first
2018-08-06T16:32:59+01:00 [ℹ] creating cluster stack "eksctl-adorable-painting-1533569578-cluster"
2018-08-06T16:43:43+01:00 [ℹ] creating nodegroup stack "eksctl-adorable-painting-1533569578-nodegroup-0"
2018-08-06T16:47:14+01:00 [✔] all EKS cluster resources for "adorable-painting-1533569578" have been created
2018-08-06T16:47:14+01:00 [✔] saved kubeconfig as "/Users/ilya/.kube/config"
2018-08-06T16:47:20+01:00 [ℹ] the cluster has 0 nodes
2018-08-06T16:47:20+01:00 [ℹ] waiting for at least 2 nodes to become ready
2018-08-06T16:47:57+01:00 [ℹ] the cluster has 2 nodes
2018-08-06T16:47:57+01:00 [ℹ] node "ip-192-168-115-52.us-west-2.compute.internal" is ready
2018-08-06T16:47:57+01:00 [ℹ] node "ip-192-168-217-205.us-west-2.compute.internal" is ready
2018-08-06T16:48:00+01:00 [ℹ] kubectl command should work with "~/.kube/config", try 'kubectl get nodes'
2018-08-06T16:48:00+01:00 [✔] EKS cluster "adorable-painting-1533569578" in "us-west-2" region is ready
```

To list the details about a cluster or all of the clusters, use:
1 change: 1 addition & 0 deletions build/Dockerfile
@@ -4,6 +4,7 @@ RUN apk add --update \
curl \
git \
make \
gcc \
&& true

ENV DEP_VERSION v0.4.1
5 changes: 2 additions & 3 deletions build/Gopkg.lock


4 changes: 4 additions & 0 deletions build/Gopkg.toml
@@ -7,6 +7,10 @@ required = [
"golang.org/x/tools/cmd/stringer",
]

[[constraint]]
name = "github.com/jteeuwen/go-bindata"
revision = "6025e8de665b31fa74ab1a66f2cddd8c0abf887e"

[[constraint]]
name = "github.com/goreleaser/goreleaser"
version = "v0.77.2"
37 changes: 22 additions & 15 deletions cmd/eksctl/create.go
@@ -10,6 +10,7 @@ import (
"github.com/kubicorn/kubicorn/pkg/logger"

"github.com/weaveworks/eksctl/pkg/eks"
"github.com/weaveworks/eksctl/pkg/eks/api"
"github.com/weaveworks/eksctl/pkg/utils"
"github.com/weaveworks/eksctl/pkg/utils/kubeconfig"
)
@@ -46,7 +47,7 @@
)

func createClusterCmd() *cobra.Command {
cfg := &eks.ClusterConfig{}
cfg := &api.ClusterConfig{}

cmd := &cobra.Command{
Use: "cluster",
@@ -75,25 +76,27 @@ func createClusterCmd() *cobra.Command {
fs.IntVarP(&cfg.MinNodes, "nodes-min", "m", 0, "minimum nodes in ASG")
fs.IntVarP(&cfg.MaxNodes, "nodes-max", "M", 0, "maximum nodes in ASG")

fs.IntVar(&cfg.MaxPodsPerNode, "max-pods-per-node", 0, "maximum number of pods per node (set automatically if unspecified)")
fs.StringSliceVar(&availabilityZones, "zones", nil, "(auto-select if unspecified)")

fs.BoolVar(&cfg.NodeSSH, "ssh-access", false, "control SSH access for nodes")

fs.StringVar(&cfg.SSHPublicKeyPath, "ssh-public-key", DEFAULT_SSH_PUBLIC_KEY, "SSH public key to use for nodes (import from local path, or use existing EC2 key pair)")

fs.BoolVar(&writeKubeconfig, "write-kubeconfig", true, "toggle writing of kubeconfig")
fs.BoolVar(&autoKubeconfigPath, "auto-kubeconfig", false, fmt.Sprintf("save kubeconfig file by cluster name, e.g. %q", kubeconfig.AutoPath(exampleClusterName)))
fs.StringVar(&kubeconfigPath, "kubeconfig", kubeconfig.DefaultPath, "path to write kubeconfig (incompatible with --auto-kubeconfig)")
fs.BoolVar(&setContext, "set-kubeconfig-context", true, "if true then current-context will be set in kubeconfig; if a context is already set then it will be overwritten")

fs.DurationVar(&cfg.WaitTimeout, "aws-api-timeout", eks.DefaultWaitTimeout, "")
fs.DurationVar(&cfg.WaitTimeout, "aws-api-timeout", api.DefaultWaitTimeout, "")
fs.MarkHidden("aws-api-timeout") // TODO deprecate in 0.2.0
fs.DurationVar(&cfg.WaitTimeout, "timeout", eks.DefaultWaitTimeout, "max wait time in any polling operations")
fs.DurationVar(&cfg.WaitTimeout, "timeout", api.DefaultWaitTimeout, "max wait time in any polling operations")

fs.BoolVar(&cfg.Addons.WithIAM.PolicyAmazonEC2ContainerRegistryPowerUser, "full-ecr-access", false, "enable full access to ECR")

return cmd
}

func doCreateCluster(cfg *eks.ClusterConfig, name string) error {
func doCreateCluster(cfg *api.ClusterConfig, name string) error {
ctl := eks.New(cfg)

if err := ctl.CheckAuth(); err != nil {
@@ -133,18 +136,22 @@ func doCreateCluster(cfg *eks.ClusterConfig, name string) error {
logger.Info("creating EKS cluster %q in %q region", cfg.ClusterName, cfg.Region)

{ // core action
taskErr := make(chan error)
// create each of the core cloudformation stacks
go ctl.CreateCluster(taskErr)
stackManager := ctl.NewStackManager()
logger.Info("will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup")
logger.Info("if you encounter any issues, check CloudFormation console first")

errs := stackManager.CreateClusterWithInitialNodeGroup()
// read any errors (it only gets non-nil errors)
for err := range taskErr {
logger.Info("an error has occurred and cluster hasn't been created properly")
if len(errs) > 0 {
logger.Info("%d error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console", len(errs))
logger.Info("to cleanup resources, run 'eksctl delete cluster --region=%s --name=%s'", cfg.Region, cfg.ClusterName)
return err
for _, err := range errs {
logger.Critical("%s\n", err.Error())
}
return fmt.Errorf("failed to create cluster %q", cfg.ClusterName)
}
}

logger.Success("all EKS cluster %q resources have been created", cfg.ClusterName)
logger.Success("all EKS cluster resources for %q have been created", cfg.ClusterName)

// obtain cluster credentials, write kubeconfig

@@ -172,17 +179,17 @@ func doCreateCluster(cfg *eks.ClusterConfig, name string) error {
return err
}

if err := cfg.WaitForControlPlane(clientSet); err != nil {
if err := ctl.WaitForControlPlane(clientSet); err != nil {
return err
}

// authorise nodes to join
if err := cfg.CreateDefaultNodeGroupAuthConfigMap(clientSet); err != nil {
if err := ctl.CreateDefaultNodeGroupAuthConfigMap(clientSet); err != nil {
return err
}

// wait for nodes to join
if err := cfg.WaitForNodes(clientSet); err != nil {
if err := ctl.WaitForNodes(clientSet); err != nil {
return err
}

@@ -194,7 +201,7 @@ func doCreateCluster(cfg *eks.ClusterConfig, name string) error {
return err
}
if err := utils.CheckAllCommands(kubeconfigPath, setContext, clientConfigBase.ContextName, env); err != nil {
logger.Critical(err.Error())
logger.Critical("%s\n", err.Error())
logger.Info("cluster should be functional despite missing (or misconfigured) client binaries")
}
}
50 changes: 35 additions & 15 deletions cmd/eksctl/delete.go
@@ -9,6 +9,7 @@ import (
"github.com/kubicorn/kubicorn/pkg/logger"

"github.com/weaveworks/eksctl/pkg/eks"
"github.com/weaveworks/eksctl/pkg/eks/api"
"github.com/weaveworks/eksctl/pkg/utils/kubeconfig"
)

@@ -27,7 +28,7 @@ func deleteCmd() *cobra.Command {
}

func deleteClusterCmd() *cobra.Command {
cfg := &eks.ClusterConfig{}
cfg := &api.ClusterConfig{}

cmd := &cobra.Command{
Use: "cluster",
@@ -47,10 +48,12 @@ func deleteClusterCmd() *cobra.Command {
fs.StringVarP(&cfg.Region, "region", "r", DEFAULT_EKS_REGION, "AWS region")
fs.StringVarP(&cfg.Profile, "profile", "p", "", "AWS credentials profile to use (overrides the AWS_PROFILE environment variable)")

fs.DurationVar(&cfg.WaitTimeout, "timeout", api.DefaultWaitTimeout, "max wait time in any polling operations")

return cmd
}

func doDeleteCluster(cfg *eks.ClusterConfig, name string) error {
func doDeleteCluster(cfg *api.ClusterConfig, name string) error {
ctl := eks.New(cfg)

if err := ctl.CheckAuth(); err != nil {
@@ -71,34 +74,51 @@ func doDeleteCluster(cfg *eks.ClusterConfig, name string) error {

logger.Info("deleting EKS cluster %q", cfg.ClusterName)

debugError := func(err error) {
logger.Debug("continue despite error: %v", err)
handleError := func(err error) bool {
if err != nil {
logger.Debug("continue despite error: %v", err)
return true
}
return false
}

if err := ctl.DeleteControlPlane(); err != nil {
debugError(err)
// We can remove all 'DeprecatedDelete*' calls in 0.2.0

stackManager := ctl.NewStackManager()

if err := stackManager.WaitDeleteNodeGroup(); err != nil {
handleError(err)
}
if err := ctl.DeleteStackControlPlane(); err != nil {
debugError(err)

if err := stackManager.DeleteCluster(); err != nil {
if handleError(err) {
if err := ctl.DeprecatedDeleteControlPlane(); err != nil {
if handleError(err) {
if err := stackManager.DeprecatedDeleteStackControlPlane(); err != nil {
handleError(err)
}
}
}
}
}

if err := ctl.DeleteStackServiceRole(); err != nil {
debugError(err)
if err := stackManager.DeprecatedDeleteStackServiceRole(); err != nil {
handleError(err)
}

if err := ctl.DeleteStackVPC(); err != nil {
debugError(err)
if err := stackManager.DeprecatedDeleteStackVPC(); err != nil {
handleError(err)
}

if err := ctl.DeleteStackDefaultNodeGroup(); err != nil {
debugError(err)
if err := stackManager.DeprecatedDeleteStackDefaultNodeGroup(); err != nil {
handleError(err)
}

ctl.MaybeDeletePublicSSHKey()

kubeconfig.MaybeDeleteConfig(cfg.ClusterName)

logger.Success("all EKS cluster %q resources will be deleted (if in doubt, check CloudFormation console)", cfg.ClusterName)
logger.Success("all EKS cluster resources for %q will be deleted (if in doubt, check CloudFormation console)", cfg.ClusterName)

return nil
}
9 changes: 5 additions & 4 deletions cmd/eksctl/get.go
@@ -4,10 +4,11 @@ import (
"fmt"
"os"

"github.com/weaveworks/eksctl/pkg/eks"

"github.com/kubicorn/kubicorn/pkg/logger"
"github.com/spf13/cobra"

"github.com/weaveworks/eksctl/pkg/eks"
"github.com/weaveworks/eksctl/pkg/eks/api"
)

const (
@@ -34,7 +35,7 @@ func getCmd() *cobra.Command {
}

func getClusterCmd() *cobra.Command {
cfg := &eks.ClusterConfig{}
cfg := &api.ClusterConfig{}

cmd := &cobra.Command{
Use: "cluster",
@@ -60,7 +61,7 @@ func getClusterCmd() *cobra.Command {
return cmd
}

func doGetCluster(cfg *eks.ClusterConfig, name string) error {
func doGetCluster(cfg *api.ClusterConfig, name string) error {
ctl := eks.New(cfg)

if err := ctl.CheckAuth(); err != nil {