
Added CloudWatch Support #831

Merged 1 commit into eksctl-io:master on Jun 3, 2019

Conversation

@toricls (Contributor) commented on Jun 1, 2019

Description

This change enables nodegroups to put metrics and logs into CloudWatch via monitoring and log-forwarding agents, including the CloudWatch agent (used by Container Insights).

Users can enable this by adding `cloudWatch: true` under `nodeGroups.[x].iam.withAddonPolicies`, like the other addon policies.
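For example, a minimal nodegroup stanza enabling the new flag would look like this (the nodegroup name `ng-1` is illustrative):

```
nodeGroups:
  - name: ng-1
    iam:
      withAddonPolicies:
        cloudWatch: true
```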

At this time, the `CloudWatchAgentServerPolicy` managed policy allows the following actions for any resource:

```
cloudwatch:PutMetricData
ec2:DescribeTags
logs:PutLogEvents
logs:DescribeLogStreams
logs:DescribeLogGroups
logs:CreateLogStream
logs:CreateLogGroup
```

and also allows the following action for `arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*`:

```
ssm:GetParameter
```
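For reference, the actions above correspond to an IAM policy document shaped roughly like this. This is a sketch reconstructed from the action lists in this PR, not the verbatim AWS-managed policy document:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ec2:DescribeTags",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:CreateLogStream",
        "logs:CreateLogGroup"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter"],
      "Resource": "arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*"
    }
  ]
}
```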

Checklist

  • [x] Code compiles correctly (i.e. make build)
  • [x] Added tests that cover your change (if possible)
  • [x] All unit tests passing (i.e. make test)
  • [x] All integration tests passing (i.e. make integration-test)
  • [ ] Added/modified documentation as required (such as the README.md, and examples directory)
  • [ ] Added yourself to the humans.txt file

@toricls (Contributor, Author) commented on Jun 1, 2019

Additionally tested with ./eksctl create cluster -f cluster.yaml and the following cluster.yaml.

```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cwrole-test
  region: us-west-2

nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
    ssh:
      publicKeyName: oregon
    iam:
      withAddonPolicies:
        albIngress: true
        xRay: true
        cloudWatch: true
```

Got the following:

```
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2a us-west-2c us-west-2d]
[ℹ]  subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  nodegroup "ng-1" will use "ami-0923e4b35a30a5f53" [AmazonLinux2/1.12]
[ℹ]  using EC2 key pair "oregon"
[ℹ]  creating EKS cluster "cwrole-test" in "us-west-2" region
[ℹ]  1 nodegroup (ng-1) was included
[ℹ]  will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --name=cwrole-test'
[ℹ]  2 sequential tasks: { create cluster control plane "cwrole-test", create nodegroup "ng-1" }
[ℹ]  building cluster stack "eksctl-cwrole-test-cluster"
[ℹ]  deploying stack "eksctl-cwrole-test-cluster"
[ℹ]  building nodegroup stack "eksctl-cwrole-test-nodegroup-ng-1"
[ℹ]  --nodes-min=2 was set automatically for nodegroup ng-1
[ℹ]  --nodes-max=2 was set automatically for nodegroup ng-1
[ℹ]  deploying stack "eksctl-cwrole-test-nodegroup-ng-1"
[✔]  all EKS cluster resource for "cwrole-test" had been created
[✔]  saved kubeconfig as "/Users/xxxxxxxx/.kube/config"
[ℹ]  adding role "arn:aws:iam::xxxxxxxxxxxx:role/eksctl-cwrole-test-nodegroup-ng-1-NodeInstanceRole-DT9T2FZ16V6F" to auth ConfigMap
[ℹ]  nodegroup "ng-1" has 0 node(s)
[ℹ]  waiting for at least 2 node(s) to become ready in "ng-1"
[ℹ]  nodegroup "ng-1" has 2 node(s)
[ℹ]  node "ip-192-168-60-53.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-76-68.us-west-2.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/xxxxxxxx/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "cwrole-test" in "us-west-2" region is ready
```

@martina-if (Contributor) commented:

Thanks for the PR! LGTM

@errordeveloper (Contributor) left a review comment:

Thank you!

@errordeveloper merged commit 4db606b into eksctl-io:master on Jun 3, 2019