
Add support for AWS/EKS #293

Closed
falfaro opened this issue Dec 31, 2018 · 11 comments

falfaro commented Dec 31, 2018

Add support in BKPR for EKS Kubernetes clusters running on AWS.

falfaro self-assigned this Dec 31, 2018

falfaro commented Jan 14, 2019

After struggling for days to get oauth2_proxy and AWS Cognito to play along, I finally found a configuration that works. Essentially, oauth2_proxy uses the OpenID Connect provider to interact with a user pool in AWS Cognito, and cookie refresh has to be disabled.
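
For reference, this is roughly the shape of the oauth2_proxy invocation involved; a minimal sketch, assuming a Cognito user pool identified by ${AWS_REGION} and ${AWS_USER_POOL_ID} (the issuer URL format is Cognito's standard OIDC endpoint; the other values are placeholders):

# Sketch: point oauth2_proxy's generic OIDC provider at the Cognito user pool,
# and disable cookie refresh, which is what made the combination work.
oauth2_proxy \
  --provider=oidc \
  --oidc-issuer-url="https://cognito-idp.${AWS_REGION}.amazonaws.com/${AWS_USER_POOL_ID}" \
  --client-id="${CLIENT_ID}" \
  --client-secret="${CLIENT_SECRET}" \
  --cookie-secret="${COOKIE_SECRET}" \
  --cookie-refresh=0s \
  --email-domain='*'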


falfaro commented Jan 14, 2019

The aws branch contains my latest changes. Currently working: the entire ingress stack, plus Prometheus, Fluentd, Elasticsearch and Kibana:

$ kubectl --namespace=kubeprod get pods
NAME                                        READY   STATUS    RESTARTS   AGE
alertmanager-0                              2/2     Running   0          2d
cert-manager-665685f7bb-lstjq               1/1     Running   0          2d
dex-7bb4575b68-wlgx7                        1/1     Running   0          2d
elasticsearch-logging-0                     2/2     Running   0          7m
elasticsearch-logging-1                     2/2     Running   0          7m
elasticsearch-logging-2                     2/2     Running   0          7m
external-dns-589f96c64c-hzm6l               1/1     Running   0          2d
fluentd-es-chg5l                            1/1     Running   0          7m
fluentd-es-qzbrp                            1/1     Running   0          7m
fluentd-es-vtwc9                            1/1     Running   0          7m
kibana-67bdd667c9-zc29r                     1/1     Running   0          7m
kube-state-metrics-59c787544d-pkvsv         2/2     Running   0          2d
nginx-ingress-controller-6b4866b86f-2r7j9   1/1     Running   0          2d
node-exporter-8bd7k                         1/1     Running   0          2d
node-exporter-8nvd8                         1/1     Running   0          2d
node-exporter-x45ms                         1/1     Running   0          2d
oauth2-proxy-5cbc9c9956-jswmp               1/1     Running   0          1h
prometheus-0                                2/2     Running   0          2d

arapulido commented

@falfaro can you also briefly describe the configuration steps needed, please? That way @anguslees can start making changes to the kubeprod tool. Thanks!


falfaro commented Jan 14, 2019

I have everything working under EKS, including Grafana. All the changes are in the aws branch.


falfaro commented Jan 14, 2019

@arapulido @anguslees regarding the configuration, the only "magic" here is the eksctl command used to create the cluster:

eksctl create cluster --name=felipe --asg-access --ssh-access --nodes=3 --auto-kubeconfig --version=1.11 --region=eu-central-1 --tags Name=eks-felipe,created_by=felipe

The --external-dns-access flag enables a rather broad IAM policy on the nodes that allows the Kubelet, and therefore external-dns, to change any Route53 zone. Yes, any Route53 zone. I need to figure out how to create a custom IAM policy that only allows access to the Route53 zone used for the BKPR deployment, and apply that policy to the Kubelet, or better yet, just to external-dns.

The other "magic" part is the AWS Cognito user pool. Essentially, it is just a User Pool that configures an application with the following callbacks:

https://grafana.my.zone.example.com/oauth2/callback, https://kibana.my.zone.example.com/oauth2/callback, https://prometheus.my.zone.example.com/oauth2/callback

And under the "App integration" settings, make sure you allow the "Authorization code grant" flow with the following scopes: "email", "openid" and "profile".
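
The same app client can also be created from the command line; a hedged sketch with the AWS CLI, assuming the user pool already exists and substituting your own pool ID and DNS zone:

# Sketch: create the app client with the authorization-code grant
# and the email/openid/profile scopes.
aws cognito-idp create-user-pool-client \
  --user-pool-id "${AWS_USER_POOL_ID}" \
  --client-name bkpr \
  --generate-secret \
  --allowed-o-auth-flows code \
  --allowed-o-auth-flows-user-pool-client \
  --allowed-o-auth-scopes email openid profile \
  --supported-identity-providers COGNITO \
  --callback-urls \
    "https://grafana.my.zone.example.com/oauth2/callback" \
    "https://kibana.my.zone.example.com/oauth2/callback" \
    "https://prometheus.my.zone.example.com/oauth2/callback"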

The application ID and AWS region have to be configured in the kubeprod-autogen.json file, like this:

{
  "dnsZone": "my.zone.example.com",
  "contactEmail": "xxx@yyy.com",
  "externalDns": {
    "credentials": "...",
    "project": "bkprtesting"
  },
  "oauthProxy": {
    "client_id": "42ohkq7cmp72akptr0ltomteci",
    "client_secret": "1ucuhu0...",
    "cookie_secret": "sXXaw...",
    "aws_region": "eu-central-1",
    "aws_user_pool_id": "eu-central-1_P7sIQ92Cd",
  }
}

I will write some automation to get this configured; for example, using Terraform.
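
Until that automation lands, the user pool itself can also be created from the AWS CLI; a sketch (the pool and domain names are placeholders, and a user-pool domain is needed because Cognito's OAuth endpoints live under it):

# Create the user pool and a hosted domain for its OAuth endpoints
aws cognito-idp create-user-pool --pool-name "bkpr-${EKS_CLUSTER_NAME}"
aws cognito-idp create-user-pool-domain \
  --user-pool-id "${AWS_USER_POOL_ID}" \
  --domain "bkpr-${EKS_CLUSTER_NAME}"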


falfaro commented Jan 15, 2019

I have successfully reconfigured External DNS to use a dedicated user and access key, associated with a custom IAM policy that only allows read-only access to Route53, plus read/write access to the Route53 hosted zone used for BKPR.

To achieve this manually using the AWS Console, browse to http://console.aws.amazon.com and from there open the IAM module. In the "Policies" section, click "Create policy" and select the JSON editor (instead of the Visual editor). Then paste the following contents:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:GetHostedZoneCount",
                "route53:ListHostedZones",
                "route53:ListHostedZonesByName",
                "route53:ListResourceRecordSets"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/${HOSTED_ZONE_ID}"
            ]
        }
    ]
}

Make sure to replace ${HOSTED_ZONE_ID} with the ID of the Route53 zone you are using in BKPR.

Next, click the "Review policy" button. Give the policy a meaningful name, like eks-${EKS_CLUSTER_NAME}-external-dns, plus a description, and then click the "Create policy" button.

Next, go back to the IAM module in the AWS Console and, from the "Users" section, select "Add user":

1. Give the user a name like eks-${EKS_CLUSTER_NAME}, tick "Programmatic access", and click "Next: Permissions".
2. Click "Attach existing policies directly", type the name of the IAM policy you created before (eks-${EKS_CLUSTER_NAME}-external-dns) into the "Filter policies" text box, and check it in the filtered list (it should be the only entry).
3. Click "Next: Tags", where you can enter additional tags, then "Next: Review" and make sure everything looks right.
4. Click "Create user". On the final screen, take note of the "Access key ID" and "Secret access key", or download the CSV file.
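
The whole sequence can also be scripted; a sketch with the AWS CLI, assuming the policy JSON above has been saved as external-dns-policy.json:

# Create the scoped policy, the dedicated user, attach one to the other,
# and mint the access key pair (printed by the last command).
aws iam create-policy \
  --policy-name "eks-${EKS_CLUSTER_NAME}-external-dns" \
  --policy-document file://external-dns-policy.json
aws iam create-user --user-name "eks-${EKS_CLUSTER_NAME}"
aws iam attach-user-policy \
  --user-name "eks-${EKS_CLUSTER_NAME}" \
  --policy-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:policy/eks-${EKS_CLUSTER_NAME}-external-dns"
aws iam create-access-key --user-name "eks-${EKS_CLUSTER_NAME}"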

Now you will need to generate the kubeprod-autogen.json file by hand, using something like this:

{
  "dnsZone": "eks.felipe-alfaro.com",
  "contactEmail": "felipe@bitnami.com",
  "externalDns": {
    "aws_access_key_id": "${AWS_SECRET_KEY_ID}",
    "aws_access_key_secret": "${AWS_SECRET_KEY_SECRET}",
    "project": "bkprtesting"
  },
  "oauthProxy": {
    "client_id": "42ohk...",
    "client_secret": "1ucuh...",
    "cookie_secret": "sXXaw...",
    "aws_region": "${AWS_REGION}",
    "aws_user_pool_id": "${AWS_USER_POOL_ID}",
  }
}

Here, ${AWS_REGION} is the AWS region name (like eu-central-1), ${AWS_USER_POOL_ID} is the ID of the user pool you created in Cognito, and ${AWS_SECRET_KEY_ID} and ${AWS_SECRET_KEY_SECRET} are the "Access key ID" and "Secret access key" values, respectively, that you noted down before. The client_id, client_secret and cookie_secret fields have to be filled in from the app client you configured in your Cognito user pool.
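
For the cookie_secret, any sufficiently random value works; one way to generate it (24 random bytes, base64-encoded to a 32-character string):

# Generate a random secret for oauth2_proxy's cookie_secret field
openssl rand -base64 24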


falfaro commented Jan 15, 2019

The root manifest looks like this (the usual root manifest used in AKS and GKE):

$ cat kubeprod-manifest.jsonnet |more
(import "./platforms/eks.jsonnet") {
        config:: import "kubeprod-autogen.json",
}

Then use the following command to update/deploy:

kubecfg update --gc-tag=kube_prod_runtime --ignore-unknown kubeprod-manifest.jsonnet
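
kubecfg can also render the manifests and diff them against the live cluster before applying anything, which is a useful sanity check (standard kubecfg subcommands):

# Render locally, then show what would change in the cluster
kubecfg show kubeprod-manifest.jsonnet
kubecfg diff kubeprod-manifest.jsonnet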


falfaro commented Jan 15, 2019

Use the code from the aws branch for testing this.


falfaro commented Jan 25, 2019

We are making good progress on this. The Jsonnet manifests are already merged into master. Support for automating and configuring integration between External DNS and EKS is under review.


falfaro commented Feb 11, 2019

For the record, support for EKS has been merged into master: e27db05 and 376a421.

arapulido commented

This is now released. Closing. 🎉
