Introducing the Portworx module to the EKS add-on

* Added the Portworx module and updated Kubernetes/main.tf to include the module
* Uses the module from the Terraform registry

Signed-off-by: pragrawal10 <pragrawal@purestorage.com>
Co-authored-by: pragrawal10 <pragrawal@purestorage.com>

Added examples for the Portworx add-on. (#5)

* Added a getting-started example that showcases the following:
  * Creates a new EKS cluster
  * Installs Portworx on it
  * Requires explicit passing of AWS credentials

* The "Portworx with IAM policy" example does the above, but uses an IAM policy instead of AWS credentials

Signed-off-by: pragrawal10 <pragrawal@purestorage.com>

Ran the pre-commit check and fixed indentations (#6)

Signed-off-by: Tapas Sharma <tapas@portworx.com>
pragrawal-px authored and Tapas Sharma committed Sep 13, 2022
1 parent 4b47ef0 commit 63c02f5
Showing 14 changed files with 1,054 additions and 0 deletions.
docs/add-ons/portworx.md (142 additions)
# Portworx add-on for EKS Blueprints

## Introduction

[Portworx](https://portworx.com/) is a Kubernetes data services platform that provides persistent storage, data protection, disaster recovery, and other capabilities for containerized applications. This blueprint installs Portworx in an Amazon Elastic Kubernetes Service (EKS) environment.

- [Helm chart](https://github.com/portworx/helm)

## Example Blueprints

To get started, look at these sample [blueprints](https://github.com/portworx/terraform-eksblueprints-portworx-addon/tree/main/blueprint).

## Requirements

For the add-on to work, Portworx needs additional permissions on AWS resources, which can be granted in either of the following two ways. Both flows are also covered in the [sample blueprints](https://github.com/portworx/terraform-eksblueprints-portworx-addon/tree/main/blueprint):

## Method 1: Custom IAM policy

1. Add the below code block in your terraform script to create a policy with the required permissions. Make a note of the resource name for the policy you created:

```hcl
resource "aws_iam_policy" "<policy-resource-name>" {
  name = "<policy-name>"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ec2:AttachVolume",
          "ec2:ModifyVolume",
          "ec2:DetachVolume",
          "ec2:CreateTags",
          "ec2:CreateVolume",
          "ec2:DeleteTags",
          "ec2:DeleteVolume",
          "ec2:DescribeTags",
          "ec2:DescribeVolumeAttribute",
          "ec2:DescribeVolumesModifications",
          "ec2:DescribeVolumeStatus",
          "ec2:DescribeVolumes",
          "ec2:DescribeInstances",
          "autoscaling:DescribeAutoScalingGroups"
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}
```

2. Run the `terraform apply` command for the policy (replacing `<policy-resource-name>` with your resource name):

```bash
terraform apply -target="aws_iam_policy.<policy-resource-name>"
```
3. Attach the newly created AWS policy ARN to the node groups in your cluster:

```hcl
managed_node_groups = {
  node_group_1 = {
    node_group_name = "my_node_group_1"
    instance_types  = ["t2.small"]
    min_size        = 3
    max_size        = 3
    subnet_ids      = module.vpc.private_subnets

    # Add this line, or append the new policy ARN to the list if it already exists
    additional_iam_policies = [aws_iam_policy.<policy-resource-name>.arn]
  }
}
```
4. Run the command below to apply the changes. (This step can be performed even while the cluster is up and running; the policy is attached without restarting the nodes.)
```bash
terraform apply -target="module.eks_blueprints"
```

## Method 2: AWS Security Credentials

Create an IAM user with the same policy, generate an AWS access key ID and secret access key pair for it, and share them with Portworx.

It is recommended to pass these values to the Terraform script through environment variables, as demonstrated below.
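If you prefer to manage this user in Terraform as well, a minimal sketch is shown below. The resource names and the referenced `aws_iam_policy.portworx` policy are hypothetical; adapt them to the policy you created in Method 1.

```hcl
# Hypothetical resource names; adapt to your configuration.
resource "aws_iam_user" "portworx" {
  name = "portworx-storage"
}

# Attach the same policy used in Method 1 to the user.
resource "aws_iam_user_policy_attachment" "portworx" {
  user       = aws_iam_user.portworx.name
  policy_arn = aws_iam_policy.portworx.arn
}

# Generates the access key pair; note the secret is stored in Terraform state.
resource "aws_iam_access_key" "portworx" {
  user = aws_iam_user.portworx.name
}
```

Be aware that the generated secret key lands in the Terraform state file, so this approach only suits setups where the state is stored securely.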


1. Pass the key pair to Portworx by setting these two environment variables:

```bash
export TF_VAR_aws_access_key_id=<access-key-id-value>
export TF_VAR_aws_secret_access_key=<access-key-secret>
```

2. To use the Portworx add-on with this method, pass these credentials along with the ```enable_portworx``` variable in the following manner:

```hcl
enable_portworx = true
portworx_chart_values = {
  awsAccessKeyId     = var.aws_access_key_id
  awsSecretAccessKey = var.aws_secret_access_key
  # other custom values for Portworx configuration
}
```

3. Define the two variables ```aws_access_key_id``` and ```aws_secret_access_key```. Terraform then automatically populates them from the corresponding `TF_VAR_`-prefixed environment variables.


```hcl
variable "aws_access_key_id" {
  type    = string
  default = ""
}

variable "aws_secret_access_key" {
  type    = string
  default = ""
}
```

Alternatively, you can provide the values of the key pair directly by hardcoding them into the script, although this is not recommended for version-controlled code.

## Usage

After completing the requirements step, installing Portworx is simple: set the ```enable_portworx``` variable to `true` inside the Kubernetes add-on module.

```hcl
enable_portworx = true
```
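For context, this flag sits inside the Kubernetes add-ons module block. A minimal sketch, assuming the `eks_blueprints_kubernetes_addons` module used in the sample blueprints and typical cluster-ID wiring (the `eks_cluster_id` reference is an assumption; adapt it to your configuration):

```hcl
module "eks_blueprints_kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

  # Assumed wiring: pass the cluster ID from your eks_blueprints module.
  eks_cluster_id = module.eks_blueprints.eks_cluster_id

  enable_portworx = true
}
```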

To customize the Portworx installation, pass the configuration parameters as an object, as shown below:

```hcl
enable_portworx = true
portworx_chart_values = {
  clusterName  = "testCluster"
  imageVersion = "2.11.1"
}
```
examples/portworx/getting_started/README.md (190 additions)
# Portworx add-on for EKS Blueprints

This guide helps you install Portworx in an EKS environment using EKS Blueprints and its Kubernetes add-on module. In this guide, we create a custom IAM policy and attach it to the node groups in the EKS cluster to give Portworx the required access.


The following list provides an overview of the components created by this module:

- 1x VPC with private and public subnets, an internet gateway, route tables, a NAT gateway, network interfaces, and a network ACL
- 1x EKS cluster
- 1x EKS managed node group with multiple nodes
- Installation of Portworx via Helm on the EKS cluster

Portworx supports native integration with the AWS APIs for drive creation and lifecycle management. You can provide a drive specification to the Portworx add-on in the format described in the [configuration](#portworx-configuration) section, or instruct Portworx to use previously attached volumes.
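Such a drive specification is passed through `portworx_chart_values`; a minimal sketch using the `drives` format from the configuration table (the sizes and volume types here are illustrative):

```hcl
portworx_chart_values = {
  # Two EBS volumes per node: a 200 GiB gp2 drive and a 500 GiB gp3 drive
  drives = "type=gp2,size=200;type=gp3,size=500"
}
```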

## Installation

### Prerequisites

Ensure that the following components are installed on your local system:

- [aws-cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
- [kubectl](https://kubernetes.io/docs/tasks/tools/)
- [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)

### Deployment steps

#### Step 1. Clone the repository:

```shell
git clone https://github.com/portworx/terraform-eksblueprints-portworx-addon.git
```

#### Step 2. Initialize the Terraform module:

```shell
cd blueprints/getting_started
terraform init
```

#### Step 3. Make any necessary adjustments to the `main.tf` file

Customise the values of variables such as `name`, `region`, and the `managed_node_groups` configuration to set up the cluster according to your requirements.
To customise Portworx, refer to the [configuration](#portworx-configuration) section below.

#### Step 4. Export the AWS access key ID and secret key pair as environment variables

```shell
export TF_VAR_aws_access_key_id=<access-key-id-value>
export TF_VAR_aws_secret_access_key=<access-key-secret>
```

#### Step 5. Use Terraform to plan a deployment:

```shell
terraform plan
```

#### Step 6. Review the plan and apply the deployment with Terraform:

Verify the resources to be created, then execute the command below:

```shell
terraform apply
```

#### Step 7. Use the AWS CLI to provision a kubeconfig profile for the cluster:

To get the name of the cluster, extract the EKS cluster details from the `terraform output` command or from the AWS Console.

```shell
aws eks --region <aws-region> update-kubeconfig --name <cluster-name>
```

#### Step 8. Check that the nodes are created and that Portworx is running:

```shell
kubectl get nodes
```

```shell
kubectl get stc -n kube-system
```

Result: a storage cluster with the configured name becomes active, which indicates that the Portworx cluster is online.

## Portworx Configuration

The following table lists the configurable parameters of the Portworx chart and their default values:

| Parameter | Description | Default |
|-----------|-------------|---------|
| `imageVersion` | The image tag to pull | "2.11.0" |
| `useAWSMarketplace` | Set this to true if you intend to use an AWS Marketplace license for Portworx | "false" |
| `clusterName` | Portworx cluster name | mycluster |
| `drives` | Semicolon-separated list of drives to be used for storage (example: "/dev/sda;/dev/sdb" or "type=gp2,size=200;type=gp3,size=500") | "type=gp2,size=200" |
| `useInternalKVDB` | Boolean variable to turn the internal KVDB on or off | true |
| `kvdbDevice` | Specify a separate device to store KVDB data; only used when `useInternalKVDB` is set to true | type=gp2,size=150 |
| `envVars` | Semicolon-separated list of environment variables to be exported to Portworx (example: MYENV1=val1;MYENV2=val2) | "" |
| `maxStorageNodesPerZone` | The maximum number of storage nodes desired per zone | 3 |
| `useOpenshiftInstall` | Boolean variable to install Portworx on OpenShift | false |
| `etcdEndPoint` | The etcd endpoint, in the format etcd:http://(your-etcd-endpoint):2379. Multiple etcd endpoints must be semicolon-separated | "" |
| `dataInterface` | Name of the data interface (```<ethX>```) | none |
| `managementInterface` | Name of the management interface (```<ethX>```) | none |
| `useStork` | Enable [Storage Orchestration for Hyperconvergence](https://github.com/libopenstorage/stork) | true |
| `storkVersion` | Optional version of Stork (for example, 2.11.0); when empty, the Portworx Operator picks the version based on the Portworx version | "2.11.0" |
| `customRegistryURL` | URL of a custom registry to pull the Portworx image from | "" |
| `registrySecret` | Image registry credentials to pull Portworx images from a secure registry | "" |
| `licenseSecret` | Kubernetes secret name that holds Portworx licensing information | "" |
| `monitoring` | Enable monitoring on the Portworx cluster | false |
| `enableCSI` | Enable CSI | false |
| `enableAutopilot` | Enable Autopilot | false |
| `KVDBauthSecretName` | Refer to https://docs.portworx.com/reference/etcd/securing-with-certificates-in-kubernetes to create a KVDB secret, and specify its name here | none |
| `deleteType` | The strategy to use while uninstalling Portworx: "Uninstall" only removes Portworx, while "UninstallAndWipe" also permanently wipes all data on your disks, including the Portworx metadata | UninstallAndWipe |
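Several of these parameters can be combined in a single `portworx_chart_values` object; a sketch with illustrative values:

```hcl
portworx_chart_values = {
  clusterName            = "testCluster"        # illustrative name
  imageVersion           = "2.11.1"
  drives                 = "type=gp3,size=500"  # one 500 GiB gp3 volume per node
  maxStorageNodesPerZone = 3
  enableCSI              = true
  monitoring             = true
}
```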


## Uninstalling Portworx

This section describes how to uninstall Portworx and remove its Kubernetes specs. When uninstalling, you may choose either to keep the data on your drives or to wipe them completely.

1. Start by choosing one of the two `deleteType` options and updating the Terraform script:

```hcl
portworx_chart_values = {
  deleteType = "UninstallAndWipe" # Valid values: "Uninstall" and "UninstallAndWipe"
  # other custom values
}
```

```Uninstall``` only removes Portworx, but ```UninstallAndWipe``` also permanently removes all data from your disks, including the Portworx metadata. Use caution when applying the delete strategy. The default value is ```UninstallAndWipe```.

2. Perform a Terraform apply to apply the change:

```shell
terraform apply -target="module.eks_blueprints_kubernetes_addons"
```


Perform `terraform destroy` using its `-target` functionality to uninstall in layers, which prevents missed resources and errors.

### Destroy the add-ons

```shell
terraform destroy -target="module.eks_blueprints_kubernetes_addons.module.portworx[0].module.helm_addon"
terraform destroy -target="module.eks_blueprints_kubernetes_addons"
```

### Destroy the EKS cluster

```shell
terraform destroy -target="module.eks_blueprints"
```

### Destroy the VPC

```shell
terraform destroy -target="module.vpc"
```

You may also want to log in via the AWS Console or CLI and manually delete any remaining EBS snapshots and volumes; they are not deleted as part of the destroy process.

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_eks_blueprints"></a> [eks\_blueprints](#module\_eks\_blueprints) | github.com/aws-ia/terraform-aws-eks-blueprints | n/a |
| <a name="module_eks_blueprints_kubernetes_addons"></a> [eks\_blueprints\_kubernetes\_addons](#module\_eks\_blueprints\_kubernetes\_addons) | github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons | n/a |
| <a name="module_vpc"></a> [vpc](#module\_vpc) | terraform-aws-modules/vpc/aws | ~> 3.0 |



## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_aws_access_key_id"></a> [aws_access_key_id](#input\_aws\_access\_key\_id) | AWS access key ID value | `string` | `""` | yes |
| <a name="input_aws_secret_access_key"></a> [aws_secret_access_key](#input\_aws\_secret\_access\_key) | AWS secret access key value | `string` | `""` | yes |
| <a name="input_cluster_name"></a> [cluster\_name](#input\_cluster\_name) | Name of cluster - used by Terratest for e2e test automation | `string` | `""` | no |

## Outputs

| Name | Description |
|------|-------------|
| <a name="output_configure_kubectl"></a> [configure\_kubectl](#output\_configure\_kubectl) | Configure kubectl: make sure you are logged in with the correct AWS profile, then run the following command to update your kubeconfig |
| <a name="output_eks_cluster_id"></a> [eks\_cluster\_id](#output\_eks\_cluster\_id) | EKS cluster ID |
| <a name="output_region"></a> [region](#output\_region) | AWS region |
| <a name="output_vpc_cidr"></a> [vpc\_cidr](#output\_vpc\_cidr) | VPC CIDR |