Module | Provision | Scale | K8s Upgrade |
---|---|---|---|
aks | ✅ | ✅ | ✅ |
eks | ✅ | ✅ | ✅ |
gke | ✅ | ✅ | ✅ |
ec2_rke1 | ✅ | ✅ | ✅ |
ec2_rke2 | ✅ | ✅ | ✅ |
ec2_k3s | ✅ | ✅ | ✅ |
linode_rke1 | ✅ | ✅ | ✅ |
linode_rke2 | ✅ | ✅ | ✅ |
linode_k3s | ✅ | ✅ | ✅ |
```yaml
rancher:
  # define rancher specific configs here
terraform:
  # define module specific configs here
terratest:
  # define test specific configs here
```
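For orientation, here is one way the assembled file can look, combining the `rancher`, `terraform`, and `terratest` sections documented below. This is a minimal sketch using the same illustrative placeholder values found throughout this README (a `linode_k3s` provisioning run):

```yaml
rancher:
  host: url-to-rancher-server.com
  adminToken: token-XXXXX:XXXXXXXXXXXXXXX
  insecure: true
  cleanup: true
terraform:
  # module specific fields - see the per-module tables below
  providerVersion: '1.25.0'
  module: linode_k3s
  cloudCredentialName: tf-linode-creds
  linodeToken: XXXXXXXXXXXXXXXXXXXX
  linodeImage: linode/ubuntu20.04
  region: us-east
  linodeRootPass: xxxxxxxxxxxxxxxx
  machineConfigName: tf-k3s
  clusterName: jkeslar-cluster
  enableNetworkPolicy: false
  defaultClusterRoleForProjectMembers: user
terratest:
  # test specific fields - see the test sections below
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  kubernetesVersion: v1.24.9+k3s1
  nodeCount: 3
```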
🔺 Back to top
The `rancher` configurations in the `cattle-config.yaml` will remain consistent across all modules and tests. Fields to configure in this section are as follows:
Field | Description | Type | Example |
---|---|---|---|
host | url to rancher server without leading https:// and without trailing / | string | url-to-rancher-server.com |
adminToken | rancher admin bearer token | string | token-XXXXX:XXXXXXXXXXXXXXX |
insecure | must be set to true | boolean | true |
cleanup | If true, resources will be cleaned up upon test completion | boolean | true |
```yaml
rancher:
  host: url-to-rancher-server.com
  adminToken: token-XXXXX:XXXXXXXXXXXXXXX
  insecure: true
  cleanup: true
```
🔺 Back to top

The `terraform` configurations in the `cattle-config.yaml` are module specific. Fields to configure vary per module. Module specific fields to configure in this section are as follows:
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | aks |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-aks |
azureClientID | provide azure client id | string | XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
azureClientSecret | provide azure client secret | string | XXXXXXXXXXXXXXXXXXXXXXXXXX |
azureSubscriptionID | provide azure subscription id | string | XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX |
clusterName | provide a unique name for cluster | string | jkeslar-cluster |
resourceGroup | provide an existing resource group from Azure | string | my-resource-group |
resourceLocation | provide location for Azure instances | string | eastus |
hostnamePrefix | provide a unique hostname prefix for resources | string | jkeslar |
networkPlugin | provide network plugin | string | kubenet |
availabilityZones | list of availability zones | []string | - '1' - '2' - '3' |
osDiskSizeGB | os disk size in gigabytes | int64 | 128 |
vmSize | vm size to be used for instances | string | Standard_DS2_v2 |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: aks
  cloudCredentialName: tf-aks
  azureClientID: XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
  azureClientSecret: XXXXXXXXXXXXXXXXXXXXXXXXXX
  azureSubscriptionID: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
  clusterName: jkeslar-cluster
  resourceGroup: my-resource-group
  resourceLocation: eastus
  hostnamePrefix: jkeslar
  networkPlugin: kubenet
  availabilityZones:
    - '1'
    - '2'
    - '3'
  osDiskSizeGB: 128
  vmSize: Standard_DS2_v2
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | eks |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-eks |
awsAccessKey | provide aws access key | string | XXXXXXXXXXXXXXXXXXXX |
awsSecretKey | provide aws secret key | string | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
awsInstanceType | provide aws instance type | string | t3.medium |
region | provide a region for resources to be created in | string | us-east-2 |
awsSubnets | list of valid subnet IDs | []string | - subnet-xxxxxxxx - subnet-yyyyyyyy - subnet-zzzzzzzz |
awsSecurityGroups | list of security group IDs to be applied to AWS instances | []string | - sg-xxxxxxxxxxxxxxxxx |
clusterName | provide a unique name for your cluster | string | jkeslar-cluster |
hostnamePrefix | provide a unique hostname prefix for resources | string | jkeslar |
publicAccess | If true, public access will be enabled | boolean | true |
privateAccess | If true, private access will be enabled | boolean | true |
nodeRole | Optional with Rancher v2.7+ - if provided, this custom role will be used when creating instances for node groups | string | arn:aws:iam::############:role/my-custom-NodeInstanceRole-############ |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: eks
  cloudCredentialName: tf-eks
  awsAccessKey: XXXXXXXXXXXXXXXXXXXX
  awsSecretKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  awsInstanceType: t3.medium
  region: us-east-2
  awsSubnets:
    - subnet-xxxxxxxx
    - subnet-yyyyyyyy
    - subnet-zzzzzzzz
  awsSecurityGroups:
    - sg-xxxxxxxxxxxxxxxxx
  clusterName: jkeslar-cluster
  hostnamePrefix: jkeslar
  publicAccess: true
  privateAccess: true
  nodeRole: arn:aws:iam::############:role/my-custom-NodeInstanceRole-############
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | gke |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-creds-gke |
clusterName | provide a unique cluster name | string | jkeslar-cluster |
region | provide region for resources to be created in | string | us-central1-a |
gkeProjectID | provide gke project ID | string | my-project-id-here |
gkeNetwork | specify network here | string | default |
gkeSubnetwork | specify subnetwork here | string | default |
hostnamePrefix | provide a unique hostname prefix for resources | string | jkeslar |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: gke
  cloudCredentialName: tf-creds-gke
  clusterName: jkeslar-cluster
  region: us-central1-a
  gkeProjectID: my-project-id-here
  gkeNetwork: default
  gkeSubnetwork: default
  hostnamePrefix: jkeslar
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | ec2_rke1 |
awsAccessKey | provide aws access key | string | XXXXXXXXXXXXXXXXXXXX |
awsSecretKey | provide aws secret key | string | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
ami | provide ami (optional; may be left as an empty string '') | string | '' |
awsInstanceType | provide aws instance type | string | t3.medium |
region | provide a region for resources to be created in | string | us-east-2 |
awsSecurityGroupNames | list of security groups to be applied to AWS instances | []string | - security-group-name |
awsSubnetID | provide a valid subnet ID | string | subnet-xxxxxxxx |
awsVpcID | provide a valid VPC ID | string | vpc-xxxxxxxx |
awsZoneLetter | provide zone letter to be used | string | a |
awsRootSize | root size in gigabytes | int64 | 80 |
clusterName | provide a unique name for your cluster | string | jkeslar-cluster |
networkPlugin | provide network plugin to be used | string | canal |
nodeTemplateName | provide a unique name for node template | string | tf-rke1-template |
hostnamePrefix | provide a unique hostname prefix for resources | string | jkeslar |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: ec2_rke1
  awsAccessKey: XXXXXXXXXXXXXXXXXXXX
  awsSecretKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  ami: ''
  awsInstanceType: t3.medium
  region: us-east-2
  awsSecurityGroupNames:
    - security-group-name
  awsSubnetID: subnet-xxxxxxxx
  awsVpcID: vpc-xxxxxxxx
  awsZoneLetter: a
  awsRootSize: 80
  clusterName: jkeslar-cluster
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: jkeslar
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | linode_rke1 |
linodeToken | provide linode token credential | string | XXXXXXXXXXXXXXXXXXXX |
region | provide a region for resources to be created in | string | us-east |
linodeRootPass | provide a unique root password | string | xxxxxxxxxxxxxxxx |
clusterName | provide a unique name for your cluster | string | jkeslar-cluster |
networkPlugin | provide network plugin to be used | string | canal |
nodeTemplateName | provide a unique name for node template | string | tf-rke1-template |
hostnamePrefix | provide a unique hostname prefix for resources | string | jkeslar |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: linode_rke1
  linodeToken: XXXXXXXXXXXXXXXXXXXX
  region: us-east
  linodeRootPass: xxxxxxxxxxxxxxxx
  clusterName: jkeslar-cluster
  networkPlugin: canal
  nodeTemplateName: tf-rke1-template
  hostnamePrefix: jkeslar
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | ec2_rke2 |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-creds-rke2 |
awsAccessKey | provide aws access key | string | XXXXXXXXXXXXXXXXXXXX |
awsSecretKey | provide aws secret key | string | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
ami | provide ami (optional; may be left as an empty string '') | string | '' |
region | provide a region for resources to be created in | string | us-east-2 |
awsSecurityGroupNames | list of security groups to be applied to AWS instances | []string | - my-security-group |
awsSubnetID | provide a valid subnet ID | string | subnet-xxxxxxxx |
awsVpcID | provide a valid VPC ID | string | vpc-xxxxxxxx |
awsZoneLetter | provide zone letter to be used | string | a |
machineConfigName | provide a unique name for machine config | string | tf-rke2 |
clusterName | provide a unique name for your cluster | string | jkeslar-cluster |
enableNetworkPolicy | If true, Network Policy will be enabled | boolean | false |
defaultClusterRoleForProjectMembers | select default role to be used for project members | string | user |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: ec2_rke2
  cloudCredentialName: tf-creds-rke2
  awsAccessKey: XXXXXXXXXXXXXXXXXXXX
  awsSecretKey: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  ami: ''
  region: us-east-2
  awsSecurityGroupNames:
    - my-security-group
  awsSubnetID: subnet-xxxxxxxx
  awsVpcID: vpc-xxxxxxxx
  awsZoneLetter: a
  machineConfigName: tf-rke2
  clusterName: jkeslar-cluster
  enableNetworkPolicy: false
  defaultClusterRoleForProjectMembers: user
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
providerVersion | rancher2 provider version | string | '1.25.0' |
module | specify terraform module here | string | linode_k3s |
cloudCredentialName | provide the name of unique cloud credentials to be created during testing | string | tf-linode |
linodeToken | provide linode token credential | string | XXXXXXXXXXXXXXXXXXXX |
linodeImage | specify image to be used for instances | string | linode/ubuntu20.04 |
region | provide a region for resources to be created in | string | us-east |
linodeRootPass | provide a unique root password | string | xxxxxxxxxxxxxxxx |
machineConfigName | provide a unique name for machine config | string | tf-k3s |
clusterName | provide a unique name for your cluster | string | jkeslar-cluster |
enableNetworkPolicy | If true, Network Policy will be enabled | boolean | false |
defaultClusterRoleForProjectMembers | select default role to be used for project members | string | user |
```yaml
terraform:
  providerVersion: '1.25.0'
  module: linode_k3s
  cloudCredentialName: tf-linode-creds
  linodeToken: XXXXXXXXXXXXXXXXXXXX
  linodeImage: linode/ubuntu20.04
  region: us-east
  linodeRootPass: xxxxxxxxxxxx
  machineConfigName: tf-k3s
  clusterName: jkeslar-cluster
  enableNetworkPolicy: false
  defaultClusterRoleForProjectMembers: user
```
🔺 Back to top

The `terratest` configurations in the `cattle-config.yaml` are test specific. Fields to configure vary per test. The `nodepools` field in the below configurations will vary depending on the module. I will outline what each module expects first, then show the whole test specific configurations.
🔺 Back to top

type: `[]Nodepool`

🔺 Back to top

AKS nodepools only need the `quantity` of nodes per pool to be provided, of type `int64`. The below example will create a cluster with three node pools, each with a single node.
```yaml
nodepools:
  - quantity: 1
  - quantity: 1
  - quantity: 1
```
🔺 Back to top

EKS nodepools require the `instanceType`, as type `string`, the `desiredSize` of the node pool, as type `int64`, the `maxSize` of the node pool, as type `int64`, and the `minSize` of the node pool, as type `int64`. The minimum requirement for an EKS nodepool's `desiredSize` is `2`. This must be respected or the cluster will fail to provision.
```yaml
nodepools:
  - instanceType: t3.medium
    desiredSize: 3
    maxSize: 3
    minSize: 0
```
🔺 Back to top

GKE nodepools require the `quantity` of the node pool, as type `int64`, and the `maxPodsContraint`, as type `int64`.
```yaml
nodepools:
  - quantity: 2
    maxPodsContraint: 110
```
🔺 Back to top

For these modules, the required nodepool fields are the `quantity`, as type `int64`, as well as the roles to be assigned, each set or toggled via boolean: `etcd`, `controlplane`, `worker`. The following example will create three node pools, each with an individual role, and one node per pool.
```yaml
nodepools:
  - quantity: 1
    etcd: true
    controlplane: false
    worker: false
  - quantity: 1
    etcd: false
    controlplane: true
    worker: false
  - quantity: 1
    etcd: false
    controlplane: false
    worker: true
```
That wraps up the sub-section on nodepools; circling back to the test specific configs now.
Test specific fields to configure in this section are as follows:

🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
nodepools | provide nodepool configs to be initially provisioned | []Nodepool | view section on nodepools above or example yaml below |
kubernetesVersion | specify the kubernetes version to be used | string | view yaml below for all module specific expected k8s version formats |
nodeCount | provide the expected initial node count | int64 | 3 |
```yaml
# this example is valid for RKE1 provisioning
terratest:
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  kubernetesVersion: v1.24.9-rancher1-1
  nodeCount: 3

# Below are the expected formats for all module kubernetes versions
# AKS - without leading v
#   e.g. '1.24.6'
# EKS - without leading v or any tail ending
#   e.g. '1.23' or '1.24'
# GKE - without leading v but with tail ending included
#   e.g. 1.23.12-gke.100
# RKE1 - with leading v and -rancher1-1 tail
#   e.g. v1.24.9-rancher1-1
# RKE2 - with leading v and +rke2r# tail
#   e.g. v1.24.9+rke2r1
# K3S - with leading v and +k3s# tail
#   e.g. v1.24.9+k3s1
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
nodepools | provide nodepool configs to be initially provisioned | []Nodepool | view section on nodepools above or example yaml below |
scaledUpNodepools | provide nodepool configs to be scaled up to, after initial provisioning | []Nodepool | view section on nodepools above or example yaml below |
scaledDownNodepools | provide nodepool configs to be scaled down to, after scaling up cluster | []Nodepool | view section on nodepools above or example yaml below |
kubernetesVersion | specify the kubernetes version to be used | string | view example yaml above for provisioning test for all module specific expected k8s version formats |
nodeCount | provide the expected initial node count | int64 | 3 |
scaledUpNodeCount | provide the expected node count of scaled up cluster | int64 | 8 |
scaledDownNodeCount | provide the expected node count of scaled down cluster | int64 | 6 |
```yaml
# this example is valid for RKE1 scale
terratest:
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  scaledUpNodepools:
    - quantity: 3
      etcd: true
      controlplane: false
      worker: false
    - quantity: 2
      etcd: false
      controlplane: true
      worker: false
    - quantity: 3
      etcd: false
      controlplane: false
      worker: true
  scaledDownNodepools:
    - quantity: 3
      etcd: true
      controlplane: false
      worker: false
    - quantity: 2
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  kubernetesVersion: v1.24.9-rancher1-1
  nodeCount: 3
  scaledUpNodeCount: 8
  scaledDownNodeCount: 6
```
🔺 Back to top
Field | Description | Type | Example |
---|---|---|---|
nodepools | provide nodepool configs to be initially provisioned | []Nodepool | view section on nodepools above or example yaml below |
nodeCount | provide the expected initial node count | int64 | 3 |
kubernetesVersion | specify the kubernetes version to be used | string | view example yaml above for provisioning test for all module specific expected k8s version formats |
upgradedKubernetesVersion | specify the kubernetes version to be upgraded to | string | view example yaml above for provisioning test for all module specific expected k8s version formats |
```yaml
# this example is valid for K3s kubernetes upgrade
terratest:
  nodepools:
    - quantity: 1
      etcd: true
      controlplane: false
      worker: false
    - quantity: 1
      etcd: false
      controlplane: true
      worker: false
    - quantity: 1
      etcd: false
      controlplane: false
      worker: true
  nodeCount: 3
  kubernetesVersion: v1.23.14+k3s1
  upgradedKubernetesVersion: v1.24.8+k3s1
```
🔺 Back to top
The build module test may be used and run to create a `main.tf` Terraform configuration file for the desired module. This is logged to the output for future reference and use.
Testing configurations for this are the same as those outlined in the provisioning test above; please review the provisioning test configurations for more details.
🔺 Back to top

The cleanup test may be used to clean up resources in situations where the rancher config has `cleanup` set to `false`. This may be helpful in debugging. This test expects the same configurations that were used to initially create the environment, so that it can properly clean them up.