Templates to launch fully functional CVP clusters in AWS.
Install Terraform, Ansible, and the AWS CLI (version 2), and use one of the provided .tfvars examples.
This module is tested with Terraform 1.0.1, but should work with any Terraform release newer than the minimum version shown below. You can download it from the official website.
Terraform is distributed as a single binary. Install it by unzipping the archive and moving the binary to a directory included in your system's PATH.
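As a rough sketch (assuming a Linux amd64 host and Terraform 1.0.1; adjust the version and platform to your environment), a manual installation looks like this:

$ curl -LO https://releases.hashicorp.com/terraform/1.0.1/terraform_1.0.1_linux_amd64.zip
$ unzip terraform_1.0.1_linux_amd64.zip
$ sudo mv terraform /usr/local/bin/
$ terraform version

The last command confirms that the binary is available on your PATH.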
Requirements:

| Name | Version |
|---|---|
| terraform | >= 0.14 |

Providers:

| Name | Version |
|---|---|
| aws | 3.48.0 |
| local | 2.1.0 |
| null | 3.1.0 |
| random | 3.1.0 |
| tls | 3.1.0 |

Modules:

| Name | Source | Version |
|---|---|---|
| cvp_cluster | ./modules/cvp-cluster | n/a |
| cvp_provision_nodes | git::https://github.com/arista-netdevops-community/cvp-ansible-provisioning.git | v3.0.2 |
You must have the AWS CLI version 2 installed and authenticated. For installation details please see here.
We suggest that you create a profile and authenticate the CLI using these steps. Feel free to change cvp-profile to whatever profile name you prefer:
- Initialize your AWS profile:
$ aws configure --profile cvp-profile
AWS Access Key ID [None]: YOUR_ACCESS_KEY
AWS Secret Access Key [None]: YOUR_SECRET_ACCESS_KEY
Default region name [None]: us-east-2
Default output format [None]:
Note: If you're using temporary credentials to access AWS (such as STS credentials or keys issued by Vault) you'll also need to add your aws_session_token to the profile in ~/.aws/credentials.
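For example (a sketch only; YOUR_SESSION_TOKEN is a placeholder and cvp-profile is the profile created above), the token can be added with aws configure set and the profile then verified:

$ aws configure set aws_session_token YOUR_SESSION_TOKEN --profile cvp-profile
$ aws sts get-caller-identity --profile cvp-profile

The second command simply confirms that the profile can authenticate against AWS.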
You must have Ansible installed for provisioning to work. You can check the installation instructions here.
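A quick way to check for an existing installation, or to install Ansible with pip (assuming Python 3 and pip are available on your machine):

$ ansible --version
$ python3 -m pip install --user ansible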
These steps assume that you created a profile following the steps in the AWS CLI section. You must also be in the project's directory (cvp-in-aws):
- Initialize Terraform (only needed on the first run):
$ terraform init
- Edit the examples/one-node-cvp-deployment.tfvars file and replace the values as desired:
$ vi examples/one-node-cvp-deployment.tfvars
- Plan your full provisioning run:
$ terraform plan -out=plan.out -var-file=examples/one-node-cvp-deployment.tfvars
- Review your plan.
- Apply the generated plan:
$ terraform apply plan.out
- Go have a coffee. At this point CVP should be starting on your instances, and it may take some time for all services to come up. You can SSH into your CVP instances with the displayed cvp_cluster_ssh_user and cvp_cluster_nodes_ips to check progress (see the example below).
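As a sketch (NODE_IP is a placeholder and cvpsshadmin is only the default admin user; use the cvp_cluster_ssh_user and cvp_cluster_nodes_ips values that Terraform displayed), reviewing the saved plan and checking on a node might look like this:

$ terraform show plan.out
$ ssh cvpsshadmin@NODE_IP

Once logged in you can watch the CVP services come up on the instance itself.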
If devices are in a network that can't be reached by CVP, they need to be added by configuring TerminAttr on the devices themselves (similar to any setup behind NAT). At the end of the Terraform run, a suggested TerminAttr configuration line will be displayed containing the appropriate cvaddr and cvauth parameters:
Provisioning complete. To add devices use the following TerminAttr configuration:
exec /usr/bin/TerminAttr -cvaddr=34.71.81.254:9910 -cvcompression=gzip -cvauth=key,JkqAGsEyGPmUZ3X0 -smashexcludes=ale,flexCounter,hardware,kni,pulse,strata -ingestexclude=/Sysdb/cell/1/agent,/Sysdb/cell/2/agent -cvvrf=default -taillogs
The exec configuration line can be copy-pasted and should be usable in most scenarios.
Note: It is highly recommended to use device authentication via certificates. When CVP is behind NAT, the onboarding token has to be generated manually. This can be done from the UI starting with 2021.2.0; in older CVP versions the token can be generated from the CLI.
To enroll with certificates, the following steps can be used:
1. Generate the token on the CVP CLI with curl -d '{"reenrollDevices":["*"]}' -k https://127.0.0.1:9911/cert/createtoken
2. Save the token on EOS to a file:
>enable
#copy terminal: file:/tmp/token
<paste the generated token here and press Ctrl+D>
3. Configure TerminAttr
daemon TerminAttr
exec /usr/bin/TerminAttr -cvaddr=34.71.81.254:9910 -cvcompression=gzip -cvauth=token,/tmp/token -smashexcludes=ale,flexCounter,hardware,kni,pulse,strata -ingestexclude=/Sysdb/cell/1/agent,/Sysdb/cell/2/agent -cvvrf=default -taillogs
no shutdown
Required variables are prompted for at runtime unless they are specified on the command line. Using a .tfvars file is recommended in most cases.
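As an illustration only (my-cluster.tfvars is a hypothetical file name and every value is a placeholder; the available variables are documented in the inputs table below), a minimal single-node variable file could be created and used like this:

$ cat > my-cluster.tfvars <<'EOF'
aws_profile        = "cvp-profile"
aws_region         = "us-east-2"
aws_zone           = "us-east-2a"
cvp_cluster_name   = "my-cvp-cluster"
cvp_cluster_size   = 1
cvp_version        = "2020.3.1"
cvp_download_token = "YOUR_ARISTA_PORTAL_TOKEN"
EOF
$ terraform plan -out=plan.out -var-file=my-cluster.tfvars

The provided examples/one-node-cvp-deployment.tfvars file serves the same purpose, so editing it in place works just as well.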
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| aws_network | The ID of the network in which clusters will be launched. Leaving this blank will create a new network. | string | null | no |
| aws_network_cidr | CIDR for the AWS VPC that's created by the module. Only used when aws_network is NOT set. | string | "10.128.0.0/20" | no |
| aws_profile | AWS CLI profile. Must match a valid profile in your ~/.aws/config and ~/.aws/credentials. | string | null | no |
| aws_region | The region in which all AWS resources will be launched. | string | n/a | yes |
| aws_start_instances | Whether to start CVP instances when running terraform. | bool | false | no |
| aws_subnet | The ID of the subnet in which clusters will be launched. Only used when aws_network is set. | string | null | no |
| aws_subnet_cidr | The subnetwork CIDR in which clusters will be launched. Only used when aws_network is NOT set. | string | "10.128.0.0/20" | no |
| aws_zone | The zone in which all AWS resources will be launched. | string | n/a | yes |
| cvp_cluster_centos_version | The CentOS version used by CVP instances. | string | null | no |
| cvp_cluster_name | The name of the CVP cluster. | string | n/a | yes |
| cvp_cluster_public_eos_communitation | Whether the ports used by EOS devices to communicate with CVP are publicly accessible over the internet. | bool | false | no |
| cvp_cluster_public_management | Whether the cluster management interface (https/ssh) is publicly accessible over the internet. | bool | false | no |
| cvp_cluster_remove_disks | Whether data disks created for the instances will be removed when destroying them. | bool | false | no |
| cvp_cluster_size | The number of nodes in the CVP cluster. | number | n/a | yes |
| cvp_cluster_vm_admin_user | User that will be used to connect to CVP cluster instances. | string | "cvpsshadmin" | no |
| cvp_cluster_vm_key | Public SSH key used to access instances in the CVP cluster. | string | null | no |
| cvp_cluster_vm_password | Password used to access instances in the CVP cluster. | string | null | no |
| cvp_cluster_vm_private_key | Private SSH key used to access instances in the CVP cluster. | string | null | no |
| cvp_cluster_vm_type | The type of instances used for CVP. | string | "c5.4xlarge" | no |
| cvp_download_token | Arista Portal token used to download CVP. May be obtained on https://www.arista.com/en/users/profile under Portal Access. | string | n/a | yes |
| cvp_enable_advanced_login_options | Whether to enable advanced login options on CVP. | bool | false | no |
| cvp_ingest_key | Key that will be used to authenticate devices to CVP. | string | null | no |
| cvp_install_size | CVP installation size. | string | null | no |
| cvp_k8s_cluster_network | Internal network that will be used inside the k8s cluster. Applies only to 2021.1.0+. | string | "10.42.0.0/16" | no |
| cvp_ntp | NTP server used to keep time synchronized between CVP nodes. | string | "time.google.com" | no |
| cvp_version | CVP version to install on the cluster. | string | "2020.3.1" | no |
| cvp_vm_image | Image used to launch VMs. | string | null | no |
| eos_ip_range | IP ranges used by EOS devices that will be managed by the CVP cluster. | list(any) | [] | no |
Outputs:

| Name | Description |
|---|---|
| cvp_instances_credentials | Public IP addresses and usernames of the cluster instances. |
| cvp_terminattr_instructions | Instructions to add EOS devices to the CVP cluster. |
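After the apply finishes, both outputs can be re-read at any time:

$ terraform output cvp_instances_credentials
$ terraform output cvp_terminattr_instructions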
Note: Before running this, please replace cvp_download_token with your Arista Portal token and change/remove aws_profile to match your configuration.
$ terraform apply -var-file=examples/one-node-cvp-deployment.tfvars
To remove the environment you launched, run the following command:
$ terraform destroy -var-file=examples/one-node-cvp-deployment.tfvars
This command removes everything that this module created in the AWS account.
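If you want to review exactly what will be removed before anything is deleted, a destroy plan can be saved and applied in two steps (a sketch, using the same example variable file):

$ terraform plan -destroy -out=destroy.out -var-file=examples/one-node-cvp-deployment.tfvars
$ terraform apply destroy.out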
- Resizing clusters is not supported at this time.
- This module connects to the instances using the root user instead of the declared user for provisioning, due to limitations in the base image that's being used. If you know your way around Terraform and understand what you're doing, this behavior can be changed by editing the modules/cvp-provision/main.tf file.
- CVP installation size auto-discovery only works for custom instances at this time.