Terraform module that creates a Domino deployment within GCP's GKE (Google Kubernetes Engine).
If you are upgrading to v3.0.0, you will need to update the input variable structure.

The following configuration variables have been removed:

- description
- static_ip_enabled
The following configuration variables have been moved:

Original variable | New variable | Notes |
---|---|---|
filestore_disabled | storage.filestore.enabled | |
filestore_capacity_gb | storage.filestore.capacity_gb | |
gcs_force_destroy | storage.gcs.force_destroy_on_deletion | |
kubeconfig_output_path | gke.kubeconfig.path | |
enable_network_policy | gke.network_policies | |
kubernetes_version | gke.k8s_version | |
gke_release_channel | gke.release_channel | |
enable_vertical_pod_autoscaling | gke.vertical_pod_autoscaling | |
master_firewall_ports | gke.control_plane_ports | |
master_authorized_networks_config | gke.public_access.cidrs | gke.public_access.enabled must also be set to take effect |
google_dns_managed_zone | managed_dns | |
database_encryption_key_name | kms.database_encryption_key_name | |
A new, enabled-by-default variable controlling GKE Dataplane V2 has been introduced: gke.advanced_datapath. For existing infrastructure, make sure to set it to false, otherwise your cluster will be recreated.
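For reference, here is a minimal sketch of the new nested layout after upgrading; the version, CIDR, and capacity values are placeholders, and other required inputs are omitted:

```hcl
module "gke_cluster" {
  source = "github.com/dominodatalab/terraform-gcp-gke"

  # ... other inputs as in the usage examples below ...

  gke = {
    k8s_version       = "1.28"   # formerly kubernetes_version (placeholder value)
    release_channel   = "stable" # formerly gke_release_channel (placeholder value)
    advanced_datapath = false    # keep false for existing clusters to avoid recreation

    public_access = {
      enabled = true                 # must be enabled for cidrs to take effect
      cidrs   = ["203.0.113.0/24"]   # formerly master_authorized_networks_config (placeholder)
    }
  }

  storage = {
    filestore = {
      enabled     = true # formerly controlled by filestore_disabled
      capacity_gb = 1024 # formerly filestore_capacity_gb (placeholder)
    }
    gcs = {
      force_destroy_on_deletion = false # formerly gcs_force_destroy
    }
  }
}
```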
module "gke_cluster" {
source = "github.com/dominodatalab/terraform-gcp-gke"
cluster = "cluster-name"
}
module "gke_cluster" {
source = "github.com/dominodatalab/terraform-gcp-gke"
cluster = "cluster-name"
project = "gcp-project"
location = "us-west1"
# Some more variables may need to be configured to meet specific needs
}
- Install gcloud and configure the Terraform workspace:

  ```bash
  gcloud auth application-default login
  terraform init
  terraform workspace new [your-cluster-name]
  ```

- With the environment set up, you can now apply the Terraform module:

  ```bash
  terraform apply -auto-approve
  ```

- Be sure to clean up the cluster after you are done working:

  ```bash
  terraform destroy -auto-approve
  ```
The following project IAM permissions must be granted to the provisioning user/service:
- Cloud KMS Admin
- Compute Admin
- Compute Instance Admin (v1)
- Compute Network Admin
- Kubernetes Engine Admin
- DNS Administrator
- Cloud Filestore Editor
- Security Admin
- Service Account Admin
- Service Account User
- Storage Admin
It may be possible to lower the "admin" privilege levels to a "creator" level if provisioning cleanup is not required. However, the "creator-only" permission level has not been tested; it is assumed that a cluster creator can also clean up (i.e. destroy) the cluster.
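If the provisioning identity is a service account, these roles can also be granted with Terraform rather than through the console. The sketch below assumes a hypothetical project and service account and grants only the Kubernetes Engine Admin role (roles/container.admin); the remaining roles from the list above would be granted the same way.

```hcl
# Sketch: grant one of the required roles to a hypothetical provisioning service account.
resource "google_project_iam_member" "provisioner_gke_admin" {
  project = "gcp-project"           # placeholder project ID
  role    = "roles/container.admin" # Kubernetes Engine Admin
  member  = "serviceAccount:provisioner@gcp-project.iam.gserviceaccount.com" # placeholder SA
}
```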
Please submit any feature enhancements, bug fixes, or ideas via pull requests or issues.
Requirements:

Name | Version |
---|---|
terraform | >= 1.3 |
google | >= 5.0, < 6.0 |
google-beta | >= 5.0, < 6.0 |
random | ~> 3.1 |
Providers:

Name | Version |
---|---|
google | >= 5.0, < 6.0 |
No modules.
Inputs:

Name | Description | Type | Default | Required |
---|---|---|---|---|
additional_node_pools | additional node pool definitions | `map(object({` | `{}` | no |
allowed_ssh_ranges | CIDR ranges allowed to SSH to nodes in the cluster. | `list(string)` | `[` | no |
deploy_id | Domino Deployment ID. | `string` | n/a | yes |
gke | gke = { k8s_version = Cluster k8s version release_channel = GKE release channel public_access = { enabled = Enable API public endpoint cidrs = List of CIDR ranges permitted for accessing the public endpoint } control_plane_ports = Firewall ports to open from the master, e.g., webhooks advanced_datapath = Enable the ADVANCED_DATAPATH provider network_policies = Enable network policy switch. Cannot be enabled when enable_advanced_datapath is true vertical_pod_autoscaling = Enable GKE vertical scaling kubeconfig = { path = Specify where the cluster kubeconfig file should be generated. } } | `object({` | `{}` | no |
kms | kms = { database_encryption_key_name = Use an existing KMS key for the Application-layer Secrets Encryption settings. (Optional) } | `object({` | `{}` | no |
location | The location (region or zone) of the cluster. A zone creates a single master. Specifying a region creates replicated masters across all zones | `string` | `"us-west1-b"` | no |
managed_dns | managed_dns = { enabled = Whether to create DNS records in the given zone name = Managed zone to modify dns_name = DNS record name to create service_prefixes = List of additional prefixes to the dns_name to create } | `object({` | `{}` | no |
migration_permissions | Add registry permissions to platform service account for migration purposes | `bool` | `false` | no |
namespaces | Namespaces that are used for generating the service account bindings | `object({ platform = string, compute = string })` | n/a | yes |
node_pools | GKE node pool params | `object(` | `{` | no |
project | GCP Project ID | `string` | `"domino-eng-platform-dev"` | no |
storage | storage = { filestore = { enabled = Provision a Filestore instance (for production installs) capacity_gb = Filestore Instance size (GB) for the cluster NFS shared storage } nfs_instance = { enabled = Provision an instance as an NFS server (to avoid filestore churn during testing) capacity_gb = NFS instance disk size } gcs = { force_destroy_on_deletion = Toggle to allow recursive deletion of all objects in the bucket. If 'false' terraform will NOT be able to delete non-empty buckets. } } | `object({` | `{}` | no |
tags | Deployment tags. | `map(string)` | `{}` | no |
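As an illustration of the object-typed inputs above, here is a hedged sketch with placeholder values; only a subset of the attributes is shown and the remaining inputs are omitted:

```hcl
module "gke_cluster" {
  source = "github.com/dominodatalab/terraform-gcp-gke"

  # ... other inputs ...

  deploy_id = "domino-example" # placeholder deployment ID

  namespaces = {
    platform = "domino-platform" # placeholder namespace names
    compute  = "domino-compute"
  }

  managed_dns = {
    enabled  = true
    name     = "example-zone"       # existing managed zone to modify (placeholder)
    dns_name = "domino.example.com" # DNS record name to create (placeholder)
  }
}
```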
Outputs:

Name | Description |
---|---|
bucket_name | Name of the cloud storage bucket |
cluster | GKE cluster information |
dns | The external (public) DNS name for the Domino UI |
domino_artifact_repository | Domino Google artifact repository |
google_filestore_instance | Domino Google Cloud Filestore instance, name and ip_address |
nfs_instance | Domino Google Cloud Filestore instance, name and ip_address |
nfs_instance_ip | NFS instance IP |
project | GCP project ID |
region | Region where the cluster is deployed derived from 'location' input variable |
service_accounts | GKE cluster Workload Identity namespace IAM service accounts |
static_ip | The external (public) static IPv4 for the Domino UI |
uuid | Cluster UUID |
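Outputs can be referenced from the calling configuration in the usual way; for example, assuming the module is named gke_cluster as in the examples above:

```hcl
# Sketch: surface selected module outputs from the root configuration.
output "domino_dns" {
  value = module.gke_cluster.dns
}

output "domino_static_ip" {
  value = module.gke_cluster.static_ip
}
```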