
Domino GKE Terraform

Terraform module which creates a Domino deployment inside of GCP's GKE.


⚠️ Important: If you have existing infrastructure created with a version of this module earlier than v3.0.0, you will need to update the input variable structure.

The following configuration has been removed:

  • description
  • static_ip_enabled

The following configuration has been moved:

| Original variable | New variable | Notes |
|-------------------|--------------|-------|
| `filestore_disabled` | `storage.filestore.enabled` | |
| `filestore_capacity_gb` | `storage.filestore.capacity_gb` | |
| `gcs_force_destroy` | `storage.gcs.force_destroy_on_deletion` | |
| `kubeconfig_output_path` | `gke.kubeconfig.path` | |
| `enable_network_policy` | `gke.network_policies` | |
| `kubernetes_version` | `gke.k8s_version` | |
| `gke_release_channel` | `gke.release_channel` | |
| `enable_vertical_pod_autoscaling` | `gke.vertical_pod_autoscaling` | |
| `master_firewall_ports` | `gke.control_plane_ports` | |
| `master_authorized_networks_config` | `gke.public_access.cidrs` | `gke.public_access.enabled` must also be set to take effect |
| `google_dns_managed_zone` | `managed_dns` | |
| `database_encryption_key_name` | `kms.database_encryption_key_name` | |
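As a sketch of the migration (variable values here are illustrative, not defaults), a pre-v3.0.0 configuration maps onto the new nested structure like this:

```hcl
# Before (< v3.0.0)
module "gke_cluster" {
  source                = "github.com/dominodatalab/terraform-gcp-gke"
  cluster               = "cluster-name"
  filestore_capacity_gb = 2048
  gcs_force_destroy     = true
  kubernetes_version    = "1.30"
}

# After (>= v3.0.0)
module "gke_cluster" {
  source  = "github.com/dominodatalab/terraform-gcp-gke"
  cluster = "cluster-name"

  storage = {
    filestore = { capacity_gb = 2048 }
    gcs       = { force_destroy_on_deletion = true }
  }

  gke = {
    k8s_version = "1.30"
  }
}
```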

A new variable controlling GKE Dataplane V2, gke.advanced_datapath, has been introduced and is enabled by default. For existing infrastructure, make sure to set it to false; otherwise your cluster will be recreated.
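For example, an upgrade of an existing deployment could pin the new variable off explicitly (a minimal sketch; other variables omitted):

```hcl
module "gke_cluster" {
  source  = "github.com/dominodatalab/terraform-gcp-gke"
  cluster = "cluster-name"

  gke = {
    # Preserve the existing (non-Dataplane V2) cluster instead of recreating it
    advanced_datapath = false
  }
}
```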

Usage

Create a Domino development GKE cluster

```hcl
module "gke_cluster" {
  source  = "github.com/dominodatalab/terraform-gcp-gke"

  cluster = "cluster-name"
}
```

Create a prod GKE cluster

```hcl
module "gke_cluster" {
  source   = "github.com/dominodatalab/terraform-gcp-gke"

  cluster  = "cluster-name"
  project  = "gcp-project"
  location = "us-west1"

  # Some more variables may need to be configured to meet specific needs
}
```

Manual Deployment

  1. Install gcloud and configure the Terraform workspace

     ```sh
     gcloud auth application-default login
     terraform init
     terraform workspace new [your-cluster-name]
     ```

  2. With the environment set up, you can now apply the Terraform module

     ```sh
     terraform apply -auto-approve
     ```

  3. Be sure to clean up the cluster after you are done working

     ```sh
     terraform destroy -auto-approve
     ```

IAM Permissions

The following project IAM permissions must be granted to the provisioning user/service:

  • Cloud KMS Admin
  • Compute Admin
  • Compute Instance Admin (v1)
  • Compute Network Admin
  • Kubernetes Engine Admin
  • DNS Administrator
  • Cloud Filestore Editor
  • Security Admin
  • Service Account Admin
  • Service Account User
  • Storage Admin

It may be possible to lower the "admin" privilege levels to a "creator" level if provisioning cleanup is not required. However, the "creator-only" permission level has not been tested. It is assumed that a cluster creator can also clean up (i.e., destroy) the cluster.
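As a sketch, each role above can be granted with `gcloud` (the project ID and service account below are hypothetical placeholders; verify the exact predefined role IDs for your organization):

```sh
# Grant one of the required roles to the provisioning service account;
# repeat for each role in the list above.
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:provisioner@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/container.admin" # Kubernetes Engine Admin
```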

Development

Please submit any feature enhancements, bug fixes, or ideas via pull requests or issues.

Terraform Docs

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3 |
| google | >= 5.0, < 6.0 |
| google-beta | >= 5.0, < 6.0 |
| random | ~> 3.1 |
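These constraints can be expressed in the calling root module, for example:

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.0, < 6.0"
    }
    google-beta = {
      source  = "hashicorp/google-beta"
      version = ">= 5.0, < 6.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1"
    }
  }
}
```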

Providers

| Name | Version |
|------|---------|
| google | >= 5.0, < 6.0 |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| google_artifact_registry_repository.domino | resource |
| google_artifact_registry_repository_iam_member.gcr | resource |
| google_artifact_registry_repository_iam_member.platform | resource |
| google_compute_disk.nfs | resource |
| google_compute_firewall.iap_tcp_forwarding | resource |
| google_compute_firewall.master_webhooks | resource |
| google_compute_firewall.nfs | resource |
| google_compute_global_address.static_ip | resource |
| google_compute_instance.nfs | resource |
| google_compute_network.vpc_network | resource |
| google_compute_router.router | resource |
| google_compute_router_nat.nat | resource |
| google_compute_subnetwork.default | resource |
| google_container_cluster.domino_cluster | resource |
| google_container_node_pool.node_pools | resource |
| google_dns_record_set.a | resource |
| google_dns_record_set.a_services | resource |
| google_dns_record_set.caa | resource |
| google_dns_record_set.caa_services | resource |
| google_filestore_instance.nfs | resource |
| google_kms_crypto_key.crypto_key | resource |
| google_kms_crypto_key_iam_binding.binding | resource |
| google_kms_key_ring.key_ring | resource |
| google_project_iam_member.platform_roles | resource |
| google_project_iam_member.service_account | resource |
| google_service_account.accounts | resource |
| google_service_account_iam_binding.gcr | resource |
| google_service_account_iam_binding.platform_gcs | resource |
| google_storage_bucket.bucket | resource |
| google_storage_bucket_iam_binding.bucket | resource |
| google_project.domino | data source |
| google_storage_project_service_account.gcs_account | data source |

Inputs

Complex types are shown as HCL below; inline comments carry the attribute descriptions.

`additional_node_pools` — additional node pool definitions. Default: `{}`. Required: no.

```hcl
map(object({
  min_count       = optional(number, 0)
  max_count       = optional(number, 10)
  initial_count   = optional(number, 1)
  max_pods        = optional(number, 30)
  preemptible     = optional(bool, false)
  disk_size_gb    = optional(number, 400)
  image_type      = optional(string, "COS_CONTAINERD")
  instance_type   = optional(string, "n2-standard-8")
  gpu_accelerator = optional(string, "")
  labels          = optional(map(string), {})
  taints          = optional(list(string), [])
  node_locations  = optional(list(string), [])
}))
```

`allowed_ssh_ranges` — CIDR ranges allowed to SSH to nodes in the cluster. Type: `list(string)`. Default: `["35.235.240.0/20"]`. Required: no.

`deploy_id` — Domino deployment ID. Type: `string`. Required: yes.

`gke` — GKE cluster settings. Default: `{}`. Required: no.

```hcl
object({
  k8s_version     = optional(string, "1.30")   # Cluster k8s version
  release_channel = optional(string, "STABLE") # GKE release channel
  public_access = optional(object({
    enabled = optional(bool, false)      # Enable the public API endpoint
    cidrs   = optional(list(string), []) # CIDR ranges permitted to access the public endpoint
  }), {})
  control_plane_ports      = optional(list(string), []) # Firewall ports to open from the master, e.g., webhooks
  advanced_datapath        = optional(bool, true)       # Enable the ADVANCED_DATAPATH provider (Dataplane V2)
  network_policies         = optional(bool, false)      # Enable network policies; cannot be enabled when advanced_datapath is true
  vertical_pod_autoscaling = optional(bool, true)       # Enable GKE vertical pod autoscaling
  kubeconfig = optional(object({
    path = optional(string, null) # Where the cluster kubeconfig file should be generated
  }), {})
})
```

`kms` — KMS settings. Default: `{}`. Required: no.

```hcl
object({
  # Use an existing KMS key for the Application-layer Secrets Encryption settings (optional)
  database_encryption_key_name = optional(string, null)
})
```

`location` — The location (region or zone) of the cluster. A zone creates a single master; specifying a region creates replicated masters across all zones. Type: `string`. Default: `"us-west1-b"`. Required: no.

`managed_dns` — Managed DNS settings. Default: `{}`. Required: no.

```hcl
object({
  enabled          = optional(bool, false)     # Whether to create DNS records in the given zone
  name             = optional(string, "")      # Managed zone to modify
  dns_name         = optional(string, "")      # DNS record name to create
  service_prefixes = optional(set(string), []) # Additional prefixes to the dns_name to create
})
```

`migration_permissions` — Add registry permissions to the platform service account for migration purposes. Type: `bool`. Default: `false`. Required: no.

`namespaces` — Namespaces used for generating the service account bindings. Type: `object({ platform = string, compute = string })`. Required: yes.

`node_pools` — GKE node pool parameters. Default: `{ compute = {}, gpu = {}, platform = {} }`. Required: no.

```hcl
object({
  compute = object({
    min_count       = optional(number, 0)
    max_count       = optional(number, 10)
    initial_count   = optional(number, 1)
    max_pods        = optional(number, 30)
    preemptible     = optional(bool, false)
    disk_size_gb    = optional(number, 400)
    image_type      = optional(string, "COS_CONTAINERD")
    instance_type   = optional(string, "n2-highmem-8")
    gpu_accelerator = optional(string, "")
    labels = optional(map(string), {
      "dominodatalab.com/node-pool" = "default"
    })
    taints         = optional(list(string), [])
    node_locations = optional(list(string), [])
  })
  platform = object({
    min_count       = optional(number, 1)
    max_count       = optional(number, 5)
    initial_count   = optional(number, 1)
    max_pods        = optional(number, 60)
    preemptible     = optional(bool, false)
    disk_size_gb    = optional(number, 100)
    image_type      = optional(string, "COS_CONTAINERD")
    instance_type   = optional(string, "n2-standard-8")
    gpu_accelerator = optional(string, "")
    labels = optional(map(string), {
      "dominodatalab.com/node-pool" = "platform"
    })
    taints         = optional(list(string), [])
    node_locations = optional(list(string), [])
  })
  gpu = object({
    min_count       = optional(number, 0)
    max_count       = optional(number, 2)
    initial_count   = optional(number, 0)
    max_pods        = optional(number, 30)
    preemptible     = optional(bool, false)
    disk_size_gb    = optional(number, 400)
    image_type      = optional(string, "COS_CONTAINERD")
    instance_type   = optional(string, "n1-highmem-8")
    gpu_accelerator = optional(string, "nvidia-tesla-p100")
    labels = optional(map(string), {
      "dominodatalab.com/node-pool" = "default-gpu"
      "nvidia.com/gpu"              = "true"
    })
    taints = optional(list(string), [
      "nvidia.com/gpu=true:NoExecute"
    ])
    node_locations = optional(list(string), [])
  })
})
```

`project` — GCP project ID. Type: `string`. Default: `"domino-eng-platform-dev"`. Required: no.

`storage` — Storage settings. Default: `{}`. Required: no.

```hcl
object({
  filestore = optional(object({
    enabled     = optional(bool, true)   # Provision a Filestore instance (for production installs)
    capacity_gb = optional(number, 1024) # Filestore instance size (GB) for the cluster NFS shared storage
  }), {})
  nfs_instance = optional(object({
    enabled     = optional(bool, false)  # Provision an instance as an NFS server (to avoid Filestore churn during testing)
    capacity_gb = optional(number, 100)  # NFS instance disk size
  }), {})
  gcs = optional(object({
    # Toggle to allow recursive deletion of all objects in the bucket;
    # if false, Terraform will NOT be able to delete non-empty buckets
    force_destroy_on_deletion = optional(bool, false)
  }), {})
})
```

`tags` — Deployment tags. Type: `map(string)`. Default: `{}`. Required: no.
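Putting several of the inputs above together, a module call might look like the following (values are illustrative only):

```hcl
module "gke_cluster" {
  source = "github.com/dominodatalab/terraform-gcp-gke"

  cluster   = "cluster-name"
  project   = "gcp-project"
  location  = "us-west1"
  deploy_id = "domino-example"

  namespaces = {
    platform = "domino-platform"
    compute  = "domino-compute"
  }

  gke = {
    k8s_version = "1.30"
    public_access = {
      enabled = true
      cidrs   = ["203.0.113.0/24"]
    }
  }

  storage = {
    filestore = { capacity_gb = 2048 }
  }
}
```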

Outputs

| Name | Description |
|------|-------------|
| bucket_name | Name of the cloud storage bucket |
| cluster | GKE cluster information |
| dns | The external (public) DNS name for the Domino UI |
| domino_artifact_repository | Domino Google artifact repository |
| google_filestore_instance | Domino Google Cloud Filestore instance, name and ip_address |
| nfs_instance | Domino NFS instance, name and ip_address |
| nfs_instance_ip | NFS instance IP |
| project | GCP project ID |
| region | Region where the cluster is deployed, derived from the 'location' input variable |
| service_accounts | GKE cluster Workload Identity namespace IAM service accounts |
| static_ip | The external (public) static IPv4 for the Domino UI |
| uuid | Cluster UUID |
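These outputs can be re-exported or referenced from the calling configuration, for example (output names here are the caller's choice):

```hcl
output "domino_bucket" {
  description = "Cloud storage bucket backing the Domino deployment"
  value       = module.gke_cluster.bucket_name
}

output "domino_dns" {
  description = "Public DNS name for the Domino UI"
  value       = module.gke_cluster.dns
}
```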