
Red Hat OCP (OpenShift Container Platform) All Inclusive Module


This wrapper module groups the following modules:

  • base-ocp-vpc-module - Provisions a base (bare) Red Hat OpenShift Container Platform cluster on VPC Gen2 (supports passing Key Protect details to encrypt the cluster).
  • observability-agents-module - Deploys Logs Agent and Cloud Monitoring agents to a cluster.

❗ Important: You can't update Red Hat OpenShift cluster nodes by using this module. The Terraform logic ignores updates to prevent possible destructive changes.

Before you begin

Default Worker Pool management

You can manage the default worker pool through Terraform and make changes to it through this module. This option is enabled by default. Under the hood, the default worker pool is imported as an ibm_container_vpc_worker_pool resource. Advanced users may opt out by setting the import_default_worker_pool_on_create parameter to false, as shown in the sketch below. For most use cases it is recommended to keep this variable set to true.
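
A minimal sketch of the opt-out (required inputs such as cluster_name, region, and vpc_id are omitted for brevity; see the Usage section for a full configuration):

module "ocp_all_inclusive" {
  source  = "terraform-ibm-modules/ocp-all-inclusive/ibm"
  # ...required inputs omitted; see the Usage section below...

  # Advanced opt-out: manage the default worker pool as part of the cluster
  # resource instead of importing it as a stand-alone resource.
  import_default_worker_pool_on_create = false
}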

Important Considerations for Terraform and Default Worker Pool

Terraform Destroy

When using the default behavior of handling the default worker pool as a stand-alone ibm_container_vpc_worker_pool, you must manually remove the default worker pool from the Terraform state before running a terraform destroy command on the module. This is due to a known limitation in IBM Cloud.

Terraform CLI Example

For a cluster with two worker pools named 'default' and 'secondarypool', follow these steps:

      $ terraform state list | grep ibm_container_vpc_worker_pool
        > module.ocp_all_inclusive.module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["default"]
        > module.ocp_all_inclusive.module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["secondarypool"]
        > module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool["default"]
        > module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool["secondarypool"]
        > ...

      $ terraform state rm "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
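
After the default worker pool is removed from the state, the destroy can proceed:

      $ terraform destroy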

Schematics Example: For the same cluster with two worker pools named 'default' and 'secondarypool', remove the default pool from the workspace state:

        $ ibmcloud schematics workspace state rm --id <workspace_id> --address "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"

Changes Requiring Re-creation of Default Worker Pool

If you need to make changes to the default worker pool that require its re-creation (for example, changing the worker node operating_system), you must set the allow_default_worker_pool_replacement variable to true, perform the apply, and then set it back to false in the code before the subsequent apply. This is only necessary for changes that require re-creating the entire default pool; it is not needed for changes that don't, such as changing the number of workers in the default worker pool.

This approach is due to a limitation in the Terraform provider that may be lifted in the future.
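
A sketch of that workflow (required inputs omitted for brevity):

module "ocp_all_inclusive" {
  source  = "terraform-ibm-modules/ocp-all-inclusive/ibm"
  # ...required inputs omitted; see the Usage section below...

  # Temporarily allow re-creation of the default worker pool, for example
  # when changing the worker node operating_system. Set this back to false
  # in code after the apply completes.
  allow_default_worker_pool_replacement = true
}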


Usage

##############################################################################
# Required providers
##############################################################################

provider "ibm" {
  ibmcloud_api_key = "XXXXXXXXXX" # pragma: allowlist secret
  region           = "us-south"
}

# data lookup required to initialize the helm and kubernetes providers
data "ibm_container_cluster_config" "cluster_config" {
  cluster_name_id = module.ocp_all_inclusive.cluster_id
}

provider "helm" {
  kubernetes {
    host                   = data.ibm_container_cluster_config.cluster_config.host
    token                  = data.ibm_container_cluster_config.cluster_config.token
  }
  # IBM Cloud credentials are required to authenticate to the helm repo
  registry {
    url = "oci://icr.io/ibm/observe/logs-agent-helm"
    username = "iamapikey"
    password = "XXXXXXXXXXXXXXXXX" # replace with an IBM cloud apikey # pragma: allowlist secret
  }
}

provider "kubernetes" {
  host                   = data.ibm_container_cluster_config.cluster_config.host
  token                  = data.ibm_container_cluster_config.cluster_config.token
}

##############################################################################
# ocp-all-inclusive-module
##############################################################################

module "ocp_all_inclusive" {
  source  = "terraform-ibm-modules/ocp-all-inclusive/ibm"
  version = "latest" # Replace "latest" with a release version to lock into a specific release
  ibmcloud_api_key              = "XXXXXXXXXX" # pragma: allowlist secret
  resource_group_id             = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
  region                        = "us-south"
  cluster_name                  = "my-test-cluster"
  cos_name                      = "my-cos-instance"
  vpc_id                        = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
  vpc_subnets = {
    zone-1 = [
      for zone in module.vpc.subnet_zone_list :
      {
        id         = zone.id
        zone       = zone.zone
        cidr_block = zone.cidr
      }
    ]
  }
  cloud_logs_ingress_endpoint             = "<cloud-logs-instance-guid>.ingress.us-south.logs.cloud.ibm.com"
  cloud_logs_ingress_port                 = 443
  cloud_monitoring_instance_name          = "my-sysdig"
  cloud_monitoring_access_key             = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
}
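
If the default three-zone worker pool layout does not fit your needs, you can pass a custom worker_pools list to the module block. A minimal single-pool sketch, with illustrative machine type and subnet prefix (see the worker_pools input below for the full schema):

  worker_pools = [
    {
      pool_name        = "default"
      machine_type     = "bx2.4x16"
      operating_system = "REDHAT_8_64"
      workers_per_zone = 2
      subnet_prefix    = "zone-1" # assumed to match a key in var.vpc_subnets
    }
  ]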

Required IAM access policies

You need the following permissions to run this module.

  • Account Management
    • All Identity and Access enabled services
      • Viewer platform access
    • All Resource Groups service
      • Viewer platform access
  • IAM Services
    • Cloud Object Storage service
      • Editor platform access
      • Manager service access
    • Kubernetes service
      • Administrator platform access
      • Manager service access
    • VPC Infrastructure service
      • Administrator platform access
      • Manager service access

Requirements

Name Version
terraform >= 1.3.0
external >= 2.2.3, < 3.0.0
helm >= 2.8.0, < 3.0.0
ibm >= 1.66.0, < 2.0.0
kubernetes >= 2.16.1, < 3.0.0
local >= 2.2.3, < 3.0.0
null >= 3.2.1, < 4.0.0
time >= 0.9.1, < 1.0.0

Modules

Name Source Version
observability_agents terraform-ibm-modules/observability-agents/ibm 2.3.3
ocp_base terraform-ibm-modules/base-ocp-vpc/ibm 3.35.4
trusted_profile terraform-ibm-modules/trusted-profile/ibm 1.0.4

Resources

No resources.

Inputs

Name Description Type Default Required
access_tags Optional list of access management tags to add to the OCP Cluster created by this module. list(string) [] no
additional_lb_security_group_ids Additional security group IDs to add to the load balancers associated with the cluster. These security groups are in addition to the IBM-maintained security group. list(string) [] no
additional_vpe_security_group_ids Additional security group IDs to add to the VPE gateways (master, registry, api) associated with the cluster. These security groups are in addition to the IBM-maintained security group.
object({
master = optional(list(string), [])
registry = optional(list(string), [])
api = optional(list(string), [])
})
{} no
addons List of all add-ons supported by the OCP cluster.
object({
debug-tool = optional(string)
image-key-synchronizer = optional(string)
openshift-data-foundation = optional(string)
vpc-file-csi-driver = optional(string)
static-route = optional(string)
cluster-autoscaler = optional(string)
vpc-block-csi-driver = optional(string)
})
null no
allow_default_worker_pool_replacement (Advanced users) Set to true to allow the module to recreate the default worker pool. Use this only when an apply fails with an error indicating that the default worker pool cannot be replaced. When the default worker pool is handled as a stand-alone ibm_container_vpc_worker_pool, set this variable to true before making any change that requires re-creating the default pool. bool false no
attach_ibm_managed_security_group Whether to attach the IBM-defined default security group (named kube-<clusterid>) to all worker nodes. Applies only if custom_security_group_ids is set. bool true no
cloud_logs_ingress_endpoint The host for IBM Cloud Logs ingestion. It is required if logs_agent_enabled is set to true. Ensure you use the ingress endpoint. See https://cloud.ibm.com/docs/cloud-logs?topic=cloud-logs-endpoints_ingress. string null no
cloud_logs_ingress_port The target port for the IBM Cloud Logs ingestion endpoint. The port must be 443 if you connect by using a VPE gateway, or port 3443 when you connect by using CSEs. number 3443 no
cloud_monitoring_access_key Access key for the Cloud Monitoring agent to communicate with the instance. string null no
cloud_monitoring_add_cluster_name If true, configure the cloud monitoring agent to attach a tag containing the cluster name to all metric data. bool true no
cloud_monitoring_agent_name Cloud Monitoring agent name. Used for naming all kubernetes and helm resources on the cluster. string "sysdig-agent" no
cloud_monitoring_agent_namespace Namespace in which to deploy the Cloud Monitoring agent. Default value is 'ibm-observe'. string "ibm-observe" no
cloud_monitoring_agent_tags List of tags to associate with the cloud monitoring agents list(string) [] no
cloud_monitoring_agent_tolerations List of tolerations to apply to Cloud Monitoring agent.
list(object({
key = optional(string)
operator = optional(string)
value = optional(string)
effect = optional(string)
tolerationSeconds = optional(number)
}))
[
{
"operator": "Exists"
},
{
"effect": "NoSchedule",
"key": "node-role.kubernetes.io/master",
"operator": "Exists"
}
]
no
cloud_monitoring_container_filter To filter custom containers, specify the Cloud Monitoring containers to include or to exclude. See https://cloud.ibm.com/docs/monitoring?topic=monitoring-change_kube_agent#change_kube_agent_filter_data.
list(object({
type = string
parameter = string
name = string
}))
[] no
cloud_monitoring_enabled Deploy IBM Cloud Monitoring agent bool true no
cloud_monitoring_endpoint_type Specify the IBM Cloud Monitoring instance endpoint type (public or private) to use. Used to construct the ingestion endpoint. string "private" no
cloud_monitoring_instance_region The IBM Cloud Monitoring instance region. Used to construct the ingestion endpoint. string null no
cloud_monitoring_metrics_filter To filter custom metrics, specify the Cloud Monitoring metrics to include or to exclude. See https://cloud.ibm.com/docs/monitoring?topic=monitoring-change_kube_agent#change_kube_agent_inc_exc_metrics.
list(object({
type = string
name = string
}))
[] no
cloud_monitoring_secret_name The name of the secret which will store the access key. string "sysdig-agent" no
cluster_config_endpoint_type Specify which type of endpoint to use for cluster config access: 'default', 'private', 'vpe', 'link'. The 'default' value uses the default endpoint of the cluster. string "default" no
cluster_name The name to give the OCP cluster provisioned by the module. string n/a yes
cluster_ready_when Specify when the cluster is considered ready: one of MasterNodeReady (not recommended), OneWorkerNodeReady, Normal, or IngressReady. string "IngressReady" no
cluster_tags List of metadata labels to add to cluster. list(string) [] no
cos_name Name of the COS instance to provision for OpenShift internal registry storage. New instance only provisioned if 'enable_registry_storage' is true and 'use_existing_cos' is false. Default: '<cluster_name>_cos' string null no
custom_security_group_ids Up to 4 additional security groups to add to all worker nodes. If use_ibm_managed_security_group is set to true, these security groups are in addition to the IBM-maintained security group. If additional groups are added, the default VPC security group is not assigned to the worker nodes. list(string) null no
disable_outbound_traffic_protection Whether to allow public outbound access from the cluster workers. This is only applicable for Red Hat OpenShift 4.15. bool false no
disable_public_endpoint Whether access to the public service endpoint is disabled when the cluster is created. Does not affect existing clusters. You can't disable a public endpoint on an existing cluster, so you can't convert a public cluster to a private cluster. To change a public endpoint to private, create another cluster with this input set to true. bool false no
enable_registry_storage Set to true to enable IBM Cloud Object Storage for the Red Hat OpenShift internal image registry. Set to false only for new cluster deployments in an account that is allowlisted for this feature. bool true no
existing_cos_id The COS id of an already existing COS instance to use for OpenShift internal registry storage. Only required if 'enable_registry_storage' and 'use_existing_cos' are true string null no
existing_kms_instance_guid The GUID of an existing KMS instance which will be used for cluster encryption. If no value passed, cluster data is stored in the Kubernetes etcd, which ends up on the local disk of the Kubernetes master (not recommended). string null no
existing_kms_root_key_id The Key ID of a root key, existing in the KMS instance passed in var.existing_kms_instance_guid, which will be used to encrypt the data encryption keys (DEKs) which are then used to encrypt the secrets in the cluster. Required if value passed for var.existing_kms_instance_guid. string null no
force_delete_storage Delete attached storage when destroying the cluster - Default: false bool false no
ignore_worker_pool_size_changes Enable if using worker autoscaling. Stops Terraform from managing the worker count. bool false no
import_default_worker_pool_on_create (Advanced users) Whether to handle the default worker pool as a stand-alone ibm_container_vpc_worker_pool resource on cluster creation. Set to true to import the default worker pool as a separate resource, or false to manage it as part of the cluster resource. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. bool true no
kms_account_id Id of the account that owns the KMS instance to encrypt the cluster. It is only required if the KMS instance is in another account. string null no
kms_use_private_endpoint Set as true to use the Private endpoint when communicating between cluster and KMS instance. bool true no
kms_wait_for_apply Set true to make terraform wait until KMS is applied to master and it is ready and deployed. Default value is true. bool true no
logs_agent_additional_log_source_paths The list of additional log sources. By default, the Logs agent collects logs from a single source at /var/log/containers/*.log. list(string) [] no
logs_agent_additional_metadata The list of additional metadata fields to add to the routed logs.
list(object({
key = optional(string)
value = optional(string)
}))
[] no
logs_agent_enabled Whether to deploy the Logs agent. bool true no
logs_agent_exclude_log_source_paths The list of log sources to exclude. Specify the paths that the Logs agent ignores. list(string) [] no
logs_agent_iam_api_key The IBM Cloud API key for the Logs agent to authenticate and communicate with the IBM Cloud Logs. It is required if logs_agent_enabled is true and logs_agent_iam_mode is set to IAMAPIKey. string null no
logs_agent_iam_environment IAM authentication environment: one of Production, PrivateProduction, Staging, or PrivateStaging. Production specifies the public endpoint; PrivateProduction specifies the private endpoint. string "PrivateProduction" no
logs_agent_iam_mode IAM authentication mode: TrustedProfile or IAMAPIKey. If TrustedProfile is selected, the module will create one. string "TrustedProfile" no
logs_agent_log_source_namespaces The list of namespaces from which logs should be forwarded by the agent. If no namespaces are listed, logs from all namespaces are sent. list(string) [] no
logs_agent_name The name of the Logs agent. The name is used in all Kubernetes and Helm resources in the cluster. string "logs-agent" no
logs_agent_namespace The namespace where the Logs agent is deployed. The default value is ibm-observe. string "ibm-observe" no
logs_agent_selected_log_source_paths The list of specific log sources paths. Logs will only be collected from the specified log source paths. If no paths are specified, it will send logs from /var/log/containers. list(string) [] no
logs_agent_tolerations List of tolerations to apply to Logs agent. The default value means a pod will run on every node.
list(object({
key = optional(string)
operator = optional(string)
value = optional(string)
effect = optional(string)
tolerationSeconds = optional(number)
}))
[
{
"operator": "Exists"
}
]
no
manage_all_addons Whether Terraform manages all cluster add-ons, even add-ons installed outside of the module. If set to 'true', this module destroys the add-ons installed by other sources. bool false no
number_of_lbs The number of load balancers to associate with the additional_lb_security_group_ids security groups. Must match the number of load balancers that are associated with the cluster. number 1 no
ocp_entitlement Value that is applied to the entitlements for OCP cluster provisioning string "cloud_pak" no
ocp_version The version of the OpenShift cluster that should be provisioned (format 4.x). This is only used during initial cluster provisioning, but ignored for future updates. Supports passing the string 'default' (current IKS default recommended version). If no value is passed, it will default to 'default'. string null no
region The IBM Cloud region where all resources will be provisioned. string n/a yes
resource_group_id The IBM Cloud resource group ID to provision all resources in. string n/a yes
use_existing_cos Flag indicating whether or not to use an existing COS instance for OpenShift internal registry storage. Only applicable if 'enable_registry_storage' is true bool false no
verify_worker_network_readiness By setting this to true, a script will run kubectl commands to verify that all worker nodes can communicate successfully with the master. If the runtime does not have access to the kube cluster to run kubectl commands, this should be set to false. bool true no
vpc_id The ID of the VPC to use. string n/a yes
vpc_subnets Subnet metadata by VPC tier.
map(list(object({
id = string
zone = string
cidr_block = string
})))
n/a yes
worker_pools List of worker pools
list(object({
subnet_prefix = optional(string)
vpc_subnets = optional(list(object({
id = string
zone = string
cidr_block = string
})))
pool_name = string
machine_type = string
workers_per_zone = number
resource_group_id = optional(string)
operating_system = string
labels = optional(map(string))
minSize = optional(number)
secondary_storage = optional(string)
maxSize = optional(number)
enableAutoscaling = optional(bool)
boot_volume_encryption_kms_config = optional(object({
crk = string
kms_instance_id = string
kms_account_id = optional(string)
}))
additional_security_group_ids = optional(list(string))
}))
[
{
"enableAutoscaling": true,
"labels": {},
"machine_type": "bx2.4x16",
"maxSize": 3,
"minSize": 1,
"operating_system": "REDHAT_8_64",
"pool_name": "default",
"subnet_prefix": "zone-1",
"workers_per_zone": 2
},
{
"enableAutoscaling": true,
"labels": {
"dedicated": "zone-2"
},
"machine_type": "bx2.4x16",
"maxSize": 3,
"minSize": 1,
"operating_system": "REDHAT_8_64",
"pool_name": "zone-2",
"subnet_prefix": "zone-2",
"workers_per_zone": 2
},
{
"enableAutoscaling": true,
"labels": {
"dedicated": "zone-3"
},
"machine_type": "bx2.4x16",
"maxSize": 3,
"minSize": 1,
"operating_system": "REDHAT_8_64",
"pool_name": "zone-3",
"subnet_prefix": "zone-3",
"workers_per_zone": 2
}
]
no

Outputs

Name Description
cluster_crn CRN for the created cluster
cluster_id ID of cluster created
cluster_name Name of the created cluster
cos_crn The IBM Cloud Object Storage instance CRN used to back up the internal registry in the OCP cluster.
ingress_hostname The hostname that was assigned to the OCP cluster's Ingress subdomain.
master_url The URL of the Kubernetes master.
ocp_version OpenShift version of the cluster
private_service_endpoint_url Private service endpoint URL
public_service_endpoint_url Public service endpoint URL
region Region cluster is deployed in
resource_group_id Resource group ID the cluster is deployed in
vpc_id ID of the cluster's VPC
vpe_url The virtual private endpoint URL of the Kubernetes cluster.
workerpools Worker pools created

Contributing

You can report issues and request features for this module in GitHub issues in the module repo. See Report an issue or request a feature.

To set up your local development environment, see Local development setup in the project documentation.