Add option to ignore size for nodepools and instance pools
robo-cap authored and devoncrouse committed Jul 8, 2024
1 parent c8e8410 commit 46ec5f4
Showing 8 changed files with 318 additions and 29 deletions.
62 changes: 61 additions & 1 deletion docs/src/guide/extensions_cluster_autoscaler.md
@@ -1,4 +1,4 @@
# Extensions: Cluster Autoscaler
# Extensions: Standalone Cluster Autoscaler

Deployed using the [cluster-autoscaler Helm chart](https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler) with configuration from the `worker_pools` variable.

@@ -13,6 +13,66 @@ The following parameters may be added on each pool definition to enable management
* `min_size`: Define the minimum scale of a pool managed by the cluster autoscaler. Defaults to `size` when not provided.
* `max_size`: Define the maximum scale of a pool managed by the cluster autoscaler. Defaults to `size` when not provided.

The cluster autoscaler manages the size of node pools that have the attribute `autoscale = true`. To avoid a conflict between the actual size of a node pool and the `size` defined in the Terraform configuration, add the `ignore_initial_pool_size = true` attribute to the node pool definition in the `worker_pools` variable. This attribute tells Terraform to ignore [drift](https://developer.hashicorp.com/terraform/tutorials/state/resource-drift) of the `size` parameter for that specific node pool.

This setting is strongly recommended for node pools configured with `autoscale = true`.

Example:

```
worker_pools = {
np-autoscaled = {
description = "Node pool managed by cluster autoscaler",
size = 2,
min_size = 1,
max_size = 3,
autoscale = true,
ignore_initial_pool_size = true # allows nodepool size drift
},
np-autoscaler = {
description = "Node pool with cluster autoscaler scheduling allowed",
size = 1,
allow_autoscaler = true,
},
}
```
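Pools with `ignore_initial_pool_size = true` are created through a separate resource whose `lifecycle` block adds `size` to `ignore_changes` (see the `autoscaled_workers` resource in `modules/workers/instancepools.tf` below; node pools are split the same way).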


For existing deployments, it is necessary to use the [terraform state mv](https://developer.hashicorp.com/terraform/cli/commands/state/mv) command to move each affected pool to its new resource address; otherwise Terraform will destroy and recreate the pool, as the plan output below shows.

Example for the `nodepool` resource:
```
$ terraform plan
...
Terraform will perform the following actions:
# module.oke.module.workers[0].oci_containerengine_node_pool.tfscaled_workers["np-autoscaled"] will be destroyed
...
# module.oke.module.workers[0].oci_containerengine_node_pool.autoscaled_workers["np-autoscaled"] will be created
$ terraform state mv module.oke.module.workers[0].oci_containerengine_node_pool.tfscaled_workers[\"np-autoscaled\"] module.oke.module.workers[0].oci_containerengine_node_pool.autoscaled_workers[\"np-autoscaled\"]
Successfully moved 1 object(s).
$ terraform plan
...
No changes. Your infrastructure matches the configuration.
```

Example for the `instance_pool` resource:

```
$ terraform state mv module.oke.module.workers[0].oci_core_instance_pool.tfscaled_workers[\"np-autoscaled\"] module.oke.module.workers[0].oci_core_instance_pool.autoscaled_workers[\"np-autoscaled\"]
Successfully moved 1 object(s).
```
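
Depending on your shell, the backslash escaping can be avoided by single-quoting the resource addresses instead:

```
$ terraform state mv 'module.oke.module.workers[0].oci_core_instance_pool.tfscaled_workers["np-autoscaled"]' 'module.oke.module.workers[0].oci_core_instance_pool.autoscaled_workers["np-autoscaled"]'
```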

### Notes

Don't set `allow_autoscaler` and `autoscale` to `true` on the same pool. Doing so makes the cluster autoscaler pod unschedulable, because the `oke.oraclecloud.com/cluster_autoscaler: managed` node label overrides the `oke.oraclecloud.com/cluster_autoscaler: allowed` node label expected by the cluster autoscaler's `nodeSelector`, as illustrated below.
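
For example, a pool definition like the following (hypothetical name) would leave the autoscaler pod with no schedulable node:

```
np-invalid = {
  size             = 1,
  autoscale        = true, # nodes are labeled oke.oraclecloud.com/cluster_autoscaler: managed
  allow_autoscaler = true, # the autoscaler pod expects oke.oraclecloud.com/cluster_autoscaler: allowed
}
```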
2 changes: 1 addition & 1 deletion docs/src/resources.md
@@ -53,7 +53,7 @@
## Workers
<!-- BEGIN_TF_WORKERS -->

* [oci_containerengine_node_pool.workers](https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool)
* [oci_containerengine_node_pool.tfscaled_workers](https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool)
* [oci_containerengine_virtual_node_pool.workers](https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_virtual_node_pool)
* [oci_core_cluster_network.workers](https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_cluster_network)
* [oci_core_instance.workers](https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/core_instance)
34 changes: 18 additions & 16 deletions examples/workers/vars-workers-advanced.auto.tfvars
@@ -25,24 +25,26 @@ worker_pools = {
create = false
},
wg_np-vm-ol7 = {
description = "OKE-managed Node Pool with OKE Oracle Linux 7 image",
create = false,
mode = "node-pool",
size = 1,
size_max = 2,
os = "Oracle Linux",
os_version = "7",
autoscale = true,
description = "OKE-managed Node Pool with OKE Oracle Linux 7 image",
create = false,
mode = "node-pool",
size = 1,
size_max = 2,
os = "Oracle Linux",
os_version = "7",
autoscale = true,
ignore_initial_pool_size = true
},
wg_np-vm-ol8 = {
description = "OKE-managed Node Pool with OKE Oracle Linux 8 image",
create = false,
mode = "node-pool",
size = 1,
size_max = 3,
os = "Oracle Linux",
os_version = "8",
autoscale = true,
description = "OKE-managed Node Pool with OKE Oracle Linux 8 image",
create = false,
mode = "node-pool",
size = 1,
size_max = 3,
os = "Oracle Linux",
os_version = "8",
autoscale = true,
ignore_initial_pool_size = true
},
wg_np-vm-custom = {
description = "OKE-managed Node Pool with custom image",
11 changes: 6 additions & 5 deletions examples/workers/vars-workers-autoscaling.auto.tfvars
@@ -5,11 +5,12 @@

worker_pools = {
np-autoscaled = {
description = "Node pool managed by cluster autoscaler",
size = 2,
min_size = 1,
max_size = 3,
autoscale = true,
description = "Node pool managed by cluster autoscaler",
size = 2,
min_size = 1,
max_size = 3,
autoscale = true,
ignore_initial_pool_size = true
},
np-autoscaler = {
description = "Node pool with cluster autoscaler scheduling allowed",
10 changes: 10 additions & 0 deletions migration.tf
@@ -49,3 +49,13 @@ moved {
from = module.oke.oci_containerengine_node_pool.nodepools
to = module.workers[0].oci_containerengine_node_pool.workers
}

moved {
from = module.workers[0].oci_containerengine_node_pool.workers
to = module.workers[0].oci_containerengine_node_pool.tfscaled_workers
}

moved {
from = module.workers[0].oci_core_instance_pool.workers
to = module.workers[0].oci_core_instance_pool.tfscaled_workers
}
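
With these `moved` blocks in place, upgrading an existing deployment records the renames instead of planning a replacement. Illustrative `terraform plan` output (pool name and module path are hypothetical):

```
$ terraform plan
...
  # module.workers[0].oci_containerengine_node_pool.workers["np1"] has moved to
  # module.workers[0].oci_containerengine_node_pool.tfscaled_workers["np1"]
...
Plan: 0 to add, 0 to change, 0 to destroy.
```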
64 changes: 62 additions & 2 deletions modules/workers/instancepools.tf
@@ -2,9 +2,9 @@
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl

# Dynamic resource block for Instance Pool groups defined in worker_pools
resource "oci_core_instance_pool" "workers" {
resource "oci_core_instance_pool" "tfscaled_workers" {
# Create an OCI Instance Pool resource for each enabled entry of the worker_pools map with that mode.
for_each = local.enabled_instance_pools
for_each = { for key, value in local.enabled_instance_pools: key => value if tobool(lookup(value, "ignore_initial_pool_size", false)) == false }
compartment_id = each.value.compartment_id
display_name = each.key
size = each.value.size
@@ -61,3 +61,63 @@ resource "oci_core_instance_pool" "workers" {
}
}
}

resource "oci_core_instance_pool" "autoscaled_workers" {
# Create an OCI Instance Pool resource for each enabled entry of the worker_pools map with that mode.
for_each = { for key, value in local.enabled_instance_pools: key => value if tobool(lookup(value, "ignore_initial_pool_size", false)) == true }
compartment_id = each.value.compartment_id
display_name = each.key
size = each.value.size
instance_configuration_id = oci_core_instance_configuration.workers[each.key].id
defined_tags = each.value.defined_tags
freeform_tags = each.value.freeform_tags

dynamic "placement_configurations" {
for_each = each.value.availability_domains
iterator = ad

content {
availability_domain = ad.value
primary_subnet_id = each.value.subnet_id

# Value(s) specified on pool, or null to select automatically
fault_domains = try(each.value.placement_fds, null)

dynamic "secondary_vnic_subnets" {
for_each = lookup(each.value, "secondary_vnics", {})
iterator = vnic
content {
display_name = vnic.key
subnet_id = lookup(vnic.value, "subnet_id", each.value.subnet_id)
}
}
}
}

lifecycle {
ignore_changes = [
display_name, defined_tags, freeform_tags,
placement_configurations, size
]

precondition {
condition = coalesce(each.value.image_id, "none") != "none"
error_message = <<-EOT
Missing image_id; check provided value if image_type is 'custom', or image_os/image_os_version if image_type is 'oke' or 'platform'.
pool: ${each.key}
image_type: ${coalesce(each.value.image_type, "none")}
image_id: ${coalesce(each.value.image_id, "none")}
EOT
}

precondition {
condition = var.cni_type == "flannel"
error_message = "Instance Pools require a cluster with `cni_type = flannel`."
}

precondition {
condition = each.value.autoscale == false
error_message = "Instance Pools do not support cluster autoscaler management."
}
}
}
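
A minimal sketch of the partitioning expression shared by the two resources, using a hypothetical pool map (results shown as comments, e.g. from `terraform console`):

```
locals {
  pools = {
    np-fixed      = {}
    np-autoscaled = { ignore_initial_pool_size = true }
  }

  # size stays under Terraform control
  tfscaled   = { for k, v in local.pools : k => v if tobool(lookup(v, "ignore_initial_pool_size", false)) == false }

  # size drift is ignored after initial creation
  autoscaled = { for k, v in local.pools : k => v if tobool(lookup(v, "ignore_initial_pool_size", false)) == true }
}
# tfscaled   == { np-fixed = {} }
# autoscaled == { np-autoscaled = { ignore_initial_pool_size = true } }
```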
5 changes: 3 additions & 2 deletions modules/workers/locals.tf
@@ -36,6 +36,7 @@ locals {
eviction_grace_duration = 300
force_node_delete = true
extended_metadata = {} # empty pool-specific default
ignore_initial_pool_size = false
image_id = var.image_id
image_type = var.image_type
kubernetes_version = var.kubernetes_version
@@ -231,9 +232,9 @@
}

# Maps of worker pool OCI resources by pool name enriched with desired/custom parameters for various modes
worker_node_pools = { for k, v in oci_containerengine_node_pool.workers : k => merge(v, lookup(local.worker_pools_final, k, {})) }
worker_node_pools = { for k, v in merge(oci_containerengine_node_pool.tfscaled_workers, oci_containerengine_node_pool.autoscaled_workers) : k => merge(v, lookup(local.worker_pools_final, k, {})) }
worker_virtual_node_pools = { for k, v in oci_containerengine_virtual_node_pool.workers : k => merge(v, lookup(local.worker_pools_final, k, {})) }
worker_instance_pools = { for k, v in oci_core_instance_pool.workers : k => merge(v, lookup(local.worker_pools_final, k, {})) }
worker_instance_pools = { for k, v in merge(oci_core_instance_pool.tfscaled_workers, oci_core_instance_pool.autoscaled_workers) : k => merge(v, lookup(local.worker_pools_final, k, {})) }
worker_cluster_networks = { for k, v in oci_core_cluster_network.workers : k => merge(v, lookup(local.worker_pools_final, k, {})) }
worker_instances = { for k, v in oci_core_instance.workers : k => merge(v, lookup(local.worker_pools_final, k, {})) }

