Added variable skip_provisioners to skip 'local-exec'
 * Fix terraform-google-modules#258
 * Added test `simple_regional_skip_local_exec`
 * Remove old upgrading guide from README's
paulpalamarchuk committed Oct 18, 2019
1 parent 81eb717 commit 198b191
Showing 24 changed files with 96 additions and 37 deletions.
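In practice, a caller of the root module would opt out of the provisioners by setting the new flag. A minimal sketch of such a call (the registry source and every value below are placeholders, not part of this commit):

```hcl
module "gke" {
  # Assumed registry source for this module; pin a version in real use.
  source = "terraform-google-modules/kubernetes-engine/google"

  # Placeholder values for the required inputs.
  project_id        = "my-project"
  name              = "example-cluster"
  region            = "us-central1"
  network           = "my-vpc"
  subnetwork        = "my-subnet"
  ip_range_pods     = "pods-range"
  ip_range_services = "services-range"

  # New in this commit: skip all local-exec provisioners.
  # Note: this also disables stub_domains / upstream_nameservers handling.
  skip_provisioners = true
}
```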
17 changes: 17 additions & 0 deletions README.md
@@ -108,6 +108,22 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure

## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.
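For reference, the node-pool metadata the module manages looks roughly like the fragment below (an illustrative sketch, not an excerpt from this commit):

```hcl
node_config {
  metadata = {
    # Managed by the module from v1.0.0 onward. Set to "false" only if your
    # workloads need the legacy /0.1/ and /v1beta1/ metadata server paths.
    "disable-legacy-endpoints" = "true"
  }
}
```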

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Inputs

@@ -153,6 +169,7 @@ Then perform the following commands on the root folder:
| regional | Whether to create a regional cluster (a zonal cluster is created if set to false. WARNING: changing this after cluster creation is destructive!) | bool | `"true"` | no |
| remove\_default\_node\_pool | Remove default node pool while setting up the cluster | bool | `"false"` | no |
| service\_account | The service account to run nodes as if not overridden in `node_pools`. The create_service_account variable default value (true) will cause a cluster-specific service account to be created. | string | `""` | no |
| skip\_provisioners | Flag to skip all local-exec provisioners. Setting this to `true` breaks the `stub_domains` and `upstream_nameservers` functionality. | bool | `"false"` | no |
| stub\_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map(list(string)) | `<map>` | no |
| subnetwork | The subnetwork to host the cluster in (required) | string | n/a | yes |
| upstream\_nameservers | If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf | list | `<list>` | no |
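As the `skip_provisioners` row above notes, `stub_domains` and `upstream_nameservers` are applied through `local-exec` scripts, so they are silently ignored when the flag is enabled. A hypothetical combination illustrating the pitfall (all values are made up):

```hcl
# With skip_provisioners = true, the kube-dns customization below is never
# applied, because it depends on the module's local-exec provisioners.
stub_domains = {
  "example.corp" = ["10.254.154.11", "10.254.154.12"]
}
upstream_nameservers = ["8.8.8.8", "8.8.4.4"]
skip_provisioners    = true
```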
17 changes: 9 additions & 8 deletions autogen/README.md
@@ -122,19 +122,19 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure

## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.

@@ -201,6 +201,7 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o
| resource\_usage\_export\_dataset\_id | The dataset id for which network egress metering for this cluster will be enabled. If enabled, a daemonset will be created in the cluster to meter network egress traffic. | string | `""` | no |
| sandbox\_enabled | (Beta) Enable GKE Sandbox (Do not forget to set `image_type` = `COS_CONTAINERD` and `node_version` = `1.12.7-gke.17` or later to use it). | bool | `"false"` | no |
| service\_account | The service account to run nodes as if not overridden in `node_pools`. The create_service_account variable default value (true) will cause a cluster-specific service account to be created. | string | `""` | no |
| skip\_provisioners | Flag to skip all local-exec provisioners. Setting this to `true` breaks the `stub_domains` and `upstream_nameservers` functionality. | bool | `"false"` | no |
| stub\_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map(list(string)) | `<map>` | no |
| subnetwork | The subnetwork to host the cluster in (required) | string | n/a | yes |
| upstream\_nameservers | If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf | list | `<list>` | no |
1 change: 1 addition & 0 deletions autogen/cluster.tf
@@ -352,6 +352,7 @@ resource "google_container_node_pool" "pools" {
}

resource "null_resource" "wait_for_cluster" {
count = var.skip_provisioners ? 0 : 1

provisioner "local-exec" {
command = "${path.module}/scripts/wait-for-cluster.sh ${var.project_id} ${var.name}"
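The hunk above uses the standard Terraform pattern of toggling a resource with `count`: when `skip_provisioners` is true, the `null_resource` (and therefore its `local-exec` provisioner) is simply never created. Assembled from the visible lines, the guarded resource looks roughly like this (any parts of the resource outside the hunk are omitted):

```hcl
resource "null_resource" "wait_for_cluster" {
  # Create the waiter only when provisioners are not skipped.
  count = var.skip_provisioners ? 0 : 1

  provisioner "local-exec" {
    command = "${path.module}/scripts/wait-for-cluster.sh ${var.project_id} ${var.name}"
  }
}
```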
2 changes: 1 addition & 1 deletion autogen/dns.tf
@@ -20,7 +20,7 @@
Delete default kube-dns configmap
*****************************************/
resource "null_resource" "delete_default_kube_dns_configmap" {
count = local.custom_kube_dns_config || local.upstream_nameservers_config ? 1 : 0
count = (local.custom_kube_dns_config || local.upstream_nameservers_config) && ! var.skip_provisioners ? 1 : 0

provisioner "local-exec" {
command = "${path.module}/scripts/kubectl_wrapper.sh https://${local.cluster_endpoint} ${data.google_client_config.default.access_token} ${local.cluster_ca_certificate} ${path.module}/scripts/delete-default-resource.sh kube-system configmap kube-dns"
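Here the new flag is folded into the existing condition rather than replacing it: the default configmap is deleted only when custom kube-dns settings are requested and provisioners are not skipped. A sketch of the guarded resource assembled from the visible lines (parts of the resource outside the hunk are omitted):

```hcl
resource "null_resource" "delete_default_kube_dns_configmap" {
  # Run only for custom kube-dns configs, and never when provisioners are skipped.
  count = (local.custom_kube_dns_config || local.upstream_nameservers_config) && ! var.skip_provisioners ? 1 : 0

  provisioner "local-exec" {
    command = "${path.module}/scripts/kubectl_wrapper.sh https://${local.cluster_endpoint} ${data.google_client_config.default.access_token} ${local.cluster_ca_certificate} ${path.module}/scripts/delete-default-resource.sh kube-system configmap kube-dns"
  }
}
```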
5 changes: 5 additions & 0 deletions autogen/variables.tf
@@ -304,6 +304,11 @@ variable "cluster_resource_labels" {
default = {}
}

variable "skip_provisioners" {
type = bool
description = "Flag to skip all local-exec provisioners. It breaks down `stub_domains` and `upstream_nameservers` variables functionality."
default = false
}
{% if private_cluster %}

variable "deploy_using_private_endpoint" {
1 change: 1 addition & 0 deletions cluster.tf
@@ -227,6 +227,7 @@ resource "google_container_node_pool" "pools" {
}

resource "null_resource" "wait_for_cluster" {
count = var.skip_provisioners ? 0 : 1

provisioner "local-exec" {
command = "${path.module}/scripts/wait-for-cluster.sh ${var.project_id} ${var.name}"
2 changes: 1 addition & 1 deletion dns.tf
@@ -20,7 +20,7 @@
Delete default kube-dns configmap
*****************************************/
resource "null_resource" "delete_default_kube_dns_configmap" {
count = local.custom_kube_dns_config || local.upstream_nameservers_config ? 1 : 0
count = (local.custom_kube_dns_config || local.upstream_nameservers_config) && ! var.skip_provisioners ? 1 : 0

provisioner "local-exec" {
command = "${path.module}/scripts/kubectl_wrapper.sh https://${local.cluster_endpoint} ${data.google_client_config.default.access_token} ${local.cluster_ca_certificate} ${path.module}/scripts/delete-default-resource.sh kube-system configmap kube-dns"
1 change: 1 addition & 0 deletions examples/simple_regional/README.md
@@ -14,6 +14,7 @@ This example illustrates how to create a simple cluster.
| network | The VPC network to host the cluster in | string | n/a | yes |
| project\_id | The project ID to host the cluster in | string | n/a | yes |
| region | The region to host the cluster in | string | n/a | yes |
| skip\_provisioners | Flag to skip local-exec provisioners | bool | `"false"` | no |
| subnetwork | The subnetwork to host the cluster in | string | n/a | yes |

## Outputs
1 change: 1 addition & 0 deletions examples/simple_regional/main.tf
@@ -35,6 +35,7 @@ module "gke" {
ip_range_services = var.ip_range_services
create_service_account = false
service_account = var.compute_engine_service_account
skip_provisioners = var.skip_provisioners
}

data "google_client_config" "default" {
5 changes: 5 additions & 0 deletions examples/simple_regional/variables.tf
@@ -47,3 +47,8 @@ variable "compute_engine_service_account" {
description = "Service account to associate to the nodes in the cluster"
}

variable "skip_provisioners" {
type = bool
description = "Flag to skip local-exec provisioners"
default = false
}
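To run the `simple_regional` example with provisioners skipped (presumably what the new `simple_regional_skip_local_exec` test exercises), a tfvars file along these lines would work; the variable list is reconstructed from the visible diff and every value is a placeholder:

```hcl
# terraform.tfvars for examples/simple_regional — placeholder values only
project_id                     = "my-project"
region                         = "us-central1"
network                        = "my-vpc"
subnetwork                     = "my-subnet"
ip_range_pods                  = "pods-range"
ip_range_services              = "services-range"
compute_engine_service_account = "gke-nodes@my-project.iam.gserviceaccount.com"
skip_provisioners              = true
```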
17 changes: 9 additions & 8 deletions modules/beta-private-cluster/README.md
@@ -115,19 +115,19 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure

## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.

@@ -194,6 +194,7 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o
| resource\_usage\_export\_dataset\_id | The dataset id for which network egress metering for this cluster will be enabled. If enabled, a daemonset will be created in the cluster to meter network egress traffic. | string | `""` | no |
| sandbox\_enabled | (Beta) Enable GKE Sandbox (Do not forget to set `image_type` = `COS_CONTAINERD` and `node_version` = `1.12.7-gke.17` or later to use it). | bool | `"false"` | no |
| service\_account | The service account to run nodes as if not overridden in `node_pools`. The create_service_account variable default value (true) will cause a cluster-specific service account to be created. | string | `""` | no |
| skip\_provisioners | Flag to skip all local-exec provisioners. Setting this to `true` breaks the `stub_domains` and `upstream_nameservers` functionality. | bool | `"false"` | no |
| stub\_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map(list(string)) | `<map>` | no |
| subnetwork | The subnetwork to host the cluster in (required) | string | n/a | yes |
| upstream\_nameservers | If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf | list | `<list>` | no |
1 change: 1 addition & 0 deletions modules/beta-private-cluster/cluster.tf
@@ -328,6 +328,7 @@ resource "google_container_node_pool" "pools" {
}

resource "null_resource" "wait_for_cluster" {
count = var.skip_provisioners ? 0 : 1

provisioner "local-exec" {
command = "${path.module}/scripts/wait-for-cluster.sh ${var.project_id} ${var.name}"
2 changes: 1 addition & 1 deletion modules/beta-private-cluster/dns.tf
@@ -20,7 +20,7 @@
Delete default kube-dns configmap
*****************************************/
resource "null_resource" "delete_default_kube_dns_configmap" {
count = local.custom_kube_dns_config || local.upstream_nameservers_config ? 1 : 0
count = (local.custom_kube_dns_config || local.upstream_nameservers_config) && ! var.skip_provisioners ? 1 : 0

provisioner "local-exec" {
command = "${path.module}/scripts/kubectl_wrapper.sh https://${local.cluster_endpoint} ${data.google_client_config.default.access_token} ${local.cluster_ca_certificate} ${path.module}/scripts/delete-default-resource.sh kube-system configmap kube-dns"
5 changes: 5 additions & 0 deletions modules/beta-private-cluster/variables.tf
@@ -302,6 +302,11 @@ variable "cluster_resource_labels" {
default = {}
}

variable "skip_provisioners" {
type = bool
description = "Flag to skip all local-exec provisioners. It breaks down `stub_domains` and `upstream_nameservers` variables functionality."
default = false
}

variable "deploy_using_private_endpoint" {
type = bool
17 changes: 9 additions & 8 deletions modules/beta-public-cluster/README.md
@@ -110,19 +110,19 @@ Then perform the following commands on the root folder:
- `terraform apply` to apply the infrastructure build
- `terraform destroy` to destroy the built infrastructure

## Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the
[Upgrading to v3.0 guide][upgrading-to-v3.0] for details.

## Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the
[Upgrading to v2.0 guide][upgrading-to-v2.0] for details.

## Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the `disable-legacy-endpoints` metadata field to all node pools. This metadata is required by GKE and [determines whether the `/0.1/` and `/v1beta1/` paths are available in the nodes' metadata server](https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-metadata#disable-legacy-apis). If your applications do not require access to the node's metadata server, you can leave the default value of `true` provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to `false` to allow your applications access to the above metadata server paths.

In either case, upgrading to module version `v1.0.0` will trigger a recreation of all node pools in the cluster.

@@ -185,6 +185,7 @@ In either case, upgrading to module version `v1.0.0` will trigger a recreation o
| resource\_usage\_export\_dataset\_id | The dataset id for which network egress metering for this cluster will be enabled. If enabled, a daemonset will be created in the cluster to meter network egress traffic. | string | `""` | no |
| sandbox\_enabled | (Beta) Enable GKE Sandbox (Do not forget to set `image_type` = `COS_CONTAINERD` and `node_version` = `1.12.7-gke.17` or later to use it). | bool | `"false"` | no |
| service\_account | The service account to run nodes as if not overridden in `node_pools`. The create_service_account variable default value (true) will cause a cluster-specific service account to be created. | string | `""` | no |
| skip\_provisioners | Flag to skip all local-exec provisioners. Setting this to `true` breaks the `stub_domains` and `upstream_nameservers` functionality. | bool | `"false"` | no |
| stub\_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map(list(string)) | `<map>` | no |
| subnetwork | The subnetwork to host the cluster in (required) | string | n/a | yes |
| upstream\_nameservers | If specified, the values replace the nameservers taken by default from the node’s /etc/resolv.conf | list | `<list>` | no |
1 change: 1 addition & 0 deletions modules/beta-public-cluster/cluster.tf
@@ -323,6 +323,7 @@ resource "google_container_node_pool" "pools" {
}

resource "null_resource" "wait_for_cluster" {
count = var.skip_provisioners ? 0 : 1

provisioner "local-exec" {
command = "${path.module}/scripts/wait-for-cluster.sh ${var.project_id} ${var.name}"
2 changes: 1 addition & 1 deletion modules/beta-public-cluster/dns.tf
@@ -20,7 +20,7 @@
Delete default kube-dns configmap
*****************************************/
resource "null_resource" "delete_default_kube_dns_configmap" {
count = local.custom_kube_dns_config || local.upstream_nameservers_config ? 1 : 0
count = (local.custom_kube_dns_config || local.upstream_nameservers_config) && ! var.skip_provisioners ? 1 : 0

provisioner "local-exec" {
command = "${path.module}/scripts/kubectl_wrapper.sh https://${local.cluster_endpoint} ${data.google_client_config.default.access_token} ${local.cluster_ca_certificate} ${path.module}/scripts/delete-default-resource.sh kube-system configmap kube-dns"
5 changes: 5 additions & 0 deletions modules/beta-public-cluster/variables.tf
@@ -302,6 +302,11 @@ variable "cluster_resource_labels" {
default = {}
}

variable "skip_provisioners" {
type = bool
description = "Flag to skip all local-exec provisioners. It breaks down `stub_domains` and `upstream_nameservers` variables functionality."
default = false
}

variable "istio" {
description = "(Beta) Enable Istio addon"