diff --git a/.changelog/5103.txt b/.changelog/5103.txt
new file mode 100644
index 0000000000..28a0913c09
--- /dev/null
+++ b/.changelog/5103.txt
@@ -0,0 +1,3 @@
+```release-note:none
+Several updates on documentation and examples
+```
diff --git a/website/docs/d/compute_health_check.html.markdown b/website/docs/d/compute_health_check.html.markdown
index 243ed1c422..077664e5e8 100644
--- a/website/docs/d/compute_health_check.html.markdown
+++ b/website/docs/d/compute_health_check.html.markdown
@@ -14,7 +14,7 @@ Get information about a HealthCheck.
 ## Example Usage
 
 ```tf
-data "google_compute_health_check" "health_chceck" {
+data "google_compute_health_check" "health_check" {
   name = "my-hc"
 }
 ```
diff --git a/website/docs/d/compute_instance_template.html.markdown b/website/docs/d/compute_instance_template.html.markdown
index f98d608272..7ea8183904 100644
--- a/website/docs/d/compute_instance_template.html.markdown
+++ b/website/docs/d/compute_instance_template.html.markdown
@@ -44,7 +44,7 @@ The following arguments are supported:
 ---
 
 * `project` - (Optional) The ID of the project in which the resource belongs.
-    If `project` is not provideded, the provider project is used.
+    If `project` is not provided, the provider project is used.
 
 ## Attributes Reference
diff --git a/website/docs/d/runtimeconfig_variable.html.markdown b/website/docs/d/runtimeconfig_variable.html.markdown
index bfd04a0045..300842cceb 100644
--- a/website/docs/d/runtimeconfig_variable.html.markdown
+++ b/website/docs/d/runtimeconfig_variable.html.markdown
@@ -4,7 +4,7 @@ layout: "google"
 page_title: "Google: google_runtimeconfig_variable"
 sidebar_current: "docs-google-datasource-runtimeconfig-variable"
 description: |-
-  Get information about a Google Cloud RuntimeConfig varialbe.
+  Get information about a Google Cloud RuntimeConfig variable.
 ---
 
 # google\_runtimeconfig\_variable
diff --git a/website/docs/guides/iam_deleted_members.html.markdown b/website/docs/guides/iam_deleted_members.html.markdown
index 19dc249083..2e172d4a3c 100644
--- a/website/docs/guides/iam_deleted_members.html.markdown
+++ b/website/docs/guides/iam_deleted_members.html.markdown
@@ -28,7 +28,7 @@ After this intermediate phase, attempting to grant permissions to a principal th
 
 ## Using `*_iam_policy` resources
 
-`_iam_policy` allows you to declare the entire IAM policy from within Terraform. Users may see diffs on `deleted:` members in some cirtumstances, but applying the policy should succeed and resolve any issues. Specifying `deleted:` members is not allowed in Terraform, so any policy entirely managed by Terraform should automatically remove any deleted members when Terraform is run.
+`_iam_policy` allows you to declare the entire IAM policy from within Terraform. Users may see diffs on `deleted:` members in some circumstances, but applying the policy should succeed and resolve any issues. Specifying `deleted:` members is not allowed in Terraform, so any policy entirely managed by Terraform should automatically remove any deleted members when Terraform is run.
 
 During the intermediate period it may be required to `taint` the `_iam_policy` resource to ensure any deleted principals are removed *before* the new principal is granted permission. This should only be necessary if you are continuing to see diffs after successful applies. For more information on using `taint` see the [official documentation](https://www.terraform.io/docs/commands/taint.html).
 
diff --git a/website/docs/guides/version_2_upgrade.html.markdown b/website/docs/guides/version_2_upgrade.html.markdown
index 2dc82e9ce7..0adc73dc88 100644
--- a/website/docs/guides/version_2_upgrade.html.markdown
+++ b/website/docs/guides/version_2_upgrade.html.markdown
@@ -78,8 +78,8 @@ to either `terraform import` them or delete them by hand.
 - [Data Sources](#data-sources)
 - [Resource: `google_bigquery_dataset`](#resource-google_bigquery_dataset)
 - [Resource: `google_bigtable_instance`](#resource-google_bigtable_instance)
-- [Resource: `google_binary_authorizaton_attestor`](#resource-google_binary_authorization_attestor)
-- [Resource: `google_binary_authorizaton_policy`](#resource-google_binary_authorization_policy)
+- [Resource: `google_binary_authorization_attestor`](#resource-google_binary_authorization_attestor)
+- [Resource: `google_binary_authorization_policy`](#resource-google_binary_authorization_policy)
 - [Resource: `google_cloudbuild_trigger`](#resource-google_cloudbuild_trigger)
 - [Resource: `google_cloudfunctions_function`](#resource-google_cloudfunctions_function)
 - [Resource: `google_compute_backend_service`](#resource-google_compute_backend_service)
diff --git a/website/docs/guides/version_3_upgrade.html.markdown b/website/docs/guides/version_3_upgrade.html.markdown
index 97d01999d1..7be1802a0f 100644
--- a/website/docs/guides/version_3_upgrade.html.markdown
+++ b/website/docs/guides/version_3_upgrade.html.markdown
@@ -549,7 +549,7 @@ Previously documentation suggested Terraform could use the same range of valid
 IP Address formats for `ip_address` as accepted by the API (e.g. named
 addresses or URLs to GCP Address resources). However, the server returns only
 literal IP addresses and thus caused diffs on re-apply (i.e. a permadiff). We amended
-documenation to say Terraform only accepts literal IP addresses.
+documentation to say Terraform only accepts literal IP addresses.
 
 This is now strictly validated. While this shouldn't have a large breaking
 impact as users would have already run into permadiff issues on re-apply,
diff --git a/website/docs/r/compute_instance.html.markdown b/website/docs/r/compute_instance.html.markdown
index 56a99b9af7..fc5b4ed6f1 100644
--- a/website/docs/r/compute_instance.html.markdown
+++ b/website/docs/r/compute_instance.html.markdown
@@ -137,7 +137,7 @@ The following arguments are supported:
   startup-script metadata key on the created instance and thus the two
   mechanisms are not allowed to be used simultaneously. Users are free to use
   either mechanism - the only distinction is that this separate attribute
-  willl cause a recreate on modification. On import, `metadata_startup_script`
+  will cause a recreate on modification. On import, `metadata_startup_script`
   will be set, but `metadata.startup-script` will not - if you choose to use the
   other mechanism, you will see a diff immediately after import, which will cause
   a destroy/recreate operation. You may want to modify your state file manually
diff --git a/website/docs/r/container_node_pool.html.markdown b/website/docs/r/container_node_pool.html.markdown
index 03ac7fd491..03e79535c9 100644
--- a/website/docs/r/container_node_pool.html.markdown
+++ b/website/docs/r/container_node_pool.html.markdown
@@ -120,7 +120,7 @@ resource "google_container_cluster" "primary" {
   may change this value in your existing cluster, which will trigger destruction and recreation
   on the next Terraform run (to rectify the discrepancy). If you don't need this value, don't set it.
   If you do need it, you can [use a lifecycle block to
-  ignore subsqeuent changes to this field](https://github.com/hashicorp/terraform-provider-google/issues/6901#issuecomment-667369691).
+  ignore subsequent changes to this field](https://github.com/hashicorp/terraform-provider-google/issues/6901#issuecomment-667369691).
 
 * `management` - (Optional) Node management configuration, wherein auto-repair and
   auto-upgrade is configured. Structure is documented below.
diff --git a/website/docs/r/dataproc_cluster.html.markdown b/website/docs/r/dataproc_cluster.html.markdown
index aa46fb713a..1d464960a6 100644
--- a/website/docs/r/dataproc_cluster.html.markdown
+++ b/website/docs/r/dataproc_cluster.html.markdown
@@ -139,7 +139,7 @@ resource "google_dataproc_cluster" "accelerated_cluster" {
 * `cluster_config` - (Optional) Allows you to configure various aspects of the cluster.
   Structure defined below.
 
-* `graceful_decommission_timout` - (Optional) Allows graceful decomissioning when you change the number of worker nodes directly through a terraform apply.
+* `graceful_decommission_timeout` - (Optional) Allows graceful decomissioning when you change the number of worker nodes directly through a terraform apply.
   Does not affect auto scaling decomissioning from an autoscaling policy.
   Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress.
   Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs).
@@ -190,7 +190,7 @@ The `cluster_config` block supports:
 * `preemptible_worker_config` (Optional) The Google Compute Engine config settings for the additional instances in a cluster.
   Structure defined below.
   * **NOTE** : `preemptible_worker_config` is
-  an alias for the api's [secondaryWorkerConfig](https://cloud.google.com/dataproc/docs/reference/rest/v1/ClusterConfig#InstanceGroupConfig). The name doesn't neccasarily mean it is preemptible and is named as
+  an alias for the api's [secondaryWorkerConfig](https://cloud.google.com/dataproc/docs/reference/rest/v1/ClusterConfig#InstanceGroupConfig). The name doesn't necessarily mean it is preemptible and is named as
   such for legacy/compatibility reasons.
 
 * `software_config` (Optional) The config settings for software inside the cluster.
diff --git a/website/docs/r/gke_hub_feature_membership.html.markdown b/website/docs/r/gke_hub_feature_membership.html.markdown
index 8fbc495989..7edc9e274e 100644
--- a/website/docs/r/gke_hub_feature_membership.html.markdown
+++ b/website/docs/r/gke_hub_feature_membership.html.markdown
@@ -106,7 +106,7 @@ The `configmanagement` block supports:
 
 * `binauthz` -
   (Optional)
-  Binauthz conifguration for the cluster.
+  Binauthz configuration for the cluster.
 
 * `config_sync` -
   (Optional)
diff --git a/website/docs/r/google_project.html.markdown b/website/docs/r/google_project.html.markdown
index 43e5137219..95c7664033 100644
--- a/website/docs/r/google_project.html.markdown
+++ b/website/docs/r/google_project.html.markdown
@@ -77,7 +77,7 @@ The following arguments are supported:
 
 * `billing_account` - (Optional) The alphanumeric ID of the billing account this project
   belongs to. The user or service account performing this operation with Terraform
-  must have at mininum Billing Account User privileges (`roles/billing.user`) on the billing account.
+  must have at minimum Billing Account User privileges (`roles/billing.user`) on the billing account.
   See [Google Cloud Billing API Access Control](https://cloud.google.com/billing/docs/how-to/billing-access)
   for more details.
 
diff --git a/website/docs/r/logging_billing_account_sink.html.markdown b/website/docs/r/logging_billing_account_sink.html.markdown
index 74d16d0d20..ab4f7bee20 100644
--- a/website/docs/r/logging_billing_account_sink.html.markdown
+++ b/website/docs/r/logging_billing_account_sink.html.markdown
@@ -23,7 +23,7 @@ typical IAM roles granted on a project.
 ```hcl
 resource "google_logging_billing_account_sink" "my-sink" {
   name            = "my-sink"
-  description     = "some explaination on what this is"
+  description     = "some explanation on what this is"
   billing_account = "ABCDEF-012345-GHIJKL"
 
   # Can export to pubsub, cloud storage, or bigquery
diff --git a/website/docs/r/logging_folder_sink.html.markdown b/website/docs/r/logging_folder_sink.html.markdown
index 1c90f368b6..625b2a205d 100644
--- a/website/docs/r/logging_folder_sink.html.markdown
+++ b/website/docs/r/logging_folder_sink.html.markdown
@@ -20,7 +20,7 @@ Manages a folder-level logging sink. For more information see:
 ```hcl
 resource "google_logging_folder_sink" "my-sink" {
   name        = "my-sink"
-  description = "some explaination on what this is"
+  description = "some explanation on what this is"
   folder      = google_folder.my-folder.name
 
   # Can export to pubsub, cloud storage, or bigquery
diff --git a/website/docs/r/logging_organization_sink.html.markdown b/website/docs/r/logging_organization_sink.html.markdown
index a194f039ed..2726019fa0 100644
--- a/website/docs/r/logging_organization_sink.html.markdown
+++ b/website/docs/r/logging_organization_sink.html.markdown
@@ -19,7 +19,7 @@ Manages a organization-level logging sink. For more information see:
 ```hcl
 resource "google_logging_organization_sink" "my-sink" {
   name        = "my-sink"
-  description = "some explaination on what this is"
+  description = "some explanation on what this is"
   org_id      = "123456789"
 
   # Can export to pubsub, cloud storage, or bigquery
diff --git a/website/docs/r/logging_project_sink.html.markdown b/website/docs/r/logging_project_sink.html.markdown
index 1297edccc6..292629b6d3 100644
--- a/website/docs/r/logging_project_sink.html.markdown
+++ b/website/docs/r/logging_project_sink.html.markdown
@@ -72,7 +72,7 @@ resource "google_storage_bucket" "log-bucket" {
 
 # Our sink; this logs all activity related to our "my-logged-instance" instance
 resource "google_logging_project_sink" "instance-sink" {
   name        = "my-instance-sink"
-  description = "some explaination on what this is"
+  description = "some explanation on what this is"
   destination = "storage.googleapis.com/${google_storage_bucket.log-bucket.name}"
   filter      = "resource.type = gce_instance AND resource.labels.instance_id = \"${google_compute_instance.my-logged-instance.instance_id}\""
diff --git a/website/docs/r/storage_transfer_job.html.markdown b/website/docs/r/storage_transfer_job.html.markdown
index ab9050604d..575f018d62 100644
--- a/website/docs/r/storage_transfer_job.html.markdown
+++ b/website/docs/r/storage_transfer_job.html.markdown
@@ -137,7 +137,7 @@ The `object_conditions` block supports:
 
 * `min_time_elapsed_since_last_modification` - (Optional) A duration in seconds with up to nine fractional digits, terminated by 's'. Example: "3.5s".
 
-* `include_prefixes` - (Optional) If `include_refixes` is specified, objects that satisfy the object conditions must have names that start with one of the `include_prefixes` and that do not start with any of the `exclude_prefixes`. If `include_prefixes` is not specified, all objects except those that have names starting with one of the `exclude_prefixes` must satisfy the object conditions. See [Requirements](https://cloud.google.com/storage-transfer/docs/reference/rest/v1/TransferSpec#ObjectConditions).
+* `include_prefixes` - (Optional) If `include_prefixes` is specified, objects that satisfy the object conditions must have names that start with one of the `include_prefixes` and that do not start with any of the `exclude_prefixes`. If `include_prefixes` is not specified, all objects except those that have names starting with one of the `exclude_prefixes` must satisfy the object conditions. See [Requirements](https://cloud.google.com/storage-transfer/docs/reference/rest/v1/TransferSpec#ObjectConditions).
 
 * `exclude_prefixes` - (Optional) `exclude_prefixes` must follow the requirements described for `include_prefixes`. See [Requirements](https://cloud.google.com/storage-transfer/docs/reference/rest/v1/TransferSpec#ObjectConditions).
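
The `container_node_pool` hunk above links to a lifecycle workaround without showing it. A minimal sketch of that pattern, assuming the field in question is `initial_node_count` and using placeholder pool and cluster names (the linked issue comment remains the authoritative reference):

```hcl
resource "google_container_node_pool" "example" {
  name               = "example-pool"
  cluster            = google_container_cluster.primary.name
  initial_node_count = 1

  lifecycle {
    # Ignore later drift on this field (for example after a manual or
    # autoscaler-driven resize) so Terraform does not plan a destroy and
    # recreate of the pool on the next run.
    ignore_changes = [initial_node_count]
  }
}
```

With `ignore_changes`, Terraform still uses the configured value at creation time but stops reconciling subsequent changes to that one attribute.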
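
Similarly, a hedged sketch of the corrected `graceful_decommission_timeout` argument on `google_dataproc_cluster`; the cluster name, region, worker count, and the `"120s"` value are illustrative assumptions, not part of the change above:

```hcl
resource "google_dataproc_cluster" "example" {
  name   = "example-cluster"
  region = "us-central1"

  # Give in-progress jobs up to two minutes to finish before workers removed
  # by a `terraform apply` are forcefully decommissioned.
  graceful_decommission_timeout = "120s"

  cluster_config {
    worker_config {
      num_instances = 2
    }
  }
}
```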