From 360adec02fde0b2ecbc687f063d442cf8af32211 Mon Sep 17 00:00:00 2001 From: yikeke Date: Fri, 24 May 2019 17:25:03 +0800 Subject: [PATCH 01/25] tidb-operator: fix documentation usability issues in GCP document --- deploy/gcp/README.md | 82 +++++++++++++++++++++++++++++--------------- 1 file changed, 54 insertions(+), 28 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index db9575b59e..5bd9da5d40 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -1,28 +1,26 @@ # Deploy TiDB Operator and TiDB cluster on GCP GKE -## Requirements: -* [gcloud](https://cloud.google.com/sdk/install) +This document describes how to deploy TiDB Operator and a TiDB cluster on GCP GKE with your laptop (Linux or macOS) for development or testing. + +## Prerequisites + +First of all, make sure the following items are installed: + +* [Google Cloud SDK](https://cloud.google.com/sdk/install) * [terraform](https://www.terraform.io/downloads.html) * [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl) >= 1.11 * [helm](https://github.com/helm/helm/blob/master/docs/install.md#installing-the-helm-client) >= 2.9.0 * [jq](https://stedolan.github.io/jq/download/) -## Configure gcloud - -https://cloud.google.com/sdk/docs/initializing - -## Setup +## Configure -The default setup will create a new VPC, two subnetworks, and an f1-micro instance as a bastion machine. The GKE cluster is created with the following instance types as worker nodes: +Before deploying, you need to configure the following items to guarantee a smooth deployment. -* 3 n1-standard-4 instances for PD -* 3 n1-highmem-8 instances for TiKV -* 3 n1-standard-16 instances for TiDB -* 3 n1-standard-2 instances for monitor +### Configure Cloud SDK -> *NOTE*: The number of nodes created depends on how many availability zones there are in the chosen region. Most have 3 zones, but us-central1 has 4. See https://cloud.google.com/compute/docs/regions-zones/ for more information. Please refer to the `Customize` section for information on how to customize node pools in a regional cluster. +After you have installed Google Cloud SDK, you need to [perform initial setup tasks](https://cloud.google.com/sdk/docs/initializing). -> *NOTE*: The default setup, as listed above, will exceed the default CPU quota of a GCP project. To increase your project's quota, please follow the instructions [here](https://cloud.google.com/compute/quotas). The default setup will require at least 91 CPUs, more if you need to scale out. +### Configure Terraform The terraform script expects three environment variables. You can let Terraform prompt you for them, or `export` them ahead of time. If you choose to export them, they are: @@ -30,10 +28,10 @@ The terraform script expects three environment variables. You can let Terraform * `TF_VAR_GCP_REGION`: The region to create the resources in, for example: `us-west1` * `TF_VAR_GCP_PROJECT`: The name of the GCP project - - The service account should have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. +### Configure APIs + If the GCP project is new, make sure the relevant APIs are enabled: ```bash @@ -44,7 +42,20 @@ gcloud services enable compute.googleapis.com && \ gcloud services enable container.googleapis.com ``` -Now we can launch the script: +## Deploy + +The default setup will create a new VPC, two subnetworks, and an f1-micro instance as a bastion machine. 
The GKE cluster is created with the following instance types as worker nodes: + +* 3 n1-standard-4 instances for PD +* 3 n1-highmem-8 instances for TiKV +* 3 n1-standard-16 instances for TiDB +* 3 n1-standard-2 instances for monitor + +> *NOTE*: The number of nodes created depends on how many availability zones there are in the chosen region. Most have 3 zones, but us-central1 has 4. See [Regions and Zones](https://cloud.google.com/compute/docs/regions-zones/) for more information and see the [Customize](#customize) section on how to customize node pools in a regional cluster. + +The default setup, as listed above, will exceed the default CPU quota of a GCP project. To increase your project's quota, please follow the instructions [here](https://cloud.google.com/compute/quotas). The default setup will require at least 91 CPUs, more if you need to scale out. + +Now that you have configured everything needed, you can launch the script to deploy the TiDB cluster: ```bash git clone --depth=1 https://github.com/pingcap/tidb-operator @@ -53,13 +64,17 @@ terraform init terraform apply ``` +## Access the database + After `terraform apply` is successful, the TiDB cluster can be accessed by SSHing into the bastion machine and connecting via MySQL: + ```bash gcloud compute ssh bastion --zone mysql -h -P 4000 -u root ``` -It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, it can be changed in `variables.tf` +It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, it can be changed in `variables.tf`: + ```bash # By specifying --kubeconfig argument kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb @@ -71,16 +86,13 @@ kubectl get po -n tidb helm ls ``` -When done, the infrastructure can be torn down by running `terraform destroy` - - -## Upgrade TiDB cluster +## Upgrade To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in variables.tf and run `terraform apply`. > *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `watch kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb` -## Scale TiDB cluster +## Scale To scale TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count, and then run `terraform apply`. @@ -104,23 +116,37 @@ The cluster is created as a regional, as opposed to a zonal cluster. This means > *NOTE*: GKE node pools are managed instance groups, so a node deleted by `gcloud compute instances delete` will be automatically recreated and added back to the cluster. 
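If the goal is simply to run fewer monitor nodes, rather than to remove one specific instance, resizing the node pool is another option. The following is only a sketch: the cluster, pool, and region names match the examples used elsewhere in this document, so substitute your own if they differ:

```bash
# Sketch only: shrink the monitor pool to one node per zone of the regional cluster.
# "my-cluster", "monitor-pool", and "us-west1" are placeholders taken from the examples in this document.
gcloud container clusters resize my-cluster --node-pool monitor-pool --num-nodes 1 --region us-west1
```

Note that GKE decides which instances are dropped when resizing, so when one particular node has to go, use the instance-group commands below instead.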
-Suppose we wish to delete a node from the monitor pool, we can do +Suppose we wish to delete a node from the monitor pool, we can do: + ```bash $ gcloud compute instance-groups managed list | grep monitor ``` -and the result will be something like this +And the result will be something like this: + ```bash gke-my-cluster-monitor-pool-08578e18-grp us-west1-b zone gke-my-cluster-monitor-pool-08578e18 0 0 gke-my-cluster-monitor-pool-08578e18 no gke-my-cluster-monitor-pool-7e31100f-grp us-west1-c zone gke-my-cluster-monitor-pool-7e31100f 1 1 gke-my-cluster-monitor-pool-7e31100f no gke-my-cluster-monitor-pool-78a961e5-grp us-west1-a zone gke-my-cluster-monitor-pool-78a961e5 1 1 gke-my-cluster-monitor-pool-78a961e5 no ``` -The first column is the name of the managed instance group, and the second column is the zone it was created in. We will also need the name of the instance in that group, we can get it as follows + +The first column is the name of the managed instance group, and the second column is the zone it was created in. We will also need the name of the instance in that group, we can get it as follows: + ```bash $ gcloud compute instance-groups managed list-instances gke-my-cluster-monitor-pool-08578e18-grp --zone us-west1-b NAME ZONE STATUS ACTION INSTANCE_TEMPLATE VERSION_NAME LAST_ERROR gke-my-cluster-monitor-pool-08578e18-c7vd us-west1-b RUNNING NONE gke-my-cluster-monitor-pool-08578e18 ``` -Now we can delete the instance + +Now we can delete the instance: + ```bash $ gcloud compute instance-groups managed delete-instances gke-my-cluster-monitor-pool-08578e18-grp --instances=gke-my-cluster-monitor-pool-08578e18-c7vd --zone us-west1-b -``` \ No newline at end of file +``` + +## Destroy + +When you are done, the infrastructure can be torn down by running: + +``` shell +$ terraform destroy +``` From 713afe8488204e355c185e04e7dc120c65b933bf Mon Sep 17 00:00:00 2001 From: yikeke Date: Fri, 24 May 2019 18:01:29 +0800 Subject: [PATCH 02/25] update wording --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 5bd9da5d40..f1c8f47acc 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -110,7 +110,7 @@ GCP allows attaching a local SSD to any instance type that is `n1-standard-1` or Currently, there are not too many parameters exposed to be customized. However, you can modify `templates/tidb-cluster-values.yaml.tpl` before deploying. If you modify it after the cluster is created and then run `terraform apply`, it will not take effect unless the pod(s) is manually deleted. -### Customizing node pools +### Customize node pools The cluster is created as a regional, as opposed to a zonal cluster. This means that GKE will replicate node pools to each availability zone. This is desired to maintain high availability, however for the monitoring services, like Grafana, this is potentially unnecessary. It is possible to manually remove nodes if desired via `gcloud`. 
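One point from the hunk above that is easy to miss: changes to `templates/tidb-cluster-values.yaml.tpl` made after the cluster already exists only take effect once the affected pod is deleted by hand. A minimal sketch of that step, assuming the default `my-cluster` name and a purely illustrative pod name, might be:

```bash
# Assumes the kubeconfig written by Terraform and the default cluster_name "my-cluster".
export KUBECONFIG=$PWD/credentials/kubeconfig_my-cluster

# Re-render the values, then delete a pod so its controller recreates it with the new configuration.
# "tidb-cluster-pd-0" is only an example name; pick the pod whose values you actually changed.
terraform apply
kubectl delete pod tidb-cluster-pd-0 -n tidb
kubectl get po -n tidb --watch
```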
From bbd543c2cd65f81dd3eea1c501942d7603124d76 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 11:31:00 +0800 Subject: [PATCH 03/25] address hailong's comments --- deploy/gcp/README.md | 13 +++++++++++-- 1 file changed, 11 insertions(+), 2 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index f1c8f47acc..6a9301c05a 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -24,10 +24,17 @@ After you have installed Google Cloud SDK, you need to [perform initial setup ta The terraform script expects three environment variables. You can let Terraform prompt you for them, or `export` them ahead of time. If you choose to export them, they are: -* `TF_VAR_GCP_CREDENTIALS_PATH`: Path to a valid GCP credentials file. It is generally considered a good idea to create a service account to be used by Terraform. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-accounts) for more information on how to manage them. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) for creating and managing service account keys which, when downloaded, will be the needed credentials file. +* `TF_VAR_GCP_CREDENTIALS_PATH`: Path to a valid GCP credentials file. It is generally considered a good idea to create a service account to be used by Terraform. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-accounts) to create a service account and grant `Project Editor` role to it. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) to create service account keys, choose `JSON` key type during creation, the auto-downloaded json key file will be the needed credentials file. * `TF_VAR_GCP_REGION`: The region to create the resources in, for example: `us-west1` * `TF_VAR_GCP_PROJECT`: The name of the GCP project +Here is an example in ~/.bash_profile: +```bash +export TF_VAR_GCP_CREDENTIALS_PATH="/Path/to/key" +export TF_VAR_GCP_REGION="us-west1" +export TF_VAR_GCP_PROJECT="my-project" +``` + The service account should have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. ### Configure APIs @@ -73,6 +80,8 @@ gcloud compute ssh bastion --zone mysql -h -P 4000 -u root ``` +## Interact with the cluster + It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, it can be changed in `variables.tf`: ```bash @@ -147,6 +156,6 @@ $ gcloud compute instance-groups managed delete-instances gke-my-cluster-monitor When you are done, the infrastructure can be torn down by running: -``` shell +```bash $ terraform destroy ``` From ad5e2d62c82cf2cee97300b369094bc0dc287641 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 12:36:36 +0800 Subject: [PATCH 04/25] update configure terraform --- deploy/gcp/README.md | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 6a9301c05a..836a96fec7 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -22,20 +22,26 @@ After you have installed Google Cloud SDK, you need to [perform initial setup ta ### Configure Terraform -The terraform script expects three environment variables. You can let Terraform prompt you for them, or `export` them ahead of time. If you choose to export them, they are: +The terraform script expects three environment variables. 
You can let Terraform prompt you for them, or `export` them in the `~/.bash_profile` file ahead of time. If you choose to export them, they are: -* `TF_VAR_GCP_CREDENTIALS_PATH`: Path to a valid GCP credentials file. It is generally considered a good idea to create a service account to be used by Terraform. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-accounts) to create a service account and grant `Project Editor` role to it. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) to create service account keys, choose `JSON` key type during creation, the auto-downloaded json key file will be the needed credentials file. -* `TF_VAR_GCP_REGION`: The region to create the resources in, for example: `us-west1` -* `TF_VAR_GCP_PROJECT`: The name of the GCP project +* `TF_VAR_GCP_CREDENTIALS_PATH`: Path to a valid GCP credentials file. + - It is recommended to create a new service account to be used by Terraform. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-accounts) to create a service account and grant `Project Editor` role to it. + - See [this page](https://cloud.google.com/iam/docs/creating-managing-service-account-keys) to create service account keys, and choose `JSON` key type during creation. The downloaded `JSON` file that contains the private key is the credentials file you need. +* `TF_VAR_GCP_REGION`: The region to create the resources in, for example: `us-west1`. +* `TF_VAR_GCP_PROJECT`: The name of the GCP project. -Here is an example in ~/.bash_profile: +> *Note*: The service account must have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. + +To set the three environment variables, you can first run `vi ~/.bash_profile` and insert the following `export` statements in it. Here is an example in `~/.bash_profile`: + ```bash -export TF_VAR_GCP_CREDENTIALS_PATH="/Path/to/key" +# Replace the values with the path to the JSON file you download, the GCP region and your GCP project name. +export TF_VAR_GCP_CREDENTIALS_PATH="/Path/to/my-project.json" export TF_VAR_GCP_REGION="us-west1" export TF_VAR_GCP_PROJECT="my-project" ``` -The service account should have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. + ### Configure APIs From 7fc8b8230cf7440d0099b275e61d2e58c64adf61 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 12:38:50 +0800 Subject: [PATCH 05/25] Update README.md --- deploy/gcp/README.md | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 836a96fec7..be67c37350 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -35,14 +35,12 @@ The terraform script expects three environment variables. You can let Terraform To set the three environment variables, you can first run `vi ~/.bash_profile` and insert the following `export` statements in it. Here is an example in `~/.bash_profile`: ```bash -# Replace the values with the path to the JSON file you download, the GCP region and your GCP project name. +# Replace the values with the path to the JSON file you have downloaded, the GCP region and your GCP project name. 
export TF_VAR_GCP_CREDENTIALS_PATH="/Path/to/my-project.json" export TF_VAR_GCP_REGION="us-west1" export TF_VAR_GCP_PROJECT="my-project" ``` - - ### Configure APIs If the GCP project is new, make sure the relevant APIs are enabled: From 69f675f6d19f5117a64e3cf13fe4a0dd4a6a91d6 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 12:45:36 +0800 Subject: [PATCH 06/25] add more descriptions for configure terraform --- deploy/gcp/README.md | 26 ++++++++++++++------------ 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index be67c37350..bd059d78c0 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -20,6 +20,18 @@ Before deploying, you need to configure the following items to guarantee a smoot After you have installed Google Cloud SDK, you need to [perform initial setup tasks](https://cloud.google.com/sdk/docs/initializing). +### Configure APIs + +If the GCP project is new, make sure the relevant APIs are enabled: + +```bash +gcloud services enable cloudresourcemanager.googleapis.com && \ +gcloud services enable cloudbilling.googleapis.com && \ +gcloud services enable iam.googleapis.com && \ +gcloud services enable compute.googleapis.com && \ +gcloud services enable container.googleapis.com +``` + ### Configure Terraform The terraform script expects three environment variables. You can let Terraform prompt you for them, or `export` them in the `~/.bash_profile` file ahead of time. If you choose to export them, they are: @@ -41,18 +53,6 @@ export TF_VAR_GCP_REGION="us-west1" export TF_VAR_GCP_PROJECT="my-project" ``` -### Configure APIs - -If the GCP project is new, make sure the relevant APIs are enabled: - -```bash -gcloud services enable cloudresourcemanager.googleapis.com && \ -gcloud services enable cloudbilling.googleapis.com && \ -gcloud services enable iam.googleapis.com && \ -gcloud services enable compute.googleapis.com && \ -gcloud services enable container.googleapis.com -``` - ## Deploy The default setup will create a new VPC, two subnetworks, and an f1-micro instance as a bastion machine. The GKE cluster is created with the following instance types as worker nodes: @@ -75,6 +75,8 @@ terraform init terraform apply ``` +When you run `terraform apply`, you may be asked to set three environment variables for the terraform script to run if you don't export them in advance. See [Configure Terraform](#configure-terraform) for details. + ## Access the database After `terraform apply` is successful, the TiDB cluster can be accessed by SSHing into the bastion machine and connecting via MySQL: From 1a9f650a884f2f517d8e24ff018d4b97f6c0564b Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 12:48:45 +0800 Subject: [PATCH 07/25] update wording --- deploy/gcp/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index bd059d78c0..8e55e4466a 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -44,7 +44,7 @@ The terraform script expects three environment variables. You can let Terraform > *Note*: The service account must have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. -To set the three environment variables, you can first run `vi ~/.bash_profile` and insert the following `export` statements in it. Here is an example in `~/.bash_profile`: +To set the three environment variables, you can first run `vi ~/.bash_profile` and insert the `export` statements in it. 
Here is an example in `~/.bash_profile`: ```bash # Replace the values with the path to the JSON file you have downloaded, the GCP region and your GCP project name. @@ -75,7 +75,7 @@ terraform init terraform apply ``` -When you run `terraform apply`, you may be asked to set three environment variables for the terraform script to run if you don't export them in advance. See [Configure Terraform](#configure-terraform) for details. +When you run `terraform apply`, you may be asked to set three environment variables for the script to run if you don't export them in advance. See [Configure Terraform](#configure-terraform) for details. ## Access the database From 83456183dc6d0b2135f20c8ba50fb8279e05e850 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 12:53:53 +0800 Subject: [PATCH 08/25] update wording --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 8e55e4466a..0effe0fa15 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -34,7 +34,7 @@ gcloud services enable container.googleapis.com ### Configure Terraform -The terraform script expects three environment variables. You can let Terraform prompt you for them, or `export` them in the `~/.bash_profile` file ahead of time. If you choose to export them, they are: +The terraform script expects three environment variables. You can let Terraform prompt you for them, or `export` them in the `~/.bash_profile` file ahead of time. The required environment variables are: * `TF_VAR_GCP_CREDENTIALS_PATH`: Path to a valid GCP credentials file. - It is recommended to create a new service account to be used by Terraform. See [this page](https://cloud.google.com/iam/docs/creating-managing-service-accounts) to create a service account and grant `Project Editor` role to it. From 0ca94989ccd9a5be3cbef1ffb79154e64e13eef9 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 16:49:46 +0800 Subject: [PATCH 09/25] add descriptions and examples --- deploy/gcp/README.md | 63 +++++++++++++++++++++++++++++++++++++------- 1 file changed, 53 insertions(+), 10 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 0effe0fa15..0fe20eaaaa 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -4,7 +4,7 @@ This document describes how to deploy TiDB Operator and a TiDB cluster on GCP GK ## Prerequisites -First of all, make sure the following items are installed: +First of all, make sure the following items are installed on your machine: * [Google Cloud SDK](https://cloud.google.com/sdk/install) * [terraform](https://www.terraform.io/downloads.html) @@ -18,7 +18,7 @@ Before deploying, you need to configure the following items to guarantee a smoot ### Configure Cloud SDK -After you have installed Google Cloud SDK, you need to [perform initial setup tasks](https://cloud.google.com/sdk/docs/initializing). +After you have installed Google Cloud SDK, you need to run `gcloud init` to [perform initial setup tasks](https://cloud.google.com/sdk/docs/initializing). ### Configure APIs @@ -64,7 +64,7 @@ The default setup will create a new VPC, two subnetworks, and an f1-micro instan > *NOTE*: The number of nodes created depends on how many availability zones there are in the chosen region. Most have 3 zones, but us-central1 has 4. See [Regions and Zones](https://cloud.google.com/compute/docs/regions-zones/) for more information and see the [Customize](#customize) section on how to customize node pools in a regional cluster. 
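If you are not sure how many zones your chosen region has, or how much CPU quota is left in it, you can check both from the command line before deploying. `us-west1` below is only an example region:

```bash
# The output lists the region's zones and its quotas, including the CPUS limit and current usage.
gcloud compute regions describe us-west1
```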
-The default setup, as listed above, will exceed the default CPU quota of a GCP project. To increase your project's quota, please follow the instructions [here](https://cloud.google.com/compute/quotas). The default setup will require at least 91 CPUs, more if you need to scale out. +The default setup, as listed above, requires at least 91 CPUs which exceed the default CPU quota of a GCP project. To increase your project's quota, follow the instructions [here](https://cloud.google.com/compute/quotas). You need more CPUs if you need to scale out. Now that you have configured everything needed, you can launch the script to deploy the TiDB cluster: @@ -75,20 +75,43 @@ terraform init terraform apply ``` -When you run `terraform apply`, you may be asked to set three environment variables for the script to run if you don't export them in advance. See [Configure Terraform](#configure-terraform) for details. +When you run `terraform apply`, you may be asked to set three environment variables for the script to run if you have not exported them in advance. See [Configure Terraform](#configure-terraform) for details. + +It might take 10 minutes or more to finish the process. A successful deployment gives the output like: + +``` +Apply complete! Resources: 8 added, 0 changed, 1 destroyed. + +Outputs: + +cluster_id = my-cluster +cluster_name = my-cluster +how_to_connect_to_mysql_from_bastion = mysql -h 172.31.252.20 -P 4000 -u root +how_to_ssh_to_bastion = gcloud compute ssh bastion --zone us-west1-a +kubeconfig_file = ./credentials/kubeconfig_my-cluster +monitor_ilb_ip = 35.227.134.146 +monitor_port = 3000 +region = us-west1 +tidb_ilb_ip = 172.31.252.20 +tidb_port = 4000 +tidb_version = v2.1.8 +``` ## Access the database After `terraform apply` is successful, the TiDB cluster can be accessed by SSHing into the bastion machine and connecting via MySQL: ```bash +# Replace the `<>` parts with values from the output. gcloud compute ssh bastion --zone mysql -h -P 4000 -u root ``` +> *NOTE*: Make sure that you have installed the MySQL client before you connect to TiDB via MySQL. + ## Interact with the cluster -It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, it can be changed in `variables.tf`: +It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, and it can be changed in `variables.tf`: ```bash # By specifying --kubeconfig argument @@ -103,20 +126,39 @@ helm ls ## Upgrade -To upgrade TiDB cluster, modify `tidb_version` variable to a higher version in variables.tf and run `terraform apply`. +To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in `variables.tf` and run `terraform apply`. + +For example, to upgrade the cluster to version 2.1.10, modify the `tidb_version` to `v2.1.10`: -> *Note*: The upgrading doesn't finish immediately. You can watch the upgrading process by `watch kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb` +``` +variable "tidb_version" { +description = "tidb cluster version" +default = "v2.1.10" +} +``` + +The upgrading does not finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. 
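Putting the upgrade steps together, a minimal sequence might look like the following. It assumes the default `cluster_name` of `my-cluster`; adjust the kubeconfig path if you changed it in `variables.tf`:

```bash
# 1. Edit tidb_version in variables.tf as shown above, then re-apply the configuration.
terraform apply

# 2. Watch the rolling upgrade until all pods are back in the Running state.
kubectl --kubeconfig credentials/kubeconfig_my-cluster get po -n tidb --watch
```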
## Scale -To scale TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count, and then run `terraform apply`. +To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count, and then run `terraform apply`. + +Currently, scaling in is not supported since we cannot determine which node to remove. Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. -> *Note*: Currently, scaling in is not supported since we cannot determine which node to remove. Scaling out needs a few minutes to complete, you can watch the scaling out by `watch kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb` +For example, to scale out the cluster, you can modify the number of TiDB instances from 2 to 3: -> *Note*: Incrementing the node count will create a node per GCP availability zones. +``` +variable "tidb_count" { +default = 3 +} +``` + +> *Note*: Incrementing the node count will create a node per GCP availability zone. ## Customize +You can change default values in the `variables.tf` file (such as the cluster name and image versions) as needed. + ### Customize GCP resources GCP allows attaching a local SSD to any instance type that is `n1-standard-1` or greater. This allows for good customizability. @@ -136,6 +178,7 @@ Suppose we wish to delete a node from the monitor pool, we can do: ```bash $ gcloud compute instance-groups managed list | grep monitor ``` + And the result will be something like this: ```bash From c7545b49cb9a7a939aaf4a142a5c4e2079984533 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 16:57:02 +0800 Subject: [PATCH 10/25] update wording --- deploy/gcp/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 0fe20eaaaa..045461db2d 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -111,7 +111,7 @@ mysql -h -P 4000 -u root ## Interact with the cluster -It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, and it can be changed in `variables.tf`: +It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, which can be changed in `variables.tf`: ```bash # By specifying --kubeconfig argument @@ -128,7 +128,7 @@ helm ls To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in `variables.tf` and run `terraform apply`. -For example, to upgrade the cluster to version 2.1.10, modify the `tidb_version` to `v2.1.10`: +For example, to upgrade the cluster to the 2.1.10 version, modify the `tidb_version` to `v2.1.10`: ``` variable "tidb_version" { @@ -141,9 +141,9 @@ The upgrading does not finish immediately. You can watch the upgrading process b ## Scale -To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count, and then run `terraform apply`. +To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count, and run `terraform apply`. -Currently, scaling in is not supported since we cannot determine which node to remove. 
Scaling out needs a few minutes to complete, you can watch the scaling out by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. +Currently, scaling in is not supported since we cannot determine which node to remove. Scaling out needs a few minutes to complete, you can watch the scaling-out process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. For example, to scale out the cluster, you can modify the number of TiDB instances from 2 to 3: From 6197efb5ecf090686b0922d6d8bdda2367e7f50b Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 18:26:01 +0800 Subject: [PATCH 11/25] add some examples and update wording --- deploy/gcp/README.md | 49 +++++++++++++++++++++++++++++++------------- 1 file changed, 35 insertions(+), 14 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 045461db2d..2c36131d50 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -14,7 +14,7 @@ First of all, make sure the following items are installed on your machine: ## Configure -Before deploying, you need to configure the following items to guarantee a smooth deployment. +Before deploying, you need to configure several items to guarantee a smooth deployment. ### Configure Cloud SDK @@ -55,7 +55,7 @@ export TF_VAR_GCP_PROJECT="my-project" ## Deploy -The default setup will create a new VPC, two subnetworks, and an f1-micro instance as a bastion machine. The GKE cluster is created with the following instance types as worker nodes: +The default setup creates a new VPC, two subnetworks, and an f1-micro instance as a bastion machine. The GKE cluster is created with the following instance types as worker nodes: * 3 n1-standard-4 instances for PD * 3 n1-highmem-8 instances for TiKV @@ -87,7 +87,7 @@ Outputs: cluster_id = my-cluster cluster_name = my-cluster how_to_connect_to_mysql_from_bastion = mysql -h 172.31.252.20 -P 4000 -u root -how_to_ssh_to_bastion = gcloud compute ssh bastion --zone us-west1-a +how_to_ssh_to_bastion = gcloud compute ssh bastion --zone us-west1-b kubeconfig_file = ./credentials/kubeconfig_my-cluster monitor_ilb_ip = 35.227.134.146 monitor_port = 3000 @@ -126,7 +126,7 @@ helm ls ## Upgrade -To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in `variables.tf` and run `terraform apply`. +To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in the `variables.tf` file and run `terraform apply`. For example, to upgrade the cluster to the 2.1.10 version, modify the `tidb_version` to `v2.1.10`: @@ -137,11 +137,25 @@ default = "v2.1.10" } ``` -The upgrading does not finish immediately. You can watch the upgrading process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. +The upgrading does not finish immediately. You can run `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch` to verify that all pods are in `Running` state. Then you can [access the database](#access-the-database) and use `tidb_version()` to see whether the TiDB cluster has been successfully upgraded: + +```sh +MySQL [(none)]> select tidb_version()\G +*************************** 1. 
row *************************** +tidb_version(): Release Version: 2.1.10 +Git Commit Hash: v2.1.10 +Git Branch: master +UTC Build Time: 2019-05-22 11:12:14 +GoVersion: go version go1.12.4 linux/amd64 +Race Enabled: false +TiKV Min Version: 2.1.0-alpha.1-ff3dd160846b7d1aed9079c389fc188f7f5ea13e +Check Table Before Drop: false +1 row in set (0.001 sec) +``` ## Scale -To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count, and run `terraform apply`. +To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count in the `variables.tf` file, and run `terraform apply`. Currently, scaling in is not supported since we cannot determine which node to remove. Scaling out needs a few minutes to complete, you can watch the scaling-out process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. @@ -153,27 +167,27 @@ default = 3 } ``` -> *Note*: Incrementing the node count will create a node per GCP availability zone. +> *Note*: Incrementing the node count creates a node per GCP availability zone. ## Customize -You can change default values in the `variables.tf` file (such as the cluster name and image versions) as needed. +You can change default values in `variables.tf` (such as the cluster name and the TiDB version) as needed. ### Customize GCP resources GCP allows attaching a local SSD to any instance type that is `n1-standard-1` or greater. This allows for good customizability. -### Customize TiDB Parameters +### Customize TiDB parameters -Currently, there are not too many parameters exposed to be customized. However, you can modify `templates/tidb-cluster-values.yaml.tpl` before deploying. If you modify it after the cluster is created and then run `terraform apply`, it will not take effect unless the pod(s) is manually deleted. +Currently, there are not too many parameters exposed to be customized. However, you can modify `templates/tidb-cluster-values.yaml.tpl` before deploying. If you modify it after the cluster is created and then run `terraform apply`, it can not take effect unless the pod(s) is manually deleted. ### Customize node pools -The cluster is created as a regional, as opposed to a zonal cluster. This means that GKE will replicate node pools to each availability zone. This is desired to maintain high availability, however for the monitoring services, like Grafana, this is potentially unnecessary. It is possible to manually remove nodes if desired via `gcloud`. +The cluster is created as a regional, as opposed to a zonal cluster. This means that GKE replicates node pools to each availability zone. This is desired to maintain high availability, however for the monitoring services, like Grafana, this is potentially unnecessary. It is possible to manually remove nodes if desired via `gcloud`. > *NOTE*: GKE node pools are managed instance groups, so a node deleted by `gcloud compute instances delete` will be automatically recreated and added back to the cluster. 
-Suppose we wish to delete a node from the monitor pool, we can do: +Suppose you need to delete a node from the monitor pool, and you can do: ```bash $ gcloud compute instance-groups managed list | grep monitor @@ -187,15 +201,22 @@ gke-my-cluster-monitor-pool-7e31100f-grp us-west1-c zone gke-my-cluster-moni gke-my-cluster-monitor-pool-78a961e5-grp us-west1-a zone gke-my-cluster-monitor-pool-78a961e5 1 1 gke-my-cluster-monitor-pool-78a961e5 no ``` -The first column is the name of the managed instance group, and the second column is the zone it was created in. We will also need the name of the instance in that group, we can get it as follows: +The first column is the name of the managed instance group, and the second column is the zone it was created in. You also need the name of the instance in that group, and you can get it as follows: + +```bash +gcloud compute instance-groups managed list-instances --zone +``` + +For example: ```bash $ gcloud compute instance-groups managed list-instances gke-my-cluster-monitor-pool-08578e18-grp --zone us-west1-b + NAME ZONE STATUS ACTION INSTANCE_TEMPLATE VERSION_NAME LAST_ERROR gke-my-cluster-monitor-pool-08578e18-c7vd us-west1-b RUNNING NONE gke-my-cluster-monitor-pool-08578e18 ``` -Now we can delete the instance: +Now you can delete the instance by specifying the name of the managed instance group and the name of the instance, for example: ```bash $ gcloud compute instance-groups managed delete-instances gke-my-cluster-monitor-pool-08578e18-grp --instances=gke-my-cluster-monitor-pool-08578e18-c7vd --zone us-west1-b From b12270aa926248893087038e7198e10625dfb708 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 18:41:50 +0800 Subject: [PATCH 12/25] update examples and wording --- deploy/gcp/README.md | 27 ++++++++++++++------------- 1 file changed, 14 insertions(+), 13 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 2c36131d50..ac3a6dec8e 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -18,7 +18,7 @@ Before deploying, you need to configure several items to guarantee a smooth depl ### Configure Cloud SDK -After you have installed Google Cloud SDK, you need to run `gcloud init` to [perform initial setup tasks](https://cloud.google.com/sdk/docs/initializing). +After you install Google Cloud SDK, you need to run `gcloud init` to [perform initial setup tasks](https://cloud.google.com/sdk/docs/initializing). ### Configure APIs @@ -75,7 +75,7 @@ terraform init terraform apply ``` -When you run `terraform apply`, you may be asked to set three environment variables for the script to run if you have not exported them in advance. See [Configure Terraform](#configure-terraform) for details. +When you run `terraform apply`, you may be asked to set three environment variables if you have not exported them in advance. See [Configure Terraform](#configure-terraform) for details. It might take 10 minutes or more to finish the process. A successful deployment gives the output like: @@ -107,18 +107,18 @@ gcloud compute ssh bastion --zone mysql -h -P 4000 -u root ``` -> *NOTE*: Make sure that you have installed the MySQL client before you connect to TiDB via MySQL. +> *NOTE*: You need to install the MySQL client before you connect to TiDB via MySQL. ## Interact with the cluster -It is possible to interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. 
The default `cluster_name` is `my-cluster`, which can be changed in `variables.tf`: +You can interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, which can be changed in `variables.tf`: ```bash -# By specifying --kubeconfig argument +# By specifying --kubeconfig argument. kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb helm --kubeconfig credentials/kubeconfig_ ls -# Or setting KUBECONFIG environment variable +# Or setting KUBECONFIG environment variable. export KUBECONFIG=$PWD/credentials/kubeconfig_ kubectl get po -n tidb helm ls @@ -126,18 +126,18 @@ helm ls ## Upgrade -To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in the `variables.tf` file and run `terraform apply`. +To upgrade the TiDB cluster, modify the `tidb_version` variable to a higher version in the `variables.tf` file, and run `terraform apply`. For example, to upgrade the cluster to the 2.1.10 version, modify the `tidb_version` to `v2.1.10`: ``` variable "tidb_version" { -description = "tidb cluster version" -default = "v2.1.10" + description = "TiDB version" + default = "v2.1.10" } ``` -The upgrading does not finish immediately. You can run `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch` to verify that all pods are in `Running` state. Then you can [access the database](#access-the-database) and use `tidb_version()` to see whether the TiDB cluster has been successfully upgraded: +The upgrading does not finish immediately. You can run `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch` to verify that all pods are in `Running` state. Then you can [access the database](#access-the-database) and use `tidb_version()` to see whether the cluster has been upgraded successfully: ```sh MySQL [(none)]> select tidb_version()\G @@ -155,15 +155,16 @@ Check Table Before Drop: false ## Scale -To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` to your desired count in the `variables.tf` file, and run `terraform apply`. +To scale the TiDB cluster, modify `tikv_count`, `tikv_replica_count`, `tidb_count`, and `tidb_replica_count` in the `variables.tf` file to your desired count, and run `terraform apply`. Currently, scaling in is not supported since we cannot determine which node to remove. Scaling out needs a few minutes to complete, you can watch the scaling-out process by `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch`. -For example, to scale out the cluster, you can modify the number of TiDB instances from 2 to 3: +For example, to scale out the cluster, you can modify the number of TiDB instances from 1 to 2: ``` variable "tidb_count" { -default = 3 + description = "Number of TiDB nodes per availability zone" + default = 2 } ``` From d8da9fc07a19622f18eae39618f82d0ac19d25cf Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:03:31 +0800 Subject: [PATCH 13/25] address hailong's comment --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index ac3a6dec8e..0f768814f1 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -44,7 +44,7 @@ The terraform script expects three environment variables. You can let Terraform > *Note*: The service account must have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. 
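If you prefer the command line to the console pages linked above, the service account setup can also be done with `gcloud`. This is only a sketch; the account name, project ID, and key path are placeholders to replace with your own values:

```bash
# Create a dedicated service account for Terraform (the name and project are placeholders).
gcloud iam service-accounts create terraform-tidb --display-name "terraform-tidb"

# Grant it the Project Editor primitive role mentioned above.
gcloud projects add-iam-policy-binding my-project \
    --member "serviceAccount:terraform-tidb@my-project.iam.gserviceaccount.com" \
    --role "roles/editor"

# Download a JSON key; point TF_VAR_GCP_CREDENTIALS_PATH at this file.
gcloud iam service-accounts keys create ./terraform-key.json \
    --iam-account terraform-tidb@my-project.iam.gserviceaccount.com
```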
-To set the three environment variables, you can first run `vi ~/.bash_profile` and insert the `export` statements in it. Here is an example in `~/.bash_profile`: +To set the three environment variables, you can first run `vi ~/.bash_profile`, append the `export` statements to it and run `source ~/.bash_profile`. Here is an example in `~/.bash_profile`: ```bash # Replace the values with the path to the JSON file you have downloaded, the GCP region and your GCP project name. From c4868b47405e2489dea29de4ac56fccf59b20ab1 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:04:33 +0800 Subject: [PATCH 14/25] improve display --- deploy/gcp/README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 0f768814f1..90c8dd865a 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -44,7 +44,9 @@ The terraform script expects three environment variables. You can let Terraform > *Note*: The service account must have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. -To set the three environment variables, you can first run `vi ~/.bash_profile`, append the `export` statements to it and run `source ~/.bash_profile`. Here is an example in `~/.bash_profile`: +To set the three environment variables, you can first run `vi ~/.bash_profile`, append the `export` statements to it and run `source ~/.bash_profile`. + +Here is an example in `~/.bash_profile`: ```bash # Replace the values with the path to the JSON file you have downloaded, the GCP region and your GCP project name. From 02e9e27e9be7abf2a35a8b7a48e204f28c4ed31f Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:10:03 +0800 Subject: [PATCH 15/25] update code block --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 90c8dd865a..fe0a14e8ee 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -141,7 +141,7 @@ variable "tidb_version" { The upgrading does not finish immediately. You can run `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch` to verify that all pods are in `Running` state. Then you can [access the database](#access-the-database) and use `tidb_version()` to see whether the cluster has been upgraded successfully: -```sh +```sql MySQL [(none)]> select tidb_version()\G *************************** 1. row *************************** tidb_version(): Release Version: 2.1.10 From 05ac1c3cac09549b3ff0309b76cdbd8351089703 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:19:11 +0800 Subject: [PATCH 16/25] unify wording --- deploy/gcp/README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index fe0a14e8ee..4bc9c19301 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -64,7 +64,7 @@ The default setup creates a new VPC, two subnetworks, and an f1-micro instance a * 3 n1-standard-16 instances for TiDB * 3 n1-standard-2 instances for monitor -> *NOTE*: The number of nodes created depends on how many availability zones there are in the chosen region. Most have 3 zones, but us-central1 has 4. See [Regions and Zones](https://cloud.google.com/compute/docs/regions-zones/) for more information and see the [Customize](#customize) section on how to customize node pools in a regional cluster. 
+> *Note*: The number of nodes created depends on how many availability zones there are in the chosen region. Most have 3 zones, but us-central1 has 4. See [Regions and Zones](https://cloud.google.com/compute/docs/regions-zones/) for more information and see the [Customize](#customize) section on how to customize node pools in a regional cluster. The default setup, as listed above, requires at least 91 CPUs which exceed the default CPU quota of a GCP project. To increase your project's quota, follow the instructions [here](https://cloud.google.com/compute/quotas). You need more CPUs if you need to scale out. @@ -109,7 +109,7 @@ gcloud compute ssh bastion --zone mysql -h -P 4000 -u root ``` -> *NOTE*: You need to install the MySQL client before you connect to TiDB via MySQL. +> *Note*: You need to install the MySQL client before you connect to TiDB via MySQL. ## Interact with the cluster @@ -188,12 +188,12 @@ Currently, there are not too many parameters exposed to be customized. However, The cluster is created as a regional, as opposed to a zonal cluster. This means that GKE replicates node pools to each availability zone. This is desired to maintain high availability, however for the monitoring services, like Grafana, this is potentially unnecessary. It is possible to manually remove nodes if desired via `gcloud`. -> *NOTE*: GKE node pools are managed instance groups, so a node deleted by `gcloud compute instances delete` will be automatically recreated and added back to the cluster. +> *Note*: GKE node pools are managed instance groups, so a node deleted by `gcloud compute instances delete` will be automatically recreated and added back to the cluster. Suppose you need to delete a node from the monitor pool, and you can do: ```bash -$ gcloud compute instance-groups managed list | grep monitor +gcloud compute instance-groups managed list | grep monitor ``` And the result will be something like this: @@ -222,7 +222,7 @@ gke-my-cluster-monitor-pool-08578e18-c7vd us-west1-b RUNNING NONE gke-my-c Now you can delete the instance by specifying the name of the managed instance group and the name of the instance, for example: ```bash -$ gcloud compute instance-groups managed delete-instances gke-my-cluster-monitor-pool-08578e18-grp --instances=gke-my-cluster-monitor-pool-08578e18-c7vd --zone us-west1-b +gcloud compute instance-groups managed delete-instances gke-my-cluster-monitor-pool-08578e18-grp --instances=gke-my-cluster-monitor-pool-08578e18-c7vd --zone us-west1-b ``` ## Destroy @@ -230,5 +230,5 @@ $ gcloud compute instance-groups managed delete-instances gke-my-cluster-monitor When you are done, the infrastructure can be torn down by running: ```bash -$ terraform destroy +terraform destroy ``` From c2237305154de082c33def226a441f74f10fdb4e Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:24:03 +0800 Subject: [PATCH 17/25] update wording --- deploy/gcp/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 4bc9c19301..ee60d7db0c 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -190,7 +190,7 @@ The cluster is created as a regional, as opposed to a zonal cluster. This means > *Note*: GKE node pools are managed instance groups, so a node deleted by `gcloud compute instances delete` will be automatically recreated and added back to the cluster. -Suppose you need to delete a node from the monitor pool, and you can do: +Suppose that you need to delete a node from the monitor pool. 
You can first do: ```bash gcloud compute instance-groups managed list | grep monitor @@ -204,7 +204,7 @@ gke-my-cluster-monitor-pool-7e31100f-grp us-west1-c zone gke-my-cluster-moni gke-my-cluster-monitor-pool-78a961e5-grp us-west1-a zone gke-my-cluster-monitor-pool-78a961e5 1 1 gke-my-cluster-monitor-pool-78a961e5 no ``` -The first column is the name of the managed instance group, and the second column is the zone it was created in. You also need the name of the instance in that group, and you can get it as follows: +The first column is the name of the managed instance group, and the second column is the zone in which it was created. You also need the name of the instance in that group, and you can get it by running: ```bash gcloud compute instance-groups managed list-instances --zone From eaf5853fab61bbbb0f00d69f2a847a7dac7f46c6 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:25:41 +0800 Subject: [PATCH 18/25] update descriptions --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index ee60d7db0c..8eae7da678 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -113,7 +113,7 @@ mysql -h -P 4000 -u root ## Interact with the cluster -You can interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_`. The default `cluster_name` is `my-cluster`, which can be changed in `variables.tf`: +You can interact with the cluster using `kubectl` and `helm` with the kubeconfig file `credentials/kubeconfig_` as follows. The default `cluster_name` is `my-cluster`, which can be changed in `variables.tf`. ```bash # By specifying --kubeconfig argument. From 7a90aa9b481988eb913de37424627dc625e12061 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:28:00 +0800 Subject: [PATCH 19/25] Update README.md --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 8eae7da678..af19bbbbcc 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -46,7 +46,7 @@ The terraform script expects three environment variables. You can let Terraform To set the three environment variables, you can first run `vi ~/.bash_profile`, append the `export` statements to it and run `source ~/.bash_profile`. -Here is an example in `~/.bash_profile`: +Here is an example of `~/.bash_profile`: ```bash # Replace the values with the path to the JSON file you have downloaded, the GCP region and your GCP project name. From 90d1732366aa18d03cadebe348588ba61e73b610 Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:41:34 +0800 Subject: [PATCH 20/25] fix messy code --- deploy/gcp/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index af19bbbbcc..ac3d990cb9 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -142,7 +142,7 @@ variable "tidb_version" { The upgrading does not finish immediately. You can run `kubectl --kubeconfig credentials/kubeconfig_ get po -n tidb --watch` to verify that all pods are in `Running` state. Then you can [access the database](#access-the-database) and use `tidb_version()` to see whether the cluster has been upgraded successfully: ```sql -MySQL [(none)]> select tidb_version()\G +MySQL [(none)]> select tidb_version(); *************************** 1. 
row *************************** tidb_version(): Release Version: 2.1.10 Git Commit Hash: v2.1.10 From 43adae14d5b01a77f5d2d361002cdce6963314ae Mon Sep 17 00:00:00 2001 From: yikeke Date: Mon, 27 May 2019 19:43:24 +0800 Subject: [PATCH 21/25] fix messy code for dind tutorial --- docs/local-dind-tutorial.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/local-dind-tutorial.md b/docs/local-dind-tutorial.md index e16d71e6c4..73af503f06 100644 --- a/docs/local-dind-tutorial.md +++ b/docs/local-dind-tutorial.md @@ -297,7 +297,7 @@ Use `kubectl get pod -n tidb` to verify the number of each compoments equal to v Use `kubectl get pod -n tidb` to verify that all pods are in `Running` state. Then you can connect to the database and use `tidb_version()` function to verify the version: ```sh -MySQL [(none)]> select tidb_version()\G +MySQL [(none)]> select tidb_version(); *************************** 1. row *************************** tidb_version(): Release Version: 2.1.10 Git Commit Hash: v2.1.10 From 03717ef9f93b59688378e4653ae6c3e0668a7843 Mon Sep 17 00:00:00 2001 From: Jacob Lerche Date: Mon, 27 May 2019 19:31:49 -0700 Subject: [PATCH 22/25] Update README.md --- deploy/gcp/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index ac3d990cb9..3b202cf54a 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -232,3 +232,7 @@ When you are done, the infrastructure can be torn down by running: ```bash terraform destroy ``` +> *NOTE*: You have to manually delete disks in the Google Cloud Console, or with `gcloud` after running `terraform destroy` if you do not need the data anymore. +> *NOTE*: When `terraform destroy` is running, an error with the following message might occur: `Error reading Container Cluster "my-cluster": Cluster "my-cluster" has status "RECONCILING" with message""`. This happens when GCP is upgrading the kubernetes master node, which it does automatically at times. While this is happening, it is not possible to delete the cluster. When it is done, run `terraform destroy` again. + +> *NOTE*: When `terraform destroy` is running, an error with the following message might occur: `Error deleting NodePool: googleapi: Error 400: Operation operation-1558952543255-89695179 is currently deleting a node pool for cluster my-cluster. Please wait and try again once it is done., failedPrecondition`. This happens when terraform issues delete requests to cluster resources concurrently. To resolve, wait a little bit and then run `terraform destroy` again. From ad026597f708c04c493788f82fafb8baa12c69dd Mon Sep 17 00:00:00 2001 From: yikeke Date: Tue, 28 May 2019 10:59:21 +0800 Subject: [PATCH 23/25] address jacob's comments --- deploy/gcp/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 3b202cf54a..49dd010db0 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -44,9 +44,7 @@ The terraform script expects three environment variables. You can let Terraform > *Note*: The service account must have sufficient permissions to create resources in the project. The `Project Editor` primitive will accomplish this. -To set the three environment variables, you can first run `vi ~/.bash_profile`, append the `export` statements to it and run `source ~/.bash_profile`. 
- -Here is an example of `~/.bash_profile`: +To set the three environment variables, for example, you can enter in your terminal: ```bash # Replace the values with the path to the JSON file you have downloaded, the GCP region and your GCP project name. @@ -232,7 +230,9 @@ When you are done, the infrastructure can be torn down by running: ```bash terraform destroy ``` -> *NOTE*: You have to manually delete disks in the Google Cloud Console, or with `gcloud` after running `terraform destroy` if you do not need the data anymore. + +You have to manually delete disks in the Google Cloud Console, or with `gcloud` after running `terraform destroy` if you do not need the data anymore. + > *NOTE*: When `terraform destroy` is running, an error with the following message might occur: `Error reading Container Cluster "my-cluster": Cluster "my-cluster" has status "RECONCILING" with message""`. This happens when GCP is upgrading the kubernetes master node, which it does automatically at times. While this is happening, it is not possible to delete the cluster. When it is done, run `terraform destroy` again. > *NOTE*: When `terraform destroy` is running, an error with the following message might occur: `Error deleting NodePool: googleapi: Error 400: Operation operation-1558952543255-89695179 is currently deleting a node pool for cluster my-cluster. Please wait and try again once it is done., failedPrecondition`. This happens when terraform issues delete requests to cluster resources concurrently. To resolve, wait a little bit and then run `terraform destroy` again. From 2a2a8a46664f692508ef6c3b6607c498b5e76eef Mon Sep 17 00:00:00 2001 From: yikeke Date: Tue, 28 May 2019 11:02:22 +0800 Subject: [PATCH 24/25] update wording --- deploy/gcp/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 49dd010db0..2438f7c962 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -233,6 +233,6 @@ terraform destroy You have to manually delete disks in the Google Cloud Console, or with `gcloud` after running `terraform destroy` if you do not need the data anymore. -> *NOTE*: When `terraform destroy` is running, an error with the following message might occur: `Error reading Container Cluster "my-cluster": Cluster "my-cluster" has status "RECONCILING" with message""`. This happens when GCP is upgrading the kubernetes master node, which it does automatically at times. While this is happening, it is not possible to delete the cluster. When it is done, run `terraform destroy` again. +> *Note*: When `terraform destroy` is running, an error with the following message might occur: `Error reading Container Cluster "my-cluster": Cluster "my-cluster" has status "RECONCILING" with message""`. This happens when GCP is upgrading the kubernetes master node, which it does automatically at times. While this is happening, it is not possible to delete the cluster. When it is done, run `terraform destroy` again. -> *NOTE*: When `terraform destroy` is running, an error with the following message might occur: `Error deleting NodePool: googleapi: Error 400: Operation operation-1558952543255-89695179 is currently deleting a node pool for cluster my-cluster. Please wait and try again once it is done., failedPrecondition`. This happens when terraform issues delete requests to cluster resources concurrently. To resolve, wait a little bit and then run `terraform destroy` again. 
+> *Note*: When `terraform destroy` is running, an error with the following message might occur: `Error deleting NodePool: googleapi: Error 400: Operation operation-1558952543255-89695179 is currently deleting a node pool for cluster my-cluster. Please wait and try again once it is done., failedPrecondition`. This happens when terraform issues delete requests to cluster resources concurrently. To resolve, wait a little bit and then run `terraform destroy` again. From 69289114f9d4b7683cc7c27cc15b89fde0b6eb3a Mon Sep 17 00:00:00 2001 From: yikeke Date: Tue, 28 May 2019 11:37:57 +0800 Subject: [PATCH 25/25] update descriptions --- deploy/gcp/README.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/deploy/gcp/README.md b/deploy/gcp/README.md index 2438f7c962..06575cfee2 100644 --- a/deploy/gcp/README.md +++ b/deploy/gcp/README.md @@ -53,6 +53,8 @@ export TF_VAR_GCP_REGION="us-west1" export TF_VAR_GCP_PROJECT="my-project" ``` +You can also append them in your `~/.bash_profile` so they will be exported automatically next time. + ## Deploy The default setup creates a new VPC, two subnetworks, and an f1-micro instance as a bastion machine. The GKE cluster is created with the following instance types as worker nodes: @@ -80,7 +82,7 @@ When you run `terraform apply`, you may be asked to set three environment variab It might take 10 minutes or more to finish the process. A successful deployment gives the output like: ``` -Apply complete! Resources: 8 added, 0 changed, 1 destroyed. +Apply complete! Resources: 17 added, 0 changed, 0 destroyed. Outputs: