en: update doc for tiflash on eks and ack (#276)
* en: update tiflash on ack

* en: update ack and eks

* update tiflash scaling in

* Apply suggestions from code review

Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>
Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>

* Update en/scale-a-tidb-cluster.md

Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>

* update wording

Co-authored-by: TomShawn <41534398+TomShawn@users.noreply.github.com>
Co-authored-by: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>
3 people authored May 28, 2020
1 parent 30f9ec7 commit 3cba71c
Showing 3 changed files with 122 additions and 9 deletions.
40 changes: 37 additions & 3 deletions en/deploy-on-alibaba-cloud.md
@@ -90,6 +90,8 @@ All the instances except ACK mandatory workers are deployed across availability
operator_version = "v1.1.0-rc.1"
```

If you need to deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`.
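For example, the TiFlash-related settings in `terraform.tfvars` might look like the following (the values shown are the defaults described above):

```
create_tiflash_node_pool = true
tiflash_count            = 2
tiflash_instance_type    = "ecs.i2.2xlarge"
```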

> **Note:**
>
> Check the `operator_version` in the `variables.tf` file for the default TiDB Operator version of the current scripts. If the default version is not your desired one, configure `operator_version` in `terraform.tfvars`.
@@ -167,10 +169,35 @@ All the instances except ACK mandatory workers are deployed across availability

To complete the CR file configuration, refer to [TiDB Operator API documentation](api-references.md) and [Configuring TiDB Cluster](configure-cluster-using-tidbcluster.md).

If you need to deploy TiFlash, configure `spec.tiflash` in `db.yaml` as follows:

```yaml
spec:
  ...
  tiflash:
    baseImage: pingcap/tiflash
    maxFailoverCount: 3
    nodeSelector:
      dedicated: TIDB_CLUSTER_NAME-tiflash
    replicas: 1
    storageClaims:
      - resources:
          requests:
            storage: 100Gi
        storageClassName: local-volume
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: TIDB_CLUSTER_NAME-tiflash
```

Modify `replicas`, `storageClaims[].resources.requests.storage`, and `storageClassName` according to your needs.

> **Note:**
>
> * Replace all the `TIDB_CLUSTER_NAME` in the `db.yaml` and `db-monitor.yaml` files with `tidb_cluster_name` configured in the deployment of ACK.
> * Make sure the number of PD, TiKV, TiFlash, or TiDB nodes is greater than or equal to the `replicas` value of the corresponding component in `db.yaml`.
> * Make sure `spec.initializer.version` in `db-monitor.yaml` is the same as `spec.version` in `db.yaml`. Otherwise, the monitor might not display correctly.
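As a sketch, the placeholder substitution described in the note above can be done with `sed`. The snippet below assumes `tidb_cluster_name` was set to `basic`, and uses a stand-in file so it is self-contained; in practice, `db.yaml` and `db-monitor.yaml` come from the `manifests/` examples:

```shell
# "basic" stands in for your actual tidb_cluster_name from the ACK deployment.
tidb_cluster_name="basic"
# Stand-in content so this snippet is self-contained.
printf 'dedicated: TIDB_CLUSTER_NAME-tiflash\n' > db.yaml
# Replace every occurrence of the placeholder in place.
sed -i "s/TIDB_CLUSTER_NAME/${tidb_cluster_name}/g" db.yaml
cat db.yaml
# prints "dedicated: basic-tiflash"
```

Run the same substitution over `db-monitor.yaml` as well.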

2. Create `Namespace`:
Expand Down Expand Up @@ -237,12 +264,17 @@ This may take a while to complete. You can watch the process using the following
kubectl get pods --namespace ${namespace} -o wide --watch
```

## Scale out TiDB cluster

To scale out the TiDB cluster, modify `tikv_count`, `tiflash_count`, or `tidb_count` in the `terraform.tfvars` file, and then run `terraform apply` to scale out the number of nodes for the corresponding components.

After the nodes scale out, modify the `replicas` of the corresponding components by running `kubectl --kubeconfig credentials/kubeconfig edit tc ${tidb_cluster_name} -n ${namespace}`.

> **Note:**
>
> - Because it is impossible to determine which node will be taken offline during the scale-in process, the scale-in of TiDB clusters is currently not supported.
> - The scale-out process takes a few minutes. You can watch the status by running `kubectl --kubeconfig credentials/kubeconfig get po -n ${namespace} --watch`.
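For example, to scale TiKV out from the default 3 nodes to 4 and TiFlash from 2 to 3, the changed lines in `terraform.tfvars` would look like this before running `terraform apply` (illustrative values):

```
tikv_count    = 4
tiflash_count = 3
```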

## Configure

### Configure TiDB Operator
@@ -317,6 +349,8 @@ All the configurable parameters in `tidb-cluster` are as follows:
| `pd_instance_type` | The PD instance type | `ecs.g5.large` |
| `tikv_count` | The number of TiKV nodes | 3 |
| `tikv_instance_type` | The TiKV instance type | `ecs.i2.2xlarge` |
| `tiflash_count` | The number of TiFlash nodes | 2 |
| `tiflash_instance_type` | The TiFlash instance type | `ecs.i2.2xlarge` |
| `tidb_count` | The number of TiDB nodes | 2 |
| `tidb_instance_type` | The TiDB instance type | `ecs.c5.4xlarge` |
| `monitor_instance_type` | The instance type of monitoring components | `ecs.c5.xlarge` |
37 changes: 32 additions & 5 deletions en/deploy-on-aws-eks.md
@@ -72,9 +72,9 @@ Before deploying a TiDB cluster on AWS EKS, make sure the following requirements

This section describes how to deploy EKS, TiDB Operator, the TiDB cluster, and the monitoring component.

### Deploy EKS, TiDB Operator, and TiDB cluster node pool

Use the following commands to deploy EKS, TiDB Operator, and the TiDB cluster node pool.

Get the code from GitHub:

@@ -105,6 +105,8 @@ eks_name = "my-cluster"
operator_version = "v1.1.0-rc.1"
```

If you need to deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `cluster_tiflash_count` and `cluster_tiflash_instance_type`. By default, the value of `cluster_tiflash_count` is `2`, and the value of `cluster_tiflash_instance_type` is `i3.4xlarge`.
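For example, the TiFlash-related settings in `terraform.tfvars` might look like the following (the values shown are the defaults described above):

```
create_tiflash_node_pool      = true
cluster_tiflash_count         = 2
cluster_tiflash_instance_type = "i3.4xlarge"
```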

> **Note:**
>
> Check the `operator_version` in the `variables.tf` file for the default TiDB Operator version of the current scripts. If the default version is not your desired one, configure `operator_version` in `terraform.tfvars`.
@@ -159,12 +161,37 @@ You can use the `terraform output` command to get the output again.
cp manifests/db.yaml.example db.yaml && cp manifests/db-monitor.yaml.example db-monitor.yaml
```

To complete the CR file configuration, refer to [API documentation](api-references.md) and [Configure a TiDB Cluster Using TidbCluster](configure-cluster-using-tidbcluster.md).

To deploy TiFlash, configure `spec.tiflash` in `db.yaml` as follows:

```yaml
spec:
  ...
  tiflash:
    baseImage: pingcap/tiflash
    maxFailoverCount: 3
    nodeSelector:
      dedicated: CLUSTER_NAME-tiflash
    replicas: 1
    storageClaims:
      - resources:
          requests:
            storage: 100Gi
        storageClassName: local-storage
    tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: CLUSTER_NAME-tiflash
```

Modify `replicas`, `storageClaims[].resources.requests.storage`, and `storageClassName` according to your needs.

> **Note:**
>
> * Replace all `CLUSTER_NAME` in `db.yaml` and `db-monitor.yaml` files with `default_cluster_name` configured during EKS deployment.
> * Make sure that during EKS deployment, the number of PD, TiKV, TiFlash, or TiDB nodes is greater than or equal to the value of the `replicas` field of the corresponding component in `db.yaml`.
> * Make sure that `spec.initializer.version` in `db-monitor.yaml` and `spec.version` in `db.yaml` are the same to ensure normal monitor display.

2. Create `Namespace`:
Expand Down Expand Up @@ -263,7 +290,7 @@ The upgrading doesn't finish immediately. You can watch the upgrading progress b
## Scale
To scale out the TiDB cluster, modify the `default_cluster_tikv_count`, `cluster_tiflash_count`, or `default_cluster_tidb_count` variable in the `terraform.tfvars` file to your desired count, and then run `terraform apply` to scale out the number of the corresponding component nodes.
After the scaling, modify the `replicas` of the corresponding component by the following command:
54 changes: 53 additions & 1 deletion en/scale-a-tidb-cluster.md
@@ -16,9 +16,61 @@ Currently, the TiDB cluster supports management by Helm or by TidbCluster Custom

### Horizontal scaling operations (CR)

#### Scale PD, TiDB, and TiKV

Modify `spec.pd.replicas`, `spec.tidb.replicas`, and `spec.tikv.replicas` in the `TidbCluster` object of the cluster to a desired value using kubectl.

#### Scale out TiFlash

If TiFlash is deployed in the cluster, you can scale out TiFlash by modifying `spec.tiflash.replicas`.

#### Scale in TiFlash

1. Expose the PD service by using `port-forward`:

{{< copyable "shell-regular" >}}

```shell
kubectl port-forward -n ${namespace} svc/${cluster_name}-pd 2379:2379
```

2. Open a **new** terminal tab or window. Run the following command to check the maximum number (`N`) of TiFlash replicas among all data tables that have TiFlash enabled:

{{< copyable "shell-regular" >}}

```shell
curl 127.0.0.1:2379/pd/api/v1/config/rules/group/tiflash | grep count
```

In the printed result, the largest value of `count` is the maximum number (`N`) of replicas of all data tables.

3. Go back to the terminal window in Step 1, where `port-forward` is running. Press <kbd>Ctrl</kbd>+<kbd>C</kbd> to stop `port-forward`.

4. If the number of TiFlash Pods that will remain after the scale-in is greater than or equal to `N`, skip to Step 6. Otherwise, take the following steps:

1. Refer to [Access TiDB](access-tidb.md) and connect to the TiDB service.

2. For each table that has more TiFlash replicas than the number of TiFlash Pods that will remain, run the following command:

{{< copyable "sql" >}}

```sql
ALTER TABLE <db-name>.<table-name> SET TIFLASH REPLICA 0;
```

5. Wait for TiFlash replicas in the related tables to be deleted.

Connect to the TiDB service, and run the following command:

{{< copyable "sql" >}}

```sql
SELECT * FROM information_schema.tiflash_replica WHERE TABLE_SCHEMA = '<db_name>' and TABLE_NAME = '<table_name>';
```

If the query returns no replication information for the related tables, the TiFlash replicas of those tables have been successfully deleted.

6. Modify `spec.tiflash.replicas` to scale in TiFlash.
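As a sketch of the parsing in Step 2, assuming the placement rules endpoint returns a JSON array in which each rule carries a `count` field, the maximum `N` can be extracted with standard text tools. The sample response below is illustrative; the real rules may carry more fields:

```shell
# Simulated response from /pd/api/v1/config/rules/group/tiflash.
cat > rules.json <<'EOF'
[{"group_id": "tiflash", "id": "table-45-r", "count": 2},
 {"group_id": "tiflash", "id": "table-46-r", "count": 3}]
EOF
# Extract every "count" value and keep the largest one, which is N.
grep -o '"count": *[0-9]*' rules.json | grep -o '[0-9]*' | sort -n | tail -1
# prints 3
```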

### Horizontal scaling operations (Helm)

Expand Down
