Add scripts to verify link anchors (#530)
* add link anchor check

* fix dead anchors

* reorder

* fix anchors

* fix an anchor according to hailong's suggestion

Co-authored-by: Ran <huangran@pingcap.com>
yikeke and ran-huang authored Jul 6, 2020
1 parent c149778 commit 384922e
Showing 14 changed files with 43 additions and 34 deletions.
9 changes: 7 additions & 2 deletions .github/workflows/ci.yaml
@@ -9,10 +9,15 @@ jobs:
    steps:
    - name: Check out
      uses: actions/checkout@v2
-   - name: Verify links
-     run: ./hack/verify-links.sh
+   - uses: actions/setup-node@v1
+     with:
+       node-version: '12'
    - name: Markdown lint
      uses: avto-dev/markdown-lint@v1
      with:
        config: './.markdownlint.yaml'
        args: '.'
+   - name: Verify links
+     run: ./hack/verify-links.sh
+   - name: Verify link anchors
+     run: ./hack/verify-link-anchors.sh
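
For contributors who want to run the same checks locally before pushing, a rough sketch of the equivalent commands is shown below. It assumes Node.js and npm are installed, and it uses `markdownlint-cli` as a stand-in for the `avto-dev/markdown-lint` action; that substitution is an assumption, not part of this commit.

```bash
#!/bin/bash
# Sketch: run the documentation checks from ci.yaml locally.
# Assumes Node.js/npm are available; markdownlint-cli only approximates
# the avto-dev/markdown-lint GitHub Action used in CI.
set -euo pipefail

npx markdownlint-cli --config ./.markdownlint.yaml .   # Markdown lint
./hack/verify-links.sh                                 # Verify links
./hack/verify-link-anchors.sh                          # Verify link anchors
```
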
6 changes: 3 additions & 3 deletions en/notes-tidb-operator-v1.1.md
@@ -94,7 +94,7 @@ You can modify `version`, `replicas`, `storageClassName`, `requests.storage`, an
After TiDB Operator is upgraded to v1.1, you can configure the scheduled full backup using BackupSchedule CR:

- If the TiDB cluster version < v3.1, refer to [Scheduled backup using mydumper](backup-to-s3.md#scheduled-full-backup-to-s3-compatible-storage)
-- If the TiDB cluster version >= v3.1, refer to [Scheduled backup using BR](backup-to-aws-s3-using-br.md#scheduled-full-backup-to-s3-compatible-storage)
+- If the TiDB cluster version >= v3.1, refer to [Scheduled backup using BR](backup-to-aws-s3-using-br.md#scheduled-full-backup)

> **Note:**
>
@@ -103,7 +103,7 @@ After TiDB Operator is upgraded to v1.1, you can configure the scheduled full ba

### Drainer

-- If Drainer is not deployed before TiDB Operator is upgraded to v1.1, you can deploy Drainer as in [Deploy multiple drainers](deploy-tidb-binlog.md#deploy-multiple-drainers).
+- If Drainer is not deployed before TiDB Operator is upgraded to v1.1, you can deploy Drainer as in [Deploy multiple drainers](deploy-tidb-binlog.md#deploy-drainer).
- If Drainer is already deployed using the tidb-drainer chart before TiDB Operator is upgraded to v1.1, it is recommended to continue managing Drainer using the tidb-drainer chart.
- If Drainer is already deployed using the tidb-cluster chart before TiDB Operator is upgraded to v1.1, it is recommended to manage Drainer using kubectl.

@@ -120,7 +120,7 @@ This section describes how to switch other components and features managed by th

After TiDB Operator is upgraded to v1.1, you can perform full backup using the Backup CR.

-- If the TiDB cluster version < v3.1, refer to [Ad-hoc full backup using Mydumper](backup-to-s3.md#ad-hoc-full-backup).
+- If the TiDB cluster version < v3.1, refer to [Ad-hoc full backup using Mydumper](backup-to-s3.md#ad-hoc-full-backup-to-s3-compatible-storage).
- If the TiDB cluster version >= v3.1, refer to [Ad-hoc full backup using BR](backup-to-aws-s3-using-br.md#ad-hoc-full-backup).

> **Note:**
2 changes: 1 addition & 1 deletion en/restore-from-s3.md
@@ -20,7 +20,7 @@ This document shows an example in which the backup data stored in the specified

## Prerequisites

-Refer to [Prerequisites](restore-from-aws-s3-using-br.md#prerequisites-for-ad-hoc-full-backup).
+Refer to [Prerequisites](restore-from-aws-s3-using-br.md#prerequisites).

## Restoration process

4 changes: 2 additions & 2 deletions en/tidb-operator-overview.md
@@ -48,8 +48,8 @@ After the deployment is complete, see the following documents to use, operate, a

+ [Access the TiDB Cluster](access-tidb.md)
+ [Scale TiDB Cluster](scale-a-tidb-cluster.md)
-+ [Upgrade TiDB Cluster](upgrade-a-tidb-cluster.md#upgrade-the-version-of-tidb-cluster)
-+ [Change the Configuration of TiDB Cluster](upgrade-a-tidb-cluster.md#change-the-configuration-of-tidb-cluster)
++ [Upgrade TiDB Cluster](upgrade-a-tidb-cluster.md#upgrade-the-version-of-tidb-using-tidbcluster-cr)
++ [Change the Configuration of TiDB Cluster](configure-a-tidb-cluster.md)
+ [Back up a TiDB Cluster](backup-to-aws-s3-using-br.md)
+ [Restore a TiDB Cluster](restore-from-aws-s3-using-br.md)
+ [Automatic Failover](use-auto-failover.md)
2 changes: 1 addition & 1 deletion en/tidb-scheduler.md
@@ -94,4 +94,4 @@ The scheduling process of a Pod is as follows:
- Then, `kube-scheduler` sends a request to the `tidb-scheduler` service. Then `tidb-scheduler` filters the sent nodes through the customized scheduling rules (as mentioned above), and returns schedulable nodes to `kube-scheduler`.
- Finally, `kube-scheduler` determines the nodes to be scheduled.

-If a Pod cannot be scheduled, see the [troubleshooting document](troubleshoot.md#the-Pod-is-in-the-Pending-state) to diagnose and solve the issue.
+If a Pod cannot be scheduled, see the [troubleshooting document](troubleshoot.md#the-pod-is-in-the-pending-state) to diagnose and solve the issue.
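
Most of the anchor fixes in this commit follow the same rule: generated heading anchors are lowercase, so mixed-case fragments such as `#the-Pod-is-in-the-Pending-state` never match a real heading. The sketch below is only an approximation of the slug rule (lowercase, spaces to hyphens, most punctuation dropped), not the exact slugger used by GitHub or the docs site.

```bash
#!/bin/bash
# Approximate how a Markdown heading becomes a link anchor:
# lowercase, strip most punctuation, and turn spaces into hyphens.
# This is an illustrative sketch, not the exact slugger implementation.
slugify() {
    echo "$1" \
        | tr '[:upper:]' '[:lower:]' \
        | sed -e 's/[^[:alnum:] -]//g' -e 's/ /-/g'
}

slugify "The Pod is in the Pending state"
# -> the-pod-is-in-the-pending-state
slugify "Upgrade the version of TiDB using TidbCluster CR"
# -> upgrade-the-version-of-tidb-using-tidbcluster-cr
```
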
4 changes: 2 additions & 2 deletions en/upgrade-a-tidb-cluster.md
@@ -46,7 +46,7 @@ If the TiDB cluster is deployed directly using TidbCluster CR, or deployed using

### Force an upgrade of TiDB cluster using TidbCluster CR

-If the PD cluster is unavailable due to factors such as PD configuration error, PD image tag error and NodeAffinity, then [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB cluster](#upgrade-the-version-of-tidb-cluster) and [changing the TiDB cluster configuration](#change-the-configuration-of-tidb-cluster) cannot be done successfully.
+If the PD cluster is unavailable due to factors such as PD configuration error, PD image tag error and NodeAffinity, then [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB cluster](#upgrade-the-version-of-tidb-using-tidbcluster-cr) and changing the TiDB cluster configuration cannot be done successfully.

In this case, you can use `force-upgrade` to force an upgrade of the cluster to recover cluster functionality.

@@ -96,7 +96,7 @@ If you continue to manage your cluster using Helm, refer to the following steps

### Force an upgrade of TiDB cluster using Helm

-If the PD cluster is unavailable due to factors such as PD configuration error, PD image tag error and NodeAffinity, then [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB cluster](#upgrade-the-version-of-tidb-cluster) and [changing the TiDB cluster configuration](#change-the-configuration-of-tidb-cluster) cannot be done successfully.
+If the PD cluster is unavailable due to factors such as PD configuration error, PD image tag error and NodeAffinity, then [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB cluster](#upgrade-the-version-of-tidb-using-helm) and changing the TiDB cluster configuration cannot be done successfully.

In this case, you can use `force-upgrade` (the version of TiDB Operator must be later than v1.0.0-beta.3) to force an upgrade of the cluster to recover cluster functionality.

16 changes: 16 additions & 0 deletions hack/verify-link-anchors.sh
@@ -0,0 +1,16 @@
#!/bin/bash
#
# In addition to verify-links.sh, this script additionally checks anchors.
#
# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you meet permission problems when executing npm install.

set -euo pipefail

ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT

npm install remark-cli remark-lint breeswish/remark-lint-pingcap-docs-anchor

echo "info: checking links anchors under $ROOT directory..."

npx remark --ignore-path .gitignore -u lint -u remark-lint-pingcap-docs-anchor . --frail --quiet
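
As a usage sketch, the same check can also be pointed at a single file while iterating on a fix. This assumes the remark packages above have already been installed into `node_modules` (by this script or by a manual `npm install`), and the file path is only an example.

```bash
#!/bin/bash
# Sketch: check anchors in one document instead of the whole repository.
# Assumes remark-cli, remark-lint, and remark-lint-pingcap-docs-anchor
# are already installed locally, as hack/verify-link-anchors.sh does.
npx remark -u lint -u remark-lint-pingcap-docs-anchor \
    en/tidb-scheduler.md --frail --quiet
```
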
18 changes: 3 additions & 15 deletions hack/verify-links.sh
@@ -2,31 +2,19 @@
#
# This script is used to verify links in markdown docs.
#
+# See https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally if you meet permission problems when executing npm install.

ROOT=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/.. && pwd)
cd $ROOT

-if ! which markdown-link-check &>/dev/null; then
-    sudo npm install -g markdown-link-check@3.8.0
-fi
+npm install markdown-link-check@3.8.1

VERBOSE=${VERBOSE:-}
CONFIG_TMP=$(mktemp)
ERROR_REPORT=$(mktemp)

trap 'rm -f $CONFIG_TMP $ERROR_REPORT' EXIT

-function in_array() {
-    local i=$1
-    shift
-    local a=("${@}")
-    local e
-    for e in "${a[@]}"; do
-        [[ "$e" == "$i" ]] && return 0;
-    done
-    return 1
-}

# Check all directories starting with 'v\d.*' and dev.
for d in zh en; do
echo "info: checking links under $ROOT/$d directory..."
@@ -39,7 +27,7 @@ for d in zh en; do
while read -r tasks; do
for task in $tasks; do
(
-output=$(markdown-link-check --color --config "$CONFIG_TMP" "$task" -q)
+output=$(npx markdown-link-check --config "$CONFIG_TMP" "$task" -q)
if [ $? -ne 0 ]; then
printf "$output" >> $ERROR_REPORT
fi
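
The change above swaps the global `sudo npm install -g markdown-link-check@3.8.0` for a project-local install that is then invoked through `npx`. A minimal sketch of that pattern follows, with the document path used purely as an illustrative example.

```bash
#!/bin/bash
# Sketch of the local-install pattern: install the pinned tool into the
# repository's node_modules (no sudo, no -g), then let npx resolve the
# binary from ./node_modules/.bin when invoking it.
npm install markdown-link-check@3.8.1
npx markdown-link-check -q en/tidb-scheduler.md
```
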
2 changes: 1 addition & 1 deletion zh/backup-to-gcs-using-br.md
@@ -231,7 +231,7 @@ Ad-hoc full backup is performed by creating a customized `Backup` custom resource (CR)
kubectl get bk -l tidb.pingcap.com/backup-schedule=demo1-backup-schedule-gcs -n test1
```

-As the example above shows, the `backupSchedule` configuration consists of two parts: the configuration unique to `backupSchedule`, and `backupTemplate`. `backupTemplate` specifies the GCS storage configuration, which is exactly the same as the configuration for an Ad-hoc full backup to GCS; refer to [Back up data to GCS](#备份数据到-gcs). The configuration items unique to `backupSchedule` are described below:
+As the example above shows, the `backupSchedule` configuration consists of two parts: the configuration unique to `backupSchedule`, and `backupTemplate`. `backupTemplate` specifies the GCS storage configuration, which is exactly the same as the configuration for an Ad-hoc full backup to GCS; refer to [Ad-hoc full backup process](#ad-hoc-全量备份过程). The configuration items unique to `backupSchedule` are described below:

+ `.spec.maxBackups`: a backup retention policy that determines the maximum number of backups a scheduled backup can keep. When this number is exceeded, outdated backups are deleted. If you set this field to `0`, all backups are kept.
+ `.spec.maxReservedTime`: a backup retention policy based on time. For example, if you set this field to `24h`, only backups created within the last 24 hours are kept, and older backups are cleaned up. For the time format, refer to [`func ParseDuration`](https://golang.org/pkg/time/#ParseDuration). If both the maximum number of backups and the maximum reserved time are set, the maximum reserved time takes precedence.
2 changes: 1 addition & 1 deletion zh/deploy-on-aws-eks.md
@@ -575,7 +575,7 @@ module example-cluster {

After the modification, run `terraform init` and `terraform apply` to create the node pools for the cluster.

-Finally, refer to [Deploy a TiDB cluster and its monitoring](#部署-TiDB-集群和监控) to deploy the new cluster and its monitoring.
+Finally, refer to [Deploy a TiDB cluster and its monitoring](#部署-tidb-集群和监控) to deploy the new cluster and its monitoring.

## Destroy the cluster

4 changes: 2 additions & 2 deletions zh/notes-tidb-operator-v1.1.md
@@ -112,8 +112,8 @@ spec

After TiDB Operator is upgraded to v1.1, you can perform a full backup using the Backup CR:

-- If the TiDB cluster version < v3.1, refer to [mydumper Ad-hoc full backup](backup-to-s3.md#Ad-hoc-全量备份)
-- If the TiDB cluster version >= v3.1, refer to [BR Ad-hoc full backup](backup-to-aws-s3-using-br.md#Ad-hoc-全量备份)
+- If the TiDB cluster version < v3.1, refer to [Mydumper Ad-hoc full backup](backup-to-s3.md#ad-hoc-全量备份)
+- If the TiDB cluster version >= v3.1, refer to [BR Ad-hoc full backup](backup-to-aws-s3-using-br.md#ad-hoc-全量备份)

> **Note:**
>
2 changes: 1 addition & 1 deletion zh/restore-from-gcs-using-br.md
@@ -91,7 +91,7 @@ category: how-to
kubectl get rt -n test2 -owide
```

-The above example restores the backup data stored under the `spec.gcs.prefix` folder of the `spec.gcs.bucket` bucket on GCS to the TiDB cluster `spec.to.host`. For the BR and GCS configuration items, refer to the configuration in [backup-gcs.yaml](backup-to-gcs-using-br.md#备份数据到-gcs).
+The above example restores the backup data stored under the `spec.gcs.prefix` folder of the `spec.gcs.bucket` bucket on GCS to the TiDB cluster `spec.to.host`. For the BR and GCS configuration items, refer to the configuration in [backup-gcs.yaml](backup-to-gcs-using-br.md#ad-hoc-全量备份过程).

More detailed explanations of the `Restore` CR fields are as follows:

2 changes: 1 addition & 1 deletion zh/tidb-operator-overview.md
@@ -44,7 +44,7 @@ TiDB Operator provides multiple ways to deploy TiDB clusters on Kubernetes:
+ [Access the TiDB cluster](access-tidb.md)
+ [Scale a TiDB cluster](scale-a-tidb-cluster.md)
+ [Upgrade a TiDB cluster](upgrade-a-tidb-cluster.md#升级-tidb-版本)
-+ [Change the configuration of a TiDB cluster](upgrade-a-tidb-cluster.md#更新-tidb-集群配置)
++ [Change the configuration of a TiDB cluster](configure-cluster-using-tidbcluster.md)
+ [Back up a TiDB cluster](backup-to-aws-s3-using-br.md)
+ [Restore a TiDB cluster](restore-from-aws-s3-using-br.md)
+ [Configure automatic failover for a TiDB cluster](use-auto-failover.md)
4 changes: 2 additions & 2 deletions zh/upgrade-a-tidb-cluster.md
@@ -47,7 +47,7 @@ category: how-to

### Force an upgrade of the TiDB cluster

-If the PD cluster is unavailable due to reasons such as a PD configuration error, a wrong PD image tag, or NodeAffinity, then none of the following three operations can be performed successfully: [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB version](#升级-tidb-版本), and [changing the TiDB cluster configuration](#更新-tidb-集群配置).
+If the PD cluster is unavailable due to reasons such as a PD configuration error, a wrong PD image tag, or NodeAffinity, then none of the following three operations can be performed successfully: [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB version](#升级-tidb-版本), and changing the TiDB cluster configuration.

In this case, you can use `force-upgrade` to force an upgrade of the cluster to restore its functionality.
First, set an `annotation` for the cluster:
@@ -97,7 +97,7 @@ kubectl annotate --overwrite tc ${cluster_name} -n ${namespace} tidb.pingcap.com

### Force an upgrade of the TiDB cluster

-If the PD cluster is unavailable due to reasons such as a PD configuration error, a wrong PD image tag, or NodeAffinity, then none of the following three operations can be performed successfully: [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB version](#升级-tidb-版本), and [changing the TiDB cluster configuration](#更新-tidb-集群配置).
+If the PD cluster is unavailable due to reasons such as a PD configuration error, a wrong PD image tag, or NodeAffinity, then none of the following three operations can be performed successfully: [scaling the TiDB cluster](scale-a-tidb-cluster.md), [upgrading the TiDB version](#升级-tidb-版本), and changing the TiDB cluster configuration.

In this case, you can use `force-upgrade` (the TiDB Operator version must be later than v1.0.0-beta.3) to force an upgrade of the cluster to restore its functionality.
First, set an `annotation` for the cluster:
