---
title: tiup cluster scale-out
summary: The `tiup cluster scale-out` command is used to add new nodes to the cluster. It establishes an SSH connection to the new node, creates the necessary directories, and updates the configuration. Options include `-u` for user, `-i` for identity file, `-p` for password, `--no-labels` to skip the label check, `--skip-create-user` to skip the user check, and `-h` for help. The output is the log of the scaling out.
---

# tiup cluster scale-out
The `tiup cluster scale-out` command is used to scale out the cluster. The internal logic of scaling out is similar to that of cluster deployment: the tiup-cluster component first establishes an SSH connection to the new node, creates the necessary directories on the target node, and then performs the deployment and starts the services.
When PD is scaled out, the new PD nodes join the cluster through the join operation, and the configurations of the services associated with PD are updated; other services are directly started and added to the cluster.
## Syntax

```shell
tiup cluster scale-out <cluster-name> <topology.yaml> [flags]
```
`<cluster-name>`: the name of the cluster to operate on. If you forget the cluster name, you can check it with the `tiup cluster list` command.
`<topology.yaml>`: the prepared topology file. This topology file should contain only the new nodes that are to be added to the current cluster.
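For illustration, a minimal scale-out topology file might look like the following sketch. The file name, IP addresses, and the choice of a `tikv_servers` section are assumptions for the example; replace them with your own new nodes:

```yaml
# scale-out.yaml: contains ONLY the nodes to be added,
# not the nodes already in the cluster.
tikv_servers:
  - host: 10.0.1.20   # hypothetical IP of a new TiKV node
  - host: 10.0.1.21   # hypothetical IP of another new TiKV node
```

With such a file prepared, the scale-out would be triggered by a command of the form `tiup cluster scale-out <cluster-name> scale-out.yaml`.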
## Options

### -u, --user

- Specifies the user name used to connect to the target machine. This user must have password-free sudo root privileges on the target machine.
- Data type: `STRING`
- Default: the current user who executes the command.
### -i, --identity_file

- Specifies the key file used to connect to the target machine.
- Data type: `STRING`
- If this option is not specified in the command, the `~/.ssh/id_rsa` file is used to connect to the target machine by default.
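As a sketch of how the connection options combine, assuming a cluster named `test`, a topology file named `scale-out.yaml`, and a sudo-capable user `tidb` (all hypothetical names, not from the original text):

```shell
# Connect to the new nodes as user "tidb" with an explicit private key.
# "test" and "scale-out.yaml" are example values for this sketch.
tiup cluster scale-out test scale-out.yaml -u tidb -i ~/.ssh/id_rsa
```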
### -p, --password

- Specifies the password used to connect to the target machine. Do not use this option and `-i/--identity_file` at the same time.
- Data type: `BOOLEAN`
- Default: false
### --no-labels

- This option is used to skip the label check.
- When two or more TiKV nodes are deployed on the same physical machine, a risk exists: because PD does not know the cluster topology, it might schedule multiple replicas of a Region to different TiKV nodes on the same physical machine, which makes that physical machine a single point of failure. To avoid this risk, you can use labels to tell PD not to schedule replicas of the same Region to the same machine. See Schedule Replicas by Topology Labels for label configuration.
- For a test environment, this risk might not matter, and you can use `--no-labels` to skip the check.
- Data type: `BOOLEAN`
- Default: false
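As a hedged sketch of what such label configuration might look like in the topology file (the label name `host`, the IP address, and the port are assumptions for this example; see Schedule Replicas by Topology Labels for the authoritative format):

```yaml
# Hypothetical example: two TiKV instances on the same physical machine
# share the same "host" label so that PD avoids placing multiple
# replicas of one Region on that machine. Other per-instance settings
# (status ports, deploy/data directories) are omitted for brevity.
server_configs:
  pd:
    replication.location-labels: ["host"]
tikv_servers:
  - host: 10.0.1.30
    config:
      server.labels: { host: machine-1 }
  - host: 10.0.1.30
    port: 20161
    config:
      server.labels: { host: machine-1 }
```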
### --skip-create-user

- During cluster deployment, tiup-cluster checks whether the user name specified in the topology file exists. If it does not, tiup-cluster creates one. To skip this check, you can use the `--skip-create-user` option.
- Data type: `BOOLEAN`
- Default: false
### -h, --help

- Prints the help information.
- Data type: `BOOLEAN`
- Default: false
## Output

The log of the scaling out.