Added a description about default host for sbin/slaves
sarutak committed Sep 22, 2014
1 parent 1bba8a9 commit 7120a0c
Showing 1 changed file with 1 addition and 1 deletion.
docs/spark-standalone.md: 2 changes (1 addition & 1 deletion)
@@ -62,7 +62,7 @@ Finally, the following configuration options can be passed to the master and worker:

# Cluster Launch Scripts

-To launch a Spark standalone cluster with the launch scripts, you need to create a file called `conf/slaves` in your Spark directory, which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. The master machine must be able to access each of the slave machines via password-less `ssh` (using a private key). For testing, you can just put `localhost` in this file.
+To launch a Spark standalone cluster with the launch scripts, you need to create a file called `conf/slaves` in your Spark directory, which should contain the hostnames of all the machines where you would like to start Spark workers, one per line. If `conf/slaves` does not exist, the launch scripts default to a single host, `localhost`, which is useful for testing. The master machine must be able to access each of the slave machines via password-less `ssh` (using a private key).

Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/bin`:

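As context for the change above, here is a minimal sketch of the workflow the edited paragraph describes. The hostnames are illustrative, and it assumes a standard Spark layout where the standalone launch scripts live under `sbin/`.

```sh
# Illustrative conf/slaves file: one worker hostname per line.
# If this file is absent, the launch scripts fall back to a single
# entry containing localhost, which is the behavior the added sentence documents.
cat > conf/slaves <<'EOF'
worker1.example.com
worker2.example.com
EOF

# Start the master and one worker per entry in conf/slaves.
# Requires password-less ssh from this machine to each worker.
./sbin/start-all.sh

# Tear the cluster back down.
./sbin/stop-all.sh
```

The per-role scripts (`sbin/start-master.sh`, `sbin/start-slaves.sh`, and their `stop-` counterparts) cover the same steps individually.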
