From 26283a5e446bc18346a9d46fc1246515425919c9 Mon Sep 17 00:00:00 2001
From: Timothy Chen
Date: Tue, 18 Nov 2014 14:08:42 -0800
Subject: [PATCH] Add mesos specific configurations into doc

---
 docs/running-on-mesos.md | 41 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 1073abb202c56..945906f2ac725 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -183,6 +183,47 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
 In either case, HDFS runs separately from Hadoop MapReduce, without
 being scheduled through Mesos.
 
+# Configuration
+
+See the [configuration page](configuration.html) for information on Spark configurations. The following configs are specific to Spark on Mesos.
+
+#### Spark Properties
+
+<table class="table">
+<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
+<tr>
+  <td><code>spark.mesos.coarse</code></td>
+  <td>false</td>
+  <td>
+    Set the run mode for Spark on Mesos. For more information about the run mode, refer to the Mesos Run Modes section above.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.extra.cores</code></td>
+  <td>0</td>
+  <td>
+    Set the extra number of CPUs to request per task.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.executor.home</code></td>
+  <td>SPARK_HOME</td>
+  <td>
+    The location where the Mesos executor will look for the Spark binaries to execute, defaulting to the <code>SPARK_HOME</code> setting.
+    This property is only used when no <code>spark.executor.uri</code> is provided, and assumes Spark is installed at the specified location
+    on each slave.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.mesos.executor.memoryOverhead</code></td>
+  <td>384</td>
+  <td>
+    The amount of memory, in MB, that the Mesos executor will request for the task, to account for the overhead of running the executor itself.
+    The final total amount of memory allocated is the maximum of the executor memory plus <code>memoryOverhead</code>, and the overhead fraction (1.07) times the executor memory.
+  </td>
+</tr>
+</table>
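+
+As a minimal sketch of how these properties can be set at submit time (the master URL <code>host:5050</code> and the values below are illustrative placeholders, not recommendations):
+
+{% highlight bash %}
+./bin/spark-shell --master mesos://host:5050 \
+  --conf spark.mesos.coarse=true \
+  --conf spark.mesos.extra.cores=1 \
+  --conf spark.mesos.executor.memoryOverhead=512
+{% endhighlight %}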
+
 # Troubleshooting and Debugging
 
 A few places to look during debugging: