CR from @aarondav - move config, clarify for standalone mode
Evan Chan committed Apr 3, 2014
1 parent 9f10d96 commit 0689995
Showing 1 changed file with 26 additions and 24 deletions.
50 changes: 26 additions & 24 deletions docs/configuration.md
@@ -333,6 +333,32 @@ Apart from these, the following properties are also available, and may be useful
receives no heartbeats.
</td>
</tr>
<tr>
<td>spark.worker.cleanup.enabled</td>
<td>true</td>
<td>
Enable periodic cleanup of worker / application directories. Note that this only affects standalone
mode, as YARN works differently.
</td>
</tr>
<tr>
<td>spark.worker.cleanup.interval</td>
<td>1800 (30 minutes)</td>
<td>
Controls the interval, in seconds, at which the worker cleans up old application work dirs
on the local machine.
</td>
</tr>
<tr>
<td>spark.worker.cleanup.appDataTtl</td>
<td>7 * 24 * 3600 (7 days)</td>
<td>
    The number of seconds to retain application work directories on each worker. This is a time-to-live,
    and the appropriate value depends on how much disk space is available. Application logs and jars are
    downloaded into each application's work directory, so over time these directories can quickly fill up
    the disk, especially if you run jobs very frequently.
</td>
</tr>
<tr>
<td>spark.akka.frameSize</td>
<td>10</td>
@@ -586,30 +612,6 @@ Apart from these, the following properties are also available, and may be useful
Number of cores to allocate for each task.
</td>
</tr>
<tr>
<td>spark.worker.cleanup.enabled</td>
<td>true</td>
<td>
Enable periodic cleanup of worker / application directories
</td>
</tr>
<tr>
<td>spark.worker.cleanup.interval</td>
<td>1800 (30 minutes)</td>
<td>
Controls the interval, in seconds, at which the worker cleans up old application work dirs
on the local machine.
</td>
</tr>
<tr>
<td>spark.worker.cleanup.appDataTtl</td>
<td>7 * 24 * 3600 (7 days)</td>
<td>
The number of seconds to retain application work directories on each worker. This is a Time To Live
and should depend on the amount of available disk space you have. Application logs and jars are
downloaded to each application work dir. Over time, the work dirs can quickly fill up disk space,
    especially if you run jobs very frequently.
  </td>
</tr>
</table>
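In standalone mode, worker-side properties such as these are typically passed to the worker JVM through `SPARK_WORKER_OPTS` in `conf/spark-env.sh`. A minimal sketch (the values below are illustrative, not recommendations):

```shell
# conf/spark-env.sh -- illustrative values, assuming Spark standalone mode.
# Enable periodic cleanup of application work directories on this worker,
# sweep every 30 minutes (1800 s), and delete directories older than 7 days.
SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=$((7 * 24 * 3600))"
```

Since the cleanup runs on each worker, this setting must be present in `spark-env.sh` on every worker machine; it has no effect on YARN deployments.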

## Viewing Spark Properties
