Commit 2d73acc — Update docs defaults.
marmbrus committed Nov 3, 2014 · 1 parent d63d2d5
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions docs/sql-programming-guide.md
@@ -582,7 +582,7 @@ Configuration of Parquet can be done using the `setConf` method on SQLContext or
</tr>
<tr>
<td><code>spark.sql.parquet.cacheMetadata</code></td>
-  <td>false</td>
+  <td>true</td>
<td>
Turns on caching of Parquet schema metadata. Can speed up querying of static data.
</td>
@@ -823,15 +823,15 @@ Configuration of in-memory caching can be done using the `setConf` method on SQL
<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
<tr>
<td><code>spark.sql.inMemoryColumnarStorage.compressed</code></td>
-  <td>false</td>
+  <td>true</td>
<td>
When set to true Spark SQL will automatically select a compression codec for each column based
on statistics of the data.
</td>
</tr>
<tr>
<td><code>spark.sql.inMemoryColumnarStorage.batchSize</code></td>
-  <td>1000</td>
+  <td>10000</td>
<td>
Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization
and compression, but risk OOMs when caching data.
@@ -849,7 +849,7 @@ that these options will be deprecated in future release as more optimizations ar
<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
<tr>
<td><code>spark.sql.autoBroadcastJoinThreshold</code></td>
-  <td>10000</td>
+  <td>10485760 (10 MB)</td>
<td>
Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when
performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently
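The guide text in these hunks says each of these properties can be set via the `setConf` method on SQLContext. A minimal sketch of that API, using the new default values from this commit (the app name, master, and object name are illustrative; on Spark 1.2+ these calls are redundant since they merely restate the defaults):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Sketch only: assumes a Spark 1.x-era SQLContext; names here are illustrative.
object SqlDefaultsSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("sql-defaults").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Cache Parquet schema metadata (new default: true).
    sqlContext.setConf("spark.sql.parquet.cacheMetadata", "true")
    // Compress in-memory columnar storage per column (new default: true).
    sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")
    // Rows per columnar cache batch (new default: 10000).
    sqlContext.setConf("spark.sql.inMemoryColumnarStorage.batchSize", "10000")
    // Broadcast-join threshold in bytes; 10485760 = 10 * 1024 * 1024 (10 MB).
    sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", (10 * 1024 * 1024).toString)

    sc.stop()
  }
}
```

Note that `setConf` takes both key and value as strings, which is why the numeric thresholds are passed as string literals or converted with `toString`.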
