diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 6949c27438edb..a6eb29a50cf02 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -582,7 +582,7 @@ Configuration of Parquet can be done using the `setConf` method on SQLContext or
spark.sql.parquet.cacheMetadata |
- false |
+ true |
Turns on caching of Parquet schema metadata. Can speed up querying of static data.
|
@@ -823,7 +823,7 @@ Configuration of in-memory caching can be done using the `setConf` method on SQL
Property Name | Default | Meaning |
spark.sql.inMemoryColumnarStorage.compressed |
- false |
+ true |
When set to true, Spark SQL will automatically select a compression codec for each column based
on statistics of the data.
@@ -831,7 +831,7 @@ Configuration of in-memory caching can be done using the `setConf` method on SQL
|
spark.sql.inMemoryColumnarStorage.batchSize |
- 1000 |
+ 10000 |
Controls the size of batches for columnar caching. Larger batch sizes can improve memory utilization
and compression, but risk OOMs when caching data.
@@ -849,7 +849,7 @@ that these options will be deprecated in future release as more optimizations ar
|
Property Name | Default | Meaning |
spark.sql.autoBroadcastJoinThreshold |
- 10000 |
+ 10485760 (10 MB) |
Configures the maximum size in bytes for a table that will be broadcast to all worker nodes when
performing a join. By setting this value to -1 broadcasting can be disabled. Note that currently
|
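
The new defaults above can also be overridden at runtime. A minimal sketch using the `setConf` method on SQLContext that this guide describes (the `sqlContext` value is assumed to be an already-constructed SQLContext; the values shown are just the defaults from the tables above):

```scala
// Sketch: setting the properties documented above at runtime.
// Assumes `sqlContext` is an existing org.apache.spark.sql.SQLContext.
sqlContext.setConf("spark.sql.parquet.cacheMetadata", "true")
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")
sqlContext.setConf("spark.sql.inMemoryColumnarStorage.batchSize", "10000")
// 10485760 bytes = 10 MB; per the table above, "-1" disables broadcasting.
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", "10485760")
```

As the guide notes, the same properties can alternatively be set from SQL with `SET key=value` commands.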