
[SPARK-21786][SQL] The 'spark.sql.parquet.compression.codec' and 'spark.sql.orc.compression.codec' configurations don't take effect on Hive table writing #20087

Closed
wants to merge 59 commits
Commits (59)
9bbfe6e
[SPARK-21786][SQL] When acquiring 'compressionCodecClassName' in 'Par…
fjh100456 Dec 25, 2017
48cf108
[SPARK-21786][SQL] When acquiring 'compressionCodecClassName' in 'Par…
fjh100456 Dec 25, 2017
5dbd3ed
spark.sql.parquet.compression.codec[SPARK-21786][SQL] When acquiring …
fjh100456 Dec 25, 2017
5124f1b
spark.sql.parquet.compression.codec[SPARK-21786][SQL] When acquiring …
fjh100456 Dec 25, 2017
6907a3e
Make compression codec take effect in hive table writing.
fjh100456 Dec 25, 2017
67e40d4
Modify test
fjh100456 Dec 25, 2017
e2526ca
Separate the pr
fjh100456 Dec 26, 2017
8ae86ee
Add test case with the table containing mixed compression codec
fjh100456 Dec 26, 2017
94ac716
Revert back
fjh100456 Dec 26, 2017
43e041f
Revert back
fjh100456 Dec 26, 2017
ee0c558
Add a new line at the end of file
fjh100456 Dec 26, 2017
e9f705d
Fix scala style
fjh100456 Jan 2, 2018
d3aa7a0
Fix scala style
fjh100456 Jan 2, 2018
5244aaf
[SPARK-22897][CORE] Expose stageAttemptId in TaskContext
advancedxy Jan 2, 2018
b96a213
[SPARK-22938] Assert that SQLConf.get is accessed only on the driver.
juliuszsompolski Jan 3, 2018
a05e85e
[SPARK-22934][SQL] Make optional clauses order insensitive for CREATE…
gatorsmile Jan 3, 2018
b962488
[SPARK-20236][SQL] dynamic partition overwrite
cloud-fan Jan 3, 2018
27c949d
[SPARK-22932][SQL] Refactor AnalysisContext
gatorsmile Jan 2, 2018
79f7263
[SPARK-22896] Improvement in String interpolation
chetkhatri Jan 3, 2018
a51212b
[SPARK-20960][SQL] make ColumnVector public
cloud-fan Jan 3, 2018
f51c8fd
[SPARK-22944][SQL] improve FoldablePropagation
cloud-fan Jan 4, 2018
1860a43
[SPARK-22933][SPARKR] R Structured Streaming API for withWatermark, t…
felixcheung Jan 4, 2018
a7cfd6b
[SPARK-22950][SQL] Handle ChildFirstURLClassLoader's parent
yaooqinn Jan 4, 2018
eb99b8a
[SPARK-22945][SQL] add java UDF APIs in the functions object
cloud-fan Jan 4, 2018
1f5e354
[SPARK-22939][PYSPARK] Support Spark UDF in registerFunction
gatorsmile Jan 4, 2018
bcfeef5
[SPARK-22771][SQL] Add a missing return statement in Concat.checkInpu…
maropu Jan 4, 2018
cd92913
[SPARK-21475][CORE][2ND ATTEMPT] Change to use NIO's Files API for ex…
jerryshao Jan 4, 2018
bc4bef4
[SPARK-22850][CORE] Ensure queued events are delivered to all event q…
Jan 4, 2018
2ab4012
[SPARK-22948][K8S] Move SparkPodInitContainer to correct package.
Jan 4, 2018
84707f0
[SPARK-22953][K8S] Avoids adding duplicated secret volumes when init-…
liyinan926 Jan 4, 2018
ea9da61
[SPARK-22960][K8S] Make build-push-docker-images.sh more dev-friendly.
Jan 5, 2018
158f7e6
[SPARK-22957] ApproxQuantile breaks if the number of rows exceeds MaxInt
juliuszsompolski Jan 5, 2018
145820b
[SPARK-22825][SQL] Fix incorrect results of Casting Array to String
maropu Jan 5, 2018
5b524cc
[SPARK-22949][ML] Apply CrossValidator approach to Driver/Distributed…
MrBago Jan 5, 2018
f9dcdbc
[SPARK-22757][K8S] Enable spark.jars and spark.files in KUBERNETES mode
liyinan926 Jan 5, 2018
fd4e304
[SPARK-22961][REGRESSION] Constant columns should generate QueryPlanC…
adrian-ionescu Jan 5, 2018
0a30e93
[SPARK-22940][SQL] HiveExternalCatalogVersionsSuite should succeed on…
bersprockets Jan 5, 2018
d1f422c
[SPARK-13030][ML] Follow-up cleanups for OneHotEncoderEstimator
jkbradley Jan 5, 2018
55afac4
[SPARK-22914][DEPLOY] Register history.ui.port
gerashegalov Jan 6, 2018
bf85301
[SPARK-22937][SQL] SQL elt output binary for binary inputs
maropu Jan 6, 2018
3e3e938
[SPARK-22960][K8S] Revert use of ARG base_image in images
liyinan926 Jan 6, 2018
7236914
[SPARK-22930][PYTHON][SQL] Improve the description of Vectorized UDFs…
icexelloss Jan 6, 2018
e6449e8
[SPARK-22793][SQL] Memory leak in Spark Thrift Server
Jan 6, 2018
0377755
[SPARK-21786][SQL] When acquiring 'compressionCodecClassName' in 'Par…
fjh100456 Jan 6, 2018
b66700a
[SPARK-22901][PYTHON][FOLLOWUP] Adds the doc for asNondeterministic f…
HyukjinKwon Jan 6, 2018
f9e7b0c
[HOTFIX] Fix style checking failure
gatorsmile Jan 6, 2018
285d342
[SPARK-22973][SQL] Fix incorrect results of Casting Map to String
maropu Jan 7, 2018
bd1a80a
Merge remote-tracking branch 'upstream/branch-2.3'
fjh100456 Jan 8, 2018
584cdc2
Merge pull request #2 from apache/master
fjh100456 Jan 8, 2018
5b150bc
Fix test issue
fjh100456 Jan 8, 2018
2337edd
Merge pull request #1 from apache/master
fjh100456 Jan 8, 2018
43e7eb5
Merge branch 'master' of https://github.com/fjh100456/spark
fjh100456 Jan 9, 2018
4b89b44
consider the precedence of `hive.exec.compress.output`
fjh100456 Jan 11, 2018
6cf32e0
Resume to private and add public function
fjh100456 Jan 19, 2018
365c5bf
Resume to private and add public function
fjh100456 Jan 19, 2018
99271d6
Fix test issue
fjh100456 Jan 19, 2018
2b9dfbe
Fix test issue
fjh100456 Jan 20, 2018
5b5e1df
Fix style issue
fjh100456 Jan 20, 2018
118f788
Fix style issue
fjh100456 Jan 20, 2018
@@ -67,4 +67,6 @@ object OrcOptions {
"snappy" -> "SNAPPY",
"zlib" -> "ZLIB",
"lzo" -> "LZO")

def getORCCompressionCodecName(name: String): String = shortOrcCompressionCodecNames(name)
}
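The helper above simply exposes the existing short-name map. A minimal usage sketch (not part of the diff), assuming only the mappings shown above:

import org.apache.spark.sql.execution.datasources.orc.OrcOptions

// Resolves a lower-case short name to the value stored in the ORC
// compression table property, e.g. "zlib" -> "ZLIB".
val orcCodec: String = OrcOptions.getORCCompressionCodecName("zlib")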
@@ -28,7 +28,7 @@ import org.apache.spark.sql.internal.SQLConf
/**
* Options for the Parquet data source.
*/
private[parquet] class ParquetOptions(
class ParquetOptions(
@transient private val parameters: CaseInsensitiveMap[String],
@transient private val sqlConf: SQLConf)
extends Serializable {
@@ -82,4 +82,7 @@ object ParquetOptions {
"snappy" -> CompressionCodecName.SNAPPY,
"gzip" -> CompressionCodecName.GZIP,
"lzo" -> CompressionCodecName.LZO)

def getParquetCompressionCodecName(name: String): String =
shortParquetCompressionCodecNames(name).name()
Member:
def getParquetCompressionCodecName(name: String): String = {
  shortParquetCompressionCodecNames(name).name()
}

}
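Analogous to the ORC helper, a small hedged sketch (not part of the diff) of the new Parquet helper; the only assumption is the short-name map shown above:

import org.apache.spark.sql.execution.datasources.parquet.ParquetOptions

// "gzip" is looked up in shortParquetCompressionCodecNames and the enum's
// name() is returned, i.e. "GZIP", the form ParquetOutputFormat expects.
val parquetCodec: String = ParquetOptions.getParquetCompressionCodecName("gzip")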
@@ -19,7 +19,16 @@ package org.apache.spark.sql.hive.execution

import java.util.Locale

import scala.collection.JavaConverters._

import org.apache.hadoop.hive.ql.plan.TableDesc
import org.apache.orc.OrcConf.COMPRESS
import org.apache.parquet.hadoop.ParquetOutputFormat

import org.apache.spark.sql.catalyst.util.CaseInsensitiveMap
import org.apache.spark.sql.execution.datasources.orc.OrcOptions
import org.apache.spark.sql.execution.datasources.parquet.ParquetOptions
import org.apache.spark.sql.internal.SQLConf

/**
* Options for the Hive data source. Note that rule `DetermineHiveSerde` will extract Hive
@@ -102,4 +111,17 @@ object HiveOptions {
"collectionDelim" -> "colelction.delim",
"mapkeyDelim" -> "mapkey.delim",
"lineDelim" -> "line.delim").map { case (k, v) => k.toLowerCase(Locale.ROOT) -> v }

def getHiveWriteCompression(tableInfo: TableDesc, sqlConf: SQLConf): Option[(String, String)] = {
val tableProps = tableInfo.getProperties.asScala.toMap
tableInfo.getOutputFileFormatClassName.toLowerCase(Locale.ROOT) match {
case formatName if formatName.endsWith("parquetoutputformat") =>
val compressionCodec = new ParquetOptions(tableProps, sqlConf).compressionCodecClassName
Option((ParquetOutputFormat.COMPRESSION, compressionCodec))
case formatName if formatName.endsWith("orcoutputformat") =>
val compressionCodec = new OrcOptions(tableProps, sqlConf).compressionCodec
Option((COMPRESS.getAttribute, compressionCodec))
case _ => None
}
}
}
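A hedged usage sketch for getHiveWriteCompression (the helper name applyWriteCompression is made up for illustration); it mirrors how the SaveAsHiveFile change in the next diff consumes the result:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.ql.plan.TableDesc
import org.apache.spark.sql.hive.execution.HiveOptions
import org.apache.spark.sql.internal.SQLConf

def applyWriteCompression(
    tableInfo: TableDesc,
    sqlConf: SQLConf,
    hadoopConf: Configuration): Unit = {
  // For Parquet tables the key is ParquetOutputFormat.COMPRESSION ("parquet.compression"),
  // for ORC tables it is COMPRESS.getAttribute ("orc.compress"); other formats yield None.
  HiveOptions.getHiveWriteCompression(tableInfo, sqlConf).foreach {
    case (key, codec) => hadoopConf.set(key, codec)
  }
}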
@@ -55,18 +55,28 @@ private[hive] trait SaveAsHiveFile extends DataWritingCommand {
customPartitionLocations: Map[TablePartitionSpec, String] = Map.empty,
partitionAttributes: Seq[Attribute] = Nil): Set[String] = {

val isCompressed = hadoopConf.get("hive.exec.compress.output", "false").toBoolean
val isCompressed =
fileSinkConf.getTableInfo.getOutputFileFormatClassName.toLowerCase(Locale.ROOT) match {
case formatName if formatName.endsWith("orcoutputformat") =>
// For ORC, "mapreduce.output.fileoutputformat.compress",
// "mapreduce.output.fileoutputformat.compress.codec", and
// "mapreduce.output.fileoutputformat.compress.type"
// have no impact because it uses table properties to store compression information.
Member:
Although this is the existing behavior, could you investigate how Hive behaves when Parquet.Compress is set (https://issues.apache.org/jira/browse/HIVE-7858)? Is it the same as ORC?

Contributor Author:
Sure, I'll do it in the coming days.

Contributor Author (fjh100456, Jan 23, 2018):
For Parquet, when using a Hive client, parquet.compression has a higher priority than mapreduce.output.fileoutputformat.compress, and table-level compression (set by TBLPROPERTIES) has the highest priority. parquet.compression set from the CLI also takes precedence over mapreduce.output.fileoutputformat.compress.

After this PR, the priority has not changed. If table-level compression is set, other compression settings do not take effect even when mapreduce.output.fileoutputformat.compress is set, which is the same as Hive. But parquet.compression set from the Spark CLI does not take effect unless hive.exec.compress.output is set to true. This may be because we do not read parquet.compression from the session, and I wonder whether that is necessary, since we have spark.sql.parquet.compression.codec instead.

For ORC, hive.exec.compress.output and mapreduce.output.fileoutputformat.compress really have no impact, but table-level compression (set by TBLPROPERTIES) always takes effect. orc.compression set from the Spark CLI does not take effect either, even when hive.exec.compress.output is set to true, which is different from Parquet.
Another question: the comment says ORC uses table properties to store compression information, but in manual testing I found that ORC tables can also contain mixed compression codecs and the data can still be read back correctly, so maybe I am not clear about what the comment means.

My Hive version for this test is 1.1.0. It is actually a little difficult for me to get a runnable Hive client of a higher version.

Member:
The comment might not be correct now. We need to follow how the latest Hive works, if possible. The best way to try Hive (and other RDBMSs) is using Docker. Maybe you can try Docker?

Contributor Author:
Ok, I'll try it.

false
case _ => hadoopConf.get("hive.exec.compress.output", "false").toBoolean
}

if (isCompressed) {
// Please note that isCompressed, "mapreduce.output.fileoutputformat.compress",
// "mapreduce.output.fileoutputformat.compress.codec", and
// "mapreduce.output.fileoutputformat.compress.type"
// have no impact on ORC because it uses table properties to store compression information.
hadoopConf.set("mapreduce.output.fileoutputformat.compress", "true")
fileSinkConf.setCompressed(true)
fileSinkConf.setCompressCodec(hadoopConf
.get("mapreduce.output.fileoutputformat.compress.codec"))
fileSinkConf.setCompressType(hadoopConf
.get("mapreduce.output.fileoutputformat.compress.type"))
} else {
// Set compression by priority
HiveOptions.getHiveWriteCompression(fileSinkConf.getTableInfo, sparkSession.sessionState.conf)
.foreach { case (compression, codec) => hadoopConf.set(compression, codec) }
}

val committer = FileCommitProtocol.instantiate(
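To make the precedence discussed in the review thread concrete, a hedged end-to-end sketch (spark is assumed to be a Hive-enabled SparkSession; the table name is made up, and the outcome stated is the behavior this PR claims: table-level compression beats the session-level codec):

// Session-level default, used when the table sets no codec of its own.
spark.conf.set("spark.sql.parquet.compression.codec", "snappy")

// Table-level property: the highest priority for writes into this table.
spark.sql(
  """CREATE TABLE t_parquet (id INT)
    |STORED AS PARQUET
    |TBLPROPERTIES ('parquet.compression' = 'GZIP')""".stripMargin)

// Files written here are gzip-compressed (the table property wins); a table
// without the property would fall back to the snappy session default.
spark.sql("INSERT INTO t_parquet VALUES (1)")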