[CARMEL-6306] backport [SPARK-33399][SPARK-31078] Normalize output partitioning and sortorder with respect to aliases to avoid unneeded exchange/sort nodes (#1092)

* [SPARK-31078][SQL] Respect aliases in output ordering

Currently, in the following scenario, an unnecessary `Sort` node is introduced:
```scala
withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> "0") {
  val df = (0 until 20).toDF("i").as("df")
  df.repartition(8, df("i")).write.format("parquet")
    .bucketBy(8, "i").sortBy("i").saveAsTable("t")
  val t1 = spark.table("t")
  val t2 = t1.selectExpr("i as ii")
  t1.join(t2, t1("i") === t2("ii")).explain
}
```
```
== Physical Plan ==
*(3) SortMergeJoin [i#8], [ii#10], Inner
:- *(1) Project [i#8]
:  +- *(1) Filter isnotnull(i#8)
:     +- *(1) ColumnarToRow
:        +- FileScan parquet default.t[i#8] Batched: true, DataFilters: [isnotnull(i#8)], Format: Parquet, Location: InMemoryFileIndex[file:/..., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int>, SelectedBucketsCount: 8 out of 8
+- *(2) Sort [ii#10 ASC NULLS FIRST], false, 0    <==== UNNECESSARY
   +- *(2) Project [i#8 AS ii#10]
      +- *(2) Filter isnotnull(i#8)
         +- *(2) ColumnarToRow
            +- FileScan parquet default.t[i#8] Batched: true, DataFilters: [isnotnull(i#8)], Format: Parquet, Location: InMemoryFileIndex[file:/..., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int>, SelectedBucketsCount: 8 out of 8
```
Notice that `Sort [ii#10 ASC NULLS FIRST], false, 0` is introduced even though the underlying data is already sorted. This is because `outputOrdering` doesn't handle aliases correctly. This PR proposes to fix this issue.
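The gist of the fix can be sketched with a toy model (plain Scala with hypothetical names, not Spark's actual classes): if a `Project` emits `i AS ii`, an ordering inherited from the child on `i` can be rewritten into an equivalent ordering on `ii`, so the planner sees the output as already sorted and skips the `Sort`.

```scala
// Toy stand-ins for Spark's SortOrder and attribute references.
case class SortKey(attr: String, ascending: Boolean = true)

// Rewrite the child's ordering through the alias map produced by a Project.
def normalizeOrdering(childOrdering: Seq[SortKey],
                      aliases: Map[String, String]): Seq[SortKey] =
  childOrdering.map(k => k.copy(attr = aliases.getOrElse(k.attr, k.attr)))

val childOrdering = Seq(SortKey("i"))  // file scan output is sorted on i
val aliases = Map("i" -> "ii")         // Project renames i to ii
val normalized = normalizeOrdering(childOrdering, aliases)
println(normalized)                    // List(SortKey(ii,true))
```

Attributes that are not aliased pass through unchanged, which mirrors how the real fix only rewrites attributes found in the alias map.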

The change is needed to better handle aliases in `outputOrdering`.

Yes. With the fix, `explain` now prints the following, without the unnecessary `Sort`:
```
== Physical Plan ==
*(3) SortMergeJoin [i#8], [ii#10], Inner
:- *(1) Project [i#8]
:  +- *(1) Filter isnotnull(i#8)
:     +- *(1) ColumnarToRow
:        +- FileScan parquet default.t[i#8] Batched: true, DataFilters: [isnotnull(i#8)], Format: Parquet, Location: InMemoryFileIndex[file:/..., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int>, SelectedBucketsCount: 8 out of 8
+- *(2) Project [i#8 AS ii#10]
   +- *(2) Filter isnotnull(i#8)
      +- *(2) ColumnarToRow
         +- FileScan parquet default.t[i#8] Batched: true, DataFilters: [isnotnull(i#8)], Format: Parquet, Location: InMemoryFileIndex[file:/..., PartitionFilters: [], PushedFilters: [IsNotNull(i)], ReadSchema: struct<i:int>, SelectedBucketsCount: 8 out of 8
```

Tests added.

Closes #27842 from imback82/alias_aware_sort_order.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>

* [SPARK-33399][SQL] Normalize output partitioning and sortorder with respect to aliases to avoid unneeded exchange/sort nodes

This pull request removes unneeded exchange/sort nodes by correctly normalizing the output partitioning and sort order information with respect to aliases.

Example: consider this join of three tables:

```sql
SELECT t2id, t3.id as t3id
FROM (
    SELECT t1.id as t1id, t2.id as t2id
    FROM t1, t2
    WHERE t1.id = t2.id
) t12, t3
WHERE t1id = t3.id
```

The plan for this looks like:

```
*(9) Project [t2id#1034L, id#1004L AS t3id#1035L]
+- *(9) SortMergeJoin [t1id#1033L], [id#1004L], Inner
   :- *(6) Sort [t1id#1033L ASC NULLS FIRST], false, 0
   :  +- Exchange hashpartitioning(t1id#1033L, 5), true, [id=#1343]   <==== UNNECESSARY
   :     +- *(5) Project [id#996L AS t1id#1033L, id#1000L AS t2id#1034L]
   :        +- *(5) SortMergeJoin [id#996L], [id#1000L], Inner
   :           :- *(2) Sort [id#996L ASC NULLS FIRST], false, 0
   :           :  +- Exchange hashpartitioning(id#996L, 5), true, [id=#1329]
   :           :     +- *(1) Range (0, 10, step=1, splits=2)
   :           +- *(4) Sort [id#1000L ASC NULLS FIRST], false, 0
   :              +- Exchange hashpartitioning(id#1000L, 5), true, [id=#1335]
   :                 +- *(3) Range (0, 20, step=1, splits=2)
   +- *(8) Sort [id#1004L ASC NULLS FIRST], false, 0
      +- Exchange hashpartitioning(id#1004L, 5), true, [id=#1349]
         +- *(7) Range (0, 30, step=1, splits=2)
```

In this plan, the marked `Exchange` could have been avoided, since the data is already partitioned on `t1.id`. This happens because the `AliasAwareOutputPartitioning` class handles aliases only for `HashPartitioning`. This change normalizes all output partitioning based on the aliasing that happens in `Project`.
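The normalization can be sketched with a small stand-alone model (toy types, not Spark's actual `Partitioning` hierarchy): the partitioning keys coming out of the inner join are rewritten through the `Project`'s alias map, so a partitioning on `id` below `Project [id AS t1id]` is recognized as a partitioning on `t1id` above it.

```scala
// Toy partitioning hierarchy standing in for Spark's physical.Partitioning.
sealed trait Partitioning
case class HashPartitioning(keys: Seq[String], numPartitions: Int) extends Partitioning
case object UnknownPartitioning extends Partitioning

// Rewrite partitioning keys through a Project's alias map; partitionings
// that carry no expressions pass through unchanged.
def normalizePartitioning(p: Partitioning, aliases: Map[String, String]): Partitioning =
  p match {
    case HashPartitioning(keys, n) =>
      HashPartitioning(keys.map(k => aliases.getOrElse(k, k)), n)
    case other => other
  }

// The join output is hash-partitioned on id; the Project renames id to t1id.
val normalized = normalizePartitioning(HashPartitioning(Seq("id"), 5), Map("id" -> "t1id"))
println(normalized)  // HashPartitioning(List(t1id),5)
```

With the normalized partitioning visible to `EnsureRequirements`, the outer join's distribution requirement on `t1id` is already satisfied and no new `Exchange` is inserted.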

The change is needed to remove unneeded exchanges.

No user-facing change.

New UT added.

On TPCDS 1000 scale, this change improves the performance of query 95 from 330 seconds to 170 seconds by removing the extra Exchange.

Closes #30300 from prakharjain09/SPARK-33399-outputpartitioning.

Authored-by: Prakhar Jain <prakharjain09@gmail.com>
Signed-off-by: Takeshi Yamamuro <yamamuro@apache.org>

* [CARMEL-6306] Fix ut

* [CARMEL-6306] Fix alias not compatible with ebay skew implementation

Co-authored-by: Terry Kim <yuminkim@gmail.com>
Co-authored-by: Prakhar Jain <prakharjain09@gmail.com>
3 people authored and GitHub Enterprise committed Nov 4, 2022
1 parent 5ba7783 commit 708a1d8
Showing 9 changed files with 294 additions and 67 deletions.
@@ -0,0 +1,81 @@
```scala
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.spark.sql.execution

import org.apache.spark.sql.catalyst.expressions.{Alias, AttributeMap, AttributeReference, Expression, NamedExpression, SortOrder}
import org.apache.spark.sql.catalyst.plans.physical.{CoalescedPartitioning, Partitioning}

/**
 * A trait that provides functionality to handle aliases in the `outputExpressions`.
 */
trait AliasAwareOutputExpression extends UnaryExecNode {
  protected def outputExpressions: Seq[NamedExpression]

  private lazy val aliasMap = AttributeMap(outputExpressions.collect {
    case a @ Alias(child: AttributeReference, _) => (child, a.toAttribute)
  })

  protected def hasAlias: Boolean = aliasMap.nonEmpty

  protected def normalizeExpression(exp: Expression): Expression = {
    exp.transform {
      case attr: AttributeReference => aliasMap.getOrElse(attr, attr)
    }
  }
}

/**
 * A trait that handles aliases in the `outputExpressions` to produce `outputPartitioning` that
 * satisfies distribution requirements.
 */
trait AliasAwareOutputPartitioning extends AliasAwareOutputExpression {
  final override def outputPartitioning: Partitioning = {
    if (hasAlias) {
      child.outputPartitioning match {
        case CoalescedPartitioning(p, specs) =>
          CoalescedPartitioning(replaceAlias(p), specs)
        case other => replaceAlias(other)
      }
    } else {
      child.outputPartitioning
    }
  }

  private def replaceAlias(part: Partitioning): Partitioning = {
    part match {
      case e: Expression =>
        normalizeExpression(e).asInstanceOf[Partitioning]
      case other => other
    }
  }
}

/**
 * A trait that handles aliases in the `orderingExpressions` to produce `outputOrdering` that
 * satisfies ordering requirements.
 */
trait AliasAwareOutputOrdering extends AliasAwareOutputExpression {
  protected def orderingExpressions: Seq[SortOrder]

  final override def outputOrdering: Seq[SortOrder] = {
    if (hasAlias) {
      orderingExpressions.map(normalizeExpression(_).asInstanceOf[SortOrder])
    } else {
      orderingExpressions
    }
  }
}
```
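To see how the two traits compose, here is a self-contained sketch (toy names, no Spark dependency, not the actual API): an operator declares its output expressions once, and both its partitioning and its ordering are normalized through the same alias map.

```scala
// Toy model of an alias-aware operator: (childName, Some(alias)) marks a rename.
trait AliasAware {
  def outputExpressions: Seq[(String, Option[String])]
  lazy val aliasMap: Map[String, String] =
    outputExpressions.collect { case (c, Some(a)) => c -> a }.toMap
  def hasAlias: Boolean = aliasMap.nonEmpty
  def normalize(attr: String): String = aliasMap.getOrElse(attr, attr)
}

// A stand-in for ProjectExec carrying its child's physical properties.
case class Project(outputExpressions: Seq[(String, Option[String])],
                   childPartitionKeys: Seq[String],
                   childOrdering: Seq[String]) extends AliasAware {
  def outputPartitioning: Seq[String] =
    if (hasAlias) childPartitionKeys.map(normalize) else childPartitionKeys
  def outputOrdering: Seq[String] =
    if (hasAlias) childOrdering.map(normalize) else childOrdering
}

// Project [i AS ii] over a child partitioned and sorted on i:
val p = Project(Seq("i" -> Some("ii")), childPartitionKeys = Seq("i"), childOrdering = Seq("i"))
println(p.outputPartitioning)  // List(ii)
println(p.outputOrdering)      // List(ii)
```

This mirrors why the operators below only need to mix in the traits and supply `outputExpressions` (and, for ordering, `orderingExpressions`).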

This file was deleted.

```diff
@@ -53,7 +53,8 @@ case class HashAggregateExec(
     resultExpressions: Seq[NamedExpression],
     child: SparkPlan)
   extends BaseAggregateExec
-  with BlockingOperatorWithCodegen {
+  with BlockingOperatorWithCodegen
+  with AliasAwareOutputPartitioning {
 
   require(HashAggregateExec.supportsAggregate(aggregateBufferAttributes))
```
```diff
@@ -24,7 +24,7 @@
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.expressions.aggregate._
 import org.apache.spark.sql.catalyst.plans.physical._
 import org.apache.spark.sql.catalyst.util.truncatedString
-import org.apache.spark.sql.execution.{AliasAwareOutputPartitioning, SparkPlan}
+import org.apache.spark.sql.execution.{AliasAwareOutputOrdering, AliasAwareOutputPartitioning, SparkPlan}
 import org.apache.spark.sql.execution.metric.SQLMetrics
 
 /**
@@ -38,7 +38,10 @@ case class SortAggregateExec(
     initialInputBufferOffset: Int,
     resultExpressions: Seq[NamedExpression],
     child: SparkPlan)
-  extends BaseAggregateExec {
+  extends BaseAggregateExec
+  with AliasAwareOutputPartitioning
+  with AliasAwareOutputOrdering {
 
   override lazy val metrics = Map(
     "numOutputRows" -> SQLMetrics.createMetric(sparkContext, "number of output rows"))
@@ -47,7 +50,7 @@ case class SortAggregateExec(
     groupingExpressions.map(SortOrder(_, Ascending)) :: Nil
   }
 
-  override def outputOrdering: Seq[SortOrder] = {
+  override protected def orderingExpressions: Seq[SortOrder] = {
     groupingExpressions.map(SortOrder(_, Ascending))
   }
```
```diff
@@ -42,7 +42,10 @@ import org.apache.spark.util.random.{BernoulliCellSampler, PoissonSampler}
 
 /** Physical plan for Project. */
 case class ProjectExec(projectList: Seq[NamedExpression], child: SparkPlan)
-  extends UnaryExecNode with CodegenSupport with AliasAwareOutputPartitioning {
+  extends UnaryExecNode
+  with CodegenSupport
+  with AliasAwareOutputPartitioning
+  with AliasAwareOutputOrdering {
 
   override def output: Seq[Attribute] = projectList.map(_.toAttribute)
@@ -83,10 +86,10 @@ case class ProjectExec(projectList: Seq[NamedExpression], child: SparkPlan)
     }
   }
 
-  override def outputOrdering: Seq[SortOrder] = child.outputOrdering
-
   override protected def outputExpressions: Seq[NamedExpression] = projectList
 
+  override protected def orderingExpressions: Seq[SortOrder] = child.outputOrdering
+
   override def verboseStringWithOperatorId(): String = {
     s"""
        |$formattedNodeName
```
```diff
@@ -1404,7 +1404,7 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan
       assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 3)
       if(joinType != "LEFT SEMI" && joinType != "LEFT ANTI") {
         // No extra sort on left side before last sort merge join
-        assert(collect(plan) { case _: SortExec => true }.size === 6)
+        assert(collect(plan) { case _: SortExec => true }.size === 5)
       }
     }
```
```diff
@@ -196,7 +196,7 @@ class CoalesceShufflePartitionsSuite
       val shuffleReaders = finalPlan.collect {
         case c @ CoalescedShuffleReader() => c
       }
-      assert(shuffleReaders.length === 1)
+      assert(shuffleReaders.length === 0)
       minNumPostShufflePartitions match {
         case Some(numPartitions) =>
           shuffleReaders.foreach { reader =>
```