[SPARK-48012][SQL] SPJ: Support Transform Expressions for One Side Shuffle #46255
Conversation
Some implementation notes. SPARK-41471 works by giving the ShuffleExchangeExec side of the join a KeyGroupedPartitioning, created from the other side's KeyGroupedShuffleSpec as essentially a clone of it (with that side's partition expressions and values). That way both sides of the join have a KeyGroupedPartitioning and SPJ can apply. Previously, however, only plain attribute expressions were supported for the other side's partition expressions; this PR adds support for transform expressions (a conceptual sketch follows the limitations below). Code changes:
Some fixes:
Limitations:
This updates the BatchScanExec's outputPartitioning with the new numPartitions after the re-grouping by join key, but it does not update the ShuffleExchangeExec's outputPartitioning's numPartitions. Hence the error in a subsequent optimizer pass:
This can be reproduced by removing this check and running the relevant unit test added in this PR. It needs more investigation before it can be enabled in a follow-up PR.
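As mentioned in the implementation notes, here is a minimal, self-contained sketch of the idea (plain Scala, not Spark's internal APIs; `bucket` below is only an illustrative stand-in for a DSv2 partition transform such as bucket(numBuckets, col)): the side that gets shuffled clusters its rows with the same transform the scan side reports, so both sides end up with matching key groups.

```scala
// A conceptual sketch, not Spark's actual implementation.
object OneSideShuffleSketch {
  // Illustrative stand-in for a DSv2 bucket transform.
  def bucket(numBuckets: Int, value: Int): Int =
    ((value.hashCode % numBuckets) + numBuckets) % numBuckets

  def main(args: Array[String]): Unit = {
    val numBuckets = 4
    // Partition values the scan side reports, one per key group.
    val scanSidePartitionValues = 0 until numBuckets

    // Join keys on the side that needs the shuffle. Previously they could only be
    // clustered on a plain attribute; with transform support they can be clustered
    // on bucket(4, key) so they line up with the scan side's key groups.
    val shuffleSideKeys = Seq(10, 17, 3, 42, 8)
    val grouped = shuffleSideKeys.groupBy(key => bucket(numBuckets, key))

    scanSidePartitionValues.foreach { p =>
      println(s"partition value $p -> shuffled keys ${grouped.getOrElse(p, Seq.empty)}")
    }
  }
}
```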
2bb6375 to ca81d6f (force-push)
Sorry for the late review. This looks pretty good to me. I'll take another pass soon.
@@ -149,7 +150,9 @@ private[spark] class KeyGroupedPartitioner(
     override val numPartitions: Int) extends Partitioner {
   override def getPartition(key: Any): Int = {
     val keys = key.asInstanceOf[Seq[Any]]
-    valueMap.getOrElseUpdate(keys, Utils.nonNegativeMod(keys.hashCode, numPartitions))
+    val normalizedKeys = ArraySeq.from(keys)
Curious, what does this do? Why is it normalized?
IIRC, I hit a bug due to comparing different Seq types (more info in the PR description):
Normalize the valueMap key type in KeyGroupedPartitioner to use a specific Seq implementation class. Previously the partitioner's map was initialized with keys as Vector but then compared with keys as ArraySeq; these seem to have different hash codes, so new entries with new partition ids would always be created.
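A minimal sketch of that normalization, assuming a simplified partitioner (the class name is hypothetical and Utils.nonNegativeMod is inlined): every lookup key is converted to one concrete Seq implementation before it touches the map, so the same logical key always resolves to a single entry and a single partition id.

```scala
import scala.collection.immutable.ArraySeq
import scala.collection.mutable

// Simplified stand-in for KeyGroupedPartitioner, showing only the key normalization.
class KeyGroupedPartitionerSketch(numPartitions: Int) {
  private val valueMap = mutable.Map.empty[Seq[Any], Int]

  private def nonNegativeMod(x: Int, mod: Int): Int = ((x % mod) + mod) % mod

  def getPartition(key: Any): Int = {
    val keys = key.asInstanceOf[Seq[Any]]
    // Normalize to a single Seq implementation so a key built as a Vector and the
    // same key built as an ArraySeq hit the same map entry instead of being assigned
    // fresh partition ids.
    val normalizedKeys = ArraySeq.from(keys)
    valueMap.getOrElseUpdate(normalizedKeys,
      nonNegativeMod(normalizedKeys.hashCode, numPartitions))
  }
}
```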
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/TransformExpression.scala (outdated review thread, resolved)
override def canCreatePartitioning: Boolean = {
  // Allow one side shuffle for SPJ for now only if partially-clustered is not enabled
  // and for join keys less than partition keys only if transforms are not enabled.
  val checkExprType = if (SQLConf.get.v2BucketingAllowJoinKeysSubsetOfPartitionKeys) {
Trying to understand the reason behind this. Also, it might be better to add some logging here if it is easy.
IIRC, I hit a pretty hard bug when trying to enable the feature with v2BucketingAllowJoinKeysSubsetOfPartitionKeys (more in the PR description). As we may need to rethink the logic of v2BucketingAllowJoinKeysSubsetOfPartitionKeys to fix it, I was going to disable it for now and try to fix it in a subsequent PR.
I looked at it; maybe I will do the logging in another PR. There's no table name, so I'm not sure it's valuable to log the decision?
OK, sounds good.
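For readers skimming the thread, here is a hedged, self-contained model of the gating discussed above (the quoted diff is cut off, so the names, parameters, and exact conditions here are illustrative rather than Spark's actual code): transform expressions are accepted as partition expressions only when the join-keys-subset-of-partition-keys behavior is off, and one-side shuffle is skipped when partially-clustered distribution is enabled.

```scala
// Illustrative model only, not the real canCreatePartitioning implementation.
object CanCreatePartitioningSketch {
  sealed trait PartitionExpr
  case class AttributeExpr(name: String) extends PartitionExpr
  case class TransformExpr(name: String, args: Seq[PartitionExpr]) extends PartitionExpr

  def canCreatePartitioning(
      partitionExprs: Seq[PartitionExpr],
      partiallyClusteredEnabled: Boolean,
      joinKeysSubsetOfPartitionKeysEnabled: Boolean): Boolean = {
    // Plain attributes are always acceptable; transforms are accepted only when the
    // join-keys-subset-of-partition-keys behavior is off (that combination is deferred
    // to a follow-up, as discussed above).
    def checkExprType(e: PartitionExpr): Boolean = e match {
      case _: AttributeExpr => true
      case _: TransformExpr => !joinKeysSubsetOfPartitionKeysEnabled
    }
    // One-side shuffle is not attempted when partially-clustered distribution is enabled.
    !partiallyClusteredEnabled && partitionExprs.forall(checkExprType)
  }

  def main(args: Array[String]): Unit = {
    val exprs = Seq(TransformExpr("bucket", Seq(AttributeExpr("id"))))
    println(canCreatePartitioning(exprs, partiallyClusteredEnabled = false,
      joinKeysSubsetOfPartitionKeysEnabled = false)) // true
    println(canCreatePartitioning(exprs, partiallyClusteredEnabled = false,
      joinKeysSubsetOfPartitionKeysEnabled = true))  // false
  }
}
```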
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/physical/partitioning.scala (outdated review thread, resolved)
b74a524 to e7f0ec0 (force-push)
LGTM
Thanks! Merged to master/4.0
Closes apache#46255 from szehon-ho/spj_auto_bucket.
Authored-by: Szehon Ho <szehon.apache@gmail.com>
Signed-off-by: Chao Sun <chao@openai.com>
Why are the changes needed?
Support SPJ one-side shuffle if the other side has a partition transform expression.
How was this patch tested?
New unit test in KeyGroupedPartitioningSuite
Was this patch authored or co-authored using generative AI tooling?
No.