[SPARK-21786][SQL][FOLLOWUP] Add compressionCodec test for CTAS #22302
Conversation
What changes were proposed in this pull request? Since #20522 (resolved by @DongJoon) added compressionCodec support for CTAS, the CTAS scenario suggested by @gatorsmile in #20087 should be enabled. How was this patch tested? Added a test.
@gatorsmile @ueshin Can you trigger this? I checked the related JIRA and code, and I think these tests should pass in master.
@fjh100456 Can you format the PR description cleanly so others can understand it better?
ok to test.
Test build #95572 has finished for PR 22302 at commit
@maropu I've updated the PR description, thank you!
@fjh100456 . I'm @dongjoon-hyun . I think you put a wrong person in the PR description. :)
@fjh100456 please do not break the PR format; can you clean it up, too?
@dongjoon-hyun @maropu I'm so sorry, and I have changed the description. Is it ok now?
nit: it seems that you removed
btw, the behaviour differences of
The PR #20120 was closed; instead, SPARK-23355 resolved it.
+1, LGTM. cc @gatorsmile for final sign-off
Retest this please.
Test build #95736 has finished for PR 22302 at commit
Merged to master/branch-2.4.
## What changes were proposed in this pull request?

Before Apache Spark 2.3, table properties were ignored when writing data to a Hive table (created with the STORED AS PARQUET/ORC syntax), because the compression configurations were not passed to the FileFormatWriter in hadoopConf. This was fixed in #20087. However, for CTAS with the USING PARQUET/ORC syntax, table properties were still ignored during convertMetastore, so the test case for CTAS was not supported. Now that this has been fixed in #20522, the test case should be enabled too.

## How was this patch tested?

This only re-enables the test cases of the previous PR.

Closes #22302 from fjh100456/compressionCodec.

Authored-by: fjh100456 <fu.jinhua6@zte.com.cn>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(cherry picked from commit 473f2fb)
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
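For illustration, the CTAS scenario this PR re-enables tests for looks roughly like the following. This is a hedged sketch, not the actual test code from the PR: the table name, column names, and data are hypothetical, and the exact property key may differ across Spark versions.

```sql
-- Hypothetical sketch of a CTAS statement with a compression table
-- property using the USING PARQUET syntax. Before #20522, the table
-- property was ignored when the metastore table was converted; with
-- that fix, the codec from TBLPROPERTIES is honored by the writer.
CREATE TABLE tbl_ctas
USING PARQUET
TBLPROPERTIES ('parquet.compression' = 'GZIP')
AS SELECT 1 AS id, 'a' AS value;
```

Under the fix, the compression codec written to the files should come from the table property rather than only the session-level configuration, which is what the re-enabled test cases verify.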