
GH-15042: [C++][Parquet] Update stats on subsequent batches of dictionaries #15179

Merged Jan 11, 2023 (4 commits)

Conversation

@wjones127 (Member) commented Jan 3, 2023

github-actions bot commented Jan 3, 2023

⚠️ GitHub issue #15042 has no components, please add labels for components.

@wjones127 wjones127 force-pushed the GH-15042-parquet-stats-bug branch from 10e5a7c to a704c6d on January 3, 2023 23:58
@wjones127 wjones127 marked this pull request as ready for review January 4, 2023 20:01
@wjones127 wjones127 requested a review from westonpace January 4, 2023 20:01
@westonpace (Member) left a comment


I'm not fully convinced the unit test has the correct test data (though I could be misreading it)

R"([0, null, 3, 0, null, 3])"), // ["b", null, "a", "b", null, "a"]
ArrayFromJSON(
    ::arrow::int32(),
    R"([0, 3, null, 0, null, 1])")}; // ["b", "c", null, "b", "c", null]
@westonpace (Member) commented:

Suggested change:
- R"([0, 3, null, 0, null, 1])")}; // ["b", "c", null, "b", "c", null]
+ R"([0, 1, null, 0, 1, null])")}; // ["b", "c", null, "b", "c", null]

I like what you have in the comment because then the min/max of row group 0 / chunk 0 is different from row group 0 / chunk 1. Right now it looks like your indices don't match your comment and we have:

// ["b", null, "a", "b", null, "c"]

This leads to a/b being the min/max of stats0, but a/b is also the min/max of both chunks within stats0, so the merge bug is not actually exercised. To reproduce it, I think we want what you have in the comment, which would make chunk 0's range a/b and chunk 1's range b/c, so stats0 should be a/c.
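The merge behavior under discussion can be sketched with a small Python model (a simplified illustration with a hypothetical `merge_stats` helper, not the actual Arrow C++ writer code):

```python
# Hypothetical model of merging per-chunk statistics into row-group
# statistics; not the actual Arrow C++ implementation.

def merge_stats(chunks):
    """chunks: list of (min, max, null_count) tuples, one per chunk
    written into a single row group."""
    mins, maxs, nulls = zip(*chunks)
    return min(mins), max(maxs), sum(nulls)

# If row group 0 receives one chunk with range a/b and another with
# b/c, the merged min/max must be a/c; keeping only the first chunk's
# stats (the bug) would report a/b instead.
print(merge_stats([("a", "b", 2), ("b", "c", 1)]))  # -> ('a', 'c', 3)
```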

@wjones127 (Member, Author) commented:

You are correct; I updated the data but forgot to update the comment. I did a slightly odd thing where the chunks are 6/6 but the row groups are 9/3, anticipating that this would hit more interesting conditions in the writer (though maybe that is unnecessary).

@wjones127 (Member, Author) commented:

So I like the data as-is. The first row group contains the first chunk plus the first three rows of the second chunk; the second row group contains the last three elements of the second chunk. I've updated the comment so it is accurate again.
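The 6/6 chunking against 9/3 row groups can be sketched like this (placeholder letters instead of the test's actual dictionary data):

```python
# Two 6-element chunks, written with row-group lengths 9 and 3
# (placeholder values, not the test's actual dictionary indices).
chunk0 = ["A", "B", "C", "D", "E", "F"]
chunk1 = ["G", "H", "I", "J", "K", "L"]
values = chunk0 + chunk1

row_group0 = values[:9]  # all of chunk0 + first 3 rows of chunk1
row_group1 = values[9:]  # last 3 rows of chunk1

print(row_group0)  # -> ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
print(row_group1)  # -> ['J', 'K', 'L']
```

Row group 0 spans a chunk boundary, which is what forces the writer to update its statistics on a subsequent dictionary batch.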

@westonpace (Member) commented:

So I think (but could be wrong) this would lead to three calls to WriteArrowDictionary:

Call #1: (no previous dictionary) min=a, max=b, nulls=2
Call #2: (previous dictionary is equal) min=a, max=b, nulls=1
Call #3: (no previous dictionary) min=b, max=c, nulls=1

So if the bug was still in place, and it was using the first chunk to determine row-group statistics, it would still get the correct answer in this case.

Admittedly, the null count would still be wrong (it would report 2 nulls for stats0), so the test case itself wouldn't pass with the old code. But I think it would get further than it should.
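That accidental agreement can be shown with a toy model (a hypothetical Python sketch, not the actual C++ writer): the buggy first-chunk-only behavior matches the merged min/max for row group 0 but undercounts the nulls.

```python
# Toy model of the three WriteArrowDictionary calls described above
# (hypothetical Python sketch, not the actual C++ code).

calls = [
    {"min": "a", "max": "b", "nulls": 2},  # call 1: new dictionary
    {"min": "a", "max": "b", "nulls": 1},  # call 2: same dictionary
    {"min": "b", "max": "c", "nulls": 1},  # call 3: new dictionary
]

def buggy_stats(group):
    # Old behavior: only the first chunk's stats are kept.
    return dict(group[0])

def fixed_stats(group):
    # Fixed behavior: merge stats across every call in the row group.
    return {
        "min": min(c["min"] for c in group),
        "max": max(c["max"] for c in group),
        "nulls": sum(c["nulls"] for c in group),
    }

group0 = calls[:2]  # calls 1 and 2 land in row group 0
# min/max agree either way (a/b), but the null count does not:
print(buggy_stats(group0))  # -> {'min': 'a', 'max': 'b', 'nulls': 2}
print(fixed_stats(group0))  # -> {'min': 'a', 'max': 'b', 'nulls': 3}
```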

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Ah yes, I see now. You are correct (just verified in lldb). I will change it so calls 1 and 2 have a different max.

(Also funny to realize how often PRs #1, #2, and #3 of this repo get mentioned 🤣)

@wjones127 wjones127 requested a review from westonpace January 4, 2023 21:18
@westonpace (Member) left a comment

Thanks for looking into this!

@wjones127 wjones127 merged commit 0da51b7 into apache:master Jan 11, 2023

ursabot commented Jan 12, 2023

Benchmark runs are scheduled for baseline = a06a5d6 and contender = 0da51b7. 0da51b7 is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Conbench compare runs links:
[Finished ⬇️0.0% ⬆️0.0%] ec2-t3-xlarge-us-east-2
[Failed ⬇️0.63% ⬆️0.09%] test-mac-arm
[Finished ⬇️1.02% ⬆️0.0%] ursa-i9-9960x
[Finished ⬇️0.53% ⬆️0.0%] ursa-thinkcentre-m75q
Buildkite builds:
[Finished] 0da51b72 ec2-t3-xlarge-us-east-2
[Finished] 0da51b72 test-mac-arm
[Finished] 0da51b72 ursa-i9-9960x
[Finished] 0da51b72 ursa-thinkcentre-m75q
[Finished] a06a5d65 ec2-t3-xlarge-us-east-2
[Failed] a06a5d65 test-mac-arm
[Finished] a06a5d65 ursa-i9-9960x
[Finished] a06a5d65 ursa-thinkcentre-m75q
Supported benchmarks:
ec2-t3-xlarge-us-east-2: Supported benchmark langs: Python, R. Runs only benchmarks with cloud = True
test-mac-arm: Supported benchmark langs: C++, Python, R
ursa-i9-9960x: Supported benchmark langs: Python, R, JavaScript
ursa-thinkcentre-m75q: Supported benchmark langs: C++, Java

Successfully merging this pull request may close these issues:
[Python][Parquet] Column statistics incorrect for Dictionary Column in Parquet
3 participants