GH-15042: [C++][Parquet] Update stats on subsequent batches of dictionaries #15179
Conversation
wjones127
commented
Jan 3, 2023
(edited by github-actions bot)
- Closes: [Python][Parquet] Column statistics incorrect for Dictionary Column in Parquet #15042
Force-pushed from 10e5a7c to a704c6d
I'm not fully convinced the unit test has the correct test data (though I could be misreading it)
R"([0, null, 3, 0, null, 3])"), // ["b", null, "a", "b", null, "a"]
ArrayFromJSON(
    ::arrow::int32(),
    R"([0, 3, null, 0, null, 1])")}; // ["b", "c", null, "b", "c", null]
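To make the index-to-value mapping under discussion concrete, here is a small standalone Python sketch. The dictionary `["b", "c", "d", "a"]` is a hypothetical stand-in chosen so that index 0 decodes to "b" and index 3 to "a", consistent with the comments above; it is not necessarily the exact dictionary used in the test.

```python
def decode(indices, dictionary):
    """Map dictionary indices to values, passing nulls (None) through."""
    return [None if i is None else dictionary[i] for i in indices]

# Hypothetical dictionary: index 0 -> "b", index 3 -> "a",
# matching the decodings in the test comments (not the test's real dictionary).
dictionary = ["b", "c", "d", "a"]

chunk0 = decode([0, None, 3, 0, None, 3], dictionary)
print(chunk0)  # ['b', None, 'a', 'b', None, 'a']

# The second chunk's indices as currently written in the test:
chunk1_actual = decode([0, 3, None, 0, None, 1], dictionary)
print(chunk1_actual)  # ['b', 'a', None, 'b', None, 'c']

# The suggested indices, which do match the ["b", "c", null, ...] comment:
chunk1_suggested = decode([0, 1, None, 0, 1, None], dictionary)
print(chunk1_suggested)  # ['b', 'c', None, 'b', 'c', None]
```

Under this hypothetical dictionary, the current indices and the comment diverge exactly as described in the review.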
Suggested change:
- R"([0, 3, null, 0, null, 1])")}; // ["b", "c", null, "b", "c", null]
+ R"([0, 1, null, 0, 1, null])")}; // ["b", "c", null, "b", "c", null]
I like what you have in the comment, because then the min/max of row group 0 / chunk 0 differs from row group 0 / chunk 1. Right now it looks like your indices don't match your comment, and we have:
// ["b", null, "a", "b", null, "c"]
This leads to a/b being the min/max in stats0, but a/b is also the min/max of each individual chunk in stats0. To reproduce the bug I think we want what you have in the comment, which would mean chunk 0 is a/b and chunk 1 is b/c, and so stats0 should be a/c.
You are correct; I updated the data but forgot to update the comment. I did a slightly unusual thing where the chunks are 6/6 but the row groups are 9/3, anticipating that this would hit more interesting conditions in the writer (though maybe that's unnecessary).
So I like the data as is. The first row group contains the first chunk plus the first three rows of the second chunk, and the second row group contains the last three rows of the second chunk. I've updated the comment so it is accurate again.
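The 6/6 chunk split against the 9/3 row-group split can be sketched in plain Python. The function name `split_into_write_calls` is illustrative, not Arrow's API; it just walks the chunks in order and cuts them at row-group boundaries, which is what produces the three write calls discussed below.

```python
def split_into_write_calls(chunk_lengths, row_group_sizes):
    """Yield (row_group, chunk, start, length) for each write call,
    cutting chunks at row-group boundaries."""
    calls = []
    chunk = 0
    offset = 0  # position within the current chunk
    for rg, remaining in enumerate(row_group_sizes):
        while remaining > 0:
            take = min(remaining, chunk_lengths[chunk] - offset)
            calls.append((rg, chunk, offset, take))
            offset += take
            remaining -= take
            if offset == chunk_lengths[chunk]:
                chunk += 1
                offset = 0
    return calls

# Chunks of 6 and 6 rows, row groups of 9 and 3 rows:
print(split_into_write_calls([6, 6], [9, 3]))
# [(0, 0, 0, 6), (0, 1, 0, 3), (1, 1, 3, 3)]  -> three write calls
```

Row group 0 gets all of chunk 0 plus the first three rows of chunk 1; row group 1 gets the last three rows of chunk 1.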
Write... so I think (but could be wrong) this would lead to three calls to WriteArrowDictionary:
Call #1: (no previous dictionary) min=a, max=b, nulls=2
Call #2: (previous dictionary is equal) min=a, max=b, nulls=1
Call #3: (no previous dictionary) min=b, max=c, nulls=1
So if the bug were still in place, and it was using the first chunk to determine row-group statistics, it would still get the correct answer in this case.
Admittedly, the null count would still be wrong (it would report 2 nulls for stats0), so the test case itself wouldn't pass with the old code. But I think it would get further than it should.
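The three calls and the merged row-group result can be simulated in plain Python. The `stats` and `merge` helpers are illustrative, not the writer's actual code, and the decoded string values are inferred from the min/max/null counts listed in the calls above.

```python
def stats(values):
    """(min, max, null_count) over one batch, ignoring nulls for min/max."""
    non_null = [v for v in values if v is not None]
    return (min(non_null), max(non_null), values.count(None))

def merge(a, b):
    """Combine two partial statistics for the same row group."""
    return (min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2])

call1 = stats(["b", None, "a", "b", None, "a"])  # chunk 0
call2 = stats(["b", "a", None])                  # first 3 rows of chunk 1
call3 = stats(["b", None, "c"])                  # last 3 rows of chunk 1

print(call1)  # ('a', 'b', 2)
print(call2)  # ('a', 'b', 1)
print(call3)  # ('b', 'c', 1)

# Correct stats0 merges calls 1 and 2. A buggy path that only looked at the
# first chunk would report call1 alone: same min/max here, wrong null count.
print(merge(call1, call2))  # ('a', 'b', 3)
```

This shows the reviewer's point: min/max would accidentally come out right for stats0, but the null count (3 vs 2) would not.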
Ah yes, I see now. You are correct (just verified in lldb). I will change it so calls 1 and 2 have a different max.
(Also funny to realize how often PRs 1, 2, and 3 of this repo have been mentioned 🤣)
Thanks for looking into this!
Benchmark runs are scheduled for baseline = a06a5d6 and contender = 0da51b7. 0da51b7 is a master commit associated with this PR. Results will be available as each benchmark for each run completes.