
perf: Batch parquet primitive decoding #17462

Merged (1 commit, Jul 6, 2024)

Conversation

coastalwhite
Collaborator

This is a rather large change that quite fundamentally changes how HybridRLE and Parquet decoding work. There are now two important concepts that speed up the Parquet reader while using less memory than before. This does, however, increase the complexity of the code.

First, a benchmark using the NYC Yellow-Taxi dataset (decoding the whole dataset 100 times) gives the following results.

No maximum threads:

```
Benchmark 1: After Optimization
  Time (mean ± σ):      4.918 s ±  0.076 s    [User: 28.748 s, System: 2.486 s]
  Range (min … max):    4.819 s …  5.064 s    10 runs

Benchmark 2: Before Optimization
  Time (mean ± σ):      7.333 s ±  2.144 s    [User: 60.374 s, System: 3.054 s]
  Range (min … max):    5.416 s … 11.132 s    10 runs

Summary
  After Optimization ran
    1.49 ± 0.44 times faster than Before Optimization
```

Maximum threads = 1:

```
Benchmark 1: After Optimization
  Time (mean ± σ):     18.452 s ±  0.054 s    [User: 16.058 s, System: 2.325 s]
  Range (min … max):   18.332 s … 18.511 s    10 runs

Benchmark 2: Before Optimization
  Time (mean ± σ):     27.027 s ±  0.062 s    [User: 24.668 s, System: 2.271 s]
  Range (min … max):   26.912 s … 27.105 s    10 runs

Summary
  After Optimization ran
    1.46 ± 0.01 times faster than Before Optimization
```

This PR introduces the concepts of a `Translator` and a `BatchedCollector`.

The `Translator` trait maps hybrid RLE encoded values to an arbitrary set of values. The `HybridRLEDecoder` can then collect and call the translator with batches of values. This minimizes the number of iterator polls and requires no heap allocation beyond the output buffer. It does, however, mean that the whole `HybridRLEDecoder` needs to be aware of the `Translator` trait.

Furthermore, the `HybridRLEDecoder` can now buffer internally, instead of using the `BufferedHybridRleDecoderIter` that was used before. Again, this keeps memory consumption minimal and prevents constant polling.

The `BatchedCollector` is essentially a wrapper around the `Pushable` trait that automatically optimizes sequential pushes of valid and invalid values. It also allows for efficient skipping of values.

Overall, this change significantly speeds up the Parquet reader, and extensive testing was done to ensure that no invalid data is produced. It is, however, difficult to test all edge cases.

From here, we can start incorporating the `BatchedCollector` and `Translator` traits in more places. In general, the `HybridRleDecoder` iterator implementation should effectively never be used.

@coastalwhite coastalwhite changed the title perf: batch parquet primitive decoding perf: Batch parquet primitive decoding Jul 6, 2024
@github-actions github-actions bot added performance Performance issues or improvements python Related to Python Polars rust Related to Rust Polars and removed title needs formatting labels Jul 6, 2024

codecov bot commented Jul 6, 2024

Codecov Report

Attention: Patch coverage is 59.76530% with 480 lines in your changes missing coverage. Please review.

Project coverage is 80.48%. Comparing base (27ac6cc) to head (2370b0f).
Report is 3 commits behind head on main.

Files Patch % Lines
...rs-parquet/src/parquet/encoding/hybrid_rle/fuzz.rs 35.68% 182 Missing ⚠️
...ars-parquet/src/parquet/encoding/hybrid_rle/mod.rs 54.31% 106 Missing ⚠️
...arquet/src/parquet/encoding/hybrid_rle/buffered.rs 57.60% 78 Missing ⚠️
crates/polars-arrow/src/bitmap/utils/iterator.rs 65.13% 38 Missing ⚠️
...polars-parquet/src/arrow/read/deserialize/utils.rs 78.33% 26 Missing ⚠️
...quet/src/parquet/encoding/hybrid_rle/translator.rs 86.42% 19 Missing ⚠️
...s-parquet/src/parquet/encoding/bitpacked/decode.rs 81.66% 11 Missing ⚠️
...parquet/src/arrow/read/deserialize/binary/basic.rs 0.00% 7 Missing ⚠️
...arquet/src/arrow/read/deserialize/binview/basic.rs 42.85% 4 Missing ⚠️
.../arrow/read/deserialize/fixed_size_binary/basic.rs 33.33% 4 Missing ⚠️
... and 2 more
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #17462      +/-   ##
==========================================
- Coverage   80.58%   80.48%   -0.10%     
==========================================
  Files        1480     1483       +3     
  Lines      193682   194830    +1148     
  Branches     2765     2770       +5     
==========================================
+ Hits       156071   156810     +739     
- Misses      37103    37511     +408     
- Partials      508      509       +1     


@ritchie46 ritchie46 merged commit 909f08f into pola-rs:main Jul 6, 2024
21 checks passed
@coastalwhite coastalwhite deleted the parquet-batch-decoding branch July 6, 2024 15:32
@c-peters c-peters added the accepted Ready for implementation label Jul 8, 2024
henryharbeck pushed a commit to henryharbeck/polars that referenced this pull request Jul 8, 2024
coastalwhite added a commit to coastalwhite/polars that referenced this pull request Jul 10, 2024
This PR is a follow-up to pola-rs#17462. It batches the collects in the nested Parquet decoders, which also allows simplifying the code quite a lot.

I ran a benchmark with a single column `{ 'x': pl.List(pl.Int8) }` of length `10_000_000`, reading that Parquet file 50 times. Here are the results.

```
Benchmark 1: After Optimization
  Time (mean ± σ):      3.398 s ±  0.064 s    [User: 49.412 s, System: 4.362 s]
  Range (min … max):    3.311 s …  3.490 s    10 runs

Benchmark 2: Before Optimization
  Time (mean ± σ):      4.135 s ±  0.015 s    [User: 59.506 s, System: 5.234 s]
  Range (min … max):    4.105 s …  4.149 s    10 runs

Summary
  After Optimization ran
    1.22 ± 0.02 times faster than Before Optimization
```
Labels
accepted Ready for implementation performance Performance issues or improvements python Related to Python Polars rust Related to Rust Polars
3 participants