perf: Use mmap-ed memory if possible in Parquet reader #17725
Conversation
Force-pushed from e374fa9 to 4570ba7.
Codecov Report

Attention: Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main   #17725      +/-   ##
==========================================
+ Coverage   80.38%   80.44%   +0.05%
==========================================
  Files        1501     1502       +1
  Lines      196772   196954     +182
  Branches     2793     2794       +1
==========================================
+ Hits       158172   158435     +263
+ Misses      38087    38005      -82
- Partials      513      514       +1
```
Force-pushed from 4570ba7 to 8081d3e.
This seems to have negative performance without prefetching or with madvise. The current implementation uses prefetching to the L2 cache. This gives ~5% increased performance multi-threaded and ~10% increased performance single-threaded. All this testing is done on cold file reads. Warm file reads seem to be faster as well, but the results are noisier.

Multi-threaded:

```
Benchmark 1: ./plparbench-before
  Time (mean ± σ):      6.049 s ±  0.031 s    [User: 5.813 s, System: 5.811 s]
  Range (min … max):    6.013 s …  6.086 s    5 runs

Benchmark 2: ./plparbench-after
  Time (mean ± σ):      5.761 s ±  0.020 s    [User: 5.083 s, System: 5.792 s]
  Range (min … max):    5.735 s …  5.788 s    5 runs

Summary
  ./plparbench-after ran
    1.05 ± 0.01 times faster than ./plparbench-before
```

Single-threaded:

```
Benchmark 1: ./plparbench-before
  Time (mean ± σ):     13.601 s ±  0.184 s    [User: 5.295 s, System: 5.206 s]
  Range (min … max):   13.447 s … 13.858 s    5 runs

Benchmark 2: ./plparbench-after
  Time (mean ± σ):     12.398 s ±  0.152 s    [User: 4.862 s, System: 5.134 s]
  Range (min … max):   12.276 s … 12.664 s    5 runs

Summary
  ./plparbench-after ran
    1.10 ± 0.02 times faster than ./plparbench-before
```
Force-pushed from 8081d3e to adb7b7e.
Review comment on this code:

```rust
/// Attempt to prefetch the memory belonging to this [`MemSlice`]
#[inline]
pub fn prefetch(&self) {
```
I really doubt whether it can work as expected. `_mm_prefetch` prefetches data from RAM into the cache. If the data is not yet present in RAM (i.e., an mmap-ed page that has not been faulted in), it's a no-op. For mmap, I think `madvise` should be used instead.
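(For illustration, a minimal sketch, not this PR's actual code, of what a cache-level prefetch loop over a byte slice looks like; the function name and the 64-byte line size are assumptions. It only hints lines into L2 and, as noted, cannot fault pages in.)

```rust
/// Hypothetical example: hint the bytes of `data` into the L2 cache.
/// `_mm_prefetch` is only a hint; it never triggers a page fault, so
/// for an mmap-ed range whose pages are not resident it does nothing.
fn prefetch_l2(data: &[u8]) {
    #[cfg(target_arch = "x86_64")]
    {
        use std::arch::x86_64::{_mm_prefetch, _MM_HINT_T1};

        // Assumed 64-byte cache lines; one prefetch hint per line.
        const CACHE_LINE: usize = 64;
        for offset in (0..data.len()).step_by(CACHE_LINE) {
            // SAFETY: `offset < data.len()`, so the pointer stays in bounds.
            unsafe { _mm_prefetch::<_MM_HINT_T1>(data.as_ptr().add(offset) as *const i8) }
        }
    }
}
```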
I tried `madvise`, no-op reading, and prefetching. Prefetching was the only one that was faster than the memcpy approach.
I did not notice a difference between no-op and `prefetch_l2`. May I ask how you benched it?
For this, I think cold performance on large files is the most important. So I am running a `drop_caches` before every run, on an 11 GB dataset I got from someone:

```
hyperfine --warmup 2 \
  -p 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' './read-parquet-before' \
  -p 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' './read-parquet-after' \
  --runs 5
```
I have done the same while preparing with `pcu-fadvise` instead, but it produced the same benchmark results. Warm performance does not really matter here.
I wonder whether the input parquet is compressed. I believe the mmap code path is only used when reading uncompressed parquet.

For mmap, I think `madvise` (`Mmap::advise`) should be used instead of `fadvise`. Here is the code I used:
```rust
pub fn prefetch(&self) {
    if self.len() == 0 {
        return;
    }
    if let MemSliceInner::Mmap(MmapSlice { ref mmap, ptr, len }) = self.0 {
        let offset = ptr as usize - mmap.as_ptr() as usize;
        mmap.advise_range(Advice::WillNeed, offset, len).unwrap();
    }
}
```
On my machine, with a 9 GB uncompressed parquet, there is nearly no difference between no-op and either prefetching method:

- no-op: 13972 ms
- `prefetch_l2`: 14084 ms
- `madvise`: 13967 ms

I think it is because not the whole file content is used when parsing parquet, so aggressive pre-reading or prefetching is not worthwhile.
BTW, I wonder why you perform prefetching page by page. IMO it will prefetch too much. And `_mm_prefetch` does not prefetch a whole page, only a few cache lines. As stated in the Rust docs and the intrinsic docs, it does not trigger page faults either.
> I wonder whether the input parquet is compressed. I believe the mmap code path is only used when reading uncompressed parquet.

I don't think this is true. `MmapSlice` is used for both uncompressed and compressed pages at the moment.
> For mmap, I think `madvise` (`Mmap::advise`) should be used instead of `fadvise`.

I meant `pcu-fadvise` as an alternative to `drop_caches`, as a file eviction tool.
> I think it is because not the whole file content is used when parsing parquet, so aggressive pre-reading or prefetching is not worthwhile.

At the moment, we prefetch per `ColumnChunk`. All of this data should be used unless you request a limited number of rows.

But I agree with you. This looks like faulty benchmarking on my side; I will investigate further later. The mmap, even without prefetching, seems to have some performance benefit over memcpying mmap-ed data to the heap.
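(As an aside, a hypothetical sketch of the two strategies being compared here, with assumed function names: borrowing bytes straight from the mapping versus the memcpy approach.)

```rust
use memmap2::Mmap;

/// Zero-copy: borrow the bytes directly from the mapping.
/// Pages are faulted in lazily, on first access.
fn borrow_bytes(mmap: &Mmap, offset: usize, len: usize) -> &[u8] {
    &mmap[offset..offset + len]
}

/// memcpy approach: copy the range into an owned heap buffer up front.
fn copy_bytes(mmap: &Mmap, offset: usize, len: usize) -> Vec<u8> {
    mmap[offset..offset + len].to_vec()
}
```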
> I also want to test `Mmap::lock`.

It prevents memory from being paged out to the swap area, and the amount of memory that a process can lock is limited by `RLIMIT_MEMLOCK`. I don't think this effect is desired. If you want to immediately read all pages into RAM, you can use `PopulateRead`; however, in my bench it is slower than no-op.
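(For reference, a sketch of that approach, assuming memmap2's `Advice::PopulateRead`, which corresponds to `MADV_POPULATE_READ` on Linux 5.14+.)

```rust
use memmap2::{Advice, Mmap};

/// Synchronously fault the whole mapping into RAM up front.
/// Unlike `Mmap::lock`, this does not pin pages and is not
/// constrained by RLIMIT_MEMLOCK.
fn populate(mmap: &Mmap) -> std::io::Result<()> {
    mmap.advise(Advice::PopulateRead)
}
```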
> If you want to immediately read all pages into RAM, you can use `PopulateRead`; however, in my bench it is slower than no-op.

Yeah, I tried it just now as well. It is quite a bit slower.
This resolves a discussion from #17712.
@ruihe774 does this look okay to you, or is there something you would do differently here?