[EPIC] Improve shuffle performance #1123
Comments
Do you mean when […]? If so, it makes sense, because the Comet shuffle reader is still JVM-based. We should eventually make it native to boost shuffle read performance. It was on the early roadmap when we started on Comet shuffle, although it was not especially urgent or high priority at the time. But now I think it is time to begin work on this. Opened: #1125
Yes, exactly.
This is also an issue for shuffle writes. The child node of […]. We pay the FFI cost twice: once to import from the native plan into the JVM, then again to export to the shuffle-write native plan. We also pay the cost of schema serde in both directions. Perhaps there is a way to shortcut this and avoid a full serde, since we do not need to read the batch in the JVM in this case, just pass it from one native plan to another.
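To make the double hop concrete, here is a minimal sketch of one export/import round trip through Arrow's C Data Interface, using the arrow crate's `to_ffi`/`from_ffi` helpers. In Comet one side of each hop is Java; both sides are Rust here purely for brevity, so this is illustrative rather than Comet's actual code path:

```rust
use arrow::array::{make_array, Array, ArrayData, Int32Array};
use arrow::ffi::{from_ffi, to_ffi};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Stand-in for one column of a shuffle batch.
    let column = Int32Array::from_iter_values(0..1024);

    // Export: the producer hands out two C structs. The buffers themselves
    // are not copied, but the schema is serialized on every export.
    let (ffi_array, ffi_schema) = to_ffi(&column.to_data())?;

    // Import: the consumer rebuilds ArrayData from the C structs.
    let imported: ArrayData = unsafe { from_ffi(ffi_array, &ffi_schema)? };
    let roundtripped = make_array(imported);
    assert_eq!(roundtripped.len(), column.len());

    // In the shuffle-write path described above, this pair of steps
    // happens twice per batch: native -> JVM, then JVM -> native.
    Ok(())
}
```

If the batch never needs to be materialized in the JVM, skipping the intermediate import would remove one export/import pair, and with it one schema serde in each direction.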
I created a Google document for collaborating on ideas around improving shuffle performance: https://docs.google.com/document/d/1rx1ue7UZ4ljzic9Rc2kT-v35bfLB7Rhhe5FW1d0Sw4I/edit?usp=sharing
Update: the ScanExec time is measured from calls to CometBatchIterator.next(), so it includes the time to fetch input batches as well as the FFI cost. It looks like the cost of fetching the batches is much higher than the FFI cost. Perhaps this is really measuring the execution time of the input query, so it could be misleading.
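A small sketch of why that metric is hard to interpret: timing next() on the input iterator lumps upstream execution together with the FFI import, so separating the two requires a timer around each step. The `fetch_from_jvm` and `import_via_ffi` helpers below are hypothetical stand-ins, not Comet APIs:

```rust
use std::time::{Duration, Instant};

#[derive(Default)]
struct ScanTimings {
    fetch: Duration, // time the JVM spends producing the next input batch
    ffi: Duration,   // time spent importing that batch over Arrow FFI
}

fn timed_next(timings: &mut ScanTimings) -> Option<Vec<u8>> {
    let start = Instant::now();
    // Dominated by the input query itself, not by the boundary crossing.
    let jvm_batch = fetch_from_jvm()?;
    timings.fetch += start.elapsed();

    let start = Instant::now();
    // Typically a small, roughly fixed cost per batch.
    let batch = import_via_ffi(jvm_batch);
    timings.ffi += start.elapsed();
    Some(batch)
}

// Hypothetical stubs so the sketch is self-contained.
fn fetch_from_jvm() -> Option<Vec<u8>> {
    Some(vec![0u8; 64])
}
fn import_via_ffi(bytes: Vec<u8>) -> Vec<u8> {
    bytes
}

fn main() {
    let mut timings = ScanTimings::default();
    let _ = timed_next(&mut timings);
    println!("fetch={:?} ffi={:?}", timings.fetch, timings.ffi);
}
```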
That makes more sense. I was inclined to doubt that FFI overhead could have a significant impact on performance numbers; it is designed to be a lightweight way to pass Arrow vectors between processes.
I ran some benchmarks this morning and found that with their respective shuffle managers disabled, Comet and Gluten(+Velox) have pretty similar performance (Gluten was 15% faster), but with shuffle managers enabled, Gluten was 176% faster. I am going to try to learn more about how Gluten+Velox implements shuffle as a background task. I think this validates that DataFusion and Velox likely have similar performance.
That is interesting. Regarding the background task: do you see Gluten+Velox's shuffle running that way (asynchronously?), differently from Spark/Comet?
One thing to try might be moving from zstd to lz4. The default zstd level (3, I believe) is quite slow for compression. zstd can be tuned to be comparable in speed (fast mode), but then the compression ratio will be lower as well.
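As a rough illustration of the trade-off (assuming the zstd and lz4_flex crates; this is not how the codecs are wired into Comet), one can time the same buffer at different settings; the zstd library accepts negative levels for its fast mode:

```rust
use std::time::Instant;

fn main() -> std::io::Result<()> {
    // Stand-in for a shuffle block; real shuffle data compresses differently.
    let data: Vec<u8> = (0..8 * 1024 * 1024).map(|i| (i % 251) as u8).collect();

    // Level 3 is zstd's default; negative levels select "fast" mode,
    // trading compression ratio for speed.
    for level in [3, 1, -5] {
        let start = Instant::now();
        let compressed = zstd::encode_all(&data[..], level)?;
        println!("zstd({level}): {} bytes in {:?}", compressed.len(), start.elapsed());
    }

    let start = Instant::now();
    let compressed = lz4_flex::compress_prepend_size(&data);
    println!("lz4: {} bytes in {:?}", compressed.len(), start.elapsed());
    Ok(())
}
```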
Spark uses lz4 by default. The shuffle codec is not yet configurable in Comet, which uses zstd. @andygrove, did you use the same codec in the comparison?
Thanks @Dandandan. I just discovered this morning that Gluten is using lz4 while Comet is using zstd. I plan on trying lz4 in Comet as the next step.
This epic is for improving shuffle / ScanExec performance.
Issues
Context
I have been comparing Comet and Ballista performance for TPC-H q3. Both execute similar native plans. I am using the comet-parquet-exec branch, which uses DataFusion's ParquetExec. Ballista is approximately 3x faster than Comet. Given that they are executing similar DataFusion native plans, I would expect performance to be similar.
The main difference between Comet and Ballista is that Comet transfers batches between JVM and native code during shuffle operations.
Most of the native execution time in Comet is spent in ScanExec, which is reading Arrow batches from the JVM using Arrow FFI. This time was not included in our metrics prior to #1128 and #1111.
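For reference, a hedged sketch of how such a timing can be captured with DataFusion's metrics API; the metric name and placement here are illustrative, not Comet's actual instrumentation:

```rust
use datafusion::physical_plan::metrics::{ExecutionPlanMetricsSet, MetricBuilder};

fn main() {
    let metrics = ExecutionPlanMetricsSet::new();
    let partition = 0;

    // A timer dedicated to fetching batches from the JVM, FFI included.
    let jvm_fetch_time = MetricBuilder::new(&metrics).subset_time("jvm_fetch_time", partition);

    {
        // The guard accumulates elapsed time into the metric when dropped;
        // in a real scan this body would call CometBatchIterator.next()
        // and import the resulting batch over Arrow FFI.
        let _timer = jvm_fetch_time.timer();
        std::thread::sleep(std::time::Duration::from_millis(1));
    }

    println!("{}", metrics.clone_inner());
}
```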