use more than one core/thread when querying CSV #5205
I definitely think a first step would be to interleave the fetching of the next set of bytes and decoding. This might be as simple as adding
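A minimal sketch of what that interleaving could look like, assuming a tokio runtime and a bounded channel between the fetcher and the decoder; `fetch_next_chunk` and `decode_chunk` are hypothetical stand-ins, not DataFusion or arrow-rs APIs:

```rust
use tokio::sync::mpsc;

// Stand-in for the real byte fetch: returns the next slice of the
// file, or None at EOF. (Stubbed here so the sketch runs.)
async fn fetch_next_chunk(offset: usize) -> Option<Vec<u8>> {
    if offset > 0 { None } else { Some(vec![b'a'; 1024]) }
}

// Stand-in for feeding bytes into the CSV decoder.
fn decode_chunk(bytes: &[u8]) {
    let _ = bytes.len();
}

#[tokio::main]
async fn main() {
    // A small buffer lets the fetcher run ahead of the decoder
    // without unbounded memory growth.
    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(4);

    // Fetch task: keeps the I/O pipeline busy while decoding happens.
    tokio::spawn(async move {
        let mut offset = 0;
        while let Some(chunk) = fetch_next_chunk(offset).await {
            offset += chunk.len();
            if tx.send(chunk).await.is_err() {
                break; // decoder hung up
            }
        }
    });

    // Decode on the current task, overlapping with the next fetch.
    while let Some(chunk) = rx.recv().await {
        decode_chunk(&chunk);
    }
}
```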
Doing this correctly is quite tricky: CSV permits unescaped newlines, which makes it impossible to know where a given record starts without having read all prior records. You can't just seek to the next newline character, as that might be partway through a record. That said, a trick that I believe DuckDB uses is to search for the next newline and then test the first record against the expected schema; if it matches, you have likely found a newline from which you can start reading.
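A rough sketch of that probing trick (not how DuckDB actually implements it): jump to an arbitrary offset, skip to the next newline, and accept the position only if the line that follows has the expected field count. The naive comma split here ignores quoting, so a real implementation would need a stronger validity check and a fallback:

```rust
/// Find a plausible record boundary at or after `approx_offset`.
/// A false positive is still possible (a quoted field could
/// coincidentally look like a valid record), hence "plausible".
fn find_record_start(buf: &[u8], approx_offset: usize, expected_fields: usize) -> Option<usize> {
    let mut pos = approx_offset;
    loop {
        // Advance to the byte just after the next newline.
        let nl = buf[pos..].iter().position(|&b| b == b'\n')?;
        let candidate = pos + nl + 1;
        if candidate >= buf.len() {
            return None;
        }
        // Take the candidate line and naively count comma-separated
        // fields (ignores quoting; good enough for a probe).
        let end = buf[candidate..]
            .iter()
            .position(|&b| b == b'\n')
            .map_or(buf.len(), |i| candidate + i);
        let fields = buf[candidate..end].split(|&b| b == b',').count();
        if fields == expected_fields {
            return Some(candidate);
        }
        // The newline was likely inside a quoted field; keep scanning.
        pos = candidate;
    }
}

fn main() {
    // "1,\"x\ny\",3" contains a quoted newline; probing from offset 6
    // first hits that embedded newline, rejects the bogus split, and
    // then finds the true start of the "4,5,6" record at byte 16.
    let data = b"a,b,c\n1,\"x\ny\",3\n4,5,6\n";
    let start = find_record_start(data, 6, 3);
    assert_eq!(start, Some(16));
    println!("next record starts at {start:?}");
}
```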
Not to blow my own trumpet, but I'm pretty chuffed with this: 2.7 Gbps on a single core is pretty sweet 😁 We can and should make it even faster, but I think this is already pretty cool.
Sure, of course, I'll be happy to test out any patches.
Ah, yes. I was thinking of using the function you already made for something similar:
Yeah, it's pretty great :) for a simple
DuckDB does parallel CSV reading, FWIW: duckdb/duckdb#5194. It would be great to implement this feature in DataFusion. Regarding the "CSV escaping means you can't always know when a newline is a record delimiter" problem, I suggest:
A small variation of 1 would be to detect if DataFusion realized the parallel split did not work well, and produce an error that mentions the config setting.
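A sketch of what that guardrail might look like; the `parallel_csv` flag and the error text are made up for illustration and are not real DataFusion config options:

```rust
// Hypothetical config knob for the parallel CSV scan.
struct CsvScanConfig {
    parallel_csv: bool,
}

/// If a parallel split yields a record whose field count doesn't
/// match the schema, fail with an error that points the user at the
/// (made-up) config setting instead of silently producing bad data.
fn validate_split(record_fields: usize, expected_fields: usize, cfg: &CsvScanConfig) -> Result<(), String> {
    if cfg.parallel_csv && record_fields != expected_fields {
        return Err(format!(
            "CSV parallel scan found a record with {record_fields} fields \
             (expected {expected_fields}); the file may contain unescaped \
             newlines. Retry with `parallel_csv = false`."
        ));
    }
    Ok(())
}

fn main() {
    let cfg = CsvScanConfig { parallel_csv: true };
    // A mismatched field count triggers the actionable error.
    match validate_split(2, 3, &cfg) {
        Ok(()) => println!("split looks valid"),
        Err(e) => eprintln!("{e}"),
    }
}
```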
(Nice work @tustvold on the CSV reading.) Maybe we can put this data into the blog post too: apache/arrow-site#305
I filed #6325 to track a feature that might enable this to work |
Closed by #6801 |
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
When querying a CSV file on local disk, only one thread is used, leaving the system mostly idle. For example, I have a 9 GB gzip-compressed CSV file that I'd like to query and convert to Parquet format. However, when running queries against the CSV, even simple counts or group-bys on single columns can take 5 minutes or more. Five minutes per query isn't too unexpected (uncompressed, it's a 100 GB file: 45 million records × 242 columns), but it's nowhere near utilizing the machine's resources. It seems like there should be a way to move more of the work to other threads, so that querying and conversion to Parquet could be substantially faster.
When profiled, it looks like this (green is the only busy thread):
[profiler screenshot: a single busy thread, the rest idle]
and if you zoom into what that thread is doing, it's following a pattern of:
[profiler screenshot: the repeating per-thread work pattern]
Describe the solution you'd like
Better utilization of the machine, work done in more than one thread and sustained reads from disk.
Describe alternatives you've considered
Is it possible to offload the decompression and processing of bytes -> CSV records to other threads? I had an idea to implement some kind of fan-out/fan-in somewhere under CsvExec, so that one thread just reads from disk and sends the raw bytes to other threads to decompress, parse the CSV, and convert to RecordBatches (the CPU-heavy work is in the CSV reading), then fan back in to stream the RecordBatches from CsvExec as expected. Is there a better way to do it?
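For what it's worth, here is a minimal thread-based sketch of that fan-out/fan-in shape, using only std channels; `parse_to_batch` stands in for the real decompress + CSV-parse + RecordBatch step, and none of this reflects CsvExec's actual internals:

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Stand-in for decompress + CSV parse + RecordBatch conversion.
fn parse_to_batch(chunk: Vec<u8>) -> usize {
    chunk.len() // pretend the "batch" is just a byte count
}

fn main() {
    let (chunk_tx, chunk_rx) = mpsc::channel::<Vec<u8>>();
    // Share the single receiver among workers via a mutex.
    let chunk_rx = Arc::new(Mutex::new(chunk_rx));
    let (batch_tx, batch_rx) = mpsc::channel::<usize>();

    // Fan-out: workers pull chunks and do the CPU-heavy parsing.
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let rx = Arc::clone(&chunk_rx);
            let tx = batch_tx.clone();
            thread::spawn(move || loop {
                let chunk = match rx.lock().unwrap().recv() {
                    Ok(c) => c,
                    Err(_) => break, // reader finished, channel closed
                };
                tx.send(parse_to_batch(chunk)).unwrap();
            })
        })
        .collect();
    drop(batch_tx); // so batch_rx ends once all workers exit

    // Reader: one thread keeps the disk busy with sequential reads.
    let reader = thread::spawn(move || {
        for i in 0..16usize {
            chunk_tx.send(vec![0u8; 1024 * (i + 1)]).unwrap();
        }
        // dropping chunk_tx closes the channel
    });

    // Fan-in: drain results as they arrive, like CsvExec's stream.
    let total: usize = batch_rx.iter().sum();
    reader.join().unwrap();
    for w in workers {
        w.join().unwrap();
    }
    println!("parsed {total} bytes across workers");
}
```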
Additional context
Add any other context or screenshots about the feature request here.