GH-36028: [Docs][Parquet] Detailed parquet format support and parquet integration status #36027
Conversation
Thanks for opening a pull request! If this is not a minor PR, could you open an issue for this pull request on GitHub? https://github.com/apache/arrow/issues/new/choose Opening GitHub issues ahead of time contributes to the openness of the Apache Arrow project. Could you then also rename the pull request title in the following format?
or
In the case of PARQUET issues on JIRA the title also supports:
See also:
I'm sure this is too detailed in some places, and there is a good chance that it misses many useful features. My approach was going through the great blog post, the parquet-format changelog, the thrift file, and the parquet-mr, arrow, and arrow-rs issue queues. I've intentionally tried to avoid 2.4-2.10 parquet format version info, as it'd imply that the 2.9 features include the 2.6 features, which might not reflect reality. Instead, I've tried to focus on the end-user public API and to provide a flat list of features. I'm open to different approaches as well. I feel particularly uncertain about the statistics and indices; I'm sure you can do that part better.
@tustvold @mapleFU @westonpace @wgtmac What do you think? Would this be useful?
Force-pushed from b5cf60b to c42a6bb
Force-pushed from c42a6bb to 2392f06
Left some comments. I would personally restrict this table to features of the actual file readers, and not query-engine functionality like partitioning and concurrency; imo these are not features of a parquet implementation, but rather of a query system. IMO a parquet implementation should not be unilaterally making concurrency decisions, but rather exposing APIs that allow query engines to distribute the work as they deem fit. Similarly, partitions are a catalog detail.
I would also suggest having separate tables for supported types, encodings, compression and feature support.
+-------------------------------------------+-------+--------+--------+-------+-------+
| LZ4_RAW | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Hive-style partitioning | | | | | |
I'm not sure I'd consider this a feature of the parquet implementation, it is more a detail of the query engine imo?
While arrow-rs needs datafusion for this functionality, arrow handles it without Acero. I don't have a strong opinion, though.
I agree with @tustvold, partitioning is more like a high-level use case on top of the file format.
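For readers unfamiliar with the convention being debated: hive-style partitioning encodes column values in the directory structure as key=value path segments, so a reader can recover them without touching the file bytes. A minimal stdlib-only sketch of how such paths decode (the function name is illustrative, not any implementation's API):

```python
from urllib.parse import unquote

def parse_hive_partitions(path: str) -> dict:
    """Extract key=value partition segments from a hive-style file path."""
    parts = {}
    for segment in path.split("/"):
        if "=" in segment:
            key, _, value = segment.partition("=")
            parts[key] = unquote(value)  # partition values may be percent-encoded
    return parts

print(parse_hive_partitions("data/year=2023/month=06/part-0.parquet"))
# → {'year': '2023', 'month': '06'}
```

This is precisely the kind of logic that lives in the dataset/catalog layer rather than in the parquet reader itself, which is the distinction the comment above is drawing.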
+-------------------------------------------+-------+--------+--------+-------+-------+
| ColumnIndex statistics | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page statistics | | | | | |
What is this referring to?
Like I said, there is a good chance I made a mistake here. I saw this in the thrift spec: ColumnChunk->ColumnMetadata->Statistics
Could we organize these items in a layered fashion? Maybe this is a good start point: https://arrow.apache.org/docs/cpp/parquet.html#supported-parquet-features
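For context on the Statistics struct referenced above (reachable via ColumnChunk->ColumnMetadata->Statistics in parquet.thrift): it carries roughly min/max values plus null and distinct counts. A plain-Python mirror of those fields, purely illustrative and not a real thrift binding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ColumnChunkStatistics:
    # Field names follow the Statistics struct in parquet.thrift;
    # this plain dataclass is only an illustration, not a generated binding.
    min_value: Optional[bytes] = None
    max_value: Optional[bytes] = None
    null_count: Optional[int] = None
    distinct_count: Optional[int] = None

# All fields are optional: writers may omit any of them.
stats = ColumnChunkStatistics(min_value=b"\x01", max_value=b"\x09", null_count=0)
```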
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page CRC32 checksum | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Parallel partition processing | | | | | |
IMO this is a query engine detail, not a detail of the file format?
It's part of the Arrow API in Python.
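On the checksum row above: the page header defines an optional CRC field using the standard CRC-32 polynomial, which stdlib zlib.crc32 also implements. A hedged sketch of the write-side computation and read-side verification (the helper name is made up for this sketch):

```python
import zlib

def page_crc32(page_bytes: bytes) -> int:
    # Standard CRC-32 over the page's stored data bytes, masked to an
    # unsigned 32-bit value so it fits the header's i32 field.
    return zlib.crc32(page_bytes) & 0xFFFFFFFF

data = b"example page payload"
stored = page_crc32(data)                  # writer records this in the page header
assert page_crc32(data) == stored          # reader re-computes and compares
assert page_crc32(data + b"x") != stored   # a corrupted page is detected
```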
+-------------------------------------------+-------+--------+--------+-------+-------+
| xxHash based bloom filter | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| bloom filter length | | | | | |
What is this?
OMG, they finally added it - amazing, will get that incorporated into the rust writer/reader
> OMG, they finally added it - amazing, will get that incorporated into the rust writer/reader

I just added it recently :) Please note that the latest format is not released yet, so parquet-mr does not know bloom_filter_length for now.
+-------------------------------------------+-------+--------+--------+-------+-------+
| BYTE_STREAM_SPLIT | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Partition pruning on the partition column | | | | | |
Again, this is a detail of the query engine, not the parquet implementation, imo.
Same, it's part of the current API, but I agree it's not consistent across implementations.
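To make the feature under discussion concrete: partition pruning skips whole files based on their hive-style partition values before any Parquet IO happens. A hypothetical sketch (the function name and predicate shape are invented for illustration):

```python
def prune_files(paths, column, predicate):
    """Keep only files whose hive-style partition value for `column`
    can satisfy `predicate`; files without that column are kept."""
    kept = []
    for path in paths:
        parts = dict(seg.split("=", 1) for seg in path.split("/") if "=" in seg)
        value = parts.get(column)
        # Without the partition value we cannot prove the file irrelevant.
        if value is None or predicate(value):
            kept.append(path)
    return kept

files = ["t/year=2022/a.parquet", "t/year=2023/b.parquet"]
print(prune_files(files, "year", lambda v: v == "2023"))
# → ['t/year=2023/b.parquet']
```

Note that nothing here touches Parquet metadata at all, which is the basis of the argument that this belongs to the engine/catalog layer.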
+-------------------------------------------+-------+--------+--------+-------+-------+
| RowGroup append / delete | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page append / delete | | | | | |
I don't think any of them supports page appending; the semantics would be peculiar for things like dictionary pages. The rust implementation does support appending column chunks, though.
Yes, likely some/most of the Page references should be ColumnChunk. I'll read more about this.
Isn't Parquet itself a write-once format that can't be appended to? I'm not sure what these are supposed to indicate. The inability to append/delete without re-writing a Parquet file is why table formats like Iceberg and Delta have proliferated.
| Storage-aware defaults (1) | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Adaptive concurrency (2) | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Adaptive IO when pruning used (3) | | | | | |
I'm not sure which parquet reader these features are based on, but my 2 cents is that they indicate a problematic IO abstraction that relies on prefetching heuristics instead of pushing vectored IO down into the IO subsystem (which the Rust and proprietary Databricks implementations do).
I wanted to capture the IO pushdown section https://arrow.apache.org/blog/2022/12/26/querying-parquet-with-millisecond-latency/#io-pushdown but also added more. Likely out of scope, as none of the implementations goes into detail or provides an API.
Perhaps just "Vectorized IO Pushdown". I believe there are efforts to add such an API to parquet-mr.
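To illustrate the vectorized IO pushdown idea being proposed: the reader computes the exact byte ranges that survive pruning and hands them to the IO layer, which can coalesce nearby ranges into fewer requests instead of guessing with prefetch heuristics. A toy sketch of the coalescing step (the max_gap threshold is an assumption, not any implementation's default):

```python
def coalesce_ranges(ranges, max_gap=4096):
    """Merge (offset, length) byte ranges whose gap is at most max_gap,
    so several column-chunk reads become one larger request."""
    merged = []
    for offset, length in sorted(ranges):
        if merged and offset - (merged[-1][0] + merged[-1][1]) <= max_gap:
            # Extend the previous range to cover this one, gap included.
            last_offset, _ = merged[-1]
            merged[-1] = (last_offset, offset + length - last_offset)
        else:
            merged.append((offset, length))
    return merged

print(coalesce_ranges([(0, 100), (150, 50), (10_000, 100)]))
# → [(0, 200), (10000, 100)]
```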
+-------------------------------------------+-------+--------+--------+-------+-------+
| RowGroup pruning using bloom filter | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page pruning using projection pushdown | | | | | |
| Column Pruning using projection pushdown | | | | | | |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Isn't this also a detail of the engine choosing what columns to read or not? Or is the intent here to indicate that rows/values can be pruned based on projection directly in the parquet lib?
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page pruning using statistics | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Page pruning using bloom filter | | | | | |
I don't think this is supported by the format, bloom filters are per column chunk
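As a concrete illustration of the statistics-based pruning rows above: a RowGroup can be skipped whenever its min/max statistics prove no value can satisfy the predicate. A hypothetical sketch for a closed-interval predicate (the data shapes are invented for illustration):

```python
def prune_row_groups(row_groups, column, lo, hi):
    """Keep row groups whose [min, max] statistics for `column`
    overlap the query interval [lo, hi]; the rest cannot match."""
    kept = []
    for rg in row_groups:
        mn, mx = rg["stats"][column]
        if mx >= lo and mn <= hi:  # interval-overlap test
            kept.append(rg)
    return kept

groups = [
    {"id": 0, "stats": {"x": (0, 9)}},
    {"id": 1, "stats": {"x": (10, 19)}},
    {"id": 2, "stats": {"x": (20, 29)}},
]
print([rg["id"] for rg in prune_row_groups(groups, "x", 12, 25)])
# → [1, 2]
```

The same overlap test works at the page level when a ColumnIndex is present; bloom filters, as noted above, apply per column chunk rather than per page.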
| Format | C++ | Python | Java | Go | Rust |
| | | | | | |
+===========================================+=======+========+========+=======+=======+
| Basic compression | | | | | |
I wonder if we could have separate tables for supported physical types, encodings and compression
+1 for this.
Thanks @tustvold. I'll address the Page vs ColumnChunk issues and the other improvement ideas. It's also a good insight that the parquet vs arrow vs dataset vs query-engine API separation differs across languages.
+-------------------------------------------+-------+--------+--------+-------+-------+
| File metadata | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| RowGroup metadata | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
| Column metadata | | | | | |
+-------------------------------------------+-------+--------+--------+-------+-------+
Are these intended to track the completeness of fields defined in the metadata? If yes, they are probably worth a separate table indicating the state of each field. But that sounds too complicated.
=================================
+-------------------------------------------+-------+--------+--------+-------+-------+
| Format | C++ | Python | Java | Go | Rust |
The Java column could be misleading here. In the arrow repo, there is a Java dataset reader to support reading from a parquet dataset. If this is for parquet-mr, then it can easily get out of sync.
I'll repeat what the rest said about engine/format differences and maybe offer some clarification. In C++ the picture is pretty clear, as the APIs tend to be focused on implementation: there is a C++ parquet module which is purely a parquet reader. In pyarrow the picture is pretty muddled, as the APIs are more focused on user experience: there is a pyarrow.parquet module, but many of its features are powered by C++ datasets. For example, the pyarrow.parquet module can read from S3 even though the C++ parquet module has no concept of S3 (it just has an abstraction for input streams). So I agree with the others that we should probably not base the features on the Python API.
Although, to play devil's advocate, it might be odd when a feature is available in the parquet reader but not yet exposed in the query component. For example, there is some row skipping and bloom filter support in the C++ parquet reader, but we haven't integrated those into the datasets layer yet.
Also, do we think this table might belong at https://parquet.apache.org/docs/ (and we could link to it from Arrow's docs)? For example, the parquet-mr (Java) implementation and the parquet.net (C#) implementation are not involved with the arrow project but are still standalone parquet readers.
Agreed with @westonpace.
Thanks, I'll do another round over the weekend on the correct website, with the suggestions included.
Moved it to the parquet-site repo: apache/parquet-site#34
This is a draft skeleton for: #35638 (comment)