[HUDI-3709] Fixing ParquetWriter impls not respecting Parquet Max File Size limit #5129
Conversation
LGTM. One minor clarification.
@hudi-bot run azure
I went through the code again w.r.t. this patch and #5497. Probably we should bring back the WrapperFileSystem so that we don't have to hit the ParquetWriter to fetch the size. If we ensure we flush at regular intervals, wrapperFileSystem.getBytesWritten(Path file) should give us the right size of the data that got written. This would also ensure we don't hit the disk or incur the cost of the column metadata refresh within the ParquetWriter.
@alexeykudinkin: let's jam on this when you are back.
@nsivabalan we should not be interfering with the caching at the ParquetWriter level (by manually flushing); checking the ParquetWriter for the currently accumulated buffer size is the right way to interface with it (as compared to intercepting the FileSystem writes and accounting for how many bytes were written). The issue inadvertently planted with this approach (addressed in #5497) was that the cost of the …
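For illustration, a minimal sketch of the two accounting strategies being discussed is shown below. The WrapperFileSystem interface here is a hypothetical stand-in for the wrapper mentioned above (only getBytesWritten(Path), quoted from the comment, is assumed); it is not the actual Hudi class, and all names are illustrative.

```java
// Illustrative sketch only: contrasts FileSystem-level accounting (the wrapper approach
// discussed above) with writer-level accounting via ParquetWriter::getDataSize.
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetWriter;

class FileSizeAccounting<T> {

  // Hypothetical stand-in for the FileSystem wrapper mentioned in the comment above.
  interface WrapperFileSystem {
    long getBytesWritten(Path file);
  }

  private final WrapperFileSystem wrapperFs;
  private final ParquetWriter<T> parquetWriter;
  private final Path file;

  FileSizeAccounting(WrapperFileSystem wrapperFs, ParquetWriter<T> parquetWriter, Path file) {
    this.wrapperFs = wrapperFs;
    this.parquetWriter = parquetWriter;
    this.file = file;
  }

  // FileSystem-level view: only counts bytes that have actually been flushed to the
  // stream, so it lags behind while the writer buffers the current record group.
  long bytesWrittenToFs() {
    return wrapperFs.getBytesWritten(file);
  }

  // Writer-level view: getDataSize() includes both the flushed bytes and the record
  // group still buffered in memory.
  long writerDataSize() {
    return parquetWriter.getDataSize();
  }
}
```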
What is the purpose of the pull request
Currently, writing through the Spark DataSource connector does not respect the "hoodie.parquet.max.file.size" setting: in the snippet pasted below I'm trying to limit the file size to 16MB, while on disk I'm getting ~80MB files.
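The original snippet is not included in this excerpt; a minimal sketch of the kind of write it describes might look like the following, with the max parquet file size capped at 16MB. The table name, key fields, and target path are made up for illustration.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;

public class HudiWriteExample {

  // Writes a DataFrame through the Hudi Spark DataSource with the parquet max file
  // size capped at 16MB (16777216 bytes). Table name, key fields, and path are
  // hypothetical placeholders.
  public static void write(Dataset<Row> df) {
    df.write()
        .format("hudi")
        .option("hoodie.table.name", "example_table")
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "ts")
        .option("hoodie.parquet.max.file.size", "16777216")
        .mode(SaveMode.Append)
        .save("/tmp/hudi/example_table");
  }
}
```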
The reason for that is that we rely on ParquetWriter to control the file size (the canWrite method), which in turn relies on the FileSystem to trace how much was actually written to the FS. The problem with this approach is that ParquetWriter is writing lazily: it creates instances of ParquetWriter which in turn cache the whole record group when the write methods are invoked and only flush the data to the FS when the writer is closed (i.e. when close is invoked). This PR instead rebases canWrite onto ParquetWriter::getDataSize, which holistically reflects the size of the records both already written to the FS as well as the ones still kept in memory.
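As a rough sketch of the idea (not the exact Hudi code; the class and field names here are illustrative), the size check becomes a comparison of ParquetWriter::getDataSize against the configured limit:

```java
import org.apache.parquet.hadoop.ParquetWriter;

// Minimal sketch of the size check described above; not the actual Hudi implementation.
class SizeBoundedParquetWriter<T> {

  private final ParquetWriter<T> parquetWriter;
  // Assumed to be sourced from "hoodie.parquet.max.file.size".
  private final long maxFileSize;

  SizeBoundedParquetWriter(ParquetWriter<T> parquetWriter, long maxFileSize) {
    this.parquetWriter = parquetWriter;
    this.maxFileSize = maxFileSize;
  }

  // getDataSize() reflects bytes already flushed to the FS plus the record group still
  // buffered in memory, so the limit holds even though ParquetWriter flushes lazily.
  boolean canWrite() {
    return parquetWriter.getDataSize() < maxFileSize;
  }
}
```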
Brief change log
(for example:)
Verify this pull request
(Please pick either of the following options)
This pull request is already covered by existing tests, such as (please describe tests).
Committer checklist
Has a corresponding JIRA in PR title & commit
Commit message is descriptive of the change
CI is green
Necessary doc changes done or have another open PR
For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.