[HUDI-3709] Fixing ParquetWriter impls not respecting Parquet Max File Size limit #5129

Merged 3 commits into apache:master on Mar 26, 2022

Conversation

alexeykudinkin
Contributor

What is the purpose of the pull request

Currently, writing through the Spark DataSource connector does not respect the "hoodie.parquet.max.file.size" setting: in the snippet below I'm trying to limit the file size to 16MB, while on disk I'm getting ~80MB files.
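The original snippet is not reproduced on this page. Purely as an illustration (the table name, record key and precombine fields, path, and data below are hypothetical, not taken from the PR), a Spark DataSource write capped at 16MB might look like this:

```java
import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class MaxFileSizeExample {

  // Simple bean used as the input schema; purely illustrative.
  public static class Record implements java.io.Serializable {
    private String id;
    private String ts;
    public Record() {}
    public Record(String id, String ts) { this.id = id; this.ts = ts; }
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getTs() { return ts; }
    public void setTs(String ts) { this.ts = ts; }
  }

  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("hudi-max-file-size-example")
        .master("local[*]")
        .getOrCreate();

    Dataset<Row> df = spark.createDataFrame(
        Arrays.asList(new Record("id-1", "2022-03-25")), Record.class);

    df.write()
        .format("hudi")
        .option("hoodie.table.name", "test_table")                // hypothetical table name
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "ts")
        .option("hoodie.parquet.max.file.size", "16777216")       // the 16MB cap discussed above
        .mode(SaveMode.Overwrite)
        .save("/tmp/hudi/test_table");                            // hypothetical path

    spark.stop();
  }
}
```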

The reason is that we rely on the ParquetWriter to control the file size (via the canWrite method), which in turn relies on the FileSystem to track how much has actually been written to the FS.

The problem with this approach is that ParquetWriter writes lazily: it caches the whole row group in memory as write methods are invoked, and only flushes the data to the FS when the writer is closed (i.e., when close is invoked).

This PR instead rebases canWrite onto ParquetWriter::getDataSize, which holistically reflects the size of the records already written to the FS as well as the ones still kept in memory.
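A minimal sketch of the check described above (simplified class and field names, not the exact Hudi implementation): ParquetWriter#getDataSize reports the bytes already flushed to the FileSystem plus the size of the row group still buffered in memory, so the limit holds despite the lazy writes.

```java
import java.io.IOException;

import org.apache.parquet.hadoop.ParquetWriter;

// Simplified sketch of the canWrite() logic this PR moves onto getDataSize;
// the real logic lives in Hudi's ParquetWriter wrappers.
class SizeBoundedWriter<T> {
  private final ParquetWriter<T> writer;
  private final long maxFileSizeBytes;

  SizeBoundedWriter(ParquetWriter<T> writer, long maxFileSizeBytes) {
    this.writer = writer;
    this.maxFileSizeBytes = maxFileSizeBytes;
  }

  // Before: size was read from a FileSystem wrapper, which under-reports because
  // the current row group is still cached in memory until it is flushed.
  // After: getDataSize() = bytes flushed to the FS + size of the buffered row group.
  boolean canWrite() {
    return writer.getDataSize() < maxFileSizeBytes;
  }

  void write(T record) throws IOException {
    writer.write(record);
  }
}
```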

Brief change log

  • Rebased canWrite to rely on ParquetWriter::getDataSize in lieu of the FileSystem wrapper, which works incorrectly in the presence of caching

Verify this pull request


This pull request is already covered by existing tests, such as (please describe tests).

Committer checklist

  • Has a corresponding JIRA in PR title & commit

  • Commit message is descriptive of the change

  • CI is green

  • Necessary doc changes done or have another open PR

  • For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

Alexey Kudinkin added 3 commits March 25, 2022 10:50
…etWriter::getDataSize` in lieu of FileSystem wrapper that works incorrectly in the presence of caching
Contributor

@nsivabalan nsivabalan left a comment

LGTM. One minor clarification.

@alexeykudinkin
Contributor Author

@hudi-bot run azure

@hudi-bot

CI report:

@hudi-bot supports the following commands:
  • @hudi-bot run azure: re-run the last Azure build

@nsivabalan nsivabalan merged commit 189d529 into apache:master Mar 26, 2022
vingov pushed a commit to vingov/hudi that referenced this pull request Apr 3, 2022
@nsivabalan
Contributor

I went through the code again w.r.t. this patch and #5497. Probably we should bring back the WrapperFileSystem so that we don't hit the ParquetWriter to fetch the size. If we ensure we flush at regular intervals, wrapperFileSystem.getBytesWritten(Path file) should give us the right size of the data that got written. This would also ensure we don't hit the disk or incur the cost of the column metadata refresh within the ParquetWriter.
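A rough sketch of the alternative described here, assuming Hudi's HoodieWrapperFileSystem and its getBytesWritten(Path) accessor (this is not merged code, and the periodic flushing it depends on is not shown):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hudi.common.fs.HoodieWrapperFileSystem;

// Size check based on bytes that actually reached the output stream. It is cheap
// (no column-metadata walk inside ParquetWriter) but lags behind by whatever the
// writer still holds in memory, hence the caveat above about flushing regularly.
class FsSizeBasedCheck {
  private final HoodieWrapperFileSystem wrapperFs;
  private final Path file;
  private final long maxFileSizeBytes;

  FsSizeBasedCheck(HoodieWrapperFileSystem wrapperFs, Path file, long maxFileSizeBytes) {
    this.wrapperFs = wrapperFs;
    this.file = file;
    this.maxFileSizeBytes = maxFileSizeBytes;
  }

  boolean canWrite() {
    return wrapperFs.getBytesWritten(file) < maxFileSizeBytes;
  }
}
```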

@nsivabalan
Contributor

@alexeykudinkin: Let's jam on this when you are back.

@alexeykudinkin
Contributor Author

@nsivabalan we should not be interfering with the caching at the ParquetWriter level (by manually flushing), and checking the ParquetWriter for the currently accumulated buffer size is the right way to interface with it (as compared to intercepting the FileSystem writes and accounting for how many bytes were written).

The issue inadvertently introduced with this approach (addressed in #5497) was that the cost of getDataSize was not factored in (it was assumed to be O(1), while in reality it is O(N) in the number of written blocks).
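One way to amortize such a cost, sketched below purely as an illustration (this is not necessarily how #5497 solves it), is to re-probe getDataSize only every fixed number of records rather than on every canWrite call:

```java
import java.io.IOException;

import org.apache.parquet.hadoop.ParquetWriter;

// Illustrative amortization of an O(N)-per-call size probe: cache the last known
// size and refresh it only every `checkInterval` records.
class AmortizedSizeCheck<T> {
  private final ParquetWriter<T> writer;
  private final long maxFileSizeBytes;
  private final long checkInterval;   // e.g. every 1000 records (hypothetical value)
  private long recordsSinceCheck = 0;
  private long lastKnownSize = 0;

  AmortizedSizeCheck(ParquetWriter<T> writer, long maxFileSizeBytes, long checkInterval) {
    this.writer = writer;
    this.maxFileSizeBytes = maxFileSizeBytes;
    this.checkInterval = checkInterval;
  }

  boolean canWrite() {
    if (recordsSinceCheck >= checkInterval) {
      lastKnownSize = writer.getDataSize(); // O(number of written blocks), so call sparingly
      recordsSinceCheck = 0;
    }
    return lastKnownSize < maxFileSizeBytes;
  }

  void write(T record) throws IOException {
    writer.write(record);
    recordsSinceCheck++;
  }
}
```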
