GH-26685: [Python] use IPC for pickle serialisation #37683
Conversation
I was trying to be clever by creating a set of parameters that all of the pickling tests used, but it may not be appropriate for the protocol 5 ones. The other thing I am noticing is that sliced + pickled Boolean arrays are a lot larger than any other data type.
Existing pickling serialises the whole buffer, even if the Array is sliced. This change instead uses Arrow's buffer truncation, implemented for IPC serialisation, for pickling. It relies on a RecordBatch wrapper, adding ~230 bytes to the pickled payload per Array chunk. Closes apache#26685.
If you keep the parameters as they were, do the tests pass?
I am curious: if you pickle the whole boolean array (not sliced), what is the difference in size there?
No! They seem to be legitimately failing due to the changes in the pickling process. I am still wrapping my head around the PEP, but the changes do not seem to adhere to PEP 574. One example from the test CI:
Confirmed that this approach would break pickle protocol 5 support for out-of-band data.
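For context, the out-of-band mechanism that PEP 574 describes can be shown with a stdlib-only sketch; the `Payload` class below is a made-up example, not pyarrow code. With protocol 5 and a `buffer_callback`, large buffers travel outside the in-band pickle stream:

```python
import pickle


class Payload:
    """Toy container whose buffer can be serialised out-of-band (PEP 574)."""

    def __init__(self, data):
        # Accept bytes on construction, or a PickleBuffer on reconstruction.
        self.data = bytes(data)

    def __reduce_ex__(self, protocol):
        if protocol >= 5:
            # Expose the raw buffer so pickle can hand it to buffer_callback.
            return (Payload, (pickle.PickleBuffer(self.data),))
        return (Payload, (self.data,))


obj = Payload(b"x" * 1024)
out_of_band = []
# The 1 KiB buffer is captured by the callback, keeping the pickle small.
payload = pickle.dumps(obj, protocol=5, buffer_callback=out_of_band.append)
restored = pickle.loads(payload, buffers=out_of_band)
```

An approach that always serialises buffers in-band (e.g. inside an opaque IPC blob) cannot surface them through `buffer_callback`, which is the compatibility concern raised here.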
Rationale for this change
Existing pickling serialises the whole buffer, even if the Array is sliced.
What changes are included in this PR?
The changes use Arrow's buffer truncation, implemented for IPC serialization, for pickling and restoring.
Relies on a RecordBatch wrapper, adding ~230 bytes to the pickled payload per Array chunk.
Chunks are not automatically combined pre-pickling.
Are these changes tested?
Yes
Are there any user-facing changes?
No