[InMemoryDataset redesign] Read many slices at once with the HDF5 C API #378
base: master
Conversation
Thanks for this - the tests seem comprehensive, and I think there's only one place where we might need a lock. Otherwise this looks great! 🚀
@@ -11,6 +11,16 @@ py.install_sources(
    subdir: 'versioned_hdf5',
)

# Adapted from https://numpy.org/doc/2.1/reference/random/examples/cython/meson.build.html
👍
"""Implements read_many_slices data transfer when fast transfer cannot be performed.

This happens when:
1. src is a h5py.Dataset but h5py.Dataset._fast_read_ok returns False.
I think you need a `with phil` lock here in the case where `src` is an `h5py.Dataset`.
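The reviewer's suggestion could look roughly like the minimal sketch below (the function and variable names are illustrative, not taken from the PR). `phil` is h5py's global re-entrant lock around the non-thread-safe HDF5 C library; holding it across the whole loop keeps concurrent threads from interleaving HDF5 calls mid-transfer.

```python
import threading
import numpy as np

try:
    import h5py
    from h5py._objects import phil  # h5py's global HDF5 library lock
except ImportError:  # allow running this sketch without h5py installed
    h5py = None
    phil = threading.RLock()

def slow_read_many_slices(src, dst, slice_pairs):
    """Copy each (src_slice, dst_slice) pair one read at a time (slow path).

    Hypothetical helper for illustration: when src is an h5py.Dataset,
    hold phil for the duration of the loop; plain numpy-to-numpy copies
    need no HDF5 lock.
    """
    if h5py is not None and isinstance(src, h5py.Dataset):
        with phil:
            for src_sl, dst_sl in slice_pairs:
                dst[dst_sl] = src[src_sl]
    else:
        for src_sl, dst_sl in slice_pairs:
            dst[dst_sl] = src[src_sl]
    return dst
```

With numpy arrays on both sides the lock is skipped entirely, so the non-Dataset fast path pays no synchronization cost.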
def test_read_many_slices_not_fast_read_ok(h5file):
    """src is a h5py dataset that doesn't support fast read"""
It might be good to outline somewhere in the docs when a dataset supports fast reading, since there is a significant difference in performance.
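As a hedged illustration of that distinction (assuming a recent h5py, where `_fast_read_ok` is a private, undocumented property whose exact conditions may change): at the time of writing it is True only for datasets with a simple dataspace and a plain integer or float type, so e.g. string data falls back to the slow path.

```python
import os
import tempfile
import numpy as np
import h5py

# Probe h5py's private _fast_read_ok on two datasets: one numeric (expected
# to support fast reads) and one fixed-width bytes (expected not to).
path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as f:
    ints = f.create_dataset("ints", data=np.arange(5))
    strs = f.create_dataset("strs", data=np.array([b"a", b"b"]))
    fast = ints._fast_read_ok
    slow = strs._fast_read_ok
    print(fast, slow)
```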
This is an alternative to #370, after it was found that virtual datasets perform very poorly in libhdf5.
This PR adds a function to quickly read potentially thousands of slices from HDF5 into a numpy array, or between numpy arrays. The new function remains dormant for now; a later PR will use it from a variant of the StagedChangesArray introduced in #370.
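The core idea can be sketched as follows (this is an assumption about the technique, not the PR's actual implementation, and `read_rows` is a hypothetical name): OR many hyperslabs into a single file-space selection via h5py's low-level bindings to the HDF5 C API, then issue one read for all of them instead of one H5Dread per slice.

```python
import os
import tempfile
import numpy as np
import h5py
from h5py import h5s

def read_rows(dset, rows, out):
    """Read several rows of a 2-D dataset with a single HDF5 read call."""
    fspace = dset.id.get_space()
    fspace.select_none()
    ncols = dset.shape[1]
    for r in rows:  # OR each one-row hyperslab into the file-space selection
        fspace.select_hyperslab((r, 0), (1, ncols), op=h5s.SELECT_OR)
    mspace = h5s.create_simple(out.shape)  # memory space: the whole out array
    dset.id.read(mspace, fspace, out)      # one C-level H5Dread for all rows
    return out

# Demo on a throwaway file (hypothetical path).
path = os.path.join(tempfile.mkdtemp(), "demo.h5")
with h5py.File(path, "w") as f:
    d = f.create_dataset("a", data=np.arange(20).reshape(4, 5))
    out = np.empty((2, 5), dtype=d.dtype)
    read_rows(d, [1, 3], out)
```

Note that OR-ed hyperslab selections are traversed in row-major file order, so this sketch only behaves predictably when rows are requested in ascending order; real code would also have to handle arbitrary slice orderings and libhdf5's selection-performance quirks.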