enable gpu load nifti #8188
base: dev
Conversation
Signed-off-by: Yiheng Wang <vennw@nvidia.com>
Force-pushed from ed1de7e to ca1cfb8
for more information, see https://pre-commit.ci
Related topic in nipy/nibabel#1385. Will modify my draft PR accordingly.
Thank you for your contribution! I have a question about this enhancement: will it load NIfTI/DICOM files to the GPU faster than caching all the data on the GPU? If not, does this option use less GPU memory, since the metadata is stored on the CPU? Thanks.
Hi @KumoLiu, these are two different things. The enhanced reader can load NIfTI images from disk directly to the GPU (the old path loads from disk to CPU and then converts from CPU to GPU), and the loaded data can then be used as before, e.g. cached as you mentioned.
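For background on what "load directly to the GPU" implies: a NIfTI-1 file has a small fixed-size header describing the image, followed by the voxel payload at `vox_offset`. The sketch below is not MONAI's implementation; it is an illustrative, standard-library-only parse of the header fields, showing that the metadata is tiny and naturally stays on the CPU while the bulk payload is what a GPU-direct reader would stream to device memory. The synthetic header bytes and field choices are assumptions for demonstration.

```python
# Illustrative only: parse a NIfTI-1 header with the standard library.
# In a GPU-direct pipeline the small header stays on the CPU while the
# voxel payload (starting at vox_offset) is read into GPU memory.
import struct


def parse_nifti1_header(raw: bytes) -> dict:
    """Extract a few fields from a 348-byte NIfTI-1 header (little-endian)."""
    if len(raw) < 348:
        raise ValueError("NIfTI-1 header must be at least 348 bytes")
    sizeof_hdr = struct.unpack_from("<i", raw, 0)[0]
    dim = struct.unpack_from("<8h", raw, 40)       # dim[0] = number of dims
    datatype, bitpix = struct.unpack_from("<2h", raw, 70)
    vox_offset = struct.unpack_from("<f", raw, 108)[0]
    return {
        "sizeof_hdr": sizeof_hdr,
        "shape": tuple(dim[1 : 1 + dim[0]]),
        "datatype": datatype,
        "bitpix": bitpix,
        "vox_offset": vox_offset,
        "magic": raw[344:348],
    }


# Build a minimal synthetic header for a hypothetical 4x5x6 int16 image.
hdr = bytearray(348)
struct.pack_into("<i", hdr, 0, 348)                       # sizeof_hdr
struct.pack_into("<8h", hdr, 40, 3, 4, 5, 6, 1, 1, 1, 1)  # 3D, shape 4x5x6
struct.pack_into("<2h", hdr, 70, 4, 16)                   # datatype=4 (int16)
struct.pack_into("<f", hdr, 108, 352.0)                   # payload offset
hdr[344:348] = b"n+1\x00"                                 # single-file magic

info = parse_nifti1_header(bytes(hdr))
print(info["shape"], info["bitpix"], int(info["vox_offset"]))  # → (4, 5, 6) 16 352
```

The header is only 348 bytes, so parsing it on the CPU is negligible; the win from a GPU-direct reader comes from moving the multi-megabyte voxel payload without a CPU bounce buffer.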
Hi @KumoLiu, do you know the root cause of the doc error: https://github.com/Project-MONAI/MONAI/actions/runs/12311294929/job/34361260922?pr=8188#step:7:247
Yes, you may need to include
Co-authored-by: Eric Kerfoot <17726042+ericspod@users.noreply.github.com> Signed-off-by: Yiheng Wang <68361391+yiheng-wang-nv@users.noreply.github.com>
/build
monai/data/meta_tensor.py
@@ -565,7 +572,7 @@ def ensure_torch_and_prune_meta(
     if simple_keys:
         # ensure affine is of type `torch.Tensor`
         if MetaKeys.AFFINE in meta:
-            meta[MetaKeys.AFFINE] = convert_to_tensor(meta[MetaKeys.AFFINE])  # bc-breaking
+            meta[MetaKeys.AFFINE] = convert_to_tensor(meta[MetaKeys.AFFINE], device=device)  # bc-breaking
We may not need to move the affine to the device here.
Hi @KumoLiu, thanks for the comments. I removed the device arg and re-tested the changes with the Decathlon Task03_Liver dataset.
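For context on why the affine need not move to the device: a NIfTI affine is just a 4x4 matrix mapping voxel indices (i, j, k) to world coordinates, so keeping it on the CPU costs almost nothing even when the voxel array lives on the GPU. The sketch below is pure Python (not MONAI code), and the 2 mm isotropic affine values are hypothetical, chosen only to illustrate the mapping.

```python
# Illustrative sketch: apply a 4x4 NIfTI-style affine to a voxel index.
# The affine is tiny, so device placement of this matrix is irrelevant
# for performance; only the voxel payload benefits from GPU residency.

def apply_affine(affine, ijk):
    """Map a voxel index (i, j, k) to world (x, y, z) via a 4x4 affine."""
    i, j, k = ijk
    vec = (i, j, k, 1.0)  # homogeneous coordinates
    return tuple(
        sum(affine[row][col] * vec[col] for col in range(4)) for row in range(3)
    )


# Hypothetical 2 mm isotropic spacing with origin (-90, -126, -72).
affine = [
    [2.0, 0.0, 0.0, -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0, -72.0],
    [0.0, 0.0, 0.0, 1.0],
]
print(apply_affine(affine, (10, 20, 30)))  # → (-70.0, -86.0, -12.0)
```

Since later spatial transforms may want the affine as a tensor anyway, converting it lazily (and on whichever device the consumer needs) is a reasonable design, which matches dropping the `device` arg here.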
/build
Related to #8241.
Description
Enable loading NIfTI images from disk directly to the GPU, instead of loading to CPU memory first and then moving the data to the GPU.
Types of changes
- Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
- Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
- Documentation updated, tested with the make html command in the docs/ folder.