Transformations in GPU #45
Comments
@edgarriba we are thinking about it. It seems like a good idea, and NVIDIA has the relevant GPU kernels already implemented.
nice! that could be a very appreciated feature 😉
also OpenCV seems a bit faster with this, however I think that such GPU routines are not available in Python
@edgarriba You may try pytorch/accimage with PR #15 to leverage Intel IPP for preprocessing. Keep in mind that this package is beta (as in better than nothing) and barely tested.
@Maratyszcza thanks for pointing me to this, looks pretty good. I'll try to run it and give feedback
I'd be happy to give this a go, if there's still interest?
I think there might still be interest in having this cc @soumith who might have a stronger opinion here
so any plans for GPU data augmentations, or do we still stick to CPU PIL?
@radenmuaz in kornia.org we recently introduced an API for that. It not only supports GPU but is also differentiable. Please check the
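As a rough, non-authoritative sketch of what such a tensor-based, differentiable augmentation pipeline can look like with kornia (the specific transforms and arguments below are illustrative choices, not taken from this thread):

```python
import torch
import kornia.augmentation as K

# kornia augmentations are nn.Modules that act on B x C x H x W float tensors
# and execute on whatever device the input tensor lives on.
aug = torch.nn.Sequential(
    K.RandomHorizontalFlip(p=0.5),
    K.ColorJitter(brightness=0.1, contrast=0.1),
)

images = torch.rand(8, 3, 224, 224)  # dummy batch
if torch.cuda.is_available():
    images = images.cuda()
    aug = aug.cuda()

out = aug(images)  # differentiable: gradients can flow back through the augmentations
```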
Those of us on non-x86-64 systems would benefit from GPU accelerated transformations given the lack of optimizations for our architecture at times. For example, we are seeing significant performance disparities between x86-64 and power9 systems that use the same GPU, and profiling points the finger at the Power9 CPU taking longer to crunch torchvision/transforms and PIL.
Could someone clarify whether this piece of code is doing the transformation on the CPU or the GPU? It is not clear to me whether the current torchvision library supports GPU transformations or whether everything is done on CPUs at this point.
@kaoutar55 In your example, the transformations will be performed on the CPU; the transforms in question operate on PIL Images, and those operations run on the CPU.
Since v0.8.0, you can use transforms on GPU (cf. the example in the release notes).
@kaoutar55 the comments from @frgfm are spot on. I'm closing this as we have added GPU support for the transforms with the 0.8.0 release of torchvision: https://github.com/pytorch/vision/releases/tag/v0.8.0
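To make the v0.8.0 workflow concrete, here is a minimal sketch in the spirit of the release-notes example (sizes and normalization values are placeholders): the tensor-based transforms are nn.Modules, so they can be composed with torch.nn.Sequential and run on the GPU once the input tensor (and the transform module) has been moved there.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

transforms = nn.Sequential(
    T.ConvertImageDtype(torch.float),  # uint8 -> float in [0, 1]
    T.Resize((224, 224)),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
)

img = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)  # dummy C x H x W image
if torch.cuda.is_available():
    img = img.cuda()
    transforms = transforms.cuda()

out = transforms(img)
print(out.shape, out.device)  # the ops run on the GPU when the input is on the GPU
```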
Thank you so much for your new GPU support. @fmassa May I ask why there are slight differences in the results between the nn.Sequential version and the transforms.Compose version, even just in the resize case? Which one should be correct?
output: tensor(0.1961) tensor(0.7765) tensor(0.0179) tensor(0.9847)
Hi @wetliu 👋 The difference comes from the different behaviours between PIL Image interpolation (in your x1) and PyTorch tensor interpolation (in your x2). This matter is quite different from the question of the device where the operation is performed. I would suggest checking #2950 👍 Hope this helps!
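A minimal sketch of the comparison being discussed (the image path and target size are placeholders, since the original code was not preserved in the thread): the first path resizes the PIL Image with PIL's interpolation before converting to a tensor, while the second converts to a tensor first and resizes with PyTorch's interpolation, so small numerical differences between the two outputs are expected.

```python
import torch
from PIL import Image
from torchvision import transforms as T
from torchvision.transforms import functional as F

img = Image.open("path/to/img.jpg").convert("RGB")  # placeholder path

# Path 1: resize the PIL Image (PIL interpolation), then convert to a tensor.
x1 = T.Compose([T.Resize((64, 64)), T.ToTensor()])(img)

# Path 2: convert to a tensor first, then resize with PyTorch tensor ops.
x2 = F.resize(T.ToTensor()(img), [64, 64])

print(x1.min(), x1.max(), x2.min(), x2.max())  # slightly different extrema
```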
How to switch processing to GPU?
Hello @binn77 👋 Transforms, even in their "Module/Compose" form, have no learnable parameters (at least in the current API, to the best of my knowledge). In PyTorch, an operation runs on the device where its input tensors are located: if the inputs are on the CPU it runs on the CPU, and if they are on the GPU it runs on the GPU.
So you need to move the input tensor to your GPU (and your model, if you're using one afterwards). You have two options:
from PIL import Image
from torchvision.transforms import Compose, Resize, ToTensor

with Image.open("path/to/img.jpg", mode='r') as f:
    img = f

transfo = Compose([Resize((128, 128)), ToTensor()])
input_tensor = transfo(img)
input_tensor = input_tensor.cuda()

Hope this helps!
@binn77 my bad, I fixed the snippet, it's
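As a complementary sketch (not necessarily the correction alluded to above), the image can also be decoded straight to a tensor with torchvision.io.read_image, moved to the GPU, and transformed there, which avoids PIL entirely; the path below is a placeholder.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T
from torchvision.io import read_image

img = read_image("path/to/img.jpg")  # uint8 tensor of shape C x H x W
transfo = nn.Sequential(
    T.ConvertImageDtype(torch.float),
    T.Resize((128, 128)),
)

if torch.cuda.is_available():
    img = img.cuda()          # move the input first ...
    transfo = transfo.cuda()  # ... so the resize runs on the GPU

input_tensor = transfo(img)
```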
Is there any plan to support image transformations on GPU?
Doing big transformations, e.g. resizing (224x224) <-> (64x64), with PIL seems a bit slow.