
RuntimeError: CUDA out of memory. Tried to allocate 614.00 MiB (GPU 0; 15.78 GiB total capacity; 12.94 GiB already allocated; 584.75 MiB free; 14.16 GiB reserved in total by PyTorch) #7

Open
lbqdhg opened this issue Jul 12, 2021 · 4 comments


lbqdhg commented Jul 12, 2021

Hello, I use high RAM to run, and the following problems occur, how can I solve it?

```
00001.png
/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/functional.py:2941: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/functional.py:3121: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
torch.Size([1, 2, 800, 1422])
Traceback (most recent call last):
  File "run_FuSta.py", line 317, in <module>
    frame_out = model(input_frames, F_kprime_to_k, forward_flows, backward_flows)
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/FuSta/models_arbitrary/__init__.py", line 12, in forward
    return self.model(input_frames, F_kprime_to_k, F_n_to_k_s, F_k_to_n_s)
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/FuSta/models_arbitrary/adacofnet.py", line 662, in forward
    I_pred, C = self.refinementNetwork(torch.cat([tenWarpedFeat[i], global_average_pooled_feature, tenWarpedMask[i]], 1))
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/FuSta/models_arbitrary/adacofnet.py", line 240, in forward
    x_1 = self.layer1(x_0)
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/FuSta/models_arbitrary/adacofnet.py", line 158, in forward
    x_a = self.ch_a(x)
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/usr/local/envs/FuSta/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/content/FuSta/models_arbitrary/adacofnet.py", line 116, in forward
    x = x * self.gated(mask)
RuntimeError: CUDA out of memory. Tried to allocate 614.00 MiB (GPU 0; 15.78 GiB total capacity; 12.94 GiB already allocated; 584.75 MiB free; 14.16 GiB reserved in total by PyTorch)
```

```
CalledProcessError                        Traceback (most recent call last)
in ()
----> 1 get_ipython().run_cell_magic('shell', '', 'eval "$(conda shell.bash hook)" # copy conda command to shell\nconda deactivate\nconda activate FuSta\ncd /content/FuSta/\npython run_FuSta.py --load FuSta_model/checkpoint/model_epoch050.pth --input_frames_path input_frames/ --warping_field_path CVPR2020_warping_field/ --output_path output/ --temporal_width 41 --temporal_step 4')

2 frames
/usr/local/lib/python3.7/dist-packages/google/colab/_system_commands.py in check_returncode(self)
    137     if self.returncode:
    138       raise subprocess.CalledProcessError(
--> 139           returncode=self.returncode, cmd=self.args, output=self.output)
    140
    141   def _repr_pretty_(self, p, cycle):  # pylint:disable=unused-argument

CalledProcessError: Command 'eval "$(conda shell.bash hook)" # copy conda command to shell
conda deactivate
conda activate FuSta
cd /content/FuSta/
python run_FuSta.py --load FuSta_model/checkpoint/model_epoch050.pth --input_frames_path input_frames/ --warping_field_path CVPR2020_warping_field/ --output_path output/ --temporal_width 41 --temporal_step 4' returned non-zero exit status 1.
```

lbqdhg changed the title from "Google colab pro" to "RuntimeError: CUDA out of memory. Tried to allocate 614.00 MiB (GPU 0; 15.78 GiB total capacity; 12.94 GiB already allocated; 584.75 MiB free; 14.16 GiB reserved in total by PyTorch)" on Jul 12, 2021
@maaxxaam

It looks like you are trying to stabilise a video at 800x1422 resolution. My guess is that one of the networks involved (probably RAFT) needs more memory for processing than a Colab GPU can provide. I'm not sure what to advise other than decreasing the input video resolution.
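Following that suggestion, here is a minimal sketch of halving the frame resolution before running `run_FuSta.py`. Assumptions not stated in this thread: the frames are PNGs under `input_frames/` (per the `--input_frames_path` flag), Pillow is available, and each halved dimension is rounded down to a multiple of 8, since RAFT-style flow networks typically pad inputs to multiples of 8.

```python
import glob
import os


def halved_size(width, height, multiple=8):
    """Halve each dimension, rounding down to a multiple of 8
    (flow networks such as RAFT typically expect such sizes)."""
    return ((width // 2) // multiple * multiple,
            (height // 2) // multiple * multiple)


def downscale_frames(frames_dir="input_frames"):
    """Resize every PNG frame in frames_dir in place to half resolution."""
    from PIL import Image  # third-party: pip install Pillow
    for path in sorted(glob.glob(os.path.join(frames_dir, "*.png"))):
        with Image.open(path) as img:
            img.resize(halved_size(*img.size), Image.BILINEAR).save(path)


if __name__ == "__main__":
    downscale_frames()
```

For the 800x1422 frames in the log above this yields 400x704, which cuts per-layer activation memory roughly fourfold.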


lbqdhg commented Jul 15, 2021

Thanks! By the way, will the processed video have a few more frames than the original?

@maaxxaam

There shouldn't be any extra frames added; at least there weren't any the last time I tested it. Did you actually encounter this, or are you just asking?


lbqdhg commented Jul 15, 2021

I did encounter it, and I just figured it out: I need 30 fps, but the default is 25 fps. After modifying that, I got what I wanted. Thanks very much for your time.
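For anyone hitting the same frame-rate mismatch, a hedged sketch of re-encoding the stabilized frames at 30 fps externally with ffmpeg instead of the 25 fps default. The `output/` directory and the zero-padded `00001.png` naming come from the log above; everything else (codec choice, output filename) is an assumption, and FuSta's own writer options are not shown in this thread.

```python
import shlex
import subprocess


def encode_command(frames_dir="output", fps=30, out="stabilized.mp4"):
    """Build an ffmpeg command that assembles numbered PNG frames
    (00001.png, 00002.png, ...) into an H.264 video at the given fps."""
    return [
        "ffmpeg",
        "-framerate", str(fps),           # input frame rate
        "-i", f"{frames_dir}/%05d.png",   # matches names like 00001.png
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",            # broad player compatibility
        out,
    ]


if __name__ == "__main__":
    cmd = encode_command(fps=30)
    print(shlex.join(cmd))                # inspect the command first
    # subprocess.run(cmd, check=True)     # uncomment to actually encode
```

Building the argument list instead of a shell string avoids quoting issues if the paths ever contain spaces.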
