When I followed the instructions as specified in the Docker setup, it always gives an out-of-memory error. But I am already using an AWS P3 instance, which has a Tesla V100.
Is this expected, or is something wrong with my setup?
The V100 has 16GB of VRAM, so that should certainly be enough (it runs fine on my 4GB GPU). Does the same thing happen if you try running it outside of docker?
The first thing that comes to mind is that you may be running something else that's holding onto GPU memory; that's a mistake I make all the time. If you run nvidia-smi and look at the "Memory-Usage" column, does it show 16GB total with only a small portion of it used? You can run this within the container as well: nvidia-docker run --rm wct-tf nvidia-smi
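If it's easier to grab just the numbers to paste here, something like the sketch below should work. It's only a plain nvidia-smi query wrapped in Python, nothing specific to this repo, and it assumes nvidia-smi is on the PATH (it is inside a properly set-up nvidia-docker container):

```python
# Sketch: report per-GPU memory totals/usage by shelling out to nvidia-smi.
import subprocess

def gpu_memory_report():
    # --query-gpu / --format are standard nvidia-smi flags.
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=index,memory.total,memory.used,memory.free",
        "--format=csv,noheader,nounits",
    ]).decode()
    for line in out.strip().splitlines():
        idx, total, used, free = [x.strip() for x in line.split(",")]
        print("GPU %s: %s MiB total, %s MiB used, %s MiB free" % (idx, total, used, free))

if __name__ == "__main__":
    gpu_memory_report()
```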
Hello, thanks for the reply!
Yeah, I checked nvidia-smi all the time and it is occupying the full memory, as in the screenshot below from an AWS P2 instance (12GB GPU). I didn't use Docker; I issued the Python command directly on the instance.
I used the command python3 stylize.py --checkpoints models/relu5_1 models/relu4_1 models/relu3_1 models/relu2_1 models/relu1_1 --relu-targets relu5_1 relu4_1 relu3_1 relu2_1 relu1_1 --style-size 512 --alpha 0.8 --out-path static/style.jpg.
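One thing worth noting (this is general TensorFlow 1.x behaviour, not something taken from this repo's docs): TF grabs nearly all GPU memory up front by default, so nvidia-smi showing the card as full during a run doesn't by itself mean the model needs that much. Below is a minimal sketch of capping the allocation at session level; the config options are standard TF 1.x, but whether stylize.py exposes a flag for this I don't know:

```python
# Generic TF 1.x sketch: allocate GPU memory on demand instead of all at once.
# Not tied to this repo's code; purely illustrative.
import tensorflow as tf

config = tf.ConfigProto()
# Grow the allocation as needed rather than claiming (almost) all VRAM at startup.
config.gpu_options.allow_growth = True
# Alternatively, cap the fraction of total GPU memory this process may use:
# config.gpu_options.per_process_gpu_memory_fraction = 0.8

with tf.Session(config=config) as sess:
    # ... build / run the graph here, e.g. the WCT stylization ops ...
    pass
```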
My config:
tensorflow-gpu: 1.10.0
keras: 2.0.9
Error from vgg_normalised.py line 38:
Thanks!