...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.1M/17.1M [00:01<00:00, 10.7MB/s]
Testing 002
Error CUDA out of memory. Tried to allocate 110.72 GiB (GPU 0; 8.00 GiB total capacity; 28.49 GiB already allocated; 0 bytes free; 29.31 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
If you encounter CUDA out of memory, try to set --tile with a smaller number.
Testing 103
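The error text above suggests setting max_split_size_mb when reserved memory far exceeds allocated memory, which indicates fragmentation. A minimal sketch of doing that through the PYTORCH_CUDA_ALLOC_CONF environment variable (the value 128 is an illustrative assumption, not a recommendation from the log; the variable must be set before PyTorch initializes CUDA, so before `import torch`):

```python
import os

# Hedged sketch: cap the caching allocator's split size to reduce
# fragmentation. 128 MB is an assumed example value.
# This must run before the first CUDA allocation, i.e. before
# `import torch` at the top of inference_realesrgan.py.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Setting it in the shell before launching the script has the same effect.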
It works OK via ncnn-vulkan though, so I just keep using it; I'm just curious about this error.
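The --tile suggestion in the output works by upscaling the image patch by patch, so only one tile's activations have to fit in VRAM at a time. A simplified illustration of that idea, using a nearest-neighbour stand-in for the model (the real script also pads tiles to hide seams at tile borders, which this sketch omits):

```python
import numpy as np

def upscale(img: np.ndarray, scale: int = 4) -> np.ndarray:
    """Stand-in for the model: nearest-neighbour 4x upscale."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_tiled(img: np.ndarray, tile: int, scale: int = 4) -> np.ndarray:
    """Process fixed-size tiles one at a time so peak memory stays bounded
    by the tile size instead of the full image size."""
    h, w = img.shape[:2]
    out = np.zeros((h * scale, w * scale), dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            out[y * scale:(y + patch.shape[0]) * scale,
                x * scale:(x + patch.shape[1]) * scale] = upscale(patch, scale)
    return out

# Tiled and whole-image processing agree for this seam-free stand-in.
img = np.arange(36, dtype=np.uint8).reshape(6, 6)
assert np.array_equal(upscale_tiled(img, tile=4), upscale(img))
```

A smaller --tile value lowers the VRAM peak further at the cost of more per-tile overhead.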
It seems like something tried to eat all of my memory when running via inference_realesrgan.py; the command's output is shown above.