I have set up the GPU branch on my BIIGLE server. When I try MAIA novelty detection, I get the following out-of-memory errors in the logs. It looks similar to biigle/maia#64. Logs:
By default, MAIA in the GPU branch is configured to expect 16 GB of GPU memory (the value in the .env file is actually 15 GB to avoid biigle/maia#64 most of the time). Do you have this amount of GPU memory? If not, lower this number, then rebuild and restart your BIIGLE instance.
If you do have this amount of GPU memory, you are probably affected by biigle/maia#64. In that case you can still try lowering the number and see if and when it works.
We likely won't address biigle/maia#64 anymore but will instead port all the code to PyTorch/OpenMMLab.
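For reference, a minimal sketch of what that change could look like. The variable name `MAIA_AVAILABLE_BYTES` and the docker-compose commands are assumptions based on a typical GPU-branch setup, so check your own .env file and build scripts for the actual names:

```bash
# .env of the BIIGLE GPU branch (variable name assumed; check your file for the actual key).
# The default corresponds to roughly 15 GB; lower it to match your GPU, e.g. 8 GB:
MAIA_AVAILABLE_BYTES=8000000000

# Then rebuild and restart the instance (assuming a docker-compose based setup):
# docker-compose build
# docker-compose up -d
```

If novelty detection still runs out of memory, keep decreasing the value until the job fits on your card.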