hardware problem #10
Comments
I also have the same question. I'm trying to use a 4070 Ti to finish this job.
Thanks for your interest in our project. I used an NVIDIA TITAN RTX for training and inference, which has 24 GB of memory.
Hi, thank you for your great work. I noticed that GPU memory usage keeps increasing during the training stage, but I don't know which part of the code causes it. Does this happen when you run it?
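For anyone trying to confirm the same symptom, logging allocated memory every few steps makes the growth easy to spot. A minimal, self-contained sketch; the tiny model and random data below are placeholders for illustration, not code from this repository:

```python
# Log CUDA memory during a dummy training loop to spot steady growth.
# The model/data here are placeholders, not from segmentation/manager.py.
import torch
import torch.nn as nn

device = torch.device("cuda")
model = nn.Conv2d(3, 1, 3, padding=1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(200):
    img = torch.randn(2, 3, 128, 128, device=device)
    loss = model(img).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        mib = torch.cuda.memory_allocated() / 1024**2
        print(f"step {step}: {mib:.1f} MiB allocated")
        # A steadily rising number here points to tensors (or autograd
        # graph state) being retained across iterations.
```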
I spent a lot of time trying to solve this problem and finally found that it was caused by lines 107-109. Once I commented them out, the memory no longer grew.
Hi, I had the same problem. Could you please provide details of the solution? Thanks for your reply.
My solution: in segmentation/manager.py, move lines 107-109 into the following if statement, because out[mask == 0] = torch.min(out) takes up a lot of GPU memory when it is not guarded this way.
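Since the file itself isn't quoted in the thread, here is a hedged sketch of what that change could look like. Only the masked assignment comes from the comment above; the function name, the visualize flag, and the detached copy are illustrative assumptions:

```python
# Hypothetical reconstruction of the reported fix. Only the line
# out[mask == 0] = torch.min(out) comes from the thread; everything
# else (names, the guard condition, the detach) is an assumption.
import torch

def mask_output(out: torch.Tensor, mask: torch.Tensor, visualize: bool) -> torch.Tensor:
    # Before the fix (roughly lines 107-109 of segmentation/manager.py),
    # this ran unconditionally on every training step:
    #
    #     out[mask == 0] = torch.min(out)
    #
    # The boolean index `mask == 0` allocates a fresh tensor each call,
    # and an in-place write into a tensor that belongs to the autograd
    # graph can keep extra graph state alive, so memory creeps upward.
    if visualize:
        # Guarded as in the reported fix; working on a detached copy is
        # an extra defensive step, not something stated in the thread.
        vis = out.detach().clone()
        vis[mask == 0] = vis.min()
        return vis
    return out
```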
Your help is greatly appreciated. |
I modified the code according to the solution and it fixed the gradual GPU memory increase. However, after training for a while, the GPU memory suddenly spiked and ran out. Did you run into this issue, and how did you fix it? Thank you very much!
Hi, thank you very much for your contribution to this project. I have a question: what hardware did you use for this project, and how much graphics card memory is needed for training? Thanks.