
hardware problem #10

Open
ljx761795750 opened this issue Jul 23, 2022 · 8 comments
Comments

@ljx761795750

Hi, thank you very much for your contribution to this project. I have a question: what does your hardware setup look like, and how much GPU memory is needed to train the project? Thanks.

@605521553

I also have the same question; I'm trying to use a 4070 Ti to finish this job.

@zhihao-lin
Owner

Thanks for your interest in our project. I used an NVIDIA TITAN RTX for training and inference, which has 24 GB of memory.

@Sylva-Lin

Hi, thank you for your great work. I noticed that GPU memory keeps increasing during the training stage, but I don't know which part of the code causes it. Does this happen when you run it?

@Sylva-Lin

I spent a lot of time trying to solve this problem and finally found that it was caused by lines 107-109. After I commented them out, the memory no longer kept increasing.

@Ting-Devin-Han

Hi, I had the same problem. Could you please provide details of the solution? Thanks for your reply.

@Sylva-Lin

My solution: in segmentation/manager.py, move lines 107-109 into the if statement that follows them, because out[mask == 0] = torch.min(out) takes up a lot of GPU memory if it is executed unconditionally.
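
For anyone who wants a concrete picture, here is a minimal sketch of the before/after pattern. It is not the actual code from segmentation/manager.py: the function name, the condition of the if statement, and vis_interval are assumptions made for illustration only.

```python
# Minimal sketch of the pattern described above -- NOT the actual code from
# segmentation/manager.py. The function name, the if condition, and
# `vis_interval` are assumptions made for illustration only.
import torch

def postprocess(out, mask, step, vis_interval=100):
    # Original pattern (roughly lines 107-109): the masked assignment ran on
    # every training step, allocating extra GPU tensors each iteration.
    #   out[mask == 0] = torch.min(out)

    # Suggested pattern: perform the masked assignment only inside the branch
    # that actually needs the result (e.g. an occasional visualization step),
    # and do it on a detached copy so no autograd references are kept alive.
    if step % vis_interval == 0:
        vis = out.detach().clone()
        vis[mask == 0] = torch.min(vis)  # suppress values outside the mask
        return vis
    return None
```

The key point is simply that the masked fill is skipped on the steps that do not need it.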

@Ting-Devin-Han

Your help is greatly appreciated.

@wxycwymds

wxycwymds commented Oct 30, 2023

I modified the code according to the solution and fixed the problem of gradually increasing GPU memory. However, after training for a while, the GPU memory suddenly spiked and ran out. Did you run into this issue as well, and if so, how did you fix it? Thank you very much!
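
Not a confirmed fix from this thread, but a generic way to localize a sudden spike like this is to log allocated and peak GPU memory around each step using PyTorch's built-in counters; the loop variables in the usage comment are placeholders.

```python
# Generic PyTorch memory-logging sketch (not from this repository).
import torch

def log_cuda_memory(tag):
    # Report currently allocated and peak allocated memory on the default GPU.
    alloc = torch.cuda.memory_allocated() / 1024 ** 2
    peak = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"[{tag}] allocated: {alloc:.1f} MiB, peak: {peak:.1f} MiB")

# Example usage inside a training loop (names are placeholders):
# for step, batch in enumerate(loader):
#     torch.cuda.reset_peak_memory_stats()
#     loss = train_step(batch)
#     log_cuda_memory(f"step {step}")
```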
