At what time should I stop searching and begin retraining my model? #88
Comments
Just start retraining after the ~40-epoch search.
Thanks! I will try this and see if it works.
@Oliver-jiang Can you share your retraining results for the searched architecture with us? Thanks a lot.
@Oliver-jiang Since the RTX 2080Ti only has 11GB of memory, you may have had to disable global pooling in ASPP. That may be the reason for the poor performance.
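For context, "global pooling in ASPP" refers to the image-level pooling branch of the ASPP head used in DeepLab-style models, which is sometimes dropped to save GPU memory. Below is a minimal PyTorch sketch of that branch; the class name and channel sizes are illustrative, not the exact module from this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ASPPImagePooling(nn.Module):
    """Image-level (global average pooling) branch of an ASPP head.

    Illustrative sketch of the branch under discussion, not the exact
    module from this repository.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # global pooling
        self.conv = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        size = x.shape[2:]
        y = F.relu(self.bn(self.conv(self.pool(x))))        # 1x1 global context
        # Upsample back to the feature-map size so the branch can be
        # concatenated with the atrous-convolution branches of ASPP.
        return F.interpolate(y, size=size, mode="bilinear", align_corners=False)


# Example on a dummy feature map:
feat = torch.randn(2, 256, 17, 17)
out = ASPPImagePooling(256, 256)(feat)   # shape (2, 256, 17, 17)
```

Removing this branch saves memory, but it also discards image-level context, which can noticeably reduce mIoU.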
I am so sorry; I forgot to check my mail for a long time. The problem with the autodeeplab project is still there, but I have not changed anything in the ASPP module. Could you please explain more about this? Thanks a lot.
In my implementation, I only resized the input image to 128 and set filter_multiplier to 4 to fit into memory; the result is stuck around 25% on PASCAL now.
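As a rough, back-of-the-envelope illustration of why those two knobs free up memory (the baseline numbers below are assumptions, not the repository's actual defaults): activation memory scales roughly linearly with the filter multiplier and quadratically with the input side length.

```python
def relative_activation_memory(resize: int, filter_multiplier: int,
                               base_resize: int = 321, base_fm: int = 8) -> float:
    """Very rough relative activation-memory estimate.

    Channel counts scale ~linearly with the filter multiplier and feature
    maps ~quadratically with the input side length. The baseline values are
    assumptions used only for illustration.
    """
    return (resize / base_resize) ** 2 * (filter_multiplier / base_fm)


# Shrinking the input to 128 px and halving the filter multiplier to 4
# leaves roughly 8% of the baseline activation memory, which explains both
# why it fits on an 11GB card and why search quality can degrade.
print(round(relative_activation_memory(128, 4), 3))   # ~0.08
```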
@cardwing Can you share some of your search argparse settings with us? Thanks a lot.
Currently, I have reduced the number of layers from 12 to 10 due to the limited memory and kept the other hyperparameters fixed. The search performance on the Cityscapes validation set is around 33% mIoU after 40 epochs.
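A hypothetical way to write down that reduced search configuration is sketched below; the script name and flag names are assumptions rather than the repository's verified argparse interface, so check the repo's option definitions before copying them.

```python
# Hypothetical search settings mirroring the comment above; flag names and
# the script they are passed to are assumptions, not verified options.
reduced_search_config = {
    "num_layers": 10,        # down from 12 to fit in 11GB of GPU memory
    "epochs": 40,            # stop the architecture search here, then decode
    "filter_multiplier": 8,  # left at its (assumed) default
}

args = " ".join(f"--{key} {value}" for key, value in reduced_search_config.items())
print(f"python train_autodeeplab.py {args}")  # script name is an assumption
```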
@cardwing Thanks for your advice. I tried your method, but currently the best performance after 7 epochs is around 7%, and there are big fluctuations in some epochs. However, the performance at the beginning of training shown in the paper is already around 10% mIoU (versus 1% in my experiment). Did you have the same problem? Looking forward to your reply.
I have not met that problem. You can just train the model for more epochs and check the result. However, it took me around 7 days to train the searched architecture, and the final performance is only around 65% mIoU. I have to admit that NAS really consumes resources and time.
Same, around 65%.
Thanks a lot for your wonderful efforts!
But I have some problems when implementing this project.
I ran the code for PASCAL and Cityscapes with the default args on a single RTX 2080Ti GPU, but the results I got at epoch 40 are bad: PASCAL VOC does not even reach 20% mIoU, and Cityscapes is also just around 20%. I'm confused.
I saw in your introduction that there should be three stages: searching, decoding, and re-training. Does the re-training stage mean I should only start re-training once the search result has reached ~79%, or should I start re-training after ~40 epochs and then train the model to get the best result?
Looking forward to your reply. Thanks again.