Hi, I have a question about the loss compute? #207
Comments
@BCWang93 these are very good questions. Unfortunately, when we implemented the exact darknet loss function we got poor results during training, so we tried hyperparameter tuning, and the current loss constants you see here are what we found to produce the best results when training on COCO from scratch. The main changes are:
We need to reexamine all of these and ideally retune them, since there have been many changes since they were first set. I am hoping to start a dedicated issue on this topic soon. See #205 for more information. Also beware, a low-mAP bug was fixed in the last couple of days, so I strongly encourage you to update to the latest code.
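For context, the "loss constants" discussed here are multipliers applied to the individual loss components (xy offsets, wh sizes, objectness, class). A minimal sketch of how such weights might enter the total loss, using hypothetical component names and standard PyTorch criteria; the weight values 8, 4, and 1 are illustrative placeholders, not the repo's actual settings:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: hypothetical component losses and hand-tuned weights.
mse = nn.MSELoss()              # box regression terms
bce = nn.BCEWithLogitsLoss()    # objectness term
ce = nn.CrossEntropyLoss()      # classification term

def total_loss(pxy, pwh, pconf, pcls, txy, twh, tconf, tcls):
    lxy = 8 * mse(torch.sigmoid(pxy), txy)   # xy offsets (weight 8 is a placeholder)
    lwh = 4 * mse(pwh, twh)                  # wh sizes (weight 4 is a placeholder)
    lconf = 1 * bce(pconf, tconf)            # objectness (weight 1 is a placeholder)
    lcls = 1 * ce(pcls, tcls)                # classes; ce expects class-index targets
    return lxy + lwh + lconf + lcls          # weighted sum of the components
```

Retuning the constants then amounts to changing the multiplier in front of each term.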
Thanks for your reply! I will test it! Thanks!
@dsp6414 those are not mAP values > 1; those are recall values. Can you supply your test images, .data, .cfg files, and trained model to reproduce the issue?
In `compute_loss.py`, I found this way to compute the loss:

```python
k = 1  # nT / bs
if len(b) > 0:
    pi = pi0[b, a, gj, gi]  # predictions closest to anchors
    tconf[b, a, gj, gi] = 1  # conf
```

What is `k`? And why do you multiply by numbers like 4, 8, and 1? Also, is the loss computed over the whole batch or is it the mean? Hope you reply! Thank you very much!
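To make that indexing concrete, here is a minimal, self-contained sketch. The tensor shapes, the specific index values, and the reading of the `k = nT / bs` comment are assumptions for illustration, not the repo's exact code. `b`, `a`, `gj`, `gi` index the image in the batch, the matched anchor, and the grid cell of each target, so gathering with them pulls out only the predictions responsible for a ground-truth box; the `reduction` argument decides whether a term is a mean or a batch total:

```python
import torch
import torch.nn as nn

bs, na, ny, nx, no = 2, 3, 13, 13, 85     # batch, anchors, grid size, outputs (assumed shapes)
pi0 = torch.randn(bs, na, ny, nx, no)     # raw predictions for one YOLO layer

# Hypothetical build_targets output: one target per image, matched to an anchor/cell.
b  = torch.tensor([0, 1])                 # image index within the batch
a  = torch.tensor([1, 2])                 # matched anchor index
gj = torch.tensor([6, 7])                 # grid row of the target's cell
gi = torch.tensor([5, 9])                 # grid column of the target's cell

tconf = torch.zeros(bs, na, ny, nx)       # objectness target, 0 everywhere...
tconf[b, a, gj, gi] = 1                   # ...except at the matched cells

pi = pi0[b, a, gj, gi]                    # (nT, no): predictions closest to anchors,
                                          # these would feed the xy/wh/class terms

k = 1  # one reading of "nT / bs": rescale a mean loss by targets per image (assumption)
bce_mean = nn.BCEWithLogitsLoss(reduction='mean')  # averages over all grid cells
bce_sum  = nn.BCEWithLogitsLoss(reduction='sum')   # sums instead of averaging

lconf_mean = k * bce_mean(pi0[..., 4], tconf)      # per-element mean objectness loss
lconf_sum  = bce_sum(pi0[..., 4], tconf)           # total over the whole batch
```

With `reduction='mean'` each term is already an average, so a factor like `k = nT / bs` would be one way to keep the gradient scale comparable when the number of targets per image varies; with `reduction='sum'` the term grows with batch size unless it is divided by `bs`.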