Why train for so many epochs (240) compared to Caffe SSD (120000 iterations ≈ 58 epochs)? #223

Open · yinglang opened this issue Oct 23, 2018 · 1 comment

@yinglang

The original Caffe SSD trains for only 120000 iterations at batch size 8, which is only about 58 epochs. Why does your MXNet SSD need 240 epochs? That costs a lot.
Have you tried a 58-epoch run with learning-rate decay at epoch 39 and epoch 49?
This really confuses me. Thanks for reading.
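
For concreteness, here is a minimal sketch of the schedule being proposed, assuming the ~16551-image VOC07+12 trainval set (an assumption; the thread never states the dataset size) and the batch size 8 from the question, expressed with MXNet's `MultiFactorScheduler`:

```python
import mxnet as mx

# Assumption: VOC07+12 trainval (~16551 images); not stated in this thread.
images_per_epoch = 16551
batch_size = 8
iters_per_epoch = images_per_epoch // batch_size  # ~2068 iterations per epoch

# Decay the learning rate by 10x at epochs 39 and 49, converted to the
# iteration steps that MXNet schedulers expect.
scheduler = mx.lr_scheduler.MultiFactorScheduler(
    step=[39 * iters_per_epoch, 49 * iters_per_epoch],
    factor=0.1,
)
```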

@zhreshold (Owner)

The original paper uses batch size 32, so 120000 iterations is equivalent to about 240 epochs.
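
A quick back-of-the-envelope check of that equivalence, again assuming the ~16551-image VOC07+12 trainval set:

```python
# epochs = iterations * batch_size / dataset_size
dataset_size = 16551  # assumption: VOC07+12 trainval

def iters_to_epochs(iterations, batch_size):
    return iterations * batch_size / dataset_size

print(iters_to_epochs(120000, 8))   # ~58 epochs: the Caffe run in the question
print(iters_to_epochs(120000, 32))  # ~232 epochs: the paper's schedule, i.e. ~240
```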
