How to reproduce the model as in the paper #18
How many epochs do I have to train to reproduce the model from the paper?
I think I have to use the pretrained model trained on the h36m dataset, right?
Then how many epochs do we need for fine-tuning on 3dpw?

Comments
In our experiments, we train our model on multiple datasets for 200 epochs, then test on the h36m dataset.
Did you fine-tune from the same model checkpoint that you provided on GitHub? Thank you.
Thanks for the question. The H36M model we provided is the best-performing checkpoint from the entire training run (we train for 200 epochs and pick the checkpoint that performs best on H36M). For 3DPW fine-tuning, we fine-tune from the final checkpoint (trained for 200 epochs). I couldn't retrieve the checkpoint file, but I found the relevant log file. The log is noisy, as we didn't clean the codebase during paper submission. In that log, the metrics to look for are mPVE_smpl, mPJPE_smpl, and PAmPJPE_smpl.
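For readers unfamiliar with these metrics: mPVE is the mean per-vertex error on the predicted mesh, mPJPE the mean per-joint position error, and PAmPJPE the joint error after a rigid Procrustes alignment (scale, rotation, translation) of the prediction to the ground truth. Below is a minimal NumPy sketch of PA-MPJPE; the function name and array shapes are illustrative, not this repository's code:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Mean per-joint error after similarity (Procrustes) alignment.

    pred, gt: (J, 3) arrays of 3D joint positions (same units, e.g. mm).
    """
    # Center both joint sets at the origin
    X = pred - pred.mean(axis=0)
    Y = gt - gt.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance matrix
    U, S, Vt = np.linalg.svd(X.T @ Y)
    # Guard against a reflection (det = -1) sneaking into the rotation
    if np.linalg.det(U @ Vt) < 0:
        Vt[-1] *= -1
        S[-1] *= -1
    R = U @ Vt                      # applied to row vectors: X @ R
    s = S.sum() / (X ** 2).sum()    # optimal isotropic scale
    aligned = s * X @ R + gt.mean(axis=0)
    return np.linalg.norm(aligned - gt, axis=1).mean()
```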
Now it's clear, thank you for your answer. I have one more question: there was a big difference in performance between batch size 25 on a 24 GB GPU and batch size 10 on a 12 GB GPU. I couldn't increase the batch size due to hardware constraints, but I wonder whether performance would improve further with a batch size above 30.
Yes. In my experience, increasing the batch size to 32 brings a small improvement. I also tried even larger sizes (40, 48), but saw no significant further gains.
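If hardware caps the per-step batch at 10, gradient accumulation can emulate a larger effective batch. A hedged PyTorch sketch follows, with toy stand-ins for the real model and data loader; note that BatchNorm running statistics still see only each micro-batch, so this is not exactly equivalent to a true batch of 30:

```python
import torch
import torch.nn as nn

# Toy stand-ins so the loop runs; swap in the real model, loss, and loader.
model = nn.Linear(8, 3)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
train_loader = [(torch.randn(10, 8), torch.randn(10, 3)) for _ in range(6)]

accum_steps = 3  # 3 micro-batches of 10 ~= effective batch of 30
optimizer.zero_grad()
for step, (inputs, targets) in enumerate(train_loader):
    loss = criterion(model(inputs), targets) / accum_steps  # keep gradient scale comparable
    loss.backward()                                         # gradients sum into .grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()         # one update per effective batch
        optimizer.zero_grad()
```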
Thank you for your great help!
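Putting the thread together: train on the mixed datasets for 200 epochs, take the best-performing checkpoint for H36M evaluation, and start 3DPW fine-tuning from the final epoch-200 checkpoint. Below is a hypothetical sketch of that fine-tuning setup; the model class, file name, and learning rate are illustrative assumptions, not this repository's actual API, and the thread itself does not state the fine-tuning epoch count or hyperparameters:

```python
import torch
import torch.nn as nn

class MeshRegressor(nn.Module):
    """Placeholder for the real network, which regresses mesh vertices/joints."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(2048, 24 * 3)

    def forward(self, feats):
        return self.head(feats)

model = MeshRegressor()
# Start from the final 200-epoch checkpoint, per the discussion above.
ckpt = torch.load("checkpoint_epoch200.pth", map_location="cpu")  # hypothetical path
model.load_state_dict(ckpt.get("model", ckpt))
model.train()

# A reduced learning rate is a common fine-tuning choice; the value here is
# an assumption, not the paper's setting.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```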