
Details for reproducing #3

Closed · KSXGroup opened this issue Jan 3, 2023 · 1 comment

Comments

KSXGroup commented Jan 3, 2023

Hi, I am trying to reproduce your results on the eth dataset, but what I get is far from the reported numbers (only slightly better than the linear baseline).

I preprocessed the raw data using your data converter, pretrained for 700 epochs (lr=3e-6) plus 300 epochs (lr=3e-7) with the lambda value mentioned in the paper, and then fine-tuned on eth, but I could not reach the reported value. Maybe I missed some steps. Could you provide details for reproducing the results, or a trained model checkpoint?
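
For reference, here is a minimal sketch of the two-stage pretraining schedule I used (the model, data, and optimizer below are placeholders; the real inputs come from your converter):

```python
import torch

# Placeholders for the actual model and preprocessed data.
model = torch.nn.Linear(16, 2)
loader = [(torch.randn(8, 16), torch.randn(8, 2)) for _ in range(4)]
criterion = torch.nn.MSELoss()

def run_stage(epochs: int, lr: float) -> None:
    # Optimizer choice here is illustrative, not from the paper.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()

run_stage(700, 3e-6)  # first pretraining stage
run_stage(300, 3e-7)  # second stage at the reduced learning rate
```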

Thanks!

Sigta678 (Owner) commented Jan 4, 2023

Hi @KSXGroup,

Thanks for your question.
The missing step might be the learning rate decay during the fine-tuning process, or you can try dropping some of the data and keeping just 1% or 10% for fine-tuning; we discussed this in our "amount of data for fine-tuning" experiment. Also, Figure 2 in our supplementary materials, attached below, explains the phenomenon you ran into on eth; a rough sketch of the fine-tuning setup I mean follows the figure.
[Figure: Appendix_Fig2 — Figure 2 from the supplementary materials]
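
To illustrate both points (a minimal PyTorch sketch, not our exact recipe; the model, scheduler settings, and the 10% ratio here are placeholders):

```python
import random
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Placeholder dataset standing in for the preprocessed eth training split.
full_train = TensorDataset(torch.randn(500, 16), torch.randn(500, 2))

# Keep only a small fraction (e.g. 10%) of the data for fine-tuning.
keep = random.sample(range(len(full_train)), k=len(full_train) // 10)
loader = DataLoader(Subset(full_train, keep), batch_size=8, shuffle=True)

model = torch.nn.Linear(16, 2)  # placeholder for the pretrained model
criterion = torch.nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
# Decay the learning rate over the course of fine-tuning.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for epoch in range(50):
    for x, y in loader:
        opt.zero_grad()
        criterion(model(x), y).backward()
        opt.step()
    sched.step()  # apply the decay once per epoch
```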

Hope this helps!

Sigta678 closed this as completed May 8, 2023