Hi, I am trying to reproduce your results on the ETH dataset, but they are far from the reported numbers (only slightly better than the linear baseline).
I preprocessed the raw data with your data converter and pretrained for 700 epochs (lr=3e-6) plus 300 epochs (lr=3e-7), using the lambda value mentioned in the paper. I then fine-tuned on ETH, but I could not reach the reported values. Maybe I missed some steps. Could you provide details for reproducing the results, or a trained model checkpoint?
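For clarity, here is the two-stage pretraining schedule I used, as a minimal sketch (the epoch counts and learning rates are the ones stated above; the function name is just illustrative):

```python
def pretrain_lr(epoch):
    """Two-stage pretraining schedule: lr = 3e-6 for the first
    700 epochs, then lr = 3e-7 for the remaining 300 epochs."""
    return 3e-6 if epoch < 700 else 3e-7

# Look up the learning rate at the stage boundary.
stage1_lr = pretrain_lr(699)  # last epoch of stage 1
stage2_lr = pretrain_lr(700)  # first epoch of stage 2
```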
Thanks!
Thanks for your question.
The missing step might be the learning-rate decay during fine-tuning. Alternatively, you can try dropping some data and keeping only 1% or 10% for fine-tuning; we discuss this in our experiment "amount of data for fine-tuning". You can also check Figure 2 in the supplementary material, which explains the phenomenon you observed on ETH.
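The two suggestions above can be sketched as follows. This is a minimal illustration, not the paper's actual code: the exponential decay factor and the seeded sampling are assumptions (the reply only says that learning-rate decay and 1%/10% subsampling were used, not their exact form):

```python
import random

def subsample(dataset, fraction, seed=0):
    """Keep only a small fraction (e.g. 0.01 or 0.10) of the
    fine-tuning data, as suggested in the reply. A fixed seed
    makes the subset reproducible (assumption for illustration)."""
    rng = random.Random(seed)
    k = max(1, int(len(dataset) * fraction))
    return rng.sample(dataset, k)

def finetune_lr(base_lr, epoch, decay=0.95):
    """Exponential learning-rate decay during fine-tuning.
    The decay factor 0.95 is a placeholder assumption; the
    reply does not specify the decay schedule."""
    return base_lr * (decay ** epoch)

# Example: keep 10% of 1000 trajectories, decay the lr per epoch.
data = list(range(1000))
small = subsample(data, 0.10)
lr_epoch0 = finetune_lr(3e-6, 0)
```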