
Pre-training for SQUAD and fine-tuning for BIOASQ #10

Closed
rsanjaykamath opened this issue Feb 20, 2019 · 3 comments
@rsanjaykamath

Hello,

Can you please explain how exactly one can replicate the state-of-the-art results on the BioASQ dataset? In the paper you report pre-training on SQuAD before fine-tuning on BioASQ. I tried the steps documented in this repo without the SQuAD step and, as expected, the results were quite low.

Is there any documentation that explains the commands for this? I'm sorry if I missed it.

Thanks

@jhyuklee
Member

As Wonjin said, to reproduce our results you need to follow the steps below.

  1. Fine-tune BioBERT on the SQuAD dataset (https://github.com/google-research/bert).
  2. Use the checkpoint from step 1 as the initial checkpoint when fine-tuning on the BioASQ datasets. (Be sure to set a new output folder.)

Thanks.
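The two steps above can be sketched as shell commands. This is a hedged sketch, not the authors' exact commands: all paths are placeholders, the step-2 script name `run_qa.py` and the data file names are assumptions, and `run_squad.py` with these flags comes from the linked google-research/bert repository.

```shell
# Step 1: fine-tune BioBERT on SQuAD using Google's run_squad.py
# ($BIOBERT_DIR holds the pre-trained BioBERT weights, vocab, and config)
python run_squad.py \
  --vocab_file=$BIOBERT_DIR/vocab.txt \
  --bert_config_file=$BIOBERT_DIR/bert_config.json \
  --init_checkpoint=$BIOBERT_DIR/model.ckpt \
  --do_train=True \
  --train_file=train-v1.1.json \
  --max_seq_length=384 \
  --doc_stride=128 \
  --output_dir=/tmp/squad_output        # step-1 checkpoints land here

# Step 2: fine-tune on BioASQ, initializing from the step-1 checkpoint.
# The output folder MUST be a new one, as noted above.
python run_qa.py \
  --vocab_file=$BIOBERT_DIR/vocab.txt \
  --bert_config_file=$BIOBERT_DIR/bert_config.json \
  --init_checkpoint=/tmp/squad_output/model.ckpt \
  --do_train=True \
  --train_file=BioASQ-train.json \
  --output_dir=/tmp/bioasq_output       # must differ from step 1's output
```

The key detail is `--init_checkpoint` in step 2: it points at the SQuAD-tuned weights rather than the original BioBERT checkpoint, which is what transfers the SQuAD training into the BioASQ run.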

@telukuntla

telukuntla commented Mar 21, 2019

Hello @jhyuklee,
May I know what parameters you used while training on SQuAD, such as the number of epochs and the learning rate?
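This question was not answered in the thread. For reference only, the SQuAD 1.1 example in the google-research/bert README uses roughly the settings below; whether BioBERT's authors used the same values is an assumption, not something confirmed here.

```shell
# Hyperparameters from the BERT README's SQuAD 1.1 example
# (a starting point, not BioBERT's confirmed settings)
python run_squad.py \
  --train_batch_size=12 \
  --learning_rate=3e-5 \
  --num_train_epochs=2.0 \
  --max_seq_length=384 \
  --doc_stride=128 \
  ...   # remaining flags (vocab, config, checkpoint, files) as in the README
```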

@sonigovind

I have a dataset for NER containing three features: seq, word, and tag. I want to run the NER task; how can I use this model for my work?
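A dataset with (seq, word, tag) rows can be converted into the CoNLL-style token-per-line format that BERT-style NER fine-tuning scripts typically read (one `word<TAB>tag` pair per line, blank line between sentences). The helper below is a hypothetical sketch, not part of this repo; the column names and the tab-separated output format are assumptions.

```python
def rows_to_conll(rows):
    """Convert (seq_id, word, tag) rows, grouped by sentence, into
    CoNLL-style lines: one 'word<TAB>tag' per line, with a blank line
    separating sentences (sentence boundary = change of seq_id)."""
    lines = []
    prev_seq = None
    for seq_id, word, tag in rows:
        if prev_seq is not None and seq_id != prev_seq:
            lines.append("")  # blank line marks a sentence boundary
        lines.append(f"{word}\t{tag}")
        prev_seq = seq_id
    return "\n".join(lines) + "\n"

rows = [
    (1, "Aspirin", "B-Chemical"),
    (1, "inhibits", "O"),
    (2, "BRCA1", "B-Gene"),
]
print(rows_to_conll(rows))
```

Once the data is in this format, it can be split into train/dev/test files and fed to a token-classification fine-tuning script in the usual way.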
