I have done some experiments on Chinese using the bert-base config; the results are not promising #6

Open
yyht opened this issue Jan 26, 2021 · 2 comments

yyht commented Jan 26, 2021

Hi, I have done pretraining on a 50 GB Chinese dataset and run downstream fine-tuning on the ChineseCLUE benchmark. The hyperparameters are the defaults, the same as bert-base:
learning_rate: 3e-5
epochs: 3 or 5
The fine-tuning results on the benchmark are worse than those of the official Chinese bert-base released by Google.
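
For reference, here is a minimal sketch of the BERT-style fine-tuning schedule those hyperparameters imply. Only the learning rate and epoch count come from my runs; the batch size, warmup proportion, and train-set size below are assumed placeholders based on common BERT fine-tuning defaults:

```python
# Sketch of the fine-tuning schedule described above.
# Only learning_rate and num_epochs reflect the runs in this comment;
# everything else is an assumed placeholder.

learning_rate = 3e-5
num_epochs = 3                 # 3 or 5 in the runs above
batch_size = 32                # assumption: a common bert-base fine-tuning default
warmup_proportion = 0.1        # assumption: default in the BERT reference code
num_train_examples = 10_000    # assumption: stand-in for one CLUE task's train split

steps_per_epoch = num_train_examples // batch_size
num_train_steps = steps_per_epoch * num_epochs
num_warmup_steps = int(num_train_steps * warmup_proportion)

# Linear warmup to the peak learning rate, then linear decay to zero,
# as in the BERT reference implementation.
def lr_at_step(step: int) -> float:
    if step < num_warmup_steps:
        return learning_rate * step / max(1, num_warmup_steps)
    remaining = max(0, num_train_steps - step)
    return learning_rate * remaining / max(1, num_train_steps - num_warmup_steps)

print(num_train_steps, num_warmup_steps, lr_at_step(100))
```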

zhu143xin commented

Hi, I want to use TTA for some work on Chinese spelling error correction. Have you done any experiments in that direction?

yyht commented Jan 28, 2021 via email
