PruningCallback doesn't work #20
Thank you so much for diving into allennlp-optuna. Which storage do you use? (It should be one of sqlite3, MySQL, PostgreSQL, or Redis.)
Thanks for creating the issue @himkt. I use the default sqlite3 storage.
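For context, the storage backends mentioned above are typically selected via a connection URL. The sketch below is illustrative only: the URL schemes follow common SQLAlchemy/Optuna conventions, and the host/credential details are placeholder assumptions, not values from this issue.

```python
# Illustrative storage URLs for the backends mentioned above.
# Hosts, users, and database names are placeholder assumptions.
STORAGE_URLS = {
    "sqlite3": "sqlite:///optuna.db",
    "MySQL": "mysql://user:password@localhost/optuna",
    "PostgreSQL": "postgresql://user:password@localhost/optuna",
    "Redis": "redis://localhost:6379",
}

def backend_of(url: str) -> str:
    """Return the scheme portion of a storage URL (e.g. 'sqlite')."""
    return url.split("://", 1)[0]
```

The default sqlite3 storage keeps everything in a local file, which is the simplest setup but can become a bottleneck when many distributed workers write to it concurrently.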
This was the config I was using. You would have to change the models and dataset readers. I can try to reproduce it with a simpler example using predefined models, etc., but it would take me a while since I won't be using the multi-GPU cluster for some time.
@vikigenius Thank you for your help. Let me ask a question: does this configuration work well if it runs on a single GPU (meaning that it disables …)? If your configuration works on a single GPU, I'll investigate the AllenNLP integration in Optuna. But it may take time, because the mechanism for supporting … Sorry for the inconvenience. 🙇
Related to optuna/optuna#1990.
FYI @vikigenius, I'm working on entirely refactoring the AllenNLP integration in Optuna (optuna/optuna#2796).
In Optuna v3.0.0a0, we finally introduced support for the pruning callback in distributed training:
pip install -U optuna==3.0.0a0
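For readers unfamiliar with what the pruning callback decides, the core idea can be sketched without Optuna at all: a trial is stopped early when its intermediate metric compares unfavorably to what other trials reported at the same step. This is a minimal, self-contained illustration of median-style pruning, not Optuna's actual implementation (which also handles warmup steps, optimization direction, and distributed synchronization, the part fixed in v3.0.0a0).

```python
import statistics

def should_prune(intermediate: float, completed_at_step: list) -> bool:
    """Simplified median-pruning rule (illustrative only).

    Assumes a higher metric is better. Prunes when the current
    trial's intermediate value falls below the median of the
    values other trials reported at the same step. With no
    history yet, never prune.
    """
    if not completed_at_step:
        return False
    return intermediate < statistics.median(completed_at_step)
```

In distributed training, every worker must reach the same prune/continue decision, which is why the callback needs storage-level coordination rather than a purely local check like this one.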
Apart from #18 (comment).