In one sentence
An extension of SGD that makes learning-rate tuning unnecessary in distributed training.
Paper link
https://arxiv.org/abs/2007.05105
Authors / Affiliations
Tyler B. Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin (Apple)
Submission date (yyyy/MM/dd)
2020/07/09
Overview
Novelty / Differences
Makes the existing fixed learning-rate scaling rules, the identity scaling rule and the linear scaling rule, adaptive: instead of committing to one rule up front, the method interpolates between them at each step based on estimated gradient statistics (see the sketch below).
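A minimal NumPy sketch of the gain-ratio idea as I understand it from the paper: with S workers, the gain r is near 1 when the gradient mean dominates its variance (identity-scaling regime) and near S when variance dominates (linear-scaling regime). The function name `adascale_gain` and variable names are my own, and the paper smooths these estimates with running averages rather than using raw per-step values as done here.

```python
import numpy as np

def adascale_gain(worker_grads):
    """Estimate an AdaScale-style gain ratio r in [1, S].

    worker_grads: list of S >= 2 flat gradient vectors (np.ndarray),
    one per worker, computed on independent mini-batches.
    """
    S = len(worker_grads)
    mean_grad = np.mean(worker_grads, axis=0)                      # aggregated gradient
    mean_sq_norm = np.mean([np.dot(g, g) for g in worker_grads])   # avg ||g_i||^2
    agg_sq_norm = np.dot(mean_grad, mean_grad)                     # ||mean||^2

    # E[||g_i||^2] = mu^2 + sigma^2 and E[||mean||^2] = mu^2 + sigma^2 / S,
    # which gives unbiased estimators of the variance sigma^2 and the
    # squared gradient mean mu^2 (S/(S-1) is the usual sample correction):
    sigma2 = (S / (S - 1)) * (mean_sq_norm - agg_sq_norm)
    mu2 = agg_sq_norm - sigma2 / S
    sigma2, mu2 = max(sigma2, 1e-12), max(mu2, 1e-12)              # guard against noise

    # r -> 1 as sigma2 -> 0 (identity scaling), r -> S as mu2 -> 0 (linear scaling).
    return (sigma2 + mu2) / (sigma2 / S + mu2)
```

In use, each step would multiply the single-worker learning-rate schedule by r and advance the schedule's iteration counter by r (not by 1), so training consumes the original iteration budget faster when the gain is high.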
Method
Results
Comments