| Field | Value |
|---|---|
| title | Understanding Gradient Descent on the Edge of Stability in Deep Learning |
| booktitle | Proceedings of the 39th International Conference on Machine Learning |
| abstract | Deep learning experiments by \citet{cohen2021gradient} using deterministic Gradient Descent (GD) revealed an *Edge of Stability (EoS)* phase, in which the learning rate (LR) and sharpness (*i.e.*, the largest eigenvalue of the Hessian) no longer behave as in traditional optimization. Sharpness stabilizes around $2/\mathrm{LR}$, and the loss goes up and down across iterations, yet still with an overall downward trend. The current paper mathematically analyzes a new mechanism of implicit regularization in the EoS phase, whereby GD updates, due to the non-smooth loss landscape, turn out to evolve along some deterministic flow on the manifold of minimum loss. This is in contrast to many previous results about implicit bias, which rely either on infinitesimal updates or on noise in the gradient. Formally, for any smooth function |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | arora22a |
| month | 0 |
| tex_title | Understanding Gradient Descent on the Edge of Stability in Deep Learning |
| firstpage | 948 |
| lastpage | 1024 |
| page | 948-1024 |
| order | 948 |
| cycles | false |
| bibtex_author | Arora, Sanjeev and Li, Zhiyuan and Panigrahi, Abhishek |
| author | |
| date | 2022-06-28 |
| address | |
| container-title | Proceedings of the 39th International Conference on Machine Learning |
| volume | 162 |
| genre | inproceedings |
| issued | |
| extras | |
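The abstract's central quantitative claim, that full-batch GD can run with sharpness hovering near $2/\mathrm{LR}$ while the loss is non-monotone, can be illustrated numerically. Below is a minimal sketch, not taken from the paper: it runs GD on the illustrative two-parameter loss $L(a,b)=\tfrac{1}{2}(ab-1)^2$, whose minimizers form a manifold on which the sharpness equals $a^2+b^2$, and prints the largest Hessian eigenvalue against the $2/\mathrm{LR}$ threshold. The loss, learning rate, and initialization are assumptions chosen for illustration, not the paper's experimental setup.

```python
# Illustrative sketch (not the paper's code): full-batch GD on a toy loss
# L(a, b) = 0.5 * (a*b - 1)^2, tracking sharpness (largest Hessian eigenvalue)
# against the 2/LR stability threshold. All hyperparameters are assumptions.
import numpy as np

def loss(a, b):
    return 0.5 * (a * b - 1.0) ** 2

def grad(a, b):
    r = a * b - 1.0                      # residual of the toy model
    return np.array([r * b, r * a])

def hessian(a, b):
    return np.array([[b * b, 2 * a * b - 1.0],
                     [2 * a * b - 1.0, a * a]])

lr = 0.5                                 # 2/lr = 4.0 lies between the minimum
                                         # sharpness on the manifold (2) and the
                                         # sharpness at initialization (~4.3)
theta = np.array([2.0, 0.52])            # start near, but not on, the manifold a*b = 1

for step in range(3000):
    a, b = theta
    sharpness = np.linalg.eigvalsh(hessian(a, b))[-1]   # largest eigenvalue
    if step % 300 == 0:
        print(f"step {step:5d}  loss {loss(a, b):.3e}  "
              f"sharpness {sharpness:.3f}  (2/lr = {2.0 / lr:.3f})")
    theta = theta - lr * grad(a, b)      # plain GD step with constant LR
```

With these assumed values, the printed sharpness typically starts above $2/\mathrm{LR}$ and drifts down toward it while the loss oscillates rather than decreasing monotonically, matching the qualitative picture described in the abstract.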