| Field | Value |
|---|---|
| title | Private optimization in the interpolation regime: faster rates and hardness results |
| booktitle | Proceedings of the 39th International Conference on Machine Learning |
| abstract | In non-private stochastic convex optimization, stochastic gradient methods converge much faster on interpolation problems (problems where there exists a solution that simultaneously minimizes all of the sample losses) than on non-interpolating ones; similar improvements are not known in the private setting. In this paper, we investigate differentially private stochastic optimization in the interpolation regime. First, we show that without additional assumptions, interpolation problems do not exhibit improved convergence rates with differential privacy. However, when the functions exhibit quadratic growth around the optimum, we show (near) exponential improvements in the private sample complexity. In particular, we propose an adaptive algorithm that improves the sample complexity to achieve expected error |
| layout | inproceedings |
| series | Proceedings of Machine Learning Research |
| publisher | PMLR |
| issn | 2640-3498 |
| id | asi22a |
| month | 0 |
| tex_title | Private optimization in the interpolation regime: faster rates and hardness results |
| firstpage | 1025 |
| lastpage | 1045 |
| page | 1025-1045 |
| order | 1025 |
| cycles | false |
| bibtex_author | Asi, Hilal and Chadha, Karan and Cheng, Gary and Duchi, John |
| author | |
| date | 2022-06-28 |
| address | |
| container-title | Proceedings of the 39th International Conference on Machine Learning |
| volume | 162 |
| genre | inproceedings |
| issued | |
| extras | |