2022-06-28-awasthi22c.md

File metadata and controls

56 lines (56 loc) · 2.4 KB
---
title: H-Consistency Bounds for Surrogate Loss Minimizers
booktitle: Proceedings of the 39th International Conference on Machine Learning
abstract: We present a detailed study of estimation errors in terms of surrogate loss estimation errors. We refer to such guarantees as H-consistency bounds, since they account for the hypothesis set H adopted. These guarantees are significantly stronger than H-calibration or H-consistency. They are also more informative than similar excess error bounds derived in the literature, when H is the family of all measurable functions. We prove general theorems providing such guarantees, for both the distribution-dependent and distribution-independent settings. We show that our bounds are tight, modulo a convexity assumption. We also show that previous excess error bounds can be recovered as special cases of our general results. We then present a series of explicit bounds in the case of the zero-one loss, with multiple choices of the surrogate loss and for both the family of linear functions and neural networks with one hidden-layer. We further prove more favorable distribution-dependent guarantees in that case. We also present a series of explicit bounds in the case of the adversarial loss, with surrogate losses based on the supremum of the $\rho$-margin, hinge or sigmoid loss and for the same two general hypothesis sets. Here too, we prove several enhancements of these guarantees under natural distributional assumptions. Finally, we report the results of simulations illustrating our bounds and their tightness.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: awasthi22c
month: 0
tex_title: H-Consistency Bounds for Surrogate Loss Minimizers
firstpage: 1117
lastpage: 1174
page: 1117-1174
order: 1117
cycles: false
bibtex_author: Awasthi, Pranjal and Mao, Anqi and Mohri, Mehryar and Zhong, Yutao
author:
- given: Pranjal
  family: Awasthi
- given: Anqi
  family: Mao
- given: Mehryar
  family: Mohri
- given: Yutao
  family: Zhong
date: 2022-06-28
address:
container-title: Proceedings of the 39th International Conference on Machine Learning
volume: '162'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras:
---