---
title: Do More Negative Samples Necessarily Hurt In Contrastive Learning?
booktitle: Proceedings of the 39th International Conference on Machine Learning
abstract: 'Recent investigations in noise contrastive estimation suggest, both empirically
  as well as theoretically, that while having more “negative samples” in the contrastive
  loss improves downstream classification performance initially, beyond a threshold,
  it hurts downstream performance due to a “collision-coverage” trade-off. But is
  such a phenomenon inherent in contrastive learning? We show in a simple theoretical
  setting, where positive pairs are generated by sampling from the underlying latent
  class (introduced by Saunshi et al. (ICML 2019)), that the downstream performance
  of the representation optimizing the (population) contrastive loss in fact does
  not degrade with the number of negative samples. Along the way, we give a structural
  characterization of the optimal representation in our framework, for noise contrastive
  estimation. We also provide empirical support for our theoretical results on CIFAR-10
  and CIFAR-100 datasets.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: awasthi22b
month: 0
tex_title: Do More Negative Samples Necessarily Hurt In Contrastive Learning?
firstpage: 1101
lastpage: 1116
page: 1101-1116
order: 1101
cycles: false
bibtex_author: Awasthi, Pranjal and Dikkala, Nishanth and Kamath, Pritish
author:
- given: Pranjal
  family: Awasthi
- given: Nishanth
  family: Dikkala
- given: Pritish
  family: Kamath
date: 2022-06-28
address:
container-title: Proceedings of the 39th International Conference on Machine Learning
volume: '162'
genre: inproceedings
issued:
  date-parts:
  - 2022
  - 6
  - 28
pdf:
extras:
---
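
For context, a common form of the $k$-negative (NCE-style) contrastive loss referenced in the abstract is sketched below; the notation (a representation $f$, an anchor $x$, a positive sample $x^+$, and negatives $x_1^-, \dots, x_k^-$) is illustrative and not taken verbatim from the paper.

$$
\mathcal{L}_k(f) \;=\; \mathbb{E}_{x,\,x^+,\,x_1^-,\dots,x_k^-}\!\left[-\log \frac{\exp\!\big(f(x)^\top f(x^+)\big)}{\exp\!\big(f(x)^\top f(x^+)\big) + \sum_{i=1}^{k}\exp\!\big(f(x)^\top f(x_i^-)\big)}\right]
$$

In this form, the question studied in the paper is whether the downstream classification performance of a minimizer of $\mathcal{L}_k$ degrades as $k$ grows.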