This page keeps a personal reading list of ML algorithm/system/theory papers that combine locality-sensitive hashing (LSH) and deep learning.
Categorization is yet to come.
- Blalock et al., Multiplying Matrices without Multiplying, ICML 2021.
Replaces the matrix multiplication $C = AB$ with hashing: first, build hash buckets from some training data (say, $\tilde{A}$); next, construct a prototype row vector for each bucket; then, pre-compute the $B$-processed version of each prototype (i.e., $v^\top B$); finally, map each row of $A$ to its bucket and output the corresponding pre-computed product.
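The bucket/prototype pipeline above can be sketched in NumPy. This is a minimal illustration, not the paper's method: it uses plain k-means as a stand-in for the learned hashing scheme, and all array names and sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: training data A_tilde ("some training data") and the fixed matrix B.
A_tilde = rng.normal(size=(1000, 16))
B = rng.normal(size=(16, 8))

# Steps 1-2: build buckets and a prototype row vector per bucket.
# A few Lloyd (k-means) iterations stand in for the hashing scheme.
K = 32
prototypes = A_tilde[rng.choice(len(A_tilde), K, replace=False)].copy()
for _ in range(10):
    dists = ((A_tilde[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(1)
    for k in range(K):
        members = A_tilde[assign == k]
        if len(members):
            prototypes[k] = members.mean(0)

# Step 3: pre-compute the B-processed prototypes, i.e. v^T B for each prototype v.
proto_B = prototypes @ B  # shape (K, 8)

# Step 4: at query time, map each row of A to its bucket and output the
# pre-computed product -- no per-row multiplication against B.
A = rng.normal(size=(100, 16))
dists = ((A[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
C_approx = proto_B[dists.argmin(1)]  # approximates C = A @ B, shape (100, 8)
```

The accuracy of `C_approx` depends entirely on how well the buckets cover the distribution of `A`'s rows, which is why the method trains its (much more careful) hash functions on data resembling `A`.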
- Chen et al., SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems, MLSys 2020.
- Chen et al., MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training, ICLR 2021.
- Kitaev et al., Reformer: The Efficient Transformer, ICLR 2020.
- Dikkala et al., For Manifold Learning, Deep Neural Networks can be Locality Sensitive Hash Functions, arXiv 2021.
- Panigrahy et al., Sketch based memory for neural networks, AISTATS 2021.