A curated list of awesome papers on dataset distillation and related applications, inspired by awesome-computer-vision.
Dataset distillation is the task of synthesizing a small dataset such that models trained on it achieve high performance on the original large dataset. A dataset distillation algorithm takes as input a large real dataset to be distilled (training set), and outputs a small synthetic distilled dataset, which is evaluated by training models on this distilled dataset and testing them on a separate real dataset (validation/test set). A good small distilled dataset is not only useful for dataset understanding, but also has various applications (e.g., continual learning, privacy, neural architecture search, etc.). This task was first introduced in the 2018 paper Dataset Distillation [Tongzhou Wang et al., '18], together with an algorithm based on backpropagation through the optimization steps of training.
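To make that bilevel setup concrete, below is a minimal, self-contained PyTorch sketch of the backpropagation-through-optimization idea: learnable synthetic examples (and a learnable inner step size) are optimized by unrolling a few SGD steps of a small model on the synthetic data and backpropagating the loss on real data through those steps. The toy linear classifier, the random stand-in "real" data, and names such as `syn_x`, `syn_lr`, and `train_on_synthetic` are illustrative assumptions for this sketch, not code from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim = 10, 64

# Random "real" data standing in for the large training set (demo assumption).
real_x = torch.randn(5000, dim)
real_y = torch.randint(0, num_classes, (5000,))

# Learnable distilled set: one synthetic example per class, plus a learned inner-loop step size.
syn_x = torch.randn(num_classes, dim, requires_grad=True)
syn_y = torch.arange(num_classes)
syn_lr = torch.tensor(0.02, requires_grad=True)
outer_opt = torch.optim.Adam([syn_x, syn_lr], lr=0.01)

def train_on_synthetic(w, b, steps=5):
    # Differentiable inner loop: plain SGD on the synthetic data,
    # keeping the graph so gradients flow back into syn_x and syn_lr.
    for _ in range(steps):
        inner_loss = F.cross_entropy(syn_x @ w + b, syn_y)
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w, b = w - syn_lr * gw, b - syn_lr * gb
    return w, b

for step in range(500):
    # Sample a fresh random initialization so the distilled data
    # works across initializations rather than for a single model.
    w0 = 0.01 * torch.randn(dim, num_classes, requires_grad=True)
    b0 = torch.zeros(num_classes, requires_grad=True)
    w, b = train_on_synthetic(w0, b0)

    # Outer objective: loss of the inner-trained model on a real batch,
    # backpropagated through the entire inner training procedure.
    idx = torch.randint(0, real_x.size(0), (256,))
    outer_loss = F.cross_entropy(real_x[idx] @ w + b, real_y[idx])
    outer_opt.zero_grad()
    outer_loss.backward()
    outer_opt.step()
```

Many of the methods listed below replace or approximate this unrolled objective in different ways, e.g., gradient matching, trajectory matching, distribution/feature matching, and kernel ridge-regression formulations.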
In recent years (2019-now), dataset distillation has gained increasing attention in the research community across many institutes and labs, with more papers being published each year. This growing body of work has steadily improved dataset distillation and explored its many variants and applications.
This project is curated and maintained by Guang Li, Bo Zhao, and Tongzhou Wang.
If you find this project useful for your research, please use the following BibTeX entry.

```bibtex
@misc{li2022awesome,
    author={Li, Guang and Zhao, Bo and Wang, Tongzhou},
    title={Awesome-Dataset-Distillation},
    howpublished={\url{https://github.com/Guang000/Awesome-Dataset-Distillation}},
    year={2022}
}
```
- Dataset Distillation (Tongzhou Wang et al., 2018)
- Gradient-Based Hyperparameter Optimization Through Reversible Learning (Dougal Maclaurin et al., ICML 2015)
- Dataset Condensation with Gradient Matching (Bo Zhao et al., ICLR 2021)
- Dataset Condensation with Differentiable Siamese Augmentation (Bo Zhao et al., ICML 2021)
- Dataset Distillation by Matching Training Trajectories (George Cazenavette et al., CVPR 2022)
- Dataset Condensation with Contrastive Signals (Saehyung Lee et al., ICML 2022)
- Delving into Effective Gradient Matching for Dataset Condensation (Zixuan Jiang et al., 2022)
- Loss-Curvature Matching for Dataset Selection and Condensation (Seungjae Shin & Heesun Bae et al., AISTATS 2023)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation (Jiawei Du & Yidi Jiang et al., CVPR 2023)
- Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory (Justin Cui et al., ICML 2023)
- DREAM: Efficient Dataset Distillation by Representative Matching (Yanqing Liu & Jianyang Gu et al., ICCV 2023)
- CAFE: Learning to Condense Dataset by Aligning Features (Kai Wang & Bo Zhao et al., CVPR 2022)
- Dataset Condensation with Distribution Matching (Bo Zhao et al., WACV 2023)
- Optimizing Millions of Hyperparameters by Implicit Differentiation (Jonathan Lorraine et al., AISTATS 2020)
- Dataset Meta-Learning from Kernel Ridge-Regression (Timothy Nguyen et al., ICLR 2021)
- Dataset Distillation with Infinitely Wide Convolutional Networks (Timothy Nguyen et al., NeurIPS 2021)
- On Implicit Bias in Overparameterized Bilevel Optimization (Paul Vicol et al., ICML 2022)
- Dataset Distillation using Neural Feature Regression (Yongchao Zhou et al., NeurIPS 2022)
- Efficient Dataset Distillation using Random Feature Approximation (Noel Loo et al., NeurIPS 2022)
- Accelerating Dataset Distillation via Model Augmentation (Lei Zhang & Jie Zhang et al., CVPR 2023)
- Dataset Distillation with Convexified Implicit Gradients (Noel Loo et al., ICML 2023)
- Dataset Quantization (Daquan Zhou et al., ICCV 2023)
- Dataset Condensation via Efficient Synthetic-Data Parameterization (Jang-Hyun Kim et al., ICML 2022)
- Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks (Zhiwei Deng et al., NeurIPS 2022)
- On Divergence Measures for Bayesian Pseudocoresets (Balhae Kim et al., NeurIPS 2022)
- Dataset Distillation via Factorization (Songhua Liu et al., NeurIPS 2022)
- Synthesizing Informative Training Samples with GAN (Bo Zhao et al., NeurIPS 2022 Workshop)
- PRANC: Pseudo RAndom Networks for Compacting Deep Models (Parsa Nooralinejad et al., 2022)
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing (Hae Beom Lee & Dong Bok Lee et al., 2022)
- Generalizing Dataset Distillation via Deep Generative Prior (George Cazenavette et al., CVPR 2023)
- DiM: Distilling Dataset into Generative Model (Kai Wang & Jianyang Gu et al., 2023)
- Rethinking Data Distillation: Do Not Overlook Calibration (Dongyao Zhu et al., ICCV 2023)
- Flexible Dataset Distillation: Learn Labels Instead of Images (Ondrej Bohdal et al., NeurIPS 2020 Workshop)
- Soft-Label Dataset Distillation and Text Dataset Distillation (Ilia Sucholutsky et al., IJCNN 2021)
- Multimodal Dataset Distillation for Image-Text Retrieval (Xindi Wu et al., 2023)
- DC-BENCH: Dataset Condensation Benchmark (Justin Cui et al., NeurIPS 2022)
- A Comprehensive Study on Dataset Distillation: Performance, Privacy, Robustness and Fairness (Zongxiong Chen & Jiahui Geng et al., 2023)
- A Survey on Dataset Distillation: Approaches, Applications and Future Directions (Jiahui Geng & Zongxiong Chen et al., IJCAI 2023)
- Data Distillation: A Survey (Noveen Sachdeva et al., 2023)
- A Comprehensive Survey to Dataset Distillation (Shiye Lei et al., 2023)
- Dataset Distillation: A Comprehensive Review (Ruonan Yu & Songhua Liu et al., 2023)
- Reducing Catastrophic Forgetting with Learning on Synthetic Data (Wojciech Masarczyk et al., CVPR 2020 Workshop)
- Condensed Composite Memory Continual Learning (Felix Wiewel et al., IJCNN 2021)
- Distilled Replay: Overcoming Forgetting through Synthetic Samples (Andrea Rosasco et al., IJCAI 2021 Workshop)
- Sample Condensation in Online Continual Learning (Mattia Sangermano et al., IJCNN 2022)
- Summarizing Stream Data for Memory-Restricted Online Continual Learning (Jianyang Gu et al., 2023)
- SecDD: Efficient and Secure Method for Remotely Training Neural Networks (Ilia Sucholutsky et al., AAAI 2021)
- Privacy for Free: How does Dataset Condensation Help Privacy? (Tian Dong et al., ICML 2022)
- Can We Achieve Robustness from Data Alone? (Nikolaos Tsilivis et al., ICML 2022 Workshop)
- Private Set Generation with Discriminative Information (Dingfan Chen et al., NeurIPS 2022)
- Towards Robust Dataset Learning (Yihan Wu et al., 2022)
- Backdoor Attacks Against Dataset Distillation (Yugeng Liu et al., NDSS 2023)
- Differentially Private Kernel Inducing Points (DP-KIP) for Privacy-preserving Data Distillation (Margarita Vinaroz et al., 2023)
- Dataset Distillation Fixes Dataset Reconstruction Attacks (Noel Loo et al., 2023)
- Soft-Label Anonymous Gastric X-ray Image Distillation (Guang Li et al., ICIP 2020)
- Compressed Gastric Image Generation Based on Soft-Label Dataset Distillation for Medical Data Sharing (Guang Li et al., CMPB 2022)
- Dataset Distillation for Medical Dataset Sharing (Guang Li et al., AAAI 2023 Workshop)
- Federated Learning via Synthetic Data (Jack Goetz et al., 2020)
- Distilled One-Shot Federated Learning (Yanlin Zhou et al., 2020)
- FedSynth: Gradient Compression via Synthetic Data in Federated Learning (Shengyuan Hu et al., 2022)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics (Renjie Pi et al., 2022)
- DENSE: Data-Free One-Shot Federated Learning (Jie Zhang et al., NeurIPS 2022)
- Meta Knowledge Condensation for Federated Learning (Ping Liu et al., ICLR 2023)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning (Yuanhao Xiong & Ruochen Wang et al., CVPR 2023)
- Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments (Rui Song et al., IJCNN 2023)
- Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy (Hui-Po Wang et al., 2023)
- Federated Virtual Learning on Heterogeneous Data with Local-global Distillation (Chun-Yin Huang et al., 2023)
- Graph Condensation for Graph Neural Networks (Wei Jin et al., ICLR 2022)
- Condensing Graphs via One-Step Gradient Matching (Wei Jin et al., KDD 2022)
- Graph Condensation via Receptive Field Distribution Matching (Mengyang Liu et al., 2022)
- Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (Felipe Petroski Such et al., ICML 2020)
- Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation (Dmitry Medvedev et al., AIST 2021)
- Wearable ImageNet: Synthesizing Tileable Textures via Dataset Distillation (George Cazenavette et al., CVPR 2022 Workshop)
- Learning from Designers: Fashion Compatibility Analysis Via Dataset Distillation (Yulan Chen et al., ICIP 2022)
- Knowledge Condensation Distillation (Chenxin Li et al., ECCV 2022)
- Infinite Recommendation Networks: A Data-Centric Approach (Noveen Sachdeva et al., NeurIPS 2022)
- Bidirectional Learning for Offline Infinite-width Model-based Optimization (Can Chen et al., NeurIPS 2022)
- Bidirectional Learning for Offline Model-based Biological Sequence Design (Can Chen et al., ICML 2023)
- Towards Efficient Deep Hashing Retrieval: Condensing Your Data via Feature-Embedding Matching (Tao Feng & Jie Zhang et al., 2023)
- Data Distillation for Text Classification (Yongqi Li et al., 2021)
- Dataset Distillation with Attention Labels for Fine-tuning BERT (Aru Maekawa et al., ACL 2023)
- New Properties of the Data Distillation Method When Working With Tabular Data (Dmitry Medvedev et al., AIST 2020)
Media Coverage
- Beginning of Awesome-Dataset-Distillation
- Most Popular AI Research Aug 2022
- A Project That Helps You Understand Dataset Distillation
- Condensation Is the Essence: A Unified Perspective on Dataset Distillation
Acknowledgments
We want to thank Nikolaos Tsilivis, Wei Jin, Yongchao Zhou, Noveen Sachdeva, Can Chen, Guangxiang Zhao, Shiye Lei, Xinchao Wang, Dmitry Medvedev, Seungjae Shin, Jiawei Du, Yidi Jiang, and Xindi Wu for their valuable suggestions and contributions.