Awesome-Diffusion-Acceleration

📒 A curated collection of diffusion acceleration resources.

Outline

Sampling Solver

  • [1] Denoising Diffusion Implicit Models, ICLR 2021.

    Song, Jiaming and Meng, Chenlin and Ermon, Stefano.

    [Paper] [Code]

  • [2] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, NeurIPS 2022.

    Lu, Cheng and Zhou, Yuhao and Bao, Fan and Chen, Jianfei and Li, Chongxuan and Zhu, Jun.

    [Paper] [Code]

  • [3] DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models, arXiv 2022.

    Lu, Cheng and Zhou, Yuhao and Bao, Fan and Chen, Jianfei and Li, Chongxuan and Zhu, Jun.

    [Paper] [Code]
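
As a quick illustration of how these solvers are used in practice, Hugging Face diffusers ships DDIM and DPM-Solver++ as drop-in schedulers; a minimal sketch (the model ID and step count below are only examples):

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in DPM-Solver++: a high-order ODE solver that reaches good
# sample quality in roughly 15-25 steps instead of the default 50.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```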

Pruning

  • [1] Token Merging for Fast Stable Diffusion, CVPRW 2023.

    Bolya, Daniel and Hoffman, Judy.

    [Paper] [Code]

  • [2] Structural Pruning for Diffusion Models, NeurIPS 2023.

    Fang, Gongfan and Ma, Xinyin and Wang, Xinchao.

    [Paper] [Code]

  • [3] Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models, CVPR 2024.

    Wang, Hongjie and Liu, Difan and Kang, Yan and Li, Yijun and Lin, Zhe and Jha, Niraj K and Liu, Yuchen.

    [Paper] [Code]

  • [4] LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models, arXiv 2024.

    Zhang, Dingkun and Li, Sijia and Chen, Chen and Xie, Qingsong and Lu, Haonan.

    [Paper] [Code]

  • [5] SparseDM: Toward Sparse Efficient Diffusion Models, arXiv 2024.

    Wang, Kafeng and Chen, Jianfei and Li, He and Mi, Zhenpeng and Zhu, Jun.

    [Paper] [Code]
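
Several of these methods ship as one-call patches. For example, the code release accompanying Token Merging ([1] above) exposes a tomesd package; a minimal sketch assuming that release:

```python
import torch
import tomesd  # code release of "Token Merging for Fast Stable Diffusion"
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Merge ~50% of redundant spatial tokens before attention; merged tokens
# are unmerged afterward, so the pipeline's output shape is unchanged.
tomesd.apply_patch(pipe, ratio=0.5)

image = pipe("a scenic mountain lake at sunrise").images[0]
```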

Quantization

  • [1] Post-training Quantization on Diffusion Models, CVPR 2023.

    Shang, Yuzhang and Yuan, Zhihang and Xie, Bin and Wu, Bingzhe and Yan, Yan.

    [Paper] [Code]

  • [2] Temporal Dynamic Quantization for Diffusion Models, NeurIPS 2023.

    So, Junhyuk and Lee, Jungwon and Ahn, Daehyun and Kim, Hyungjun and Park, Eunhyeok.

    [Paper] [Code]

  • [3] QVD: Post-training Quantization for Video Diffusion Models, arXiv 2024.

    Tian, Shilong and Chen, Hong and Lv, Chengtao and Liu, Yu and Guo, Jinyang and Liu, Xianglong and Li, Shengxi and Yang, Hao and Xie, Tao.

    [Paper] [Code]
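
To make the shared idea concrete: post-training quantization maps trained float weights onto a small integer grid without retraining. Below is a toy symmetric 8-bit round trip; it is a generic illustration, not any listed paper's method, which additionally handle diffusion-specific issues such as timestep-dependent activation ranges:

```python
import torch

def quantize(w: torch.Tensor, bits: int = 8):
    """Symmetric per-tensor quantization: one scale maps floats
    onto the signed integer grid [-qmax, qmax]."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(256, 256)
q, scale = quantize(w)
print(f"max abs error: {(w - dequantize(q, scale)).abs().max():.5f}")
```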

Distillation

  • [1] SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds, NeurIPS 2023.

    Li, Yanyu and Wang, Huan and Jin, Qing and Hu, Ju and Chemerys, Pavlo and Fu, Yun and Wang, Yanzhi and Tulyakov, Sergey and Ren, Jian.

    [Paper] [Code]

  • [2] BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion, ECCV 2024.

    Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook.

    [Paper] [Code]
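
The common core of these methods is a student network trained to mimic a larger (or slower) teacher. A minimal sketch of one output-level distillation step, where teacher, student, and the batch tensors are hypothetical stand-ins:

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, optimizer, x_t, t, cond):
    """One distillation step: the student matches the teacher's noise
    prediction. The papers above add feature-level and step-reduction
    objectives on top of this output-level loss."""
    with torch.no_grad():
        target = teacher(x_t, t, cond)  # teacher's noise prediction
    loss = F.mse_loss(student(x_t, t, cond), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```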

Cache Mechanism

  • [1] Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models, arXiv 2023.

    Li, Senmao and Hu, Taihang and Khan, Fahad Shahbaz and Li, Linxuan and Yang, Shiqi and Wang, Yaxing and Cheng, Ming-Ming and Yang, Jian.

    [Paper] [Code]

  • [2] DeepCache: Accelerating Diffusion Models for Free, CVPR 2024.

    Ma, Xinyin and Fang, Gongfan and Wang, Xinchao.

    [Paper] [Code]

  • [3] ∆-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers, arXiv 2024.

    Chen, Pengtao and Shen, Mingzhu and Ye, Peng and Cao, Jianjian and Tu, Chongjun and Bouganis, Christos-Savvas and Zhao, Yiren and Chen, Tao.

    [Paper] [Code]

  • [4] FORA: Fast-Forward Caching in Diffusion Transformer Acceleration, arXiv 2024.

    Selvaraju, Pratheba and Ding, Tianyu and Chen, Tianyi and Zharkov, Ilya and Liang, Luming.

    [Paper] [Code]

  • [5] Learning-to-Cache: Accelerating Diffusion Transformer via Layer Caching, arXiv 2024.

    Ma, Xinyin and Fang, Gongfan and Mi, Michael Bi and Wang, Xinchao.

    [Paper] [Code]
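
As one concrete interface, the DeepCache code release ([2] above) wraps a diffusers pipeline with a helper that reuses deep U-Net features across neighboring steps; a sketch assuming that release's DeepCacheSDHelper API:

```python
import torch
from diffusers import StableDiffusionPipeline
from DeepCache import DeepCacheSDHelper  # from the DeepCache code release

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

helper = DeepCacheSDHelper(pipe=pipe)
# Recompute deep U-Net features only every 3rd step and reuse the
# cached ones in between; shallow layers still run every step.
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()

image = pipe("a watercolor painting of a fox").images[0]
helper.disable()
```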

Deployment

  • [1] AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising, arXiv 2024.

    Chen, Zigeng and Ma, Xinyin and Fang, Gongfan and Tan, Zhenxiong and Wang, Xinchao.

    [Paper] [Code]

Others

  • [1] Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators, ECCV 2024.

    Pu, Yifan and Xia, Zhuofan and Guo, Jiayi and Han, Dongchen and Li, Qixiu and Li, Duo and Yuan, Yuhui and Li, Ji and Han, Yizeng and Song, Shiji and Huang, Gao and Li, Xiu.

    [Paper] [Code]

Training-free Diffusion Acceleration

📌 Training-free Stable Diffusion Acceleration

Base models: Stable Diffusion, Stable Video Diffusion, and Text2Video-Zero.

  • [1] Denoising Diffusion Implicit Models, ICLR 2021.

    Song, Jiaming and Meng, Chenlin and Ermon, Stefano.

    [Paper] [Code]

  • [2] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, NeurIPS 2022.

    Lu, Cheng and Zhou, Yuhao and Bao, Fan and Chen, Jianfei and Li, Chongxuan and Zhu, Jun.

    [Paper] [Code]

  • [3] DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models, arXiv 2022.

    Lu, Cheng and Zhou, Yuhao and Bao, Fan and Chen, Jianfei and Li, Chongxuan and Zhu, Jun.

    [Paper] [Code]

  • [4] Token Merging for Fast Stable Diffusion, CVPRW 2023.

    Bolya, Daniel and Hoffman, Judy.

    [Paper] [Code]

  • [5] AutoDiffusion: Training-Free Optimization of Time Steps and Architectures for Automated Diffusion Model Acceleration, ICCV 2023.

    Li, Lijiang and Li, Huixia and Zheng, Xiawu and Wu, Jie and Xiao, Xuefeng and Wang, Rui and Zheng, Min and Pan, Xin and Chao, Fei and Ji, Rongrong.

    [Paper] [Code]

  • [6] Structural Pruning for Diffusion Models, NeurIPS 2023.

    Fang, Gongfan and Ma, Xinyin and Wang, Xinchao.

    [Paper] [Code]

  • [7] Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models, arXiv 2023.

    Li, Senmao and Hu, Taihang and Khan, Fahad Shahbaz and Li, Linxuan and Yang, Shiqi and Wang, Yaxing and Cheng, Ming-Ming and Yang, Jian.

    [Paper] [Code]

  • [8] DeepCache: Accelerating Diffusion Models for Free, CVPR 2024.

    Ma, Xinyin and Fang, Gongfan and Wang, Xinchao.

    [Paper] [Code]

  • [9] Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models, CVPR 2024.

    Wang, Hongjie and Liu, Difan and Kang, Yan and Li, Yijun and Lin, Zhe and Jha, Niraj K and Liu, Yuchen.

    [Paper] [Code]

  • [10] PFDiff: Training-free Acceleration of Diffusion Models through the Gradient Guidance of Past and Future, arXiv 2024.

    Wang, Guangyi and Cai, Yuren and Li, Lijiang and Peng, Wei and Su, Songzhi.

    [Paper] [Code]
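
Because the methods in this section require no training, several of them can in principle be stacked on one pipeline, e.g. a fast solver plus token merging plus feature caching. A hedged sketch combining the interfaces assumed earlier; whether any given pair composes cleanly depends on the implementations:

```python
import torch
import tomesd                                  # assumed ToMe release
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
from DeepCache import DeepCacheSDHelper        # assumed DeepCache release

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
tomesd.apply_patch(pipe, ratio=0.5)            # merge redundant tokens
helper = DeepCacheSDHelper(pipe=pipe)
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()                                # cache deep U-Net features

image = pipe("a cyberpunk city at night", num_inference_steps=20).images[0]
```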

📌 Training-free Diffusion Transformer Acceleration

Base models: DiT-XL and PIXART-α.

  • [1] ∆-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers, arXiv 2024.

    Chen, Pengtao and Shen, Mingzhu and Ye, Peng and Cao, Jianjian and Tu, Chongjun and Bouganis, Christos-Savvas and Zhao, Yiren and Chen, Tao.

    [Paper] [Code]

  • [2] FORA: Fast-Forward Caching in Diffusion Transformer Acceleration, arXiv 2024.

    Selvaraju, Pratheba and Ding, Tianyu and Chen, Tianyi and Zharkov, Ilya and Liang, Luming.

    [Paper] [Code]

  • [3] DiTFastAttn: Attention Compression for Diffusion Transformer Models, arXiv 2024.

    Yuan, Zhihang and Lu, Pu and Zhang, Hanling and Ning, Xuefei and Zhang, Linfeng and Zhao, Tianchen and Yan, Shengen and Dai, Guohao and Wang, Yu.

    [Paper] [Code]
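
The caching entries above share one mechanism: recompute a transformer block's output only every few denoising steps and reuse the cached value in between. A toy sketch of that interval cache (a conceptual illustration, not any paper's implementation; ∆-DiT and FORA choose which blocks and steps to cache far more carefully):

```python
import torch

class IntervalCachedBlock(torch.nn.Module):
    """Wrap a transformer block; recompute its output every
    `interval` denoising steps, otherwise return the cached output."""

    def __init__(self, block: torch.nn.Module, interval: int = 3):
        super().__init__()
        self.block = block
        self.interval = interval
        self.cache = None

    def forward(self, x, step, *args, **kwargs):
        # `step` is the denoising step index supplied by the sampler loop.
        if self.cache is None or step % self.interval == 0:
            self.cache = self.block(x, *args, **kwargs)
        return self.cache  # reused (stale) features on the other steps
```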

Contact

📫 For any questions, please contact Xuyang Liu.