PyTorch code for our NeurIPS 2024 paper "Binarized Diffusion Model for Image Super-Resolution"

Binarized Diffusion Model for Image Super-Resolution

Zheng Chen, Haotong Qin, Yong Guo, Xiongfei Su, Xin Yuan, Linghe Kong, and Yulun Zhang, "Binarized Diffusion Model for Image Super-Resolution", NeurIPS, 2024

[arXiv] [visual results] [pretrained models]

🔥🔥🔥 News

  • 2024-10-14: Code and pre-trained models are released. ⭐️⭐️⭐️
  • 2024-09-26: BI-DiffSR is accepted at NeurIPS 2024. 🎉🎉🎉
  • 2024-06-09: This repo is released.

Abstract: Advanced diffusion models (DMs) perform impressively in image super-resolution (SR), but their high memory and computational costs hinder deployment. Binarization, an ultra-compression technique, offers the potential to effectively accelerate DMs. Nonetheless, due to the model structure and the multi-step iterative nature of DMs, existing binarization methods suffer significant performance degradation. In this paper, we introduce a novel binarized diffusion model, BI-DiffSR, for image SR. First, for the model structure, we design a UNet architecture optimized for binarization. We propose consistent-pixel-downsample (CP-Down) and consistent-pixel-upsample (CP-Up) to maintain dimension consistency and facilitate full-precision information transfer. Meanwhile, we design channel-shuffle-fusion (CS-Fusion) to enhance feature fusion in the skip connections. Second, to handle activation differences across timesteps, we design the timestep-aware redistribution (TaR) and activation function (TaA). TaR and TaA dynamically adjust the distribution of activations based on the timestep, improving the flexibility and representation ability of the binarized modules. Comprehensive experiments demonstrate that our BI-DiffSR outperforms existing binarization methods.
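As a rough illustration of the timestep-aware idea described above, the sketch below applies a per-timestep-group learnable shift to activations before a binarized layer. The module name, grouping scheme, and hyperparameters are assumptions for illustration only, not the paper's exact implementation:

```python
import torch
import torch.nn as nn

class TimestepAwareRedistribution(nn.Module):
    """Illustrative sketch of timestep-aware redistribution (TaR):
    learn one channel-wise shift per timestep group and apply it to
    the activations so their distribution can adapt to the timestep.
    Grouping into `num_groups` buckets is an assumption here."""

    def __init__(self, channels: int, num_groups: int = 4, num_timesteps: int = 1000):
        super().__init__()
        self.num_groups = num_groups
        self.num_timesteps = num_timesteps
        # One learnable channel-wise bias per timestep group.
        self.bias = nn.Parameter(torch.zeros(num_groups, channels, 1, 1))

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Map each timestep in the batch to its group index.
        g = (t * self.num_groups // self.num_timesteps).clamp(max=self.num_groups - 1)
        # Shift activations with the group's bias (broadcast over H, W).
        return x + self.bias[g]
```

A binarized convolution would then consume the shifted activations; at initialization the shifts are zero, so the module starts as an identity offset.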



[Visual comparison] HR | LR | SR3 (FP) | BBCU | BI-DiffSR (ours)

TODO

  • Release code and pretrained models

Dependencies

  • Python 3.9
  • PyTorch 1.13.1+cu117
# Clone the GitHub repo and go to the default directory 'BI-DiffSR'.
git clone https://github.com/zhengchen1999/BI-DiffSR.git
conda create -n bi_diffsr python=3.9
conda activate bi_diffsr
pip install -r requirements.txt -f https://download.pytorch.org/whl/torch_stable.html
git clone https://github.com/huggingface/diffusers.git
cd diffusers
pip install -e ".[torch]"

Contents

  1. Datasets
  2. Models
  3. Training
  4. Testing
  5. Results
  6. Citation
  7. Acknowledgements

Datasets

The training and testing sets used in this work can be downloaded as follows:

| Training Set | Testing Set | Visual Results |
| :--- | :--- | :--- |
| DIV2K (800 training images, 100 validation images) + Flickr2K (2650 images) [complete training dataset DF2K: Google Drive / Baidu Disk] | Set5 + Set14 + BSD100 + Urban100 + Manga109 [complete testing dataset: Google Drive / Baidu Disk] | Google Drive / Baidu Disk |

Download the training and testing datasets and put them into the corresponding folders of datasets/.

Models

| Method | Params (M) | FLOPs (G) | PSNR (dB) | LPIPS | Model Zoo | Visual Results |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| BI-DiffSR | 4.58 | 36.67 | 24.11 | 0.1823 | Google Drive | Google Drive |

The performance is reported on Urban100 (×4). FLOPs are measured with an output size of 3×256×256.
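PSNR in the table above is the standard fidelity metric; below is a minimal sketch of its computation on image tensors in [0, 1]. The exact evaluation protocol used in the paper (e.g. Y-channel conversion or border cropping) may differ:

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = torch.mean((sr - hr) ** 2)
    return float(10 * torch.log10(max_val ** 2 / mse))
```

For example, a uniform error of 0.1 on every pixel gives an MSE of 0.01 and hence a PSNR of 20 dB.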

Training

  • The ×2 task requires 4×8 GB VRAM, and the ×4 task requires 4×20 GB VRAM.

  • Download training (DF2K, already processed) and testing (Set5, BSD100, Urban100, Manga109, already processed) datasets, place them in datasets/.

  • Run the following scripts. The training configuration is in options/train/.

    # BI-DiffSR, input=64x64, 4 GPUs
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train/train_BI_DiffSR_x2.yml --launcher pytorch
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train/train_BI_DiffSR_x4.yml --launcher pytorch
  • The training experiment is in experiments/.

Testing

  • Download the pre-trained models and place them in experiments/pretrained_models/.

    We provide pre-trained models for image SR (×2, ×4).

  • Download testing (Set5, BSD100, Urban100, Manga109) datasets, place them in datasets/.

  • Run the following scripts. The testing configuration is in options/test/.

    # BI-DiffSR, reproduces results in Table 2 of the main paper
    python test.py -opt options/test/test_BI_DiffSR_x2.yml
    python test.py -opt options/test/test_BI_DiffSR_x4.yml

    Due to the randomness of the diffusion model (diffusers), results may vary slightly.

  • The output is in results/.
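To tighten run-to-run repeatability during testing, fixing the random seeds before sampling can help. This is a generic sketch; whether the repo's configs already expose a seed option is not confirmed here:

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    """Fix the Python, NumPy, and PyTorch RNGs for more repeatable sampling.
    (CUDA seeding is a no-op on CPU-only machines.)"""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```

Calling `set_seed` once at the start of a test run makes the diffusion sampling trajectory reproducible on the same hardware and library versions; results can still differ across GPUs or CUDA versions.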

Results

We achieve state-of-the-art performance among binarization methods. Detailed results can be found in the paper.

Quantitative Comparisons (click to expand)
  • Results in Table 2 (main paper)

Visual Comparisons (click to expand)
  • Results in Figure 8 (main paper)

  • Results in Figure 13 (supplemental material)

  • Results in Figure 14 (supplemental material)

Citation

If you find the code helpful in your research or work, please cite the following paper.

@inproceedings{chen2024binarized,
    title={Binarized Diffusion Model for Image Super-Resolution},
    author={Chen, Zheng and Qin, Haotong and Guo, Yong and Su, Xiongfei and Yuan, Xin and Kong, Linghe and Zhang, Yulun},
    booktitle={NeurIPS},
    year={2024}
}

Acknowledgements

This code is built on BasicSR and Image-Super-Resolution-via-Iterative-Refinement.
