👉 FORA: Fast-Forward Caching in Diffusion Transformer Acceleration

Diffusion transformers (DiT) have become the de facto choice for generating high-quality images and videos, largely due to their scalability, which enables the construction of larger models for enhanced performance. However, the increased size of these models leads to higher inference costs, making them less attractive for real-time applications. We present **F**ast-F**OR**ward C**A**ching (FORA), a simple yet effective approach designed to accelerate DiT by exploiting the repetitive nature of the diffusion process. FORA implements a caching mechanism that stores and reuses intermediate outputs from the attention and MLP layers across denoising steps, thereby reducing computational overhead. This approach does not require model retraining and seamlessly integrates with existing transformer-based diffusion models. Experiments show that FORA can speed up diffusion transformers several times over while only minimally affecting performance metrics such as the IS Score and FID. By enabling faster processing with minimal trade-offs in quality, FORA represents a significant advancement in deploying diffusion transformers for real-time applications. Code will be made publicly available.

[Teaser figure]
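
The core idea is simple enough to sketch in a few lines. The snippet below is an illustrative toy, not FORA's actual implementation: the class and parameter names (`CachedBlock`, `cache_interval`) are assumptions. It shows a transformer block that recomputes its attention and MLP outputs only every `cache_interval` denoising steps and reuses the cached tensors in between, which is the static caching behaviour described above.

```python
import torch
import torch.nn as nn

class CachedBlock(nn.Module):
    """Toy transformer block with static caching (illustrative sketch only)."""

    def __init__(self, dim, num_heads, cache_interval=3):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.cache_interval = cache_interval
        self._attn_out = None   # cached attention output
        self._mlp_out = None    # cached MLP output

    def forward(self, x, step):
        # Full computation every `cache_interval` steps (and on the first step);
        # otherwise reuse the cached attention/MLP outputs.
        if self._attn_out is None or step % self.cache_interval == 0:
            h = self.norm1(x)
            self._attn_out, _ = self.attn(h, h, h)
            self._mlp_out = self.mlp(self.norm2(x + self._attn_out))
        x = x + self._attn_out
        return x + self._mlp_out


# Toy usage: a short "denoising" loop over random tokens.
block = CachedBlock(dim=64, num_heads=4, cache_interval=3)
tokens = torch.randn(1, 16, 64)
for t in range(6):
    tokens = block(tokens, step=t)   # steps 0 and 3 recompute; the rest reuse the cache
```

In the real model, cached steps still pay for the non-cached parts of the network, so the achievable speed-up depends on how much of the per-step cost the attention and MLP layers account for.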

🔧 Dependencies and Sampling

conda create -n FORA python=3.9
conda activate FORA
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118

DiT Sampling

  • To sample a single ImageNet class with classifier-free guidance scale 1.5, caching frequency 3, output image size 512, and 250 DDIM steps (what the caching frequency controls is sketched after this list):
python src/sample.py --save-cache 'boost_infer_static' --cache-subtype 'default' --cache-threshold '3' --image-size 512 --seed 1 --cfg-scale 1.5 --num-sampling-steps 250
  • To sample the entire ImageNet dataset (50,000 images for FID evaluation, across 4 GPUs) and save the outputs to the samples folder:
 torchrun --nnodes=1 --nproc_per_node=4 src/sample_ddp.py --num-fid-samples 50000 --save-cache 'boost_infer_static' --cache-subtype 'default' --cache-threshold '3' --image-size 256 --per-proc-batch-size 4 --sample-dir 'samples' --cfg-scale 1.5 --num-sampling-steps 250
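
As a rough illustration of what `--cache-threshold` controls (assumed semantics, inferred from the static-caching description above): with threshold N, only every N-th denoising step runs the full attention/MLP stack, and the remaining steps reuse cached outputs.

```python
# Illustration only: count how many of the 250 DDIM steps would pay the full
# transformer cost under a static caching schedule with threshold 3.
num_sampling_steps, cache_threshold = 250, 3
full_steps = [t for t in range(num_sampling_steps) if t % cache_threshold == 0]
print(f"{len(full_steps)}/{num_sampling_steps} steps run the full attention/MLP stack")
# -> 84/250 steps run the full attention/MLP stack
```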

PixArt-alpha Sampling

Coming up!!

Acknowledgements

  • Thanks to DiT for their great work and codebase upon which we build FORA.
  • Thanks to PixArt-alpha for their wonderful work and contribution.
