
# MM-Diffusion (CVPR 2023)

This is the official PyTorch implementation of the paper MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation, accepted by CVPR 2023.


## Introduction

We propose MM-Diffusion, the first joint audio-video generation framework, which delivers engaging watching and listening experiences simultaneously in high-quality, realistic videos. MM-Diffusion consists of a sequential multi-modal U-Net: two subnets, one for audio and one for video, learn to gradually generate aligned audio-video pairs from Gaussian noise.

## Overview

## Visual

Generated audio-video examples on Landscape:

landscape.mp4

Generated audio-video examples on AIST++:

aist++.mp4

Generated audio-video examples on AudioSet:

audioset.mp4

## Requirements and dependencies

- python 3.8 (Anaconda recommended)
- pytorch >= 1.11.0

```shell
git clone https://github.com/researchmm/MM-Diffusion.git
cd MM-Diffusion

conda create -n mmdiffusion python=3.8
conda activate mmdiffusion
conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch-nightly -c nvidia
conda install mpi4py
pip install -r requirement.txt
```
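After installation, a quick sanity check (not part of the repository; the module list mirrors the commands above, and requirement.txt may add more) can confirm that the key dependencies are importable:

```python
import importlib.util

# Modules installed by the commands above; requirement.txt may add more.
REQUIRED = ["torch", "torchvision", "torchaudio", "mpi4py"]

def missing_modules(modules=REQUIRED):
    """Return the names of modules that cannot be found in this environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    missing = missing_modules()
    if missing:
        print("missing dependencies:", ", ".join(missing))
    else:
        print("all key dependencies found")
```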

## Models

Pre-trained models can be downloaded from Google Drive and Baidu Cloud.

- Landscape.pt: trained on the Landscape dataset to generate audio-video pairs.
- Landscape_SR.pt: trained on the Landscape dataset to upsample frames from resolution 64x64 to 256x256.
- AIST++.pt: trained on the AIST++ dataset to generate audio-video pairs.
- AIST++_SR.pt: trained on the AIST++ dataset to upsample frames from resolution 64x64 to 256x256.
- guided-diffusion_64_256_upsampler.pt: from guided-diffusion, used to initialize the image SR model.
- i3d_pretrained_400.pt: model for evaluating videos (FVD and KVD). Download it manually to ~/.cache/mmdiffusion/ if the automatic download fails.
- AudioCLIP-Full-Training.pt: model for evaluating audio (FAD). Download it manually to ~/.cache/mmdiffusion/ if the automatic download fails.
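As a convenience, a short sketch (hypothetical, not part of the repository; the filenames and cache path are taken from the list above) can verify that the manually downloaded evaluation checkpoints are where the metric code expects them:

```python
from pathlib import Path

# Evaluation checkpoints listed above and the cache directory they belong in.
CACHE_DIR = Path.home() / ".cache" / "mmdiffusion"
EVAL_CHECKPOINTS = ["i3d_pretrained_400.pt", "AudioCLIP-Full-Training.pt"]

def missing_checkpoints(cache_dir=CACHE_DIR, names=EVAL_CHECKPOINTS):
    """Return the checkpoint filenames not yet present in cache_dir."""
    cache_dir = Path(cache_dir)
    return [n for n in names if not (cache_dir / n).exists()]
```

If this returns a non-empty list, download the named files manually before running the evaluation scripts.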

## Datasets

  1. Landscape
  2. AIST++_crop

The datasets can be downloaded from Google Drive and Baidu Cloud.
We use only the training set for both training and evaluation.

You can also run our scripts on your own dataset by providing the path to a directory containing your videos; the script will collect all videos under that path, regardless of how they are organized.
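The directory scan can be pictured with a short sketch (hypothetical; the repository's actual data loader may differ in the extensions it accepts):

```python
from pathlib import Path

# Hypothetical extension set -- adjust to match your own data.
VIDEO_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def collect_videos(root):
    """Recursively gather all video files under root, however they are nested."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix.lower() in VIDEO_EXTS)
```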

## Test

1. Download the pre-trained checkpoints.
2. Download the datasets: Landscape or AIST++_crop.
3. Modify the relevant paths and run the generation script to generate audio-video pairs:

```shell
bash ssh_scripts/multimodal_sample_sr.sh
```

4. Modify `REF_DIR`, `SAMPLE_DIR`, and `OUTPUT_DIR`, then run the evaluation script:

```shell
bash ssh_scripts/multimodal_eval.sh
```
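A hypothetical layout for those three variables (the paths below are placeholders; edit the values inside ssh_scripts/multimodal_eval.sh itself) might look like:

```shell
# Hypothetical paths -- substitute your own before editing the script.
REF_DIR=/data/landscape/train          # ground-truth audio-video pairs
SAMPLE_DIR=out/landscape/samples       # generated samples from the previous step
OUTPUT_DIR=out/landscape/eval          # where the metric results are written

mkdir -p "${OUTPUT_DIR}"
echo "eval: ${SAMPLE_DIR} vs ${REF_DIR} -> ${OUTPUT_DIR}"
```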

## Train

1. Download the training datasets: Landscape or AIST++_crop.
2. Run the training scripts:

```shell
# Train the base model
bash ssh_scripts/multimodal_train.sh

# Train the upsampler from 64x64 -> 256x256; first extract videos into frames for SR training
bash ssh_scripts/image_sr_train.sh
```
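The frame-extraction step before SR training can be sketched as follows (a hypothetical helper, not the repository's own script; it assumes ffmpeg is on PATH, and the SR training code may expect a different frame layout):

```python
import subprocess
from pathlib import Path

def frame_dir_for(video_path, out_root):
    """Map a video file to the directory its extracted frames are written to."""
    return Path(out_root) / Path(video_path).stem

def extract_frames(video_path, out_root, fps=10):
    """Dump one PNG per sampled frame: frame_000001.png, frame_000002.png, ..."""
    out_dir = frame_dir_for(video_path, out_root)
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", str(video_path), "-vf", f"fps={fps}",
         str(out_dir / "frame_%06d.png")],
        check=True,
    )
```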

## Conditional Generation

```shell
# Zero-shot conditional generation: audio-to-video
bash ssh_scripts/audio2video_sample_sr.sh

# Zero-shot conditional generation: video-to-audio
bash ssh_scripts/video2audio_sample.sh
```

## Related projects

We also sincerely recommend some other excellent works related to ours. ✨

## Citation

If you find our work useful for your research, please consider citing our paper. 😊

```bibtex
@inproceedings{ruan2022mmdiffusion,
  author    = {Ruan, Ludan and Ma, Yiyang and Yang, Huan and He, Huiguo and Liu, Bei and Fu, Jianlong and Yuan, Nicholas Jing and Jin, Qin and Guo, Baining},
  title     = {MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation},
  year      = {2023},
  booktitle = {CVPR},
}
```

## Contact

If you encounter any problems, please describe them in an issue or contact: