[AAAI 25] Official Implementation for "VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment"

Shangkun Sun, Xiaoyu Liang, Songlin Fan, Wenxu Gao, Wei Gao*

(* Corresponding author)

from MMCAL, Peking University

🎦 Introduction

TL;DR: VE-Bench is an evaluation suite for text-driven video editing. It consists of a quality-assessment model that provides a human-aligned metric for edited videos, and a database of rich video-prompt pairs with corresponding human scores.


Overview of the VE-Bench Suite

VE-Bench DB contains a rich collection of source videos, including real-world, AIGC, and CG videos, covering people, objects, animals, and landscapes. It pairs these with editing instructions from several categories: semantic edits (addition, removal, replacement, ...), structural changes (size, shape, ...), and stylizations (color, texture, ...). It also includes editing results produced by different video editing models.

We conducted a subjective experiment with 24 participants from diverse backgrounds, which yielded 28,080 score samples, and trained the VE-Bench QA model on this data. The left image below shows a box plot of the average scores each model received in the subjective experiment; the right image shows each model's scores across different types of prompts.


Left: average score distributions of 8 editing methods. Right: performance of previous video-editing methods on different types of prompts.

💼 Model Preparation

Download all models from Google Drive and put them into the ckpts directory.
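
The inference script presumably reads the weights from ckpts in the repository root; a minimal layout sketch, where the filenames are placeholders for whatever the Google Drive folder contains:

ckpts/
└── <downloaded_checkpoint_files>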

✨ Usage

Evaluate a single video

python infer.py --single_test --src_path ${path_to_source_video} --dst_path ${path_to_dst_video} --prompt ${editing_prompt}

# Run on example videos
# python infer.py --single_test --src_path "./data/src/00433tokenflow_baby_gaze.mp4" --dst_path "./data/edited/00433tokenflow_baby_gaze.mp4" --prompt "A black-haired boy is turning his head" 
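
For programmatic use, the documented single-video CLI can be wrapped from Python. This is a minimal sketch, not part of the repository: score_edit is a hypothetical helper, the paths reuse the example above, and how the score is extracted from stdout depends on what infer.py actually prints.

import subprocess

# Hypothetical helper (not part of the repo): run the documented CLI and
# return its raw stdout, which contains the predicted quality score.
def score_edit(src_path: str, dst_path: str, prompt: str) -> str:
    cmd = [
        "python", "infer.py", "--single_test",
        "--src_path", src_path,
        "--dst_path", dst_path,
        "--prompt", prompt,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Reuses the example pair shipped with the repo.
print(score_edit(
    "./data/src/00433tokenflow_baby_gaze.mp4",
    "./data/edited/00433tokenflow_baby_gaze.mp4",
    "A black-haired boy is turning his head",
))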

Evaluate a set of videos

python infer.py --data_path ${path_to_data_folder} --label_path ${path_to_prompt_txt_file}
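
The expected layout of the data folder and the prompt txt file is defined by infer.py. If you would rather not depend on that format, looping over the single-test mode works as well; below is a sketch reusing the hypothetical score_edit helper above, where the directory names and the prompts mapping are assumptions, not the repo's format.

from pathlib import Path

# Assumed layout: matching filenames under ./data/src and ./data/edited,
# with prompts keyed by filename (illustrative only).
prompts = {
    "00433tokenflow_baby_gaze.mp4": "A black-haired boy is turning his head",
}
src_dir, dst_dir = Path("./data/src"), Path("./data/edited")
for src in sorted(src_dir.glob("*.mp4")):
    dst = dst_dir / src.name
    if dst.exists() and src.name in prompts:
        print(src.name, score_edit(str(src), str(dst), prompts[src.name]))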

🙏 Acknowledgements

Part of the code builds on DOVER and BLIP. We thank the authors for their contributions to the community.

📭 Contact

If you have any comments or questions, feel free to contact sunshk@stu.pku.edu.cn.

📖 BibTeX

@article{sun2024bench,
  title={VE-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment},
  author={Sun, Shangkun and Liang, Xiaoyu and Fan, Songlin and Gao, Wenxu and Gao, Wei},
  journal={arXiv preprint arXiv:2408.11481},
  year={2024}
}
