From Sim-to-Real: Toward General Event-based Low-Light Frame Interpolation with Per-scene Optimization
Ziran Zhang1,2, Yongrui Ma3,2, Yueting Chen1, Feng Zhang2, Jinwei Gu3, Tianfan Xue3, Shi Guo2
1 Zhejiang University, 2 Shanghai AI Laboratory, 3 The Chinese University of Hong Kong
🌐 Project Page | 🎥 Video | 📄 Paper | 📊 Data | 🛠️ Weights
This repository hosts the implementation of "From Sim-to-Real: Toward General Event-based Low-Light Frame Interpolation with Per-scene Optimization" (SIGGRAPH Asia 2024). Our approach leverages event cameras to enhance video frame interpolation (VFI) in low-light conditions through a per-scene optimization strategy that adapts the model to each scene's specific lighting and camera settings, mitigating the trailing artifacts and signal degradation common in low-light environments.
- Per-Scene Optimization: Fine-tunes a pre-trained model for each scene, significantly improving interpolation results across varied lighting conditions (see the sketch after this list).
- Low-Light Event Correction: Effectively mitigates event-based signal latency and noise under low-light conditions.
- EVFI-LL Dataset: Provides challenging RGB+Event sequences captured in low-light environments for benchmarking.
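To make the per-scene idea concrete, here is a minimal sketch of such a fine-tuning loop. The toy network and synthetic tensors are illustrative stand-ins for the real model and captured data, not the repository's actual API; the released code runs the actual procedure through `perscene.sh`.

```python
# Minimal sketch of per-scene fine-tuning, NOT the repository's actual API.
# ToyInterpNet and the random tensors are illustrative stand-ins only.
import torch
import torch.nn as nn

class ToyInterpNet(nn.Module):
    """Stand-in: predicts a middle frame from two frames plus an event tensor."""
    def __init__(self, event_bins=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + event_bins, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, left, right, events):
        return self.net(torch.cat([left, right, events], dim=1))

model = ToyInterpNet()
# In a real run you would start from the pretrained weights, e.g.:
# model.load_state_dict(torch.load("pretrained_weights/<checkpoint>.pth"))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR: adapt, don't retrain

# One "scene": in practice these tensors come from the captured low-light sequence.
left, right = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
events, target = torch.rand(1, 5, 64, 64), torch.rand(1, 3, 64, 64)

model.train()
for step in range(100):  # a short optimization pass on the single target scene
    pred = model(left, right, events)  # interpolated middle frame
    loss = nn.functional.l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```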
Follow these steps to apply per-scene optimization with pre-trained models:
1. Clone the repository:

```bash
git clone https://github.com/OpenImagingLab/Sim2Real.git
cd Sim2Real/Sim2Real_code
```

2. Download the pre-trained weights (see the Weights link above) and place them as follows:
   - Main model weights: `pretrained_weights/`
   - DISTS loss function weights: `losses/DISTS/weights/`

3. Install the Python dependencies:

```bash
pip install -r requirements.txt
```

4. Run per-scene optimization:

```bash
bash perscene.sh
```
This fine-tunes the pre-trained model on each scene and produces frame interpolation results optimized for that scene's setting.
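As background, event-based VFI networks typically consume the raw event stream as a voxel-grid tensor that bins event polarity over time between the two input frames. The sketch below shows this standard conversion; the `(x, y, t, p)` event layout and the nearest-bin assignment are illustrative assumptions, not this repository's exact format.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Standard voxel-grid representation: accumulate polarity into time bins.

    `events` is assumed to be an (N, 4) array of (x, y, t, p) with p in
    {-1, +1}; this layout is for illustration, not this repo's data format.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps to [0, num_bins - 1]; real pipelines often spread
    # each event across the two nearest bins, here we simply round.
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    b = np.clip(np.rint(t).astype(int), 0, num_bins - 1)
    np.add.at(voxel, (b, y, x), p)  # scatter-add polarities into the grid
    return voxel
```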
To pretrain the model from scratch using simulated data (the generic event-simulation idea is sketched after these steps):
1. Pretrain the model:

```bash
bash pretrain.sh
```

2. After pretraining, proceed with per-scene optimization as described above.
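For background on the simulated data, event simulators generally emit an event whenever a pixel's log intensity changes by more than a contrast threshold between consecutive frames (the ESIM principle). The sketch below implements only that generic idea; a realistic low-light simulator would also need to model effects such as sensor noise and latency, so treat this as an illustration rather than the paper's actual simulator.

```python
import numpy as np

def simulate_events(frames, threshold=0.2, eps=1e-3):
    """Generic ESIM-style simulation: emit an event when the per-pixel log
    intensity changes by more than `threshold` between frames.

    `frames` is a (T, H, W) float array in [0, 1]; returns (x, y, t, p)
    tuples. Simplified for illustration only.
    """
    log_ref = np.log(frames[0] + eps)  # log intensity at the last fired event
    events = []
    for t in range(1, len(frames)):
        log_cur = np.log(frames[t] + eps)
        diff = log_cur - log_ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x_, y_ in zip(xs, ys):
            events.append((x_, y_, t, int(np.sign(diff[y_, x_]))))
        # Reset the reference where events fired (snap to current intensity).
        log_ref[fired] = log_cur[fired]
    return events
```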
The repository is organized as follows:

- `dataset/`: Utilities for dataset preparation and loading.
- `losses/`: Custom loss functions and weights for training.
- `models/`: Neural network models for Sim2Real frame interpolation tasks.
- `params/`: Configuration files for training and evaluation.
- `tools/`: Scripts for preprocessing and postprocessing.
- `pretrained_weights/`: Directory for storing pre-trained models.
- `run_network.py`: Main script for training and evaluation.
- `pretrain.sh`: Script for model pretraining.
- `perscene.sh`: Script for per-scene optimization.
- `requirements.txt`: Required Python dependencies.
The EVFI-LL dataset includes RGB+Event sequences captured under low-light conditions, offering a challenging benchmark for evaluating event-based VFI performance. Download the dataset and place it in the `dataset/` directory.
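If you write a custom loader for the sequences, one common pattern is to pair consecutive frames with the events that fall between them. The sketch below assumes a hypothetical on-disk layout (`frames/*.png` plus an `events.npy` of `(x, y, t, p)` rows with frame-indexed timestamps); consult the utilities in `dataset/` for the actual EVFI-LL format.

```python
import glob
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class PairedEventDataset(Dataset):
    """Hypothetical loader pairing consecutive frames with in-between events.

    Assumes <root>/frames/*.png and <root>/events.npy with (x, y, t, p)
    rows and frame-indexed timestamps; not necessarily the EVFI-LL layout.
    """

    def __init__(self, root):
        self.frame_paths = sorted(glob.glob(f"{root}/frames/*.png"))
        self.events = np.load(f"{root}/events.npy")  # (N, 4)

    def __len__(self):
        return len(self.frame_paths) - 1

    def __getitem__(self, i):
        left = np.asarray(Image.open(self.frame_paths[i]), np.float32) / 255.0
        right = np.asarray(Image.open(self.frame_paths[i + 1]), np.float32) / 255.0
        t = self.events[:, 2]
        between = self.events[(t >= i) & (t < i + 1)]  # events between frames
        return left, right, between
```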
The code in this repository is licensed under the MIT License.
If you find this work helpful in your research, please cite:
```bibtex
@article{zhang2024sim,
  title={From Sim-to-Real: Toward General Event-based Low-light Frame Interpolation with Per-scene Optimization},
  author={Zhang, Ziran and Ma, Yongrui and Chen, Yueting and Zhang, Feng and Gu, Jinwei and Xue, Tianfan and Guo, Shi},
  journal={arXiv preprint arXiv:2406.08090},
  year={2024}
}
```
This project builds upon the excellent work of TimeLens-XL; we sincerely thank the original authors for their contributions.