Lei Sun, Christos Sakaridis, Jingyun Liang, Peng Sun, Jiezhang Cao, Kai Zhang, Qi Jiang, Kaiwei Wang, Luc Van Gool
The performance of video frame interpolation is inherently correlated with the ability to handle motion in the input scene. Even though previous works recognize the utility of asynchronous event information for this task, they ignore the fact that motion may or may not result in blur in the input video, depending on the length of the frames' exposure time and the speed of the motion. They assume either that the input video is sharp, restricting themselves to frame interpolation, or that it is blurry, including an explicit, separate deblurring stage before interpolation in their pipeline. We instead propose a general method for event-based frame interpolation that performs deblurring ad-hoc and thus works on both sharp and blurry input videos. Our model consists of a bidirectional recurrent network that naturally incorporates the temporal dimension of interpolation and fuses information from the input frames and the events adaptively, based on their temporal proximity. In addition, we introduce HighREV, a novel real-world high-resolution dataset with events and color videos, which provides a challenging evaluation setting for the examined task. Extensive experiments on the standard GoPro benchmark and on our dataset show that our network consistently outperforms previous state-of-the-art methods on frame interpolation, single-image deblurring, and the joint task of interpolation and deblurring.
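The adaptive fusion described above can be illustrated with a toy sketch. This is not the actual REFID architecture; the exponential gating and the `tau` parameter are assumptions chosen purely to show the idea of weighting frame features more near a reference frame's timestamp and event features more far from it:

```python
import numpy as np

# Illustrative sketch only (not the actual REFID network): a scalar gate that
# blends frame and event features based on temporal proximity. Near the frame
# timestamp the (sharp or blurry) frame dominates; far from it, events dominate.
def fuse_by_temporal_proximity(frame_feat, event_feat, t, t_frame, tau=0.1):
    """Blend features for interpolation time t given a reference frame at t_frame."""
    w = np.exp(-abs(t - t_frame) / tau)  # weight in (0, 1], peaks at t == t_frame
    return w * frame_feat + (1.0 - w) * event_feat
```

In the paper's model this role is played by a learned recurrent fusion rather than a fixed exponential weight.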
- June 2023: The code and dataset are publicly available.
- March 2023: The paper is accepted by CVPR 2023.
Unified framework for both event-based sharp and blurry frame interpolation.
Sharp frame interpolation:
- Short exposure time
- Sharp reference frames
Blurry frame interpolation:
- Long exposure time
- Blurry reference frames
This implementation is based on BasicSR, an open-source toolbox for image/video restoration tasks.
- Python 3.8.5
- PyTorch 1.7.1
- CUDA 11.0
git clone https://github.com/AHupuJR/REFID
cd REFID
pip install -r requirements.txt
python setup.py develop --no_cuda_ext
The HighREV dataset is an event camera dataset with high spatial resolution. It can be used for event-based image deblurring, event-based frame interpolation, event-based blurry frame interpolation, and other event-based low-level vision tasks.
HighREV dataset includes:
- Blurry images (png)
- Sharp images (png)
- Event stream (npy)
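Since the event streams ship as `.npy` files, loading them with NumPy is straightforward. Note that the `(N, 4)` layout with columns `(timestamp, x, y, polarity)` used below is an assumption for illustration; check the released files for the actual field order:

```python
import numpy as np

# Hypothetical loader for a HighREV event file. The assumed layout is an
# (N, 4) array with columns (timestamp, x, y, polarity); verify against
# the actual released data before use.
def load_events(path):
    events = np.load(path)
    t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    return t, x, y, p
```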
The blurry images are synthesized from 11 sharp images each; we use RIFE to upsample the frame rate of the original frames by 4x, so each blurry image is effectively synthesized from 44 sharp frames. For the frame interpolation evaluation, we skip one third of the sharp frames between consecutive blurry images.
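Blur synthesis of this kind is commonly done by averaging the temporally upsampled sharp frames over the simulated exposure window. The sketch below illustrates that idea; the exact HighREV pipeline (e.g. whether averaging happens in linear intensity space after inverting gamma) may differ:

```python
import numpy as np

# Sketch of long-exposure synthesis by frame averaging. Assumes uint8 frames;
# the actual HighREV synthesis pipeline may average in linear intensity space.
def synthesize_blur(sharp_frames):
    """Average a stack of sharp frames (e.g. 44 = 11 originals, 4x RIFE-upsampled)
    into a single blurry frame emulating a long exposure."""
    stack = np.stack(sharp_frames).astype(np.float64)
    return np.clip(stack.mean(axis=0).round(), 0, 255).astype(np.uint8)
```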
For commercial reasons, downloading the dataset requires authorization from Alpsentek. Please contact me or Alpsentek to obtain authorization if needed.
Train (GoPro):
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/GoPro/REFID.yml --launcher pytorch
Eval (GoPro):
- Download the pretrained model to ./experiments/pretrained_models/
python basicsr/test.py -opt options/test/GoPro/REFID.yml
Train (HighREV):
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/HighREV/REFID.yml --launcher pytorch
Eval (HighREV):
- Download the pretrained model to ./experiments/pretrained_models/REFID-REBlur.pth
python basicsr/test.py -opt options/test/HighREV/REFID.yml
@article{sun2023event,
title={Event-Based Frame Interpolation with Ad-hoc Deblurring},
author={Sun, Lei and Sakaridis, Christos and Liang, Jingyun and Sun, Peng and Cao, Jiezhang and Zhang, Kai and Jiang, Qi and Wang, Kaiwei and Van Gool, Luc},
journal={arXiv preprint arXiv:2301.05191},
year={2023}
}
Should you have any questions, please feel free to contact leosun0331@gmail.com or leo_sun@zju.edu.cn.
This project is released under the Apache 2.0 license. It is based on BasicSR, which is also under the Apache 2.0 license.