DeFMO is now available in Kornia (`kornia.feature.DeFMO`).
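A minimal usage sketch, assuming the Kornia wrapper follows its documented interface (a 6-channel input formed by concatenating the blurred image with its background, and an output stack of time-discretized RGBA sub-frames):

```python
import torch
import kornia.feature as KF

# Load DeFMO with pre-trained weights.
defmo = KF.DeFMO(pretrained=True).eval()

# Blurred image and its background, each (B, 3, H, W) in [0, 1].
blurred = torch.rand(1, 3, 240, 320)
background = torch.rand(1, 3, 240, 320)

with torch.no_grad():
    # Input: 6-channel concatenation. Output: (B, S, 4, H, W) --
    # S time-discretized sub-frames, each RGB appearance plus an alpha mask.
    subframes = defmo(torch.cat([blurred, background], dim=1))

print(subframes.shape)  # e.g. torch.Size([1, 24, 4, 240, 320])
```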
Qualitative results are available on YouTube.
The pre-trained DeFMO model as reported in the paper is available here. Put the models into the `./saved_models` sub-folder.
For generating video temporal super-resolution:
```bash
python run.py --video example/falling_pen.avi
```
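For reference, a rough sketch of what video mode has to do: DeFMO needs a background for each frame, and a per-pixel temporal median is a common estimate in the FMO literature. This sketch uses the Kornia port for brevity and is not a transcription of `run.py`, which may estimate the background differently:

```python
import cv2
import numpy as np
import torch
import kornia.feature as KF

defmo = KF.DeFMO(pretrained=True).eval()

# Read all frames of the input video as float RGB in [0, 1].
cap = cv2.VideoCapture("example/falling_pen.avi")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
cap.release()
video = np.stack(frames).astype(np.float32) / 255.0  # (T, H, W, 3)

# Crude static-background estimate: per-pixel median over time.
# Depending on resolution, you may want to resize frames first.
background = np.median(video, axis=0).astype(np.float32)
bgr_t = torch.from_numpy(background).permute(2, 0, 1)[None]  # (1, 3, H, W)

with torch.no_grad():
    for frame in video:
        im_t = torch.from_numpy(frame).permute(2, 0, 1)[None]
        # (1, S, 4, H, W): S sharp RGBA sub-frames per input frame.
        subframes = defmo(torch.cat([im_t, bgr_t], dim=1))
        # ... composite over the background and write out (a compositing
        # helper is sketched after the single-frame example below)
```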
For generating temporal super-resolution of a single frame with the given background:
```bash
python run.py --im example/im.png --bgr example/bgr.png
```
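Each recovered sub-frame is an RGBA image: RGB appearance plus an alpha mask. To render a sharp frame from it, alpha-blend it over the background, I_t = M_t * F_t + (1 - M_t) * B, consistent with the paper's image formation model. A small helper, assuming the output layout above:

```python
import torch

def composite(subframes: torch.Tensor, background: torch.Tensor) -> torch.Tensor:
    """Alpha-blend DeFMO sub-frames over the background.

    subframes:  (B, S, 4, H, W) RGBA network output
    background: (B, 3, H, W)
    returns:    (B, S, 3, H, W) sharp sub-frames
    """
    rgb, alpha = subframes[:, :, :3], subframes[:, :, 3:4]
    return alpha * rgb + (1 - alpha) * background[:, None]
```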
Simple evaluation scripts are provided for evaluation on the FMO deblurring benchmark. All evaluation datasets (Falling Objects, TbD-3D, and TbD) can be downloaded there; they are also available here.
For the dataset generation, please download:
- Textures from the DTD dataset. The exact split used in DeFMO comes from the "Neural Voxel Renderer: Learning an Accurate and Controllable Rendering Tool" paper and can be downloaded here.
- Backgrounds for the training dataset from the VOT dataset.
- Backgrounds for the testing dataset from the Sports1M dataset.
- Blender 2.79b with Python enabled.
Then, insert your paths in the renderer/settings.py file. To generate the dataset, run the following in the renderer sub-folder:
```bash
python run_render.py
```
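The paths to set in renderer/settings.py are roughly of the following form. The variable names here are illustrative assumptions only; the actual file defines the authoritative ones:

```python
# renderer/settings.py -- illustrative sketch; variable names are
# assumptions, check the real file for the exact ones.
g_blender_executable_path = "/path/to/blender-2.79b/blender"
g_shapenet_path = "/path/to/ShapeNetCore.v2"      # ShapeNet objects
g_textures_path = "/path/to/dtd/images"           # DTD textures (split above)
g_train_backgrounds_path = "/path/to/vot"         # VOT backgrounds (training)
g_test_backgrounds_path = "/path/to/sports1m"     # Sports1M backgrounds (testing)
g_output_path = "/path/to/generated_dataset"      # where rendered data is written
```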
Note that the full training dataset, with 50 object categories, 1000 objects per category, and 24 timestamps, takes 72 GB of storage. Due to this, and also due to the ShapeNet licence, we cannot make the pre-generated dataset public. Please generate it yourself using the steps above.
Set up all paths in main_settings.py and run:

```bash
python train.py
```
If you use this repository, please cite the following publication:
```bibtex
@inproceedings{defmo,
  author = {Denys Rozumnyi and Martin R. Oswald and Vittorio Ferrari and Jiri Matas and Marc Pollefeys},
  title = {DeFMO: Deblurring and Shape Recovery of Fast Moving Objects},
  booktitle = {CVPR},
  address = {Nashville, Tennessee, USA},
  month = jun,
  year = {2021}
}
```