TensorFlow implementation of CAM-Convs.
This repository contains the original implementation of the paper 'CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth' by Jose M. Facil, Benjamin Ummenhofer, Huizhong Zhou, Luis Montesano, Thomas Brox* and Javier Civera*.
The paper's project page is http://webdiis.unizar.es/~jmfacil/camconvs/
Please cite CAM-Convs in your publications if it helps your research:
@InProceedings{Facil_2019_CVPR,
author = {Facil, Jose M. and Ummenhofer, Benjamin and Zhou, Huizhong and Montesano, Luis and Brox, Thomas and Civera, Javier},
title = {{CAM-Convs: Camera-Aware Multi-Scale Convolutions for Single-View Depth}},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
We recommend using a virtual environment for this project (e.g. pew).
$ pew new venvname -p python3 # replace venvname with your preferred name (it also works with Python 2.7)
The code has been tested with:
- python3
- cuda-10.0
- cuDNN 7.5
- TensorFlow 1.13
You are free to try different configurations, but we do not guarantee them since they have not been tested.
Install the Python dependencies:
(venvname)$ pip install -r requirements.txt
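To verify that TensorFlow is installed and detects your GPU, you can run a quick sanity check (an optional step, not part of the original instructions; the printed output depends on your setup):
(venvname)$ python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"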
Compile the submodule lmbspecialops following the instructions in its repository.
We recommend simply running:
(venvname)$ cd lmbspecialops
(venvname)$ python setup.py install
(venvname)$ pew add python
(venvname)$ cd ..
Note: you may need to set the environment variable LMBSPECIALOPS_LIB:
(venvname)$ export LMBSPECIALOPS_LIB="/path/to/camconvs/lmbspecialops/build/lib.linux-x86_64-3.5/lmbspecialops.so"
(venvname)$ pew add python/
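To check that lmbspecialops was built and can be loaded (a minimal sanity check, assuming the build above succeeded):
(venvname)$ python -c "import lmbspecialops; print('lmbspecialops loaded')"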
You can run the IPython notebook ipython/DEMO_DATA_AUGMENTATION.ipynb and play with our Datawriter, Datareader and data augmentation operations used to train CAM-Convs.
We are planning to add a second example that includes training a network with multiple cameras.
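For reference, the core idea of CAM-Convs is to concatenate per-pixel camera information (centered coordinates, field-of-view maps and normalized coordinates, computed from the camera intrinsics) to the feature maps before the convolutions. The following NumPy snippet is only a conceptual sketch of how such camera-aware channels can be computed; the function name and values are illustrative, and it is not the implementation used in this repository:

import numpy as np

def camera_aware_channels(fx, fy, cx, cy, h, w):
    # fx, fy, cx, cy are the intrinsics scaled to the feature-map resolution (h, w).
    # Centered coordinates: per-pixel offset from the principal point.
    ccx = np.tile(np.arange(w, dtype=np.float32) - cx, (h, 1))
    ccy = np.tile((np.arange(h, dtype=np.float32) - cy)[:, None], (1, w))
    # Field-of-view maps: angle of each pixel's ray w.r.t. the optical axis.
    fov_x = np.arctan(ccx / fx)
    fov_y = np.arctan(ccy / fy)
    # Normalized coordinates: camera-independent, in [-1, 1].
    ncx = np.tile(np.linspace(-1.0, 1.0, w, dtype=np.float32), (h, 1))
    ncy = np.tile(np.linspace(-1.0, 1.0, h, dtype=np.float32)[:, None], (1, w))
    # Stack into an (h, w, 6) map that is concatenated to the CNN feature maps.
    return np.stack([ccx, ccy, fov_x, fov_y, ncx, ncy], axis=-1)

maps = camera_aware_channels(fx=80.0, fy=80.0, cx=40.0, cy=30.0, h=60, w=80)
print(maps.shape)  # (60, 80, 6)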