A PyTorch implementation of iNeRF (Inverting Neural Radiance Fields) for pose estimation.
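At its core, iNeRF keeps the NeRF weights frozen and runs gradient descent on the camera pose, minimizing a photometric loss between pixels rendered from the current pose estimate and the observed image. The sketch below only illustrates that loop; the renderer is passed in as a placeholder, and the function and parameter names are illustrative assumptions, not the exact code in run.py.

import torch

def hat(w):
    # Skew-symmetric (cross-product) matrix of a 3-vector; built with stack to keep autograd intact.
    zero = w.new_zeros(())
    return torch.stack([torch.stack([zero, -w[2],  w[1]]),
                        torch.stack([ w[2],  zero, -w[0]]),
                        torch.stack([-w[1],  w[0],  zero])])

def se3_to_matrix(delta):
    # Axis-angle rotation (first 3 entries) + translation (last 3) -> 4x4 transform (Rodrigues' formula).
    w, t = delta[:3], delta[3:]
    theta = torch.linalg.norm(w) + 1e-8
    W = hat(w)
    R = (torch.eye(3) + torch.sin(theta) / theta * W
         + (1.0 - torch.cos(theta)) / theta ** 2 * (W @ W))
    top = torch.cat([R, t.reshape(3, 1)], dim=1)
    bottom = torch.tensor([[0.0, 0.0, 0.0, 1.0]])
    return torch.cat([top, bottom], dim=0)

def estimate_pose(obs_img, init_pose, render_fn, n_iters=300, batch=512, lr=1e-2):
    # obs_img: (H, W, 3) float tensor; init_pose: rough 4x4 camera-to-world matrix.
    # render_fn(pose, ys, xs) is a placeholder that renders those pixels with the frozen NeRF.
    delta = torch.zeros(6, requires_grad=True)      # pose correction being optimized
    opt = torch.optim.Adam([delta], lr=lr)
    H, W, _ = obs_img.shape
    for _ in range(n_iters):
        pose = se3_to_matrix(delta) @ init_pose     # apply current correction
        ys = torch.randint(0, H, (batch,))          # random pixel batch (the repo also
        xs = torch.randint(0, W, (batch,))          # offers interest-based sampling)
        loss = torch.mean((render_fn(pose, ys, xs) - obs_img[ys, xs]) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (se3_to_matrix(delta) @ init_pose).detach()

Optimizing a 6-DoF correction rather than the raw 4x4 matrix keeps the estimate on the SE(3) manifold throughout the optimization.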
To start, I recommend creating an environment with conda:
conda create -n inerf python=3.8
conda activate inerf
Clone the repository and install dependencies:
git clone https://github.com/salykovaa/inerf.git
cd inerf
pip install -r requirements.txt
To run the algorithm on the Lego object:
python run.py --config configs/lego.txt
If you want to store a GIF of the optimization process, set OVERLAY = True here.
All other parameters, such as batch size, sampling strategy, and initial camera pose error, can be adjusted in the corresponding config files.
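For orientation, each config is a plain-text file of key = value pairs. The snippet below is only a hypothetical illustration of the kind of settings involved; the parameter names and values are assumptions, so check the files in configs/ for the real ones.

# hypothetical excerpt, not copied from the repo
batch_size = 512                        # rays sampled per optimization step
sampling_strategy = interest_regions    # random / interest_points / interest_regions
lrate = 0.01                            # learning rate of the pose optimizer
delta_phi = 10                          # initial rotation error (degrees) added to the true pose
delta_t = 0.1                           # initial translation error added to the true pose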
To run the algorithm on the LLFF dataset, download the "nerf_llff_data" folder from here and place it in the "data" folder.
All NeRF models were trained with https://github.com/yenchenlin/nerf-pytorch/ (an example training command is shown below the directory layout).
├── data
│   ├── nerf_llff_data
│   ├── nerf_synthetic
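If you need to train a NeRF for a new scene first, a typical invocation in that repo looks like the following; the config name here is just an example:

python run_nerf.py --config configs/lego.txt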
Left: random sampling; middle: interest points; right: interest regions. The interest-regions strategy converges faster and does not get stuck in a local minimum the way interest-point sampling can.
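The strategies differ only in which pixels the photometric loss is evaluated on. Below is a rough sketch of how such sampling could be implemented with OpenCV and NumPy; the detector choice (ORB) and the kernel/dilation values are assumptions, not the repo's exact settings.

import cv2
import numpy as np

def sample_pixels(img_gray, batch_size=512, strategy="interest_regions",
                  kernel_size=5, dil_iter=3, seed=0):
    # img_gray: uint8 grayscale observed image, shape (H, W).
    rng = np.random.default_rng(seed)
    H, W = img_gray.shape
    if strategy == "random":
        return np.stack([rng.integers(0, H, batch_size),
                         rng.integers(0, W, batch_size)], axis=1)

    # Detect keypoints on the observed image and mark them in a binary mask.
    keypoints = cv2.ORB_create(nfeatures=500).detect(img_gray, None)
    pts = np.array([kp.pt for kp in keypoints], dtype=int)   # (x, y) pairs
    mask = np.zeros((H, W), dtype=np.uint8)
    mask[pts[:, 1], pts[:, 0]] = 1

    if strategy == "interest_regions":
        # Dilate the keypoint mask so whole neighbourhoods become candidate pixels,
        # which keeps the photometric gradients informative for larger pose errors.
        kernel = np.ones((kernel_size, kernel_size), np.uint8)
        mask = cv2.dilate(mask, kernel, iterations=dil_iter)

    ys, xs = np.nonzero(mask)
    idx = rng.choice(len(ys), size=min(batch_size, len(ys)), replace=False)
    return np.stack([ys[idx], xs[idx]], axis=1)    # (row, col) pixel coordinates

In practice the dilated mask covers the textured areas around the keypoints, which is what gives the faster and more robust convergence described above.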
Kudos to the authors
@article{yen2020inerf,
title={{iNeRF}: Inverting Neural Radiance Fields for Pose Estimation},
author={Lin Yen-Chen and Pete Florence and Jonathan T. Barron and Alberto Rodriguez and Phillip Isola and Tsung-Yi Lin},
year={2020},
journal={arXiv preprint arXiv:2012.05877},
}
Parts of the code were based on yenchenlin's NeRF implementation: https://github.com/yenchenlin/nerf-pytorch
@misc{lin2020nerfpytorch,
title={NeRF-pytorch},
author={Yen-Chen, Lin},
howpublished={\url{https://github.com/yenchenlin/nerf-pytorch/}},
year={2020}
}