This repository contains a PyTorch implementation of 3D object reconstruction from only a few input images, drawing on the TensoRF and FreeNeRF papers. As part of the refactoring, we optimized the few-shot structures for increased speed and accuracy.
Install the environment:

```
!pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard
!pip install plyfile
!pip install --upgrade tensorflow==2.9.2
!pip install ipython-autotime
```
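The list above assumes PyTorch is already available in your environment (e.g. on Colab). As an optional sanity check, you can confirm that it sees a GPU before training:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```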
The training script is `train.py`. To train a TensoRF:

```
python train.py --config configs/lego.txt
```
We provide a few examples in the `configs` folder. Please note:

- `dataset_name`: choices = ['blender', 'llff', 'nsvf', 'tankstemple'];
- `shadingMode`: choices = ['MLP_Fea', 'SH'];
- `model_name`: choices = ['TensorVMSplit', 'TensorCP'], corresponding to the VM and CP decompositions. You need to uncomment the last few rows of the configuration file if you want to train with the TensorCP model;
- `n_lamb_sigma` and `n_lamb_sh` are string-type options that give the number of basis components for density and appearance along the XYZ dimensions;
- `N_voxel_init` and `N_voxel_final` control the resolution of the matrix and vector factors;
- `N_vis` and `vis_every` control visualization during training;
- set `--render_test 1` / `--render_path 1` if you want to render testing views or a camera path after training.

More options are described in `opt.py`; a minimal configuration sketch follows this list.
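As an illustration only (not one of the shipped configs; the dataset path, experiment name, and numeric values are placeholders to adapt), a Blender-style config combining the options above could look like:

```
# hypothetical example config; paths and values are illustrative placeholders
dataset_name = blender
datadir = ./data/nerf_synthetic/lego
expname = tensorf_lego_VM
basedir = ./log

# decomposition and shading (see choices above)
model_name = TensorVMSplit
shadingMode = MLP_Fea

# basis numbers for density and appearance along XYZ
n_lamb_sigma = [16,16,16]
n_lamb_sh = [48,48,48]

# initial and final grid resolution (number of voxels)
N_voxel_init = 2097152
N_voxel_final = 27000000

# visualization during training
N_vis = 5
vis_every = 10000

# render the test views once training finishes
render_test = 1
```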
For example, to train and render the test views in one run, or to render test and training views from an existing checkpoint:

```
python {train_path} --config {config} --render_test 1
python {train_path} --config {config} --render_only 1 --render_test 1 --render_train 1 --ckpt {ckpt_path}
```
You can simply pass `--render_only 1` and `--ckpt path/to/your/checkpoint` to render images from a pre-trained checkpoint. You may also need to specify what you want to render, such as `--render_test 1`, `--render_train 1`, or `--render_path 1`. The rendering results are written to your checkpoint folder.
You can also export the mesh by passing `--export_mesh 1`:

```
python {train_path} --export_mesh 1 --ckpt {ckpt_path}
```

Note: please re-train the model rather than using the pretrained checkpoints we provide for mesh extraction, because some render parameters have changed.
We provide two options for training on your own image set:

- Follow the instructions in the NSVF repo, then set `dataset_name` to 'tankstemple'.
- Calibrate the images with the script from NGP: `python dataLoader/colmap2nerf.py --colmap_matcher exhaustive --run_colmap`, then adjust `datadir` in `configs/your_own_data.txt`. Please check the `scene_bbox` and `near_far` if you get abnormal results (a sketch of this workflow is shown below).
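For orientation, the second option boils down to the following sequence. This is a sketch, not a shipped script: `configs/your_own_data.txt` must exist as described above, and its `datadir` has to be edited by hand.

```
# 1) calibrate your images with COLMAP via the NGP script (see its --help for
#    where it expects the images and where it writes the poses)
python dataLoader/colmap2nerf.py --colmap_matcher exhaustive --run_colmap
# 2) point datadir in configs/your_own_data.txt at the calibrated data
# 3) train, rendering the test views at the end
python train.py --config configs/your_own_data.txt --render_test 1
```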