# LSENeRF

Implementation of the paper LSENeRF.

paper | webpage | data

## Setup

1. Create an environment with python=3.8, or create one from `environment.yml`.
2. Install with the commands below:
   ```bash
   python -m pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
   python -m pip install -e .
   ```
3. Reinstall the tiny-cuda-nn torch extension from source with float32 enabled, following here.
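Step 1 does not spell out a command; one way to do it is with conda. This is a sketch, not part of the repo's instructions, and the environment name `lsenerf` is an assumption:

```shell
# Option A: fresh environment with the pinned Python version
# (environment name "lsenerf" is an assumption)
conda create -n lsenerf python=3.8 -y

# Option B: build the environment from the provided file instead
conda env create -f environment.yml

conda activate lsenerf
```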

## Training

Refer to the data repo to format either an EVIMOv2 or an LSENeRF scene. To train a model, update `--data` in the corresponding training script, then run it:

```bash
# to train an LSENeRF scene
bash scripts/train_lse_data.sh

# to train an EVIMOv2 scene
bash scripts/train_evimo.sh
```

You can choose which method to run by changing the configuration at the top of `train_evimo.sh` and `train_lse_data.sh`.
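The configurable block at the top of a training script can be sketched as plain shell variables. The variable names below are assumptions for illustration, not the actual contents of `train_lse_data.sh`; check the script itself:

```shell
# Hypothetical sketch of the configuration at the top of a training
# script; METHOD and DATA are assumed names, not the real variables.
METHOD=lsenerf                  # which method/config to run
DATA=/path/to/formatted/scene   # the --data argument: your formatted scene

# The script would then launch training roughly like this
# (echoed here instead of executed):
CMD="ns-train $METHOD --data $DATA"
echo "$CMD"
```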

To see all available parameters, run:

```bash
ns-train lsenerf -h
```

## Evaluation

These scripts run camera optimization before evaluation. Update the experiment path in each script before running it; the example path included in each script should give a sense of what to put. To evaluate a non-embedding method, run:

```bash
bash scripts/eval.sh
```

To evaluate an embedding method, run:

```bash
bash scripts/emb_eval.sh
```
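Since the project trains with `ns-train` (nerfstudio), the experiment path you put in the eval scripts is likely a nerfstudio-style output directory. The variable name and the exact layout below are assumptions; defer to the example path already in each script:

```shell
# Hypothetical sketch of the path edited in scripts/eval.sh; the
# outputs/<scene>/<method>/<timestamp> layout is the usual nerfstudio
# convention, assumed here rather than taken from the actual script.
EXP_PATH=outputs/scene_name/lsenerf/2024-01-01_000000
echo "evaluating $EXP_PATH"
```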