NeRF-LiDAR-cGAN

This is the working repo for the following paper:

Ming-Fang Chang, Akash Sharma, Michael Kaess, and Simon Lucey. Neural Radiance Fields with LiDAR Maps. ICCV 2023. paper link

If you find our work useful, please consider citing:

@inproceedings{Chang2023iccv,
	title={{Neural Radiance Fields with LiDAR Maps}},
	author={Ming-Fang Chang and Akash Sharma and Michael Kaess and Simon Lucey},
	booktitle={International Conference on Computer Vision},
	year={2023}
}

1. Environment

  1. The code was implemented and tested with Python 3.7, PyTorch v1.12.1, and DGL v0.9.1post1.

2. Download the datasets:

  1. Training/val samples (preprocessed DGL graphs): preprocessed graphs. These graphs contain the geometric information needed for volume rendering (see Section 3 for visualizations).
  2. The LiDAR point cloud maps: maps
  3. Other dataset information (ground-truth images, camera poses, etc.): dataset
  4. Masks for dynamic objects: masks
  5. Specify your local data folder path in configs/config.ini, or make a symlink named data pointing to your dataset folder.
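The data-path step above can be sketched in Python. This is a minimal illustration of reading a folder path from an INI file with the standard-library configparser; the section and key names (`paths`, `path_data`) and the example path are assumptions for illustration, not taken from the repo's actual configs/config.ini:

```python
# Illustrative sketch: reading a data folder path from a config.ini file.
# Section/key names here are hypothetical, not the repo's actual schema.
import configparser
import os
import tempfile

# Create a stand-in config.ini for demonstration purposes.
ini_text = "[paths]\npath_data = /home/user/nerf_lidar_data\n"
cfg_dir = tempfile.mkdtemp()
cfg_path = os.path.join(cfg_dir, "config.ini")
with open(cfg_path, "w") as f:
    f.write(ini_text)

# Parse the file and pull out the data root the training code would use.
config = configparser.ConfigParser()
config.read(cfg_path)
data_root = config["paths"]["path_data"]
print(data_root)  # prints /home/user/nerf_lidar_data
```

A symlink named data pointing at the same folder achieves the equivalent result without editing the config.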

3. Visualize the data:

  1. The visualization code was tested with pyvista v0.37.0.
  2. Run python3 visualize_data.py --log_id=<log_id> --name_data=clean
  3. Expected outputs include (from log 2b044433-ddc1-3580-b560-d46474934089):
    1. Camera rays (black), ray samples (red), and nearby LiDAR points (green) of subsampled pixels.
    2. Ground-truth RGB and depth.
    3. Train (blue) / val (red) camera poses on the map.
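The quantities the visualization shows (camera rays, depth samples along each ray, and nearby LiDAR map points) can be sketched with NumPy. This is an illustrative toy setup, not the repo's code; the sampling range, radius, and array shapes are assumptions:

```python
# Illustrative sketch of the geometry visualize_data.py displays:
# sample points along camera rays, then find nearby LiDAR map points.
import numpy as np

rng = np.random.default_rng(0)

# Toy LiDAR map: 1000 points scattered in a 10 m cube (stand-in for a real map).
lidar_map = rng.uniform(0.0, 10.0, size=(1000, 3))

# Two camera rays: origins and unit directions, assumed already in the map frame.
origins = np.zeros((2, 3))
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

# Uniformly spaced depth samples along each ray (near=0.5 m, far=8 m, 16 bins).
t = np.linspace(0.5, 8.0, 16)
samples = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # (2, 16, 3)

# For each ray sample, mark LiDAR points within a 0.5 m radius as "nearby".
d = np.linalg.norm(samples[:, :, None, :] - lidar_map[None, None, :, :], axis=-1)
nearby_mask = d < 0.5  # boolean, shape (2, 16, 1000)
print(samples.shape, int(nearby_mask.sum()))
```

In the repo's plots, the rays would be drawn in black, the sample points in red, and the masked LiDAR points in green.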

4. Run the code:

  1. Run python3 train.py --name_data=clean --log_id=<log_id> --name_config=config.ini --eval_only.
  2. Check the results with TensorBoard (e.g. run tensorboard --logdir=logs to see the visuals; the log path can be specified in configs/config.ini).
  3. You can download the trained weights from weights (clean maps) weights (noisy maps).
  4. Expected outputs (from log 2b044433-ddc1-3580-b560-d46474934089):

  5. For network training, remove the --eval_only argument.
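For context on what evaluation renders, the standard NeRF-style volume rendering step composites per-sample densities and colors along a ray into a pixel color and an expected depth. The sketch below follows the usual formulation and is not necessarily the exact code in this repo:

```python
# Hedged sketch of standard NeRF-style volume rendering compositing.
import numpy as np

def composite(sigmas, colors, t):
    """sigmas: (N,) densities; colors: (N, 3); t: (N,) sample depths."""
    deltas = np.diff(t, append=t[-1] + 1e10)     # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)      # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)     # transmittance after each sample
    trans = np.concatenate([[1.0], trans[:-1]])  # shift so T_i precedes sample i
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    depth = (weights * t).sum()
    return rgb, depth, weights

# A single opaque surface at the 9th sample: the expected depth lands there.
t = np.linspace(0.5, 8.0, 16)
sigmas = np.zeros(16)
sigmas[8] = 50.0
colors = np.ones((16, 3)) * 0.7
rgb, depth, w = composite(sigmas, colors, t)
print(rgb, depth)  # rgb near 0.7, depth near t[8] = 4.5
```

With an opaque surface at one sample, nearly all the compositing weight concentrates there, so the rendered depth matches that sample's distance.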

5. Graph generation code sample:

Besides the preprocessed graphs, we also provide sample code, generate_graphs.py, for generating new graphs. This version generates slightly higher-quality graphs but runs a bit slower than the original version we used in the paper. To use it:

  1. Modify path_preprocessed_graph in configs/config.ini to your target folder.
  2. Run: python3 generate_graphs.py --log_id=<log_id> --name_config=config.ini --name_data=clean.
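The kind of graph being generated can be sketched without DGL: a bipartite edge list connecting each ray sample to its k nearest LiDAR map points, which is the sort of geometric structure volume rendering from a LiDAR map can consume. This is an illustrative assumption about the graph's shape, not the logic of generate_graphs.py:

```python
# Illustrative sketch (not the repo's generate_graphs.py): build a bipartite
# edge list linking each ray sample to its k nearest LiDAR map points.
import numpy as np

rng = np.random.default_rng(1)
lidar_map = rng.uniform(0.0, 10.0, size=(500, 3))  # toy map points
samples = rng.uniform(0.0, 10.0, size=(64, 3))     # toy ray samples
k = 8

# Pairwise distances, then the k nearest map points per sample.
d = np.linalg.norm(samples[:, None, :] - lidar_map[None, :, :], axis=-1)
nn = np.argsort(d, axis=1)[:, :k]                  # (64, k) neighbor indices

# Flatten into (src_sample, dst_point) edge arrays, the form a DGL-style
# graph constructor typically takes.
src = np.repeat(np.arange(len(samples)), k)
dst = nn.ravel()
print(src.shape, dst.shape)  # (512,) (512,)
```

Such an edge list (64 samples x 8 neighbors = 512 edges here) could then be handed to a graph library's constructor along with per-node features like point positions.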
