Active Implicit Object Reconstruction using Uncertainty-guided Next-Best-View Optimization
In this work, we propose a seamless integration of the emerging implicit representation with the active reconstruction task. We build an implicit occupancy field as our geometry proxy. During training, the prior object bounding box is utilized as auxiliary information to generate clean and detailed reconstructions. To evaluate view uncertainty, we employ a sampling-based approach that directly extracts entropy from the reconstructed occupancy probability field as our measure of view information gain. This eliminates the need for additional uncertainty maps or learning. Unlike previous methods that compare view uncertainty within a finite set of candidates, we aim to find the next-best-view (NBV) on a continuous manifold. Leveraging the differentiability of the implicit representation, the NBV can be optimized directly by maximizing the view uncertainty using gradient descent. This significantly enhances the method's adaptability to different scenarios.
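The sampling-based uncertainty measure described above (binary entropy of the reconstructed occupancy probabilities, aggregated over samples along each ray of a view) can be sketched as follows. This is a minimal illustration; the function names and the exact aggregation are assumptions, not the repository's API:

```python
import math

def binary_entropy(p, eps=1e-8):
    """Entropy (in nats) of a single occupancy probability p in [0, 1]."""
    p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def view_information_gain(rays):
    """Average entropy over all samples of all rays cast from a view.

    `rays` is a list of rays, each a list of occupancy probabilities
    queried from the implicit occupancy field along that ray.
    """
    samples = [p for ray in rays for p in ray]
    return sum(binary_entropy(p) for p in samples) / len(samples)
```

Probabilities near 0.5 (unobserved or ambiguous space) contribute the most entropy, so views covering such regions score the highest information gain.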
Authors: Dongyu Yan*, Jianheng Liu*, Fengyu Quan, Haoyao Chen, and Mengmeng Fu.
* Equal contribution.
If you use ActiveImplicitRecon for your academic research, please cite the following paper.
```
@ARTICLE{10223307,
  author={Yan, Dongyu and Liu, Jianheng and Quan, Fengyu and Chen, Haoyao and Fu, Mengmeng},
  journal={IEEE Robotics and Automation Letters},
  title={Active Implicit Object Reconstruction using Uncertainty-guided Next-Best-View Optimization},
  year={2023},
  volume={},
  number={},
  pages={1-8},
  doi={10.1109/LRA.2023.3306282}}
```
-
Clone the repo
```
mkdir -p AiR_ws/src
cd AiR_ws/src
git clone https://github.com/HITSZ-NRSL/ActiveImplicitRecon.git
cd ..
catkin_make
source devel/setup.bash  # or: source devel/setup.zsh
```
-
Create the environment using
```
$ conda env create -f active_recon/environment.yml
$ conda activate active_recon
```
-
Install ROS and related packages
```
$ pip install pyyaml
$ pip install rospkg
```
-
Install tiny-cuda-nn and apex following the instructions in their respective repositories.
-
Start the simulation
```
export GAZEBO_MODEL_PATH=(path_to_repo)/ActiveImplicitRecon/gazebo_simulation/model:$GAZEBO_MODEL_PATH
roslaunch active_recon_gazebo simulation_bunny.launch
```
-
Visualization
```
roslaunch active_recon_gazebo rviz.launch
```
-
(Optional) Reset the simulation
```
sh src/active_recon_gazebo/scripts/reset_model.bash
```
-
Run active reconstruction
```
python main.py --hash --config config/gazebo.txt --exp_path active_recon/exp/test
```
-
Select the method with `--method N` (default 0):
- (default) ours sample_seed+end2end
- end2end
- candidate views
- random views sphere
- random views shell
- circular
- new sample_seed+end2end
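Unlike the candidate-view baselines above, the default method optimizes the NBV on a continuous manifold by gradient ascent on view uncertainty. The toy sketch below illustrates that idea with a hypothetical smooth `view_uncertainty` over view parameters; numerical gradients stand in for the autodiff the actual implementation relies on:

```python
import math

def view_uncertainty(view):
    # Hypothetical smooth uncertainty landscape; peaks at
    # azimuth = 1.0, elevation = 0.3 (illustration only).
    azimuth, elevation = view
    return math.cos(azimuth - 1.0) + 0.5 * math.cos(elevation - 0.3)

def optimize_nbv(view, lr=0.1, steps=300, h=1e-5):
    """Gradient ascent: move the view parameters to maximize uncertainty."""
    view = list(view)
    for _ in range(steps):
        grad = []
        for i in range(len(view)):
            lo, hi = list(view), list(view)
            lo[i] -= h
            hi[i] += h
            grad.append((view_uncertainty(hi) - view_uncertainty(lo)) / (2 * h))
        view = [v + lr * g for v, g in zip(view, grad)]
    return view
```

Starting from any nearby pose, the optimizer climbs to the most informative view without ever enumerating a discrete candidate set.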
-
Select the mode with `--mode N` (default 0):
- (default) simulation
- realworld
- offline
-
Enable pose optimization: `--pose_opt`
-
Save visualization output: `--save_vis`
-
Add disturbance to poses: `--disturb_pose`
For more options, please refer to config/gazebo.txt.
-
Surface coverage and entropy evaluation
```
# !!! make sure to change the gt mesh path in eval.py and eval_geometry.py to the correct path
# mesh generation and surface coverage/entropy evaluation: results in sc.txt and entropy.txt
python scripts/eval.py --hash --exp_path active_recon/exp --config config/gazebo.txt
```
-
Geometry metrics evaluation
```
# geometry evaluation: results saved in geometry.txt and tsdf_geometry.txt
python scripts/eval_geometry.py --hash --exp_path active_recon/exp --config config/gazebo.txt
```
-
(Optional) TSDF-based mapping
```
# TSDF reconstruction: tsdf_mesh.ply
# after active_recon, a dataset will be generated in the exp_path
python scripts/tsdf_mapping.py --config active_recon/exp/tsdf.json
```
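For reference, standard TSDF fusion of the kind this step performs updates each voxel by truncating the observed signed distance and folding it into a weighted running average. The sketch below is a generic illustration of that update rule, not the actual implementation in scripts/tsdf_mapping.py:

```python
def tsdf_update(tsdf, weight, sdf_obs, obs_weight=1.0, trunc=0.04):
    """Fuse one signed-distance observation into a voxel's running average."""
    d = max(-trunc, min(trunc, sdf_obs))   # truncate the observed SDF
    new_weight = weight + obs_weight
    new_tsdf = (tsdf * weight + d * obs_weight) / new_weight
    return new_tsdf, new_weight
```

Each new depth frame contributes one observation per voxel it touches; the weight accumulates so later frames refine rather than overwrite earlier ones.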