In this tutorial we introduce the real-time demo of NeuralRecon running on self-captured ARKit data. If you don't want to go through the effort of capturing your own data, you can download the example data and skip step 1.
To capture data and run this demo, an Apple device (iPhone or iPad) with ARKit support is required. Generally speaking, devices released in 2016 or later (e.g. the iPhone 7 and later generations) are all supported; you can search for 'arkit' on this page to check your specific model. You will also need a Mac computer to compile the data capture app and a GPU-enabled machine (GPU memory > 2 GB) to run NeuralRecon.
For now we use ios_logger as the capture app, and you will have to compile it yourself. We are working on releasing a new capture app that will be available for download from the App Store.
- Download and install Xcode.
- Clone ios_logger: `git clone https://github.com/Varvrar/ios_logger`
- Follow this tutorial to generate a certificate and a provisioning profile. (Don't be scared if the tutorial looks very long and complex😉; it's actually quite simple, with Xcode automatically handling most of the work.)
- Follow the README of ios_logger to run it on your device.
You can follow these steps to capture the data. Although a clean indoor environment is preferred since it is closer to the training dataset ScanNet, most scenarios (indoor and outdoor) should work just fine. Be sure to move your device around frequently during capture to get more views with covisibility of the same place.
After retrieving the captured data and transferring it to a GPU-enabled machine, you are good to proceed. Note that it's a good idea to start with the example data to make sure the environment for NeuralRecon is correctly configured.
- Change the data path in `demo.yaml` (a hedged sketch of the relevant options follows this list).
- Optionally, enable the `VIS_INCREMENTAL` flag to get a real-time visualization during reconstruction if you are on a local machine. The `SAVE_INCREMENTAL` flag saves the incremental meshes at each step.
- Optionally, disable the `REDUCE_GPU_MEM` flag to get the maximum inference speed (~3 keyframes/sec speed-up).
- Run the NeuralRecon demo: `python demo.py --cfg ./config/demo.yaml`
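For reference, here is a minimal sketch of what the relevant part of `demo.yaml` might look like. Only the flag names `VIS_INCREMENTAL`, `SAVE_INCREMENTAL`, and `REDUCE_GPU_MEM` come from the description above; the data-path key and the overall nesting are assumptions, so treat the `demo.yaml` shipped with the repository as the authoritative layout.

```yaml
# Hypothetical excerpt of demo.yaml -- key names and nesting may differ from the shipped config.
TEST:
  PATH: /path/to/DATAROOT    # assumed key: directory containing fragments.pkl and images/
VIS_INCREMENTAL: True        # live visualization during reconstruction (local machine only)
SAVE_INCREMENTAL: False      # save the intermediate mesh of every fragment
REDUCE_GPU_MEM: True         # lower memory footprint at the cost of ~3 keyframes/sec
```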
The reconstructed mesh will be available under `results/scene_demo_checkpoints_fusion_eval_47`.
You can open the ply file with MeshLab.
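If you would rather inspect the result programmatically instead of opening it in MeshLab, a minimal sketch using Open3D (an extra dependency assumed here, not required by the demo; the mesh file name is also illustrative) could look like this:

```python
# Minimal sketch: load and display the reconstructed mesh with Open3D.
# Assumes `pip install open3d`; the exact .ply file name under the results
# directory is a placeholder -- use whatever the demo actually produced.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh(
    "results/scene_demo_checkpoints_fusion_eval_47/mesh.ply")
mesh.compute_vertex_normals()                 # needed for shaded rendering
o3d.visualization.draw_geometries([mesh])     # opens an interactive viewer
```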
For those who are interested in reusing the captured data for other projects:
The directory structure:
```
DATAROOT
└───fragments.pkl
└───images
│   └───0.jpg
│   └───1.jpg
│   |   ...
```
The structure of `fragments.pkl`:
```
[
  {'scene':       scene_name [str],
   'fragment_id': fragment id [int],
   'image_ids':   image id [int],
   'extrinsics':  poses [matrix: 4x4],
   'intrinsics':  intrinsics [matrix: 3x3]
  }
  ...
]
```
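To get a feel for the format, here is a minimal sketch for loading and inspecting `fragments.pkl`, assuming it is an ordinary pickle of the list shown above:

```python
# Minimal sketch: inspect fragments.pkl (assumed to be a standard Python pickle).
import pickle

with open("DATAROOT/fragments.pkl", "rb") as f:
    fragments = pickle.load(f)

print(f"{len(fragments)} fragments in total")
first = fragments[0]
print("scene:      ", first['scene'])
print("fragment_id:", first['fragment_id'])
print("image_ids:  ", first['image_ids'])
print("extrinsics: ", first['extrinsics'])   # 4x4 pose matrix/matrices
print("intrinsics: ", first['intrinsics'])   # 3x3 camera matrix
```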
Here is the code for generating `fragments.pkl`.
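Purely as an illustration of the structure above (not the repository's actual script), a hypothetical generator that packs per-image poses and a shared intrinsics matrix into fixed-size fragments might look like the following; the fragment size, file names, and pose source are all assumptions:

```python
# Hypothetical sketch: write a fragments.pkl matching the structure above.
# The window size, placeholder poses, and intrinsics are illustrative only.
import pickle
import numpy as np

def build_fragments(scene_name, poses, intrinsics, window_size=9):
    """poses: (N, 4, 4) camera-to-world matrices; intrinsics: (3, 3) matrix."""
    fragments = []
    for fragment_id, start in enumerate(range(0, len(poses), window_size)):
        ids = list(range(start, min(start + window_size, len(poses))))
        fragments.append({
            'scene': scene_name,
            'fragment_id': fragment_id,
            'image_ids': ids,            # indices into images/<id>.jpg
            'extrinsics': poses[ids],    # (len(ids), 4, 4) poses
            'intrinsics': intrinsics,    # shared 3x3 matrix
        })
    return fragments

if __name__ == "__main__":
    poses = np.stack([np.eye(4)] * 18)   # placeholder poses
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])
    with open("fragments.pkl", "wb") as f:
        pickle.dump(build_fragments("scene_demo", poses, K), f)
```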