
[ETH 3D Vision Project] Implementation of point cloud + mesh reconstruction pipeline, together with "Color Map Optimization" on HoloLens captured dataset

Room Reconstruction with Color Map Optimization

Report | Video

We implement the traditional reconstruction pipeline (point cloud + mesh) and use Color Map Optimization to produce sharp textures on the mesh.

Environment

All dependencies used in this repo are listed below (an example install command follows the list):

  • Open3D 0.16.0
  • OpenCV 4.7.0
  • scikit-learn 1.2.2
  • NumPy 1.23.3
  • SciPy 1.9.3
  • Matplotlib 3.6.1
  • png
  • glob (Python standard library)
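
Assuming the usual PyPI package names (pypng for the png module; glob needs no install), the environment can be set up with a command along these lines:

pip install open3d==0.16.0 opencv-python scikit-learn==1.2.2 numpy==1.23.3 scipy==1.9.3 matplotlib==3.6.1 pypng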

HoloLens Room Dataset

This dataset was captured with a HoloLens 2 and consists of two video recordings of two different room scenes. Each capture contains thousands of 1280×720 RGB video frames, monocular depth frames captured at a lower frequency, the camera intrinsic parameters, and the camera pose and timestamp for each RGB frame. In the first capture (AnnaTrain/GowthamTrain) the HoloLens moves relatively slowly, so the frames contain little motion blur, while the second capture (AnnaTest/GowthamTest) contains more motion blur.
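
Because depth frames arrive at a lower frequency than RGB frames, each RGB frame has to be paired with the temporally closest depth frame before reconstruction. Below is a minimal sketch of such a pairing step, assuming the timestamps have already been loaded into arrays; the function name and data layout are illustrative, not the repo's actual dataloader.

```python
import numpy as np

def match_rgb_to_depth(rgb_timestamps, depth_timestamps):
    """Return, for every RGB timestamp, the index of the nearest depth frame.

    Assumes depth_timestamps is sorted in ascending order.
    """
    rgb_ts = np.asarray(rgb_timestamps, dtype=np.float64)
    depth_ts = np.asarray(depth_timestamps, dtype=np.float64)
    # Insertion positions of each RGB timestamp in the depth timeline.
    idx = np.clip(np.searchsorted(depth_ts, rgb_ts), 1, len(depth_ts) - 1)
    # Pick whichever neighbour (left or right) is temporally closer.
    left_closer = np.abs(rgb_ts - depth_ts[idx - 1]) < np.abs(rgb_ts - depth_ts[idx])
    return np.where(left_closer, idx - 1, idx)
```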

Directory Structure

..
├── AnnaTrain
├── AnnaTest
├── GowthamTrain
├── GowthamTest
└── Color_Map_Optimization
     ├── color_map_optimization.py
     ├── ...

Run

We provide a well-reconstructed room mesh (.obj) in ./resource. To visualize it, use:

cd ./Color_Map_Optimization && python ./Visualization_rotate.py
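
A rotating view like this can be produced with Open3D's animation-callback visualizer. The sketch below shows the idea only; the .obj filename is a placeholder, and this is not necessarily how Visualization_rotate.py is implemented.

```python
import open3d as o3d

def rotate_view(vis):
    # Nudge the camera a few pixels per rendered frame to orbit the scene.
    vis.get_view_control().rotate(4.0, 0.0)
    return False

mesh = o3d.io.read_triangle_mesh("./resource/room.obj",  # placeholder filename
                                 enable_post_processing=True)
o3d.visualization.draw_geometries_with_animation_callback([mesh], rotate_view)
```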

To reconstruct the room from scratch, download the dataset described above and run:

python ./color_map_optimization.py
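
The color-mapping stage follows Open3D's color map optimization pipeline. Below is a minimal sketch of that step, assuming the mesh, aligned RGBD frames, and camera trajectory are already prepared; all paths, the frame list, and maximum_iteration are illustrative values, not the repo's settings.

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("./resource/room.obj")                # placeholder path
trajectory = o3d.io.read_pinhole_camera_trajectory("./trajectory.log") # placeholder path

# Hypothetical list of aligned (color, depth) image paths, one pair per camera pose.
frame_pairs = [("color_000.png", "depth_000.png")]

rgbd_images = []
for color_path, depth_path in frame_pairs:
    color = o3d.io.read_image(color_path)
    depth = o3d.io.read_image(depth_path)
    rgbd_images.append(o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=3.0, convert_rgb_to_intensity=False))

# Jointly refine camera poses and vertex colors so the texture stays sharp.
mesh, trajectory = o3d.pipelines.color_map.run_rigid_optimizer(
    mesh, rgbd_images, trajectory,
    o3d.pipelines.color_map.RigidOptimizerOption(maximum_iteration=300))
o3d.io.write_triangle_mesh("./room_color_optimized.obj", mesh)
```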

Description of each file

  • color_map_optimization.py: the main implementation, including the dataloader, RGB/depth alignment, point cloud and mesh reconstruction, and color map optimization.
  • filter_blurry_images.py: select blurry images from a mixture of clear and blurry images.
  • llff_convertion.py: generate poses_bounds.npy for NeRF-based methods.
  • mesh2rgb.py: render RGB images from the pre-built mesh at given camera poses.
  • metrices_compute.py: compute evaluation metrics on the output images of NeRF-based methods.
  • pcd_stitching.py: stitch point clouds with ICP (see the sketch after this list).
  • pcd2mesh.py: recover a mesh from point clouds via Poisson surface reconstruction (also sketched below).
  • pcd2rgb.py: render RGB images from pre-built point clouds at given camera poses.
  • Visualization_rotate.py: visualize point cloud/mesh files with a rotating view.
  • visualization.py: helper functions for visualization.
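
For the stitching and meshing stages (pcd_stitching.py, pcd2mesh.py), the sketch below shows the Open3D building blocks such a pipeline typically rests on; the voxel size, ICP threshold, initial transform, Poisson depth, and density cutoff are assumptions rather than the repo's tuned values.

```python
import numpy as np
import open3d as o3d

def stitch(source, target, voxel_size=0.02):
    """Align source onto target with point-to-plane ICP and merge the two clouds."""
    for pcd in (source, target):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        source, target, voxel_size * 2, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return target + source.transform(result.transformation)

def to_mesh(pcd, depth=9):
    """Poisson surface reconstruction, dropping poorly supported (low-density) vertices."""
    if not pcd.has_normals():
        pcd.estimate_normals()
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    densities = np.asarray(densities)
    mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.01))
    return mesh
```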
