
[question] Reconstruction with sensor depth maps #1361

Closed
SamMaoYS opened this issue Apr 1, 2021 · 19 comments

Comments

@SamMaoYS commented Apr 1, 2021

Describe the problem
Hi, thanks to the wiki and previously solved issues, I am able to reconstruct scenes with known camera poses. Since it is hard to estimate depth values in untextured areas, I want to further improve the result by using depth maps from a depth sensor. I managed to do this by replacing the depth values of the filtered depth maps from the DepthMapFilter node with the depth values from the sensor.
Since I also reconstruct with known camera poses, the depth values should be correct.
Overall, this process works as expected, but the reconstructed mesh has some significant artifacts: holes and bumpy surfaces. What I know is that during meshing, a Delaunay tetrahedralization is first used for volume discretization, then weights are assigned to the vertices, and finally a minimum s-t graph cut selects the surface.
Screenshots
Projected dense point cloud from all depth maps (depth value from the sensor).

Meshing result from the above point cloud.

Projected dense point cloud from all depth maps (depth values estimated by the Meshroom pipeline).

Meshing result from the above point cloud.

Dataset
The images (RGB + depth + camera poses) are collected on an iPad using Apple's ARKit framework. The images are decoded from a video stream, so they are blurry. The meshing process, however, does not really depend on triangulated feature points, so this should not be a big problem.

Desktop (please complete the following and other pertinent information):

  • OS: Ubuntu 20.04 LTS
  • Python version: 3.6
  • Meshroom version: 2021.1.0

Question
My question is: what could be the reason that, with depth values from the depth sensor, the reconstructed meshes have many holes? One thing to add is that the depth maps from the sensor are not very accurate, but they are much better than the depths estimated by Meshroom in untextured areas.
I can tweak the Nb Pixel Size Behind and Full Weight parameters to fill those holes, but that results in less detail in the geometry.

Depth comparison (left: Meshroom-estimated depth; right: depth from sensor)
Textured object [screenshot]
Limited textures [screenshot]

Bumpy surfaces when I try to reconstruct with sensor depths (reconstruction of a laundry room) [screenshot]

Is it because the projected point cloud is too dense? I notice the point cloud is augmented with silhouette points. If the point cloud is too dense, will it influence the accuracy of the s-t graph cut? Overall, I am trying to find out why the reconstructed meshes are less complete and less smooth on the surfaces.
If you need more information, please let me know; I will try to clarify as much as possible. Thank you!

@fabiencastan (Member)

Hi @SamMaoYS ,

Very cool work! It would be good to set up a conference call to discuss it in an interactive session. Can you contact me at fabien.castan@mikrosimage.com?

If the depth maps are good but the fusion is not, it may be a problem with the accuracy of the camera estimation. You can try to run the SfM in Meshroom from the source images, then use the SfMAlignment node to align that result to the sfmData you created from the iPad information.
Then launch the Meshing on that, using the precomputed depth maps.
This way you will be at the right scale to use the depth maps from the sensor, but the camera alignment will be based on the source images (and not on the realtime image+IMU fusion).

You can also try to see the impact of initializing the SfM node from your initial sfmData (in all cases you need the SfMAlignment node to realign at the end).

@fabiencastan (Member)

In the advanced options of the Meshing node, you can also enable "Save Raw Dense Point Cloud" to see the fused point cloud before the min-cut.

@SamMaoYS (Author) commented Apr 2, 2021

Thank you @fabiencastan, I followed your suggestions and used the SfMAlignment node. I aligned the Meshroom-estimated SfM from the source images to the SfM created with known camera poses. Below is a comparison between the two SfM files.
The highlighted one is after alignment; the dark one is the SfM from the source images.

As you can see, the scale with known camera poses is larger. I used this aligned SfM (the larger one) to perform the reconstruction without sensor depth maps (all depth maps estimated by Meshroom), which mitigates the impact of inaccurate depths.
In the DepthMap node, I chose different downscale factors: 1 (dimension 1920x1440) and 2 (dimension 960x720). The meshing results are shown below.
Downscale 1

Downscale 2

With downscale 2, the reconstruction tends to have more holes. So the scale of the object space and the scale of the depth maps do affect the reconstruction result? I will investigate this further with more scenes, and check whether this situation persists with sensor depths. My first impression is that since the sensor depth maps are quite dense, this should be less of an issue, but maybe I am wrong.

@remmel (Contributor) commented Apr 10, 2021

Hi @SamMaoYS, I want to do the same but using the ToF sensor of a Honor View 20.
I was able to import the AR poses into Meshroom, but how do you import the depth maps? Do you create the _depthMap.exr files yourself? What about the _simMap.exr?
Can you share your pipeline?

To make them "viewable", I create 16-bit grayscale depth PNGs (1000 <=> 1 m): https://github.com/remmel/rgbd-dataset/tree/main/2021-04-10_114859_pcchessboard

@remmel (Contributor) commented Apr 10, 2021

Related issue: #764

@SamMaoYS (Author)

Hi @remmel, I created the _depthMap.exr files by replacing the depth values of the Meshroom-estimated depth maps. I use the readImage and writeImage functions in AliceVision/src/aliceVision/mvsData/imageIO.hpp to perform the depth value replacement. In my understanding, the _simMap.exr is used to score the depth values, and it is optional.
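For illustration, a rough Python sketch of that replacement step using OpenCV instead of AliceVision's imageIO. File names and the scale factor are assumptions, and note that plain OpenCV I/O will not carry over any extra EXR metadata Meshroom may store in these files, so the imageIO route described above is the safer one:

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # EXR I/O is disabled by default in recent OpenCV builds
import cv2
import numpy as np

def replace_depth(meshroom_exr: str, sensor_depth_m: np.ndarray,
                  out_exr: str, scale: float = 1.0) -> None:
    """Overwrite Meshroom depths with (scaled) sensor depths where the sensor has data."""
    depth = cv2.imread(meshroom_exr, cv2.IMREAD_UNCHANGED)  # float32 depth map
    if depth.ndim == 3:                      # keep a single channel if the file is multi-channel
        depth = depth[..., 0]
    sensor = cv2.resize(sensor_depth_m,
                        (depth.shape[1], depth.shape[0]),
                        interpolation=cv2.INTER_NEAREST).astype(np.float32)
    valid = sensor > 0                       # 0 = no sensor measurement
    depth[valid] = sensor[valid] * scale     # scale: sensor-to-SfM scale ratio
    cv2.imwrite(out_exr, depth)
```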

@remmel (Contributor) commented Jul 23, 2021

To share my first accomplishment using the Honor View 20 ToF sensor instead of the calculated EXR depth maps: I am calculating the scale between the two worlds by hand (here, by measuring the width of my Europe map).
[screenshot]
My next steps will be to fine-tune the scale, test different intrinsics, and improve the calculated depth maps instead of replacing them with ToF depth maps.

@remmel (Contributor) commented Jul 27, 2021

I'm now enhancing the calculated EXR depth maps with the ToF depth maps. I might add a new node to Meshroom; in the meantime, the code can be found at https://github.com/remmel/image-processing-js/blob/master/tools/depthMapsExr.py
[animation]
To calculate the size ratio between Meshroom and the world, I make sure that the first picture has in its centre an area that has both features and 100% ToF confidence, and then compare the depths at the centre pixel (w/2, h/2). I no longer try to import the AREngine poses, as those poses are not perfect either (in one test, the mesh generated by importing poses was worse than the one from the default pipeline).
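A hedged sketch of that ratio computation; the file names and the assumption that the ToF map uses the 16-bit PNG convention above are illustrative only:

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
import cv2

def center_depth(img):
    if img.ndim == 3:              # keep a single channel if the file is multi-channel
        img = img[..., 0]
    h, w = img.shape
    return float(img[h // 2, w // 2])

# Depth of the centre pixel in Meshroom's (arbitrary SfM) units vs. in metres from the ToF sensor.
meshroom_d = center_depth(cv2.imread("first_view_depthMap.exr", cv2.IMREAD_UNCHANGED))
tof_d = center_depth(cv2.imread("first_view_depth16.png", cv2.IMREAD_UNCHANGED)) / 1000.0

ratio = meshroom_d / tof_d   # multiply ToF depths by this before writing them into the EXRs
print("meshroom/world scale ratio:", ratio)
```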

@remmel (Contributor) commented Jul 29, 2021

I've added a new DepthMapImport node to handle this directly in Meshroom:
https://github.com/remmel/meshroom/tree/dev_importTofDepthMap
[animation]

Pipeline: [screenshot]
Meshing.estimateSpaceFromSfm must be unchecked to really see the difference; otherwise the bounding box is too small.

@remmel (Contributor) commented Aug 8, 2021

As asked by @MAdelElbayumi, here is a link to the RGB-D photos: https://drive.google.com/drive/folders/1XHYL4QhIbeR6jJrxam18jot5ovs0QLR6?usp=sharing (note that my script stupidly takes the first photo in camera.sfm to calculate the ratio; that first one is the first photo displayed in the UI. For that dataset, I had to change camera.sfm to make sure that the first one is 00000455_image.jpg, as I want to calculate the ratio on that one; otherwise, use the calculated ratio of 3.36378 to get the same result as mine). The photos were taken with fixed focus for that test (that's why the quality is also not so great).

@MAdelElbayumi

@SamMaoYS, how did you plot your depth sensor data as a scatter plot? Can you provide any code/information?

@stale bot commented Apr 16, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Apr 16, 2022
@stale bot commented Apr 27, 2022

This issue is closed due to inactivity. Feel free to re-open if new information is available.

stale bot closed this as completed Apr 27, 2022
natowi removed the stale label May 28, 2022
@VihangMhatre

> [...] the code can be found at https://github.com/remmel/image-processing-js/blob/master/tools/depthMapsExr.py

The link is broken

@remmel (Contributor) commented May 11, 2023

@VihangMhatre all the useful code has been moved into this file: https://github.com/remmel/meshroom/blob/425b64c1a3976a0ab01a02171e61d7484857b653/meshroom/nodes/aliceVision/DepthMapImport.py (to be used as a Meshroom node)

@VihangMhatre

> all the useful code has been moved into https://github.com/remmel/meshroom/blob/425b64c1a3976a0ab01a02171e61d7484857b653/meshroom/nodes/aliceVision/DepthMapImport.py (to be used as a Meshroom node)

Thanks! Any tutorial on how to build this? Will following https://github.com/remmel/meshroom/blob/dev_importTofDepthMap/INSTALL.md and then cloning the repository you mentioned work?

@remmel (Contributor) commented May 11, 2023

If I remember correctly, you only need to copy-paste the DepthMapImport.py file into your installed release version of Meshroom.

@VihangMhatre

> If I remember correctly, you only need to copy-paste the DepthMapImport.py file into your installed release version of Meshroom.

This does not work. I get the same warning that another user reported in #1493 (a cv2 warning) and cannot see the node.

@natowi (Member) commented May 16, 2023

This workflow may work:
1. Download the source code from the releases and add DepthMapImport.py to the nodes folder.
2. Add opencv-python and numpy to the requirements, and make sure at least Python 3.8 is installed (otherwise there will be an issue with os.add_dll_directory).
3. Build Meshroom, then copy and replace the new files over the existing Meshroom binaries folder.
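As a quick sanity check (my own suggestion, not an official step), the custom node can only load if these packages are importable from the Python environment Meshroom runs with; an ImportError here would explain the cv2 warning and the missing node mentioned above:

```python
# Run from the same Python environment that Meshroom uses.
import cv2      # provided by opencv-python
import numpy    # provided by numpy
print(cv2.__version__, numpy.__version__)
```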
