[question] Reconstruction with sensor depth maps #1361
Comments
Hi @SamMaoYS, very cool work! It would be good to set up a conference call to discuss it in an interactive session. Can you contact me at fabien.castan@mikrosimage.com? If the depth maps are good but the fusion is not, it may be a problem with the accuracy of the camera estimation. You can try to run the SfM in Meshroom from the source images, then use the SfMAlignment node to align this result to the sfmData you created from the iPad information. You can also look at the impact of initializing the SfM node from your initial sfmData (in all cases you need the SfMAlignment node to realign at the end).
In the advanced options of the Meshing node, you can also enable the "Save Raw Dense Point Cloud" option to see the fused point cloud before the min-cut.
Thank you @fabiencastan, I followed your suggestions and used the SfMAlignment node. I aligned the Meshroom-estimated SfM from the source images to the SfM created with the known camera poses. Following is a comparison between the two SfM files. With downscale 2, the reconstruction tends to have more holes. So the scale of the object space and the scale of the depth maps actually affect the reconstruction result? I will investigate this more with additional scenes, and check whether this situation still exists with sensor depth. My first impression is that the sensor depth maps are quite dense, so this should be less of an issue. But maybe I am wrong.
Hi @SamMaoYS, I want to do the same but using the ToF sensor of a Honor View 20. To make the depth "viewable" I create 16-bit grayscale depth PNGs (1000 <=> 1 m): https://github.com/remmel/rgbd-dataset/tree/main/2021-04-10_114859_pcchessboard
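For readers who want to produce the same kind of file, a minimal sketch of that conversion (metric depth to a 16-bit grayscale PNG with 1000 integer units per metre) could look like the following. The file names and the use of OpenCV/NumPy are assumptions for illustration, not taken from the linked repository.

```python
# Hypothetical sketch: convert a metric depth map (float, metres) into a
# 16-bit grayscale PNG where 1000 integer units correspond to 1 metre.
import numpy as np
import cv2

def depth_to_png16(depth_m: np.ndarray, out_path: str) -> None:
    """depth_m: HxW float array in metres; invalid pixels are <= 0 or NaN."""
    depth_units = np.nan_to_num(depth_m, nan=0.0) * 1000.0  # 1000 <=> 1 m
    depth_units = np.clip(depth_units, 0, 65535)            # fit into uint16 range
    cv2.imwrite(out_path, depth_units.astype(np.uint16))

# Example usage (file names are made up):
# depth = load_tof_frame("00000455_depth.raw")   # however the ToF frame is decoded
# depth_to_png16(depth, "00000455_depth.png")
```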
Related issue: #764
Hi @remmel, I created the _depthMap.exr by replacing the depth values of the Meshroom-estimated depth maps. I use
I'm now enhancing the calculated EXR depth maps with ToF depth maps. I might add a new node in Meshroom; in the meantime, the code can be found at https://github.com/remmel/image-processing-js/blob/master/tools/depthMapsExr.py
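The general idea (this is a hedged sketch, not the linked script) might look roughly like this: where the calculated depth is missing or invalid, substitute the rescaled sensor depth. The paths, scale factor, single-channel EXR layout, and "depth <= 0 means invalid" convention are assumptions; note also that a plain OpenCV rewrite may not preserve whatever metadata the original EXR carries.

```python
# Hedged sketch: fill holes in a Meshroom-computed *_depthMap.exr using a
# ToF depth map.  Assumes a single-channel float EXR.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # allow cv2 to read/write EXR (set before import)
import cv2
import numpy as np

def enhance_depth_exr(exr_path: str, tof_path: str, out_path: str, scale: float = 1.0):
    calc = cv2.imread(exr_path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    tof = cv2.imread(tof_path, cv2.IMREAD_UNCHANGED).astype(np.float32)
    if tof.shape[:2] != calc.shape[:2]:
        tof = cv2.resize(tof, (calc.shape[1], calc.shape[0]),
                         interpolation=cv2.INTER_NEAREST)
    tof = tof * scale                       # bring sensor depth into the sfm units
    invalid = calc <= 0.0                   # pixels the pipeline failed to estimate (assumption)
    merged = np.where(invalid & (tof > 0.0), tof, calc)
    cv2.imwrite(out_path, merged)           # caveat: original EXR metadata is not copied here
```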
I've added a new node. Pipeline:
As asked by @MAdelElbayumi, here is the link to the RGB-D photos: https://drive.google.com/drive/folders/1XHYL4QhIbeR6jJrxam18jot5ovs0QLR6?usp=sharing (note that my script naively takes the first photo in camera.sfm to calculate the ratio. That first one is the first photo displayed in the UI. For that dataset, I had to change the camera.sfm to make sure that the first one is 00000455_image.jpg, as I want to calculate the ratio on that one; otherwise use the calculated ratio 3.36378 to get the same result as mine). The photos were taken with fixed focus (that's why the quality is also not so great) for that test.
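For what it's worth, the ratio computation described above could be sketched roughly like this (the real logic lives in the script linked earlier; the function name, the 1000-units-per-metre PNG convention, and the use of the median are assumptions):

```python
# Hypothetical sketch: estimate the sfm-units-per-metre ratio from one frame
# that has both a Meshroom depth map (EXR) and a sensor depth map (16-bit PNG).
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"
import cv2
import numpy as np

def estimate_ratio(meshroom_exr: str, sensor_png16: str) -> float:
    calc = cv2.imread(meshroom_exr, cv2.IMREAD_UNCHANGED).astype(np.float32)
    sensor = cv2.imread(sensor_png16, cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0  # 1000 <=> 1 m
    sensor = cv2.resize(sensor, (calc.shape[1], calc.shape[0]),
                        interpolation=cv2.INTER_NEAREST)
    valid = (calc > 0.0) & (sensor > 0.0)
    return float(np.median(calc[valid] / sensor[valid]))

# e.g. estimate_ratio("00000455_depthMap.exr", "00000455_depth.png")
# (the value ~3.36378 quoted above would come from a call like this)
```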
@SamMaoYS, how did you plot your depth sensor data as a scatter plot? Can you provide any code/information?
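One way to produce such a scatter plot (not necessarily how @SamMaoYS did it) is to back-project the sensor depth map through pinhole intrinsics and feed the points to matplotlib; the intrinsics and the subsampling step below are placeholders.

```python
# Hedged sketch: back-project a depth map to 3D points and show them as a scatter plot.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)

def scatter_depth(depth_m: np.ndarray, fx: float, fy: float, cx: float, cy: float, step: int = 8):
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(0, w, step), np.arange(0, h, step))
    z = depth_m[v, u]
    valid = z > 0                      # keep only pixels with a depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx              # pinhole back-projection
    y = (v - cy) * z / fy
    ax = plt.figure().add_subplot(projection="3d")
    ax.scatter(x, z, -y, s=1)          # axis swap just for a nicer default view
    plt.show()
```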
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
This issue is closed due to inactivity. Feel free to re-open if new information is available. |
The link is broken |
@VihangMhatre all the useful code has been moved into this file: https://github.com/remmel/meshroom/blob/425b64c1a3976a0ab01a02171e61d7484857b653/meshroom/nodes/aliceVision/DepthMapImport.py (to be used as a Meshroom node)
Thanks! Any tutorial on how to build this? Will following https://github.com/remmel/meshroom/blob/dev_importTofDepthMap/INSTALL.md and then cloning the repository that you mentioned work? Thanks!
If I remember correctly, you can just copy-paste the DepthMapImport.py file into your installed release version of Meshroom.
This does not work. I get the same warning that another user got in this thread (#1493, the cv2 warning) and cannot see the node.
This workflow may work: |
Describe the problem
Hi, thanks to the wiki and previously solved issues, I am able to reconstruct scenes with known camera poses. Since it is hard to predict depth values in untextured areas, I want to further improve the result by utilizing depth maps from a depth sensor. I manage to do this by replacing the depth values of the filtered depth maps from the DepthMapFilter node with the depth values from the sensor.
Since I also reconstruct with known camera poses, the depth values should be correct.
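For concreteness, a rough sketch of the kind of replacement described above, assuming the sensor depth has already been scaled into the same units as the SfM reconstruction and that -1 marks invalid pixels (both assumptions, not details confirmed in this issue):

```python
# Hedged sketch: overwrite the depth values of a filtered depth map
# (*_depthMap.exr) with sensor depth.  Assumes a single-channel float EXR.
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # let OpenCV read/write EXR (set before import)
import cv2
import numpy as np

def replace_with_sensor_depth(filtered_exr: str, sensor_depth_sfm_units: np.ndarray, out_path: str):
    filtered = cv2.imread(filtered_exr, cv2.IMREAD_UNCHANGED).astype(np.float32)
    sensor = cv2.resize(sensor_depth_sfm_units,
                        (filtered.shape[1], filtered.shape[0]),
                        interpolation=cv2.INTER_NEAREST).astype(np.float32)
    # -1 is used here as the "no depth" marker (assumed convention of the pipeline's maps)
    out = np.where(sensor > 0.0, sensor, -1.0)
    cv2.imwrite(out_path, out)
```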
Overall, this process works as expected, but the reconstructed mesh has some significant artifacts: holes and bumpy surfaces. What I know is that during meshing, a Delaunay tetrahedralization is first used for the volume discretization, then weights are assigned to the cells, and finally a minimal s-t graph cut selects the surface.
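For context, the s-t cut mentioned above is usually described (in the Labatut/Jancosek line of work this style of meshing follows) as minimizing a binary labelling energy over the tetrahedra; a simplified, hedged statement of that energy is

$$E(l) = \sum_{c} D_c(l_c) + \sum_{(c_i, c_j)} w_{ij}\,[\,l_{c_i} \neq l_{c_j}\,]$$

where each cell $c$ is labelled inside or outside, $D_c$ accumulates visibility votes along the rays from the cameras to the fused depth points, and $w_{ij}$ weights the facet between adjacent cells (the exact weighting in the implementation differs). Dense or noisy sensor depth changes these votes, which is one way holes and bumps can appear even when the fused points themselves look reasonable.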
Screenshots
Projected dense point cloud from all depth maps (depth value from the sensor).
Meshing result from the above point cloud.
Projected dense point cloud from all depth maps (depth values estimated by the Meshroom pipeline).
Meshing result from the above point cloud.
Dataset
The images (RGB + depth + camera poses) are collected from an iPad using Apple's ARKit framework. The images are decoded from a video stream, so they are blurry. The meshing process, however, does not really depend on triangulated feature points, so this should not be a big problem.
Question
My question is: what could be the reason that, with depth values from the depth sensor, the reconstructed meshes have many holes? One thing to add is that the depth maps from the sensor are not very accurate, but they are much better than the depths estimated by Meshroom in untextured areas.
I can tweak the Nb Pixel Size Behind and Full Weight parameters to fill those holes, but that results in less detail in the geometry.
Depth comparison (left: Meshroom-estimated depth; right: depth from sensor)
Textured object
Limited Textures
Bumpy surfaces when I try to reconstruct with sensor depths (reconstruction of a laundry room)
Is it because the projected point cloud is too dense? I notice the point cloud is augmented with silhouette points. If the point cloud is too dense, will it influence the accuracy of the s-t graph cut? Overall, I am trying to find out why the reconstructed meshes are less complete and less smooth on the surfaces.
If you need more information, please let me know. I will try to clarify as many questions as possible. Thank you!