Interface
The OpenMVS pipeline needs as input a set of camera poses and the corresponding undistorted images, plus the sparse point-cloud generated by the Structure-from-Motion pipeline. There are currently five ways to generate the necessary input:
- The most generic way is to generate a native OpenMVS project. To do this, copy and include `libs/MVS/Interface.h` (self-contained) into your project, fill the structure with your data, and save it to a file using the included serialization support. As an example, see the OpenMVG exporter `openMVG_main_openMVG2openMVS`.
- OpenMVG output is supported by OpenMVS. To convert a project from OpenMVG to OpenMVS, either use the integrated OpenMVG exporter `openMVG_main_openMVG2openMVS`, or, at build time, point CMake to the OpenMVG installation folder so that an importer is generated.
- COLMAP dense output (undistorted images and project files) is supported by OpenMVS.
- The BlocksExchange XML open exchange format used by Bentley ContextCapture and Agisoft Metashape is also supported by OpenMVS.
- Polycam projects are supported by OpenMVS.
The input camera poses should contain the full calibration: intrinsics and extrinsics. The intrinsics are represented as a standard calibration matrix K composed of the focal lengths fx and fy and the principal point cx and cy. K values can be expressed in pixels, but to stay independent of the input image resolution they should be normalized by 1/max(width, height) of the corresponding image; a normalized K matrix makes the project immune to any manual re-scaling of the images after the SfM step. The extrinsics represent a pose as a rotation matrix R and a camera position C, so a 3D point X in world coordinates is projected to image coordinates as K*R*(X-C). The projection in image coordinates uses the convention that the center of a pixel is defined at integer coordinates, i.e. the center is at (0, 0) and the top-left corner is at (-0.5, -0.5). All matrices are expected in row-major order.
OpenMVS supports central and non-central cameras. In order to accomplish this, the following structure is used:
- Platforms
  - Cameras
  - Poses
- Images
  - fileName
  - platformID
  - cameraID
  - poseID
  - ID
The camera poses are represented by two arrays: one of Platform structures and one of Image structures.
The array of platforms contains one element for each camera (central or non-central) used in the project. A Platform holds two arrays of Camera and Pose structures. The array of cameras contains a single camera in the central case, or multiple cameras representing a non-central rig. In the non-central case, each camera should store, along with its calibration matrix, its pose relative to the platform.
The array of images contains one element for each undistorted image in the project. An Image holds the image file name, plus three indices, platformID, cameraID and poseID, pointing into the corresponding platform, camera and pose arrays. Images with missing calibration can be represented by setting poseID to the special value NO_ID. Optionally, ID can be set to a custom identifier used to reference the image outside the OpenMVS project.
The sparse point-cloud is represented as an array of points, each containing a position, the list of IDs of the images seeing it, and optionally a color and a normal.
The output is a point-cloud and/or a triangle mesh, exported by default in the PLY file format, plus a texture image, exported by default as PNG; OBJ is also supported for storing meshes.