
Does this support GCPs (Ground control points), if not, can you provide guidance on how to implement it here? #8

AbegNext opened this issue May 8, 2020 · 6 comments

Comments


AbegNext commented May 8, 2020

By ground control points I mean points with known georeferenced locations and known pixel locations in various images, used both for model alignment and as constraints in the bundle adjustment.

AIBluefisher (Owner) commented May 8, 2020

Okay. I also found some useful information on this page: https://www.pix4d.com/blog/why-ground-control-points-important.

(1) With GCPs, we can recover the absolute scale of the point clouds. If we want to align models by ground control points, we might implement it by following the steps below:

  1. Suppose each sub-model contains some GCPs; then we can compute the similarity transformation that registers the sub-model into the world/earth frame from the 3D-3D correspondences. See the function FindSimilarityTransform in https://github.com/AIBluefisher/GraphSfM/blob/dev/src/controllers/sfm_aligner.cpp for how to compute an accurate similarity transformation (a sketch of this computation follows the list).

  2. With the similarity transformation known, all camera poses and 3D points can be transformed into the earth frame.

  3. When all sub-models are registered into the earth frame, we can simply put these sub-models together, as they are already in a global world frame.
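Below is a minimal sketch of steps 1 and 2, assuming Eigen is available (as it is in this repository). It is not the actual FindSimilarityTransform implementation; the function and variable names (EstimateSimilarity, model_points, gcp_points) are illustrative only. Eigen's closed-form umeyama solver recovers scale, rotation, and translation from the 3D-3D GCP correspondences:

```cpp
#include <Eigen/Core>
#include <Eigen/Geometry>

// Estimate the similarity transform T = [sR t; 0 1] that maps the
// sub-model's GCP coordinates onto their surveyed earth-frame coordinates.
// Columns of the two matrices are corresponding 3D points.
Eigen::Matrix4d EstimateSimilarity(const Eigen::Matrix3Xd& model_points,
                                   const Eigen::Matrix3Xd& gcp_points) {
  // with_scaling = true also recovers the absolute scale s.
  return Eigen::umeyama(model_points, gcp_points, /*with_scaling=*/true);
}

// Apply T to a 3D point; camera centers are transformed the same way
// (camera rotations are composed with the rotational part of T).
Eigen::Vector3d TransformPoint(const Eigen::Matrix4d& T,
                               const Eigen::Vector3d& p) {
  return (T * p.homogeneous()).hnormalized();
}
```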

I'm not sure whether I totally understand what you want to do. We could discuss it if not.

(2) As for including GCPs as constraints in bundle adjustment, I'm not familiar with how GCPs are used to improve model accuracy. As far as I currently know, we could simply set the ground control points as constant parameters in the optimization, since these points can be treated as ground truth and do not need to be optimized. A minimal sketch of this follows.
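For concreteness, here is what that could look like with Ceres (which this project already uses for bundle adjustment). This is only a sketch under the assumption that each 3D point is its own parameter block of size 3; FreezeGroundControlPoints, is_gcp, and points3d are hypothetical names:

```cpp
#include <vector>

#include <Eigen/Core>
#include <ceres/ceres.h>

// Mark every GCP's 3D position as constant: it still contributes
// reprojection residuals (and thus constrains the camera parameters),
// but the optimizer never updates its coordinates.
void FreezeGroundControlPoints(ceres::Problem* problem,
                               const std::vector<bool>& is_gcp,
                               std::vector<Eigen::Vector3d>* points3d) {
  for (size_t i = 0; i < points3d->size(); ++i) {
    if (is_gcp[i]) {
      problem->SetParameterBlockConstant((*points3d)[i].data());
    }
  }
}
```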

AbegNext (Author) commented:

Hi @AIBluefisher. I would love to chat with you. GCPs can help in two ways: (1) model alignment; and (2) [I believe this is the most important] they provide extra constraints for optimizing the intrinsic and extrinsic parameters during bundle adjustment, by tying known pixel locations in the images to known 3D world coordinates. For example, if the model suffers from a bowl effect, the GCPs help eliminate it by adding constraints that prevent the bundle adjustment from converging to that set of intrinsic/extrinsic parameters. However, there are always errors both in the measured 3D locations of the GCPs and in the selection of which pixels they correspond to in the images. This is why I believe holding them constant may not be best: there is rarely a perfect fit, so they are better modeled as observations we want to match as closely as possible. Would you be open to an online voice chat?
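To illustrate the "not constant, but as close as possible" idea: instead of freezing the GCPs, each surveyed coordinate could enter the bundle adjustment as an extra residual weighted by its measurement uncertainty, so the optimizer trades it off against reprojection error. This is a hedged sketch in Ceres, not code from this repository; GcpPrior, sigma, and the usage names are illustrative:

```cpp
#include <Eigen/Core>
#include <ceres/ceres.h>

// Soft prior tying an optimized 3D point to its surveyed GCP coordinate.
// sigma is the (assumed isotropic) standard deviation of the survey
// measurement, so the residual is unitless: (point - measured) / sigma.
struct GcpPrior {
  GcpPrior(const Eigen::Vector3d& measured, double sigma)
      : measured_(measured), inv_sigma_(1.0 / sigma) {}

  template <typename T>
  bool operator()(const T* point, T* residual) const {
    for (int i = 0; i < 3; ++i) {
      residual[i] = inv_sigma_ * (point[i] - T(measured_[i]));
    }
    return true;
  }

  Eigen::Vector3d measured_;
  double inv_sigma_;
};

// Usage (one residual block per GCP); a robust loss tolerates the
// occasional mis-clicked image measurement:
//   problem.AddResidualBlock(
//       new ceres::AutoDiffCostFunction<GcpPrior, 3, 3>(
//           new GcpPrior(surveyed_xyz, /*sigma=*/0.02)),
//       new ceres::HuberLoss(1.0),
//       point3d.data());
```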

AIBluefisher (Owner) commented:

That sounds very interesting. I'm also interested in improving accuracy with GCP constraints, and I'd love to learn more about how GCPs are used.

An online chat is good, and I'm available this weekend. Maybe we can use Skype or other apps for this chat.

AbegNext (Author) commented:

@AIBluefisher sounds good. Do you have an email address I can send you the invite?


namibj commented May 15, 2020

@AbegNext Sorry about being so late to the party: I've had an eye on EGSfM for almost a year now (the only record I could easily find is that I starred it on May 30, 2019).

I'd like to join in, likely mostly just to listen. You're welcome to contact me via the email in my GitHub profile, or via Keybase.
If I have something to contribute to the discussion, I'd be happy to do so if you want me to. I promise I won't go into the depth I go into below unless asked.
Feel free to read the rough overview below of what pushed me here, including some numbers to put it into perspective.

I want to get a pipeline ready to handle scenes with very fine detail: specifically, high-resolution auto-guided drone recordings of areas as large as 100 m × 100 m, captured with an illumination setup that makes 100 µs global-shutter exposures feasible (expected at 30-60 Hz, depending on the overlap between successive frames). This has some impact on the minimum reasonable width of the zig-zag (overlapping) scan pattern.
Acceleration limits imposed by the accuracy targets demand a minimum flight duration of about 3-5 s between direction reversals, depending on how much the flight controller's precision degrades with increased acceleration.

I expect up to single-digit millions of frames with a typical/targeted overlap of 10-20, at 5-10 MP resolution (RGGB Bayer or monochrome). Most overlaps occur within three groups of ±5 successive frames: the first centered on the frame itself, and the other two centered at mirrored offsets that oscillate in a smoothed-sawtooth pattern between 0 and x, where x is in the hundreds but relatively constant over a sequence in the 4-5-digit frame-count range.
So typically 10-500 lines of 50-500 frames each, scanned in a zig-zag pattern.

I'm well aware of the scale, and of the amount of data this implies. There is a reason I expect to have to rewrite a suitable implementation of a floating-scale surface reconstruction algorithm, modified to work out-of-core as far as the point cloud is concerned. I want to push the limits. I believed that to be feasible a year ago, and my opinion hasn't changed.

AIBluefisher (Owner) commented:

> @AIBluefisher sounds good. Do you have an email address I can send you the invite?

rainychen@pku.edu.cn.
