
Feature detection, description, and matching to assist stitching (complex) #110

Open
jywarren opened this issue Oct 21, 2018 · 15 comments
@jywarren
Member

This demo was pretty impressive:

https://inspirit.github.io/jsfeat/sample_orb.html

[image: screenshot of the jsfeat ORB matching demo]

It'd be really neat to prototype a feature where, when dragging an image, any image that roughly overlaps it (say, whose distance to center is less than the NW-SE corner span of the dragged image?) has this matcher run against it with a relatively high threshold, and a couple of lines are drawn on screen to indicate possible matches, to assist someone stitching.

Docs: https://inspirit.github.io/jsfeat/#features2d

Overall library: https://github.com/inspirit/jsfeat
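The overlap heuristic described above could be sketched roughly like this; the function names and the center-based `{x, y, width, height}` image shape are assumptions for illustration, not MapKnitter's actual API:

```javascript
// Rough candidate filter: only run the feature matcher against images whose
// center lies within the dragged image's NW-SE diagonal span.
// `img` objects here are hypothetical center-based boxes: {x, y, width, height}.

function diagonalSpan(img) {
  // Distance from the NW corner to the SE corner.
  return Math.hypot(img.width, img.height);
}

function roughlyOverlaps(dragged, other) {
  const centerDist = Math.hypot(dragged.x - other.x, dragged.y - other.y);
  return centerDist < diagonalSpan(dragged);
}

// Pick candidate images worth running feature matching against.
function overlapCandidates(dragged, images) {
  return images.filter((img) => img !== dragged && roughlyOverlaps(dragged, img));
}
```

A coarse filter like this keeps the expensive descriptor matching off distant image pairs while dragging.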

@jywarren
Member Author

I would guess green is a 'solid' match?

@rexagod
Member

rexagod commented Feb 19, 2019

@jywarren The demo site doesn't seem to work on my local machine (further investigation revealed that the server returns a 404 every time). Is there any other JS library for this purpose that could provide a little more insight into what we could implement? I did consider tracking.js, but dropped that idea after learning that it's no longer maintained and its builds are failing.

I'm really eager to do this, and with the proper resources, this shouldn't take long! What do you think?

@jywarren
Member Author

Oh hmm, shouldn't it be running from the gh-pages branch of that repository? I had gotten it to work when I posted this; has any code changed?

@rexagod
Member

rexagod commented Feb 19, 2019

There's a missing resource I discovered when ngrok-ing it:

GET /[object%20MediaStream] 404 8.771 ms - 161
  • There seems to be an unhandled promise exception (maybe someone in a bit of a rush forgot to catch the errors?).
  • The code looks intact since '17.
  • Upon serving the HTML over HTTPS, I got a WebRTC is not defined error.

@jywarren Just to confirm: as of now, is this running solidly on your local machine?

@jywarren
Member Author

Hmm. I'm not sure... it's possible I downloaded it and ran it locally back then. One clue could be the changes in WebRTC code that forced us to make changes in other repos -- like publiclab/infragram#45 -- but it doesn't look like the same error, maybe?

I think that error with MediaStream indicates that it's trying to grab an image from the WebRTC media stream API but something is going wrong.

@jywarren
Member Author

There have been new commits since gh-pages was last published: https://github.com/inspirit/jsfeat/commits/master . Could those fix this?

@jywarren
Member Author

Aha - I think it is the same WebRTC error! See how we fixed it on this line: https://github.com/publiclab/infragram/pull/46/files#diff-6e7a2e9f1b12926b7586585e4a66c837R87

video.src needs to become video.srcObject!
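For reference, a hedged sketch of that fix (the helper name is a placeholder, not the infragram code): assigning a MediaStream to `video.src` stringifies it, producing exactly the bogus `GET /[object%20MediaStream]` request seen in the 404 above, whereas `srcObject` takes the stream directly.

```javascript
// Attach a getUserMedia stream to a <video> element.
// Setting video.src = stream stringifies the MediaStream ("[object MediaStream]"),
// which is what caused the 404; srcObject accepts the stream object itself.
function attachStream(video, stream) {
  if ('srcObject' in video) {
    video.srcObject = stream;                 // modern browsers
  } else {
    video.src = URL.createObjectURL(stream);  // legacy fallback
  }
}
```

Usage in a browser would look like `navigator.mediaDevices.getUserMedia({ video: true }).then((s) => attachStream(videoEl, s));`.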

@rexagod
Member

rexagod commented Mar 5, 2019

@jywarren We are making some progress! This took a lot more work than expected, and I still need to adjust some calibrations and thresholds (also, a lot of reading to do in order to build a solid understanding of the whole jsfeat codebase, since I didn't find the docs to be that detailed, to be honest) and maybe add some custom utility functions that suit our codebase best! Once all this is solid, I'll jump to integrating it here.

What do you think?

screenshot from 2019-03-06 01-51-04

(Green dots are matches.)

@jywarren
Member Author

jywarren commented Mar 5, 2019

This is amazing!!!! I think the initial implementation could just be drawing the lines between the two images. We can think about more complex displays later, and just get these basics right. Does that make sense? This is really fabulous!

@rexagod
Member

rexagod commented Mar 7, 2019

@jywarren Below is a list of features/modifications that have been incorporated into this since last night.

  • Completely removed all unnecessary components (video modules, unwanted utils, etc.).
  • Increased sharpness (reduced Gaussian blur) for the inset image to get more accurate results while eliminating outliers (think of this as a balance between the two: outliers and coverage).
  • Calibrated thresholds on the basis of automated test runs (pull all images into an array, pass each pair to the module, then log the number of matches for different eigen/laplacian thresholds) on a few dozen images, and finally took the best results. As of now, I've set those as the initial params for this module.
  • Converted the refined ORB codebase into an independent Public Lab module. It now takes two images, runs the algorithm against them, and logs the results!
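As a rough illustration of the matching step inside such a module (a sketch of the general technique, not the actual PL module or jsfeat's internal descriptor layout, which stores descriptors in `matrix_t` rows), a brute-force Hamming matcher over binary descriptors like ORB's might look like this:

```javascript
// Brute-force matcher for binary descriptors (e.g. 32-byte ORB descriptors),
// represented here as plain byte arrays for illustration.

// Count differing bits between two equal-length byte arrays.
function hammingDistance(a, b) {
  let dist = 0;
  for (let i = 0; i < a.length; i++) {
    let v = a[i] ^ b[i];
    while (v) { dist += v & 1; v >>= 1; }
  }
  return dist;
}

// For each descriptor in setA, find its nearest neighbour in setB,
// keeping the pair only if the distance is within `threshold` (a "good match").
function matchDescriptors(setA, setB, threshold) {
  const matches = [];
  setA.forEach((da, i) => {
    let best = -1, bestDist = Infinity;
    setB.forEach((db, j) => {
      const d = hammingDistance(da, db);
      if (d < bestDist) { bestDist = d; best = j; }
    });
    if (bestDist <= threshold) matches.push({ a: i, b: best, dist: bestDist });
  });
  return matches;
}
```

The threshold here plays the same role as the calibrated values mentioned above: lower values keep only tight "good matches" (the green lines), while looser values admit more keypoint matches (the green dots) at the cost of outliers.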

> I think the initial implementation could just be drawing the lines between the two images.

You can check out the beta (based on above) here!! 🎉

Matches above are indicated by the keypoints (the green dots), while the "good matches" are indicated by the green lines.

Also, I'm looking forward to taking this (Microscope live stitching, auto-stitch in MapKnitter (magnetic attraction)) along with LDI (MapKnitter UI) as my GSoC project, since I find both of these very interesting topics to work on. I'd like to start drafting a proposal that includes these two -- would that be okay?

Thanks!

@jywarren
Member Author

jywarren commented Mar 7, 2019 via email

@jywarren
Member Author

Noting that this might make it simpler to decide which other DistortableImage instances to detect common points with, at least as a pattern: https://github.com/MazeMap/Leaflet.LayerGroup.Collision
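A minimal sketch of that pattern, assuming simple axis-aligned `{minX, minY, maxX, maxY}` bounds rather than Leaflet's actual `LatLngBounds` API: collision-style bounds checks could prune which DistortableImage pairs are even worth running the matcher on.

```javascript
// Hypothetical pattern inspired by bounding-box collision plugins:
// only attempt feature matching on image pairs whose bounds intersect.
// Bounds here are plain {minX, minY, maxX, maxY} boxes, not Leaflet objects.

function boundsIntersect(a, b) {
  return a.minX <= b.maxX && b.minX <= a.maxX &&
         a.minY <= b.maxY && b.minY <= a.maxY;
}

// Return index pairs [i, j] of images whose bounds overlap.
function matchablePairs(bounds) {
  const pairs = [];
  for (let i = 0; i < bounds.length; i++) {
    for (let j = i + 1; j < bounds.length; j++) {
      if (boundsIntersect(bounds[i], bounds[j])) pairs.push([i, j]);
    }
  }
  return pairs;
}
```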

@rexagod
Member

rexagod commented May 1, 2019

@jywarren, things are really starting to align now!

I did a bit of research on the engaging "headless web" trend that has been going on for a while now, and why it's considered an apt approach to testing and automation. It made me realize the extent to which we can improve our testing: gather snapshots, scale up cross-browser tests, and automate in different environments (simple Docker containers that'll take a few hours at most to initialize) to log (or generate heaps/snaps) whenever there's a breaking change, see if everything is working fine, and easily pipe that to plotsbot -- and this is all just off the top of my mind! Such potential!

I'd like you to have a look over here whenever you have some time to spare. It's a really nice place to read about the headless web, or you could jump straight to the newer array of tools developed for the same purpose, each with their own pros and cons, which I'd be more than happy to discuss, along with which of these we should consider implementing in this module.

What do you think?

[animation: orb-cli demo]

Above is a representation of the aforementioned "headless" approach, which I used to fetch an ORB stats object that updates every second, implementing a CLI (non-GUI) way to gather data that the UI methods can then render.

/cc @justinmanley @sashadev-sky

@jywarren
Member Author

Hi @rexagod, I'm sorry I haven't had time to check in here. Will do my best to provide some feedback today!

@jywarren
Member Author

Oh wait, ok. This is rad. I love the idea that it would have a CLI option, and I'll read that article too, thank you!!!
