Feature detection, description, and matching to assist stitching (complex) #110
I would guess green is a 'solid' match?
@jywarren The demo site doesn't seem to work on my local machine (further investigation revealed that the server returns an error; see below). I'm really eager to do this, and with the proper resources, this shouldn't take long! What do you think?
Oh hmm, shouldn't it be running from the gh-pages branch of that repository? I had gotten it to work when I posted this; has any code changed?
There's a missing resource I discovered on ngrok-ing it: `GET /[object%20MediaStream] 404 8.771 ms - 161`
@jywarren As of now, just to confirm, is this running solid on your local?
Hmm. I'm not sure... it's possible I downloaded it and ran it locally back then. One clue could be the changes in WebRTC code that forced us to make changes in other repos -- like publiclab/infragram#45 -- but it doesn't look like the same error, maybe? I think that error with MediaStream indicates that it's trying to grab an image from the WebRTC media stream API but something is going wrong.
There've been new commits since gh-pages was last published: https://github.com/inspirit/jsfeat/commits/master. Could those fix this?
Aha - I think it is the same WebRTC error! See how we fixed it on this line: https://github.com/publiclab/infragram/pull/46/files#diff-6e7a2e9f1b12926b7586585e4a66c837R87
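The `[object MediaStream]` 404 is consistent with assigning a `MediaStream` directly to `video.src`, which coerces the stream to a string that the browser then requests as a URL. A minimal sketch of the fix pattern (the helper name is illustrative, not from the infragram code):

```javascript
// Attach a camera MediaStream to a <video> element.
// Older code did `video.src = stream`, which stringifies the stream to
// "[object MediaStream]" and triggers a 404 request for that path.
function attachStream(video, stream) {
  if ('srcObject' in video) {
    video.srcObject = stream; // modern API: attach the stream object directly
  } else {
    video.src = URL.createObjectURL(stream); // legacy fallback for old browsers
  }
  return video;
}
```

The `srcObject` feature check keeps the fallback for browsers that predate the modern API.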
@jywarren We are making some progress! This took a lot more work than expected, and I still need to adjust some calibrations and thresholds (also, a lot of reading to do in order to build a solid understanding of the whole jsfeat codebase, since I didn't find the docs to be that detailed, tbh) and maybe add some custom util functions that suit our codebase best! Once all this is solid, I'll jump to integrating it here. What do you think? (Green dots are matches.)
This is amazing!!!! I think the initial implementation could just be drawing the lines between the two images. We can think about more complex displays later, and just get these basics right. Does that make sense? This is really fabulous! |
@jywarren Below is a list of features/modifications that were incorporated into this since last night.
- Completely removed all the unnecessary components (video modules, unwanted utils, etc.).
- Increased sharpness (reduced Gaussian blur) for the inset image to get more accurate results while eliminating outliers (think of this as a balance between the two -- outliers and coverage).
- Calibrated thresholds on the basis of automated test runs (pull all images into an array, pass each pair to the module, then log the number of matches for different eigen/lap thresholds) on a few dozen images, and finally took the best results. As of now, I've set the best ones as the initial params for this module.
- Converted the refined ORB codebase into an independent PL module. It now takes two images, runs the algorithm against them, and logs the results!

You can check out the beta (based on the above) here!! 🎉 Matches above are indicated by the keypoints, i.e., the green dots, while the "good matches" are indicated by the green lines. Also, I'm looking forward to taking this (Microscope live stitching, auto-stitch in MapKnitter (magnetic attraction)) along with LDI (MapKnitter UI) as my GSoC project, since I find both of these to be very interesting topics to work on, and I'd like to start drafting a proposal that includes these two -- would that be okay? Thanks!
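The threshold calibration described in this thread (sweep eigen/lap thresholds across image pairs, keep the best-scoring combination) can be sketched as a small harness. `countMatches` here is a stand-in for the real jsfeat-based matcher, not the module's actual API:

```javascript
// Hypothetical calibration sweep: for each (eigen, lap) threshold pair,
// sum the match counts over all image pairs and keep the best combination.
function calibrate(imagePairs, eigenValues, lapValues, countMatches) {
  let best = { eigen: null, lap: null, total: -1 };
  for (const eigen of eigenValues) {
    for (const lap of lapValues) {
      // Total matches across the test set for this threshold combination.
      const total = imagePairs.reduce(
        (sum, [a, b]) => sum + countMatches(a, b, { eigen, lap }), 0);
      if (total > best.total) best = { eigen, lap, total };
    }
  }
  return best; // use best.eigen / best.lap as the module's initial params
}
```

In practice you would also want to weigh match *quality* (e.g. good-match ratio), not just raw counts, since very permissive thresholds inflate totals with outliers.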
Wow, this is very impressive!
Yes, that would be a fine proposal. Thank you!
This is awesome, great work!
Noting that this might make it simpler to decide which other DistortableImage instances to detect common points with, at least as a pattern: https://github.com/MazeMap/Leaflet.LayerGroup.Collision
@jywarren, things are really starting to align now! I did a bit of research on the engaging "headless web" trend that has been going on for a while now, and why it is considered an apt approach to testing and automation. It made me realize the extent to which we can improve our testing: gather snapshots, scale up cross-browser tests, and automate in different environments (simple Docker containers that'll take a few hours at most to initialize) to log (or generate heaps/snaps) whenever there's a breaking change, see if everything is working fine, and easily pipe that to plotsbot -- and this is all just off the top of my head! Such potential!

I'd like you to have a look over here whenever you have some time to spare. It's a really nice place to read about the headless web, or you could jump straight to the newer array of tools developed for the same purpose, each having their pros and cons, which I'd be more than happy to discuss, along with which of these we should consider implementing in this module. What do you think?

Above is a representation of the aforementioned "headless" approach that I incorporated to fetch a stats ORB object that updates every second, thus implementing a CLI (non-GUI) approach to gathering data which the UI methods can render upon.
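The CLI-side "stats object updated every second" idea can be sketched as a small poller. The stats field names (`keypoints`, `matches`, `goodMatches`) and the helpers are illustrative assumptions, not the actual module interface:

```javascript
// Format one ORB stats snapshot as a single log line for a CLI sink.
function formatStats(s) {
  return `[orb] keypoints=${s.keypoints} matches=${s.matches} good=${s.goodMatches}`;
}

// Poll a stats source on an interval (default 1 s) and pipe each formatted
// line to a sink -- e.g. a line written to stdout, or a webhook to plotsbot.
function pollStats(getStats, sink, intervalMs = 1000) {
  const timer = setInterval(() => sink(formatStats(getStats())), intervalMs);
  return () => clearInterval(timer); // returned function stops the polling
}
```

A headless browser runner (e.g. Puppeteer) could supply `getStats` by evaluating the ORB stats object inside the page, keeping the collection fully non-GUI.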
Hi @rexagod, I'm sorry I haven't had time to check in here. Will do my best to provide some feedback today!
Oh wait, ok. This is rad. I love the idea that it would have a CLI option. And I'll read that article too, thank you!!!
This demo was pretty impressive:
https://inspirit.github.io/jsfeat/sample_orb.html
It'd be really neat to prototype a feature where when dragging an image, any image that roughly overlaps it (say, distance to center is < NW-SE corner span of dragged image?) has this matcher run against it with a relatively high threshold... and a couple lines are drawn on the screen to indicate possible matches to assist someone stitching.
Docs: https://inspirit.github.io/jsfeat/#features2d
Overall library: https://github.com/inspirit/jsfeat
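The rough-overlap heuristic proposed above (run the matcher when another image's center is closer than the dragged image's NW-SE corner span) can be sketched with plain `{x, y}` points standing in for Leaflet layer corners:

```javascript
// Heuristic overlap test from the proposal: another image is a candidate
// for matching if the distance between the two image centers is less than
// the dragged image's NW-SE diagonal span.
function roughlyOverlaps(dragged, other) {
  const span = Math.hypot(dragged.se.x - dragged.nw.x,
                          dragged.se.y - dragged.nw.y);
  const center = (img) => ({
    x: (img.nw.x + img.se.x) / 2,
    y: (img.nw.y + img.se.y) / 2,
  });
  const a = center(dragged);
  const b = center(other);
  return Math.hypot(b.x - a.x, b.y - a.y) < span;
}
```

On drag, each image passing this cheap test could be fed to the jsfeat matcher with a high threshold, and any resulting good matches drawn as guide lines for the person stitching.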