jigsaw solution for framing cameras and linescan cameras in same group #3369
Comments
Are you saying that if I had a list of framing camera images and linescan camera images, I could use IPCE to process them all together and create a single bundle adjustment solution?
Yes, it may be tedious to set up if you have a few hundred images, but you can do it.
So the capability exists in IPCE, but not in command-line jigsaw. Good to know. With that knowledge, I would sharpen this issue to be: Dealing with more than one 'kind' of imager (framing camera, linescan, etc.) is possible in IPCE, but not from the command line with jigsaw. It would be nice if jigsaw also had that capability.
Here are the use cases we are considering for this:
We're going to use a PVL config file to control this because it is far too complex for the command-line interface. The config file will allow for setting the following jigsaw arguments on a per-observation basis:
This is designed to match the interface of the already-implemented BundleObservationSolveSettings object, which sets the per-observation settings inside of jigsaw. The challenging part of this config file is determining how to specify which settings are used for which images. As a first MVP, we will have one object that contains the default settings and then additional objects that are image/observation-specific settings. After that we can investigate things like checking the values/existence of keywords in the image labels or from things like camstats.
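A hedged sketch of what the default-plus-overrides design described above might look like in PVL. The object and keyword names here are guesses for illustration (CAMSOLVE, SPSOLVE, and TWIST mirror existing jigsaw parameter names), not the final interface:

```
Object = SolveSettings
  # Defaults applied to any observation without a specific entry below
  Object = Default
    CAMSOLVE = ANGLES
    TWIST    = YES
    SPSOLVE  = NONE
  EndObject

  # Overrides for one specific observation (keys are illustrative)
  Object = Observation
    ObservationId = "SomeObservation"
    CAMSOLVE      = ACCELERATIONS
    SPSOLVE       = POSITION
  EndObject
EndObject
```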
@jessemapel, I'm not completely sure (because I haven't tried it yet), but I thought this capability in IPCE would allow for different jigsaw arguments in the case of a different mission and different sensor, but the same type of sensor - for instance, Galileo SSI (framer) and Voyager (framer). Is this scenario captured in case I. iii., Galileo SSI & Cassini ISS? I think so, I would just like a confirmation. Thanks!
I thought Ken had something already implemented in jigsaw to accept a PVL for this functionality, but never exposed it to the user by way of new parameters. I vaguely recall the possibility of having the PVL point to a list of images that each set of solve parameters would work on. Clunky, and the whole PVL idea is a little cumbersome, but it would get the job done. Is the old work not useful under the circumstances? I just snuck a peek at some of Ken's old work. It looks like he is driving the process by using a SensorGroup in the PVL: /work/projects/progteam/kedmundson/jigsaw_tests/multiple_sensors/ObservationMode/LO/APOLLO_LRONAC_LO.pvl Maybe you already know this and found all of this too outdated to use. Sorry if that is the case. Wouldn't want you to reinvent the wheel if not necessary.
@lwellerastro We found Ken's old code from this and some test data from him. It looks like the PVL interface was created but then thrown away, and a new interface was created to support the current work in IPCE. Unfortunately, Ken's old PVL interface keys off of the instrument ID and the new IPCE interface keys off of the observation. So, we will need to add some extra code in jigsaw to convert from instrument IDs (or a more generic selection option) to observations.
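The conversion step described above could be sketched roughly as follows. All names here are illustrative, not the real ISIS API: the idea is just that instrument-keyed settings (Ken's old PVL interface) get resolved into the observation-keyed settings the current BundleObservationSolveSettings code expects.

```python
def settings_for_observations(images, settings_by_instrument, default):
    """Resolve instrument-keyed solve settings to observation-keyed settings.

    images: iterable of (observation_id, instrument_id) pairs read from labels.
    settings_by_instrument: dict mapping an instrument ID to a settings dict.
    default: settings dict used when no instrument-specific entry exists.
    """
    resolved = {}
    for obs_id, inst_id in images:
        chosen = settings_by_instrument.get(inst_id, default)
        # Every image in an observation must share one settings object;
        # report a conflict instead of silently overwriting.
        if obs_id in resolved and resolved[obs_id] is not chosen:
            raise ValueError("conflicting settings for observation " + obs_id)
        resolved[obs_id] = chosen
    return resolved
```

Raising on conflict (rather than last-writer-wins) seems safer here, since two images of one observation mapping to different settings would make the bundle ill-defined.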
Doh! You type faster than I do - I just added to my previous comments. Thanks for the information @jessemapel !
Yes, this would be supported.
After looking at Ken's work, instead of using per-image settings, we're going to go with per-sensor settings. Here's the Apollo Metric, LRO NAC, LO config file Lynn mentioned above:
We could eventually allow for selections based on something other than instrument ID, as this doesn't support same-mission, same-sensor settings. That gets into adding logic, though, and that type of stuff is very challenging. See isisminer and findfeatures.
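The "adding logic" step mentioned above, selecting settings by arbitrary label keywords rather than instrument ID alone, might look something like this. The keyword names, rule format, and values are all hypothetical; this is not an existing jigsaw or ISIS interface:

```python
def matches(label, criteria):
    """True if every keyword/value pair in criteria appears in the label."""
    return all(label.get(key) == value for key, value in criteria.items())

def settings_for(label, rules, default):
    """Return the settings of the first rule whose criteria match the label.

    label: flat dict of image-label keyword -> value.
    rules: ordered list of (criteria_dict, settings_dict) pairs.
    default: settings used when no rule matches.
    """
    for criteria, settings in rules:
        if matches(label, criteria):
            return settings
    return default

# Rules are checked in order, so the most specific one goes first:
# same mission and sensor, distinguished by a second label keyword.
rules = [
    ({"InstrumentId": "ISSNA", "SummingMode": 2}, {"camsolve": "angles"}),
    ({"InstrumentId": "ISSNA"}, {"camsolve": "accelerations"}),
]
```

First-match-wins ordering is one simple way to resolve overlapping rules without a full expression language like the ones in isisminer or findfeatures.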
Here's some explanation about why Ken's test code was removed prior to merging: https://github.com/USGS-Astrogeology/ISIS3/blob/dev/isis/src/control/apps/jigsaw/jigsaw.xml#L350
@paarongiroux IPCE results to compare against: I bundle adjusted
Here's what the config file should look like:
I've got two testing networks ready to go now. The MRO CTX and Viking network is located at The Viking/CTX network is fairly good. The Pluto network is kind of meh. The MVIC image has different lighting compared to the LORRI images, so pointreg had a lot of trouble. I went in and hand-corrected all of the points. It could probably do with some additional hand-added points too, but we'll see how it bundles first.
If you're interested, I have some test cases - one that appears to be Ken's star case involving Lunar Orbiter, Apollo Metric, and LROC NAC, and one that involves Themis IR and Viking Orbiter. I believe the lunar case is one of Ken's test cases, but the lists in his directory (/work/projects/progteam/kedmundson/jigsaw_tests/multiple_sensors/) point to data that have been moved. Sometime in 2018 I gathered the images and network, most likely for helping on the IPCE project.
The images are under the same directory. This should be a good network - Tammy Becker likely created it. It would be advisable to set observations=true for the Lunar Orbiter data. I copied one of Ken's config files into my area (APOLLO_LRONAC_LO.pvl from Ken's ObservationMode directory) so you can see how he was solving for things. Ken's test directories include a number of cases and PVLs to go with them.
The images are under the same directory. This is also a good network. As the name of the directory indicates, this is a small network.
I believe this network has some ground (constrained) points in it as well. No need to solve for spacecraft. If you need help locating other test cases please let me know, as there is a chance I have something or know where to find it.
@lwellerastro Thanks! That should be plenty of test cases.
This will be available in the 4.1 release.
Yay! Thank you.
Description
Allow images from both framing cameras and linescan cameras in the FROMLIST to result in a solution.
My understanding is that jigsaw cannot perform a single solution when both LORRI (framing camera) and MVIC (linescan) images are in the mix. It can solve for all-framing camera images, or all-linescan camera images. This results in a less-than-ideal control solution (detailed in the Schenk et al. topo papers for Pluto and Charon). There is some circumstantial evidence (old e-mails and remembered conversations from a few years ago) that this capability exists in some developmental versions of jigsaw (perhaps in some abandoned branch of IPCE), so perhaps this effort doesn’t need to start from scratch.