User stories transformations #84

User stories: coordinateTransformations
This is a collection of example applications / user stories for coordinateTransformations from #57. These examples should be helpful for the next proposal, which will cover more advanced transformations.

Story 1: Basic (@jbms)
(Editorial comment: this will be feasible with 0.4)
Story 2: Prediction with cutout (@jbms)
(Editorial comment: also feasible with 0.4 using translation; see the metadata sketch below this list)
Story 3: affine registration (@jbms)
Story 4: non-rigid registration (@axtimwalde)
Story 5: registration of slices (@satra)

Feel free to collect more examples in this thread :).
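To make the 0.4 editorial comments concrete, here is a minimal sketch of per-dataset multiscales metadata expressing Story 2's cutout as a translation. The path, axis order, and numbers are invented; only the scale/translation structure follows the released 0.4 spec.

```python
# Sketch of OME-NGFF 0.4 multiscales dataset metadata for Story 2's cutout.
# Paths and values are hypothetical.
multiscales_dataset = {
    "path": "0",
    "coordinateTransformations": [
        # physical voxel size (z, y, x), e.g. in micrometers
        {"type": "scale", "scale": [0.5, 0.1, 0.1]},
        # offset of the cutout's origin within the parent image
        {"type": "translation", "translation": [10.0, 256.0, 256.0]},
    ],
}
```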
Story 6: stitching (related to story 5)
Story 7: lens correction
Story 8: multiscales
Story 10: Specialized metadata for different applications (@neptunes5thmoon)
This issue has been mentioned on Image.sc Forum. There might be relevant details there: https://forum.image.sc/t/ome-zarr-chunking-questions/66794/38
Story 11: Reordering z-slices (@tischi)
This was initially discussed here.
This issue has been mentioned on Image.sc Forum. There might be relevant details there: https://forum.image.sc/t/ome-zarr-chunking-questions/66794/41
Story 12: Stacking
This issue has been mentioned on Image.sc Forum. There might be relevant details there: https://forum.image.sc/t/ome-ngff-community-call-transforms-and-tables/71792/1
Story 13: Thin-plate spline transformations
Similar to Story 4, but the thin-plate-spline parameters are stored in ome-zarr instead of the displacement field. That should save storage space and allow transforming annotations that are connected to the image but don't align to the voxel grid.

Story 14: Mesh-affine transformations
Extends Stories 5 and 6. A section tile is divided into a triangle mesh (i.e. non-overlapping triangles), and each triangle gets an affine transformation attached.
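Regarding Story 13, here is a minimal sketch of the storage argument, assuming scipy is available. The landmark pairs below are invented; the point is that they fully define the warp, so annotations (e.g. point coordinates) can be transformed without any voxel grid or dense displacement field:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical landmark pairs (source -> target) in physical coordinates.
src = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0], [5.0, 5.0]])
dst = src + np.array([[0.3, -0.2], [0.1, 0.4], [-0.5, 0.0], [0.2, 0.2], [1.0, -1.0]])

# The landmark pairs are all that needs to be stored: they fully
# determine the thin-plate spline.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Annotations (e.g. cell centroids) are transformed directly.
annotations = np.array([[2.5, 7.1], [8.0, 3.3]])
warped = tps(annotations)  # shape (2, 2)
```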
Thanks @normanrz! Both are important additions. I'd suggest waiting until the next revision to deal with these, largely because they depend on storing points (landmarks for (13), a mesh for (14)), and that's not merged into the spec yet.
@normanrz Regarding the triangle mesh: storing the affine transformations is not necessary; only the source and target coordinates of all vertices are required, plus three vertex indices per triangle. This is much more compact than storing the 6-value affine explicitly for each triangle, because each vertex is typically shared by 6 triangles. A sketch of recovering a triangle's affine from its vertex pairs follows below.
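A minimal sketch (with invented vertex coordinates) of why the per-triangle affines never need to be stored: three source/target vertex pairs determine the 2D affine exactly.

```python
import numpy as np

def triangle_affine(src, dst):
    """Recover the 2D affine that maps three source vertices onto three
    target vertices. src and dst are (3, 2) arrays of vertex coordinates."""
    # Homogeneous source coordinates: each row is [x, y, 1].
    A = np.hstack([src, np.ones((3, 1))])
    # Solve A @ M.T = dst for the 2x3 affine matrix M.
    return np.linalg.solve(A, dst).T  # [[a, b, tx], [c, d, ty]]

# Invented vertex pairs for one triangle of the mesh.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[0.5, 0.5], [1.5, 0.7], [0.4, 1.6]])
M = triangle_affine(src, dst)
```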
Story 15: Oblique plane microscopy / skewed acquisition + "on-the-fly" deskew
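As an illustration of what an on-the-fly deskew could look like: a sketch of a 3D shear matrix mapping skewed acquisition coordinates to deskewed sample coordinates. The acquisition parameters below are invented, and real setups differ in axis conventions; the point is that a viewer applying this transform at display time never has to rewrite the voxel data.

```python
import numpy as np

# Invented acquisition parameters: stage step between slices (um), lateral
# pixel size (um), and the angle between the light sheet and stage axis.
dz, dx, theta = 0.5, 0.104, np.deg2rad(30.0)

# Deskew as a shear of x by z: every slice is shifted along x in
# proportion to its z index.
shear = dz * np.cos(theta) / dx
deskew = np.array([
    [1.0, 0.0, shear, 0.0],  # x' = x + shear * z
    [0.0, 1.0, 0.0,   0.0],  # y' = y
    [0.0, 0.0, 1.0,   0.0],  # z' = z
    [0.0, 0.0, 0.0,   1.0],  # homogeneous row
])
```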
Story 16: stitching - multi angle acquisition (related to story 5 and 6)
Story 17: Registering multi-position acquisition
Story 18: Correspondence to MRI coordinates (related to 4, 5, 12, and 15, but with details from our existing wb_view implementation)
Stories motivating the possibility to have transformations that vary along non-spatial dimensions (e.g. time or channel):
Story 19: Drift correction
Story 20: Registration / stitching / multi-view reconstruction of timelapses
Story 21: Chromatic correction
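A minimal sketch of Story 19's time-varying translation, assuming numpy and scipy. The drift offsets are invented and would in practice come from, e.g., cross-correlating each frame against a reference frame:

```python
import numpy as np
from scipy.ndimage import shift

def correct_drift(stack, drift):
    """stack: (t, z, y, x) array; drift: (t, 3) per-timepoint offsets in
    pixels. Each timepoint gets its own translation, which is exactly the
    kind of time-dependent transform these stories motivate."""
    return np.stack([shift(stack[t], -drift[t], order=1)
                     for t in range(stack.shape[0])])

# Invented offsets (z, y, x) per timepoint.
drift = np.array([[0.0, 0.0, 0.0],
                  [0.0, 1.2, -0.7],
                  [0.0, 2.5, -1.3]])
corrected = correct_drift(np.random.rand(3, 4, 32, 32), drift)
```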
Story 22: Registration of multiplexed imaging
Goal: creating a multi-channel (3D or 2D) image from multiple rounds of staining/acquisition.
Story 23: Stitching and registration of tiled and multiplexed imaging
Combination of Story 6 and Story 22.
(This also requires illumination/shading correction to be applied in step 3, but that correction is an intensity transform rather than a spatial coordinate transform, so it is not really in scope here. Just thought I'd mention it to get people thinking about it.)
Just a comment - from an outside perspective (I work mostly with MRI), the prospect of having the transform and blending stage of stitching happen on the fly during display is a bit scary. Specifically, if the transforms and raw tile data are stored in a single "image file", I worry about the effects that may have on the library API (what sample grid should the display logic use for such a file? Does the library handle resampling rotated/shifted tiles to a common grid, or just give up and throw the raw tile captures and their transforms at the application? How is lens aberration encoded or otherwise handled? What resampling algorithms are available (and in what color space(s)), and how is multithreading handled?), and on display performance (how does the library determine which pieces are within a display region - a linear scan over potentially thousands? How well does resampling every frame during panning perform, and does it preserve things like spline coefficients between frames (and if so, how do you control the memory usage)? How do you handle the image pyramid?).

Basically, stitching (which to my mind is just preprocessing to deal with a limitation of microscopes) isn't something I'd want to have to think about (or have the code spending time on) when just trying to view a histology image alongside other datatypes; I'd want the image file to be as simple as possible (so the file-reading API and other considerations can be simple). This is one reason we generally keep registration-type transforms out of the image files themselves: if you don't want to rewrite the image data for IO reasons, writing the transform(s) to a separate file is a simple solution and doesn't complicate the image file internals. In practice, for MRI data, once we have all the transforms, we do a final resampling from the original data to the desired grid before processing/display.

If "raw tiles and stitching transforms" are a desired feature for the "basic image" file format, then perhaps there should be a file subtype that restricts the kinds of allowed complications (and perhaps even requires the presence of an image pyramid all the way down to, say, 1000 samples or fewer along each dimension), which would enable a set of simpler API functions that are well-suited to display purposes?
Hi @coalsont, thanks for your comment; I appreciate your feedback and point of view.

I get it. What I can try to make clearer in the spec is that you are welcome to keep working the way you are, and the spec will not stop you. I.e. you're free to ignore the items on your list of very good questions and considerations; viewers that try to support on-the-fly transformations will have to tackle those. The downside is that valid ome-zarrs could look different in a viewer that supports on-the-fly transforms than in another viewer that doesn't. This is totally fine, because I'd expect the latter viewer to communicate "I found an extra transformation but can't apply it".

What I'm confident you will still find valuable: methods that produce transformations still have to store them somehow, and standardization there will improve tool interoperability. I've dumped lots of time into converting files storing affines from one format to another 😣

Tracking the provenance of image data is very important. In this context, after you've done that final resampling, I sure would like to track 1) what was the original image that was resampled, and 2) what transformation(s), if any, were applied, in a structured, standard way (i.e. not in my brain, or even in a lab notebook). The spec tracks that information.

Re: your last paragraph, something that I'll clarify is how a user or consuming software can know whether a particular dataset is "basic" or not.

Thanks again for posting here, and be in touch if I've not been clear or you're just not convinced.

p.s. I did human MRI work during grad school, so I was familiar with how analysis was done in that domain (well, a decade ago).
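Purely as an illustration of the kind of provenance record meant above (this is NOT actual spec syntax; every key and path is invented):

```python
# Hypothetical provenance record written next to a resampled image:
provenance = {
    "source_image": "raw/section_003.zarr",  # 1) what was resampled
    "transformations": [                      # 2) what was applied
        {"type": "affine",
         "affine": [[1.0, 0.0, 0.0, 12.3],
                    [0.0, 1.0, 0.0, -4.1],
                    [0.0, 0.0, 1.0, 0.0]]},
    ],
}
```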
Stories motivating bounded (projective) coordinate spaces
Story 24: Cropping
Story 25: Kymographs
Story 26: Curvilinear tube projection
Story 27: Mesh projection/surface quantification