User stories transformations #84

Open · constantinpape opened this issue Jan 27, 2022 · 20 comments

@constantinpape

User stories: coordinateTransformations

This is a collection of the example applications / user stories of coordinateTransformations from #57. These examples should be helpful for the next proposal that will cover more advanced transformations.

Story 1: Basic (@jbms)

  1. Microscope interfacing software reads data from microscope and writes in ome-zarr format. Axis names and units are hard coded while the scales are set based on the imaging parameters.
  2. User loads data in ImageJ/Neuroglancer and sees correct axis names and scale bars.

(Editorial comment: this will be feasible with 0.4)
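
A minimal sketch of what the acquisition software in Story 1 might write, assuming zarr-python (v2 API) and 0.4-style multiscales metadata; the axis names, units, array shape and voxel sizes below are illustrative, not taken from any real instrument.

```python
import zarr

# Illustrative values: axis names and units are hard-coded by the acquisition
# software, the scale comes from the imaging parameters (voxel size in um).
voxel_size_um = [2.0, 0.5, 0.5]  # z, y, x

root = zarr.open_group("image.ome.zarr", mode="w")
root.create_dataset("0", shape=(100, 1024, 1024), chunks=(16, 256, 256), dtype="uint16")

root.attrs["multiscales"] = [{
    "version": "0.4",
    "axes": [
        {"name": "z", "type": "space", "unit": "micrometer"},
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"},
    ],
    "datasets": [{
        "path": "0",
        "coordinateTransformations": [{"type": "scale", "scale": voxel_size_um}],
    }],
}]
```

A 0.4-aware viewer can then derive axis labels and scale bars from this metadata alone.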

Story 2: Prediction with cutout (@jbms)

  1. User runs CNN on cutout from existing volume in ome-zarr format and writes result as a new OME-Zarr volume, using a translation transformation to indicate the cutout location.
  2. User loads CNN output and original volume in ImageJ/Neuroglancer and sees output correctly aligned to original full-size volume.

(Editorial comment: also feasible with 0.4 using translation)
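
For Story 2, the cutout offset can be recorded (still within 0.4) as a translation that follows the scale in the dataset's coordinateTransformations; the scale and offset values here are made up.

```python
# Illustrative 0.4 "datasets" entry for the CNN output: the translation is the
# physical offset of the cutout origin within the original volume and, in 0.4,
# comes after the scale transformation.
cutout_datasets = [{
    "path": "0",
    "coordinateTransformations": [
        {"type": "scale", "scale": [2.0, 0.5, 0.5]},
        {"type": "translation", "translation": [40.0, 128.0, 256.0]},
    ],
}]
```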

Story 3: affine registration (@jbms)

  1. User runs affine registration between existing volumes A and B in ome-zarr format. After computing affine transform, software modifies metadata of volume B in place to indicate the affine transform.
  2. User loads data volumes A and B in Neuroglancer and sees them correctly aligned.
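
Version 0.4 has no affine transformation type, so Story 3 needs the upcoming proposal; the snippet below is therefore only a hypothetical sketch of how registration software might hold and serialize the result, with made-up matrix values.

```python
import numpy as np

# Hypothetical 3D affine mapping volume B into the space of volume A, written
# as a 3x4 row-major matrix (rotation/scale/shear plus a translation column).
affine_B_to_A = np.array([
    [1.02, 0.01, 0.00, -5.3],
    [0.00, 0.99, 0.03, 12.1],
    [0.01, 0.00, 1.01,  0.7],
])

# Assumed future metadata shape, not valid in 0.4:
transform_metadata = {"type": "affine", "affine": affine_B_to_A.tolist()}
```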

Story 4: non-rigid registration (@axtimwalde)

  1. User performs non-rigid registration between existing volumes A (a canonical atlas) and B in ome-zarr format. BigWarp updates the transformation metadata of volume B; the non-rigid transformation is stored as a displacement field in ome-zarr format.
  2. User loads data volumes A and B in BigDataViewer and sees them correctly aligned.
  3. User overlays segmentations performed on original volume B (Story 2) and sees them correctly aligned.
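
Regardless of how the displacement field ends up being referenced from the metadata, applying one that is stored as a zarr array could look roughly like the sketch below; the array layout (a (3, Z, Y, X) field of voxel offsets) and the file names are assumptions.

```python
import numpy as np
import zarr
from scipy.ndimage import map_coordinates

# Assumed layout: displacement field stored as a (3, Z, Y, X) array holding the
# (dz, dy, dx) offsets, in voxels, to add to each output coordinate of volume B.
field = np.asarray(zarr.open("displacement_field.zarr", mode="r"))
moving = np.asarray(zarr.open("volume_B.zarr", mode="r"))

grid = np.indices(field.shape[1:], dtype=np.float64)      # identity coordinates
warped = map_coordinates(moving, grid + field, order=1)   # trilinear interpolation
```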

Story 5: registration of slices (@satra)

  1. User collects multiple overlapping/non-overlapping slices and stores each in ome-zarr format with appropriate transformations
  2. User collates all slices into a single ome-zarr volume (registering neighboring slices, joining the data, or introducing zeros/nans as necessary).
  3. [Potential] User may want to simply save the transformations necessary to create the single volume on the fly.

Feel free to collect more examples in this thread :).

@bogovicj commented Feb 1, 2022

Story 6: stitching (related to story 5)

  1. User collects multiple overlapping 2d / 3d image tiles that are subsets of a whole
  2. User performs stitching (e.g., with BigStitcher), and needs to track transforms from all tile spaces to world space

Story 7: lens correction

  1. User working with an imaging system estimates its lens distortion as a non-linear 2D transform and stores it with NGFF
  2. User collects a 3D image
  3. User applies the same lens correction transform to all 2d slices of the 3D image

Story 8: multiscales

  1. User has a large image stored at high resolution
  2. User generates a multiscale pyramid by repeatedly downsampling and storing relevant metadata
  3. User opens the pyramid with a viewer that can interpret scales correctly, on-the-fly loading works properly
    a) User can open two individual scales and can view them in the same physical space
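
A minimal sketch of step 2 of Story 8, assuming zarr-python (v2 API), 2x mean downsampling, and 0.4-style per-level scale metadata; the image content, chunking and pixel size are placeholders.

```python
import numpy as np
import zarr

base_scale = [0.5, 0.5]  # physical pixel size of level 0, in micrometers (placeholder)
image = np.random.rand(4096, 4096).astype("float32")

root = zarr.open_group("pyramid.ome.zarr", mode="w")
datasets = []
for level in range(4):
    root.create_dataset(str(level), data=image, chunks=(256, 256))
    datasets.append({
        "path": str(level),
        "coordinateTransformations": [
            {"type": "scale", "scale": [s * 2 ** level for s in base_scale]},
        ],
    })
    # naive 2x downsampling: average 2x2 blocks
    image = image.reshape(image.shape[0] // 2, 2, image.shape[1] // 2, 2).mean(axis=(1, 3))

root.attrs["multiscales"] = [{
    "version": "0.4",
    "axes": [
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"},
    ],
    "datasets": datasets,
}]
```

With the per-level scale recorded, a viewer can place every level (or any pair of levels, as in 3a) into the same physical space.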

@bogovicj

Story 9: Thickness of 2d slices (@tischi)

  1. User collects 2d data with known / specified physical thickness
  2. User displays 2d slices in 3d with thickness displayed correctly

Needs the ability to apply M-dim transforms to N-dim data (M>N); see #103

@bogovicj

Story 10: Specialized metadata for different applications (@neptunes5thmoon)

  1. User collects two datasets for ML training
  2. Each has a distinct coordinateTransform for display purposes (e.g. scale [4,4,3.8], scale [4,4,4.1])
  3. They share a coordinateTransform (e.g. scale [4,4,4]) to avoid interpolation when passing data to the model
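
0.4 attaches a single chain of transforms per dataset, so Story 10 needs the proposed named coordinate systems. Purely as a hypothetical sketch (the exact field names are not settled), metadata for one of the two datasets could look like this:

```python
# Hypothetical, not valid 0.4: the same array is placed into a per-dataset
# "display" space and a shared "training" space via two named coordinate systems.
coordinate_systems = [
    {"name": "display", "axes": [{"name": a, "type": "space"} for a in ("z", "y", "x")]},
    {"name": "training", "axes": [{"name": a, "type": "space"} for a in ("z", "y", "x")]},
]
coordinate_transformations = [
    {"type": "scale", "scale": [4.0, 4.0, 3.8], "input": "/0", "output": "display"},
    {"type": "scale", "scale": [4.0, 4.0, 4.0], "input": "/0", "output": "training"},
]
```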

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/ome-zarr-chunking-questions/66794/38

@bogovicj

Story 11: Reordering z-slices (@tischi)

  1. User collects 2d images but they are written in an unpredictable order
  2. After the fact, user writes the correct order of z-slices to transform metadata
  3. A transformed view of the volume should display the slices in the correct order

This was initially discussed here.

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/ome-zarr-chunking-questions/66794/41

@bogovicj commented Sep 7, 2022

Story 12: Stacking

  1. User collects N 2d images each stored as its own array
  2. User writes metadata to "stack" the 2d images along a third dimension using coordinateTransformations and a new coordinateSystem
  3. Downstream software makes available a 3d image consisting of the N 2d images.
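
From the consumer side (step 3), and independent of how the stacking is eventually declared in the metadata, downstream software might expose the N arrays as one lazy 3d volume. A minimal sketch assuming dask and illustrative paths:

```python
import dask.array as da

# Present N separately stored 2d arrays as a single lazy 3d image. In the
# proposal this stacking would be declared via coordinateTransformations into a
# new coordinateSystem rather than hard-coded as below.
n_slices = 50
slices = [da.from_zarr("sections.ome.zarr", component=str(i)) for i in range(n_slices)]
volume = da.stack(slices, axis=0)  # shape (N, Y, X), chunks loaded on demand
```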

@imagesc-bot

This issue has been mentioned on Image.sc Forum. There might be relevant details there:

https://forum.image.sc/t/ome-ngff-community-call-transforms-and-tables/71792/1

@normanrz

Story 13: Thin-plate spline transformations

Similar to Story 4, but the thin-plate-spline parameters are stored in ome-zarr instead of the displacement field. That should save storage space and allow for transformations of annotations that are connected to the image but don't align to the voxel grid.

Story 14: Mesh-affine transformations

Extends stories 5 and 6. A section tile is divided into a triangle mesh (i.e. non-overlapping triangles) and each triangle gets an affine transformation attached.

@bogovicj

Thanks @normanrz !

Both are important additions.

I'd suggest waiting until the next revision to deal with these, largely because they depend on storing points (landmarks for (13), a mesh for (14)), and that's not merged into the spec yet.

@axtimwalde

@normanrz triangle mesh: storing the affine transformations is not necessary; only the source and target coordinates of all vertices are required, plus three vertex indices per triangle. This is much more compact than storing the 6-value affine explicitly for each triangle, because each vertex is typically shared by 6 triangles.
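
A small sketch of that point: given the stored source and target coordinates of a triangle's three vertices, the 2D affine can be recovered on the fly (function and variable names here are just for illustration).

```python
import numpy as np

def triangle_affine(src, dst):
    """src, dst: (3, 2) arrays of corresponding vertex coordinates.
    Returns the 2x3 affine A such that dst[i] ~= A @ [x_i, y_i, 1]."""
    src_h = np.hstack([src, np.ones((3, 1))])        # homogeneous source points
    # Solve src_h @ A.T = dst for the 6 affine parameters (exact for a
    # non-degenerate triangle).
    A_T, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A_T.T

src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = np.array([[0.1, 0.2], [1.2, 0.1], [0.0, 1.3]])
print(triangle_affine(src, dst))                      # 2x3 affine for this triangle
```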

@NicoKiaru commented Mar 21, 2023

Story 15: Oblique plane microscopy / skewed acquisition +"on-the-fly" deskew

  1. User collects skewed planes (OPM Snouty, Zeiss LLS7, diSPIM)
  2. Downstream software (BigDataViewer for instance) uses the skew transformation information (a 3d matrix, for instance) to display the dataset in real physical space, without rasterising the dataset onto an orthogonal XYZ array.
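
One common way to express such a deskew is a single 3D affine (shear plus anisotropic scale) mapping the raw (plane, y, x) grid to physical (z, y, x); the angle, step sizes and axis convention below are made up and vary between instruments.

```python
import numpy as np

# Illustrative deskew affine for a skewed acquisition: each successive plane is
# shifted along the scan axis, so the raw (plane, y, x) index maps to physical
# (z, y, x) via a shear plus anisotropic scaling.
angle_deg = 30.0
stage_step_um = 0.4   # stage movement between planes
pixel_um = 0.1        # in-plane pixel size

theta = np.deg2rad(angle_deg)
deskew = np.array([
    [stage_step_um * np.sin(theta), 0.0,      0.0,      0.0],  # z from plane index
    [stage_step_um * np.cos(theta), pixel_um, 0.0,      0.0],  # y sheared by plane index
    [0.0,                           0.0,      pixel_um, 0.0],  # x
])
# A viewer can apply this 3x4 matrix on the fly instead of resampling the data.
```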

Story 16: stitching - multi angle acquisition (related to story 5 and 6)

  1. User collects multiple overlapping 2d / 3d image tiles that are subsets of a whole
  2. User repeats step 1 after rotating the sample significantly (30, 60, 90, 120, 150, 180 degrees, etc.)
  3. User performs stitching (e.g., with BigStitcher), and needs to track transforms from all tile and all angle spaces to world space

@mattersoflight

Story 17: Registering multi-position acquisition

  1. User collects 3D volumes in multiple channels at multiple positions in a 96-well plate.
  2. User analyzes the volumes to register them in XYZ such that the focal plane is in the center of the volume.
  3. User stores separate affine transformations for all positions.
  4. User would like to view registered volume in napari/neuroglancer/ImageJ.

@coalsont

Story 18: Correspondence to MRI coordinates (related to 4, 5, 12, and 15, but with details from our existing wb_view implementation)

  1. User collects T1w and other MRI scans of an animal, rigidly aligns them and orients them via ACPC or similar (not nonlinear or anisotropic affine)
  2. Brain is extracted and sectioned, regularly spaced sections (whole brain, ~0.5mm in macaque) are stained, 2D scanned (and stitched) with full coverage (high resolution, image pyramids), with hemispheres and temporal pole pieces as separate images (due to FoV limits, separated tissue not being placed in correct relative position on the slide, bent corpus callosum misaligning hemispheres, etc)
  3. Histology images are registered to MRI reference space, with the transform split into distinct parts:
    a. 2D to 2D affine for each piece to roughly position it within the section plane (for display without applying a deformation to image data)
    b. 2D to 2D deformation field (and its inverse) for each section to get accurate correspondence of histology tissue to the MRI tissue intersecting the section plane (tissue pliability, tearing, etc)
    c. 2D to 3D transform for each section to define the section plane in MRI mm space (currently affine, might include curvature in the future to deal with tissue deformation before sectioning)
  4. For each image, a "distance beyond tissue edge" measurement is created for the tissue that the image is "responsible for", stored in rough aligned section plane space, used to resolve overlaps with background and/or clipped tissue when displaying all images from a section together
  5. User wants to display the histology image data one section at a time (all pieces in the section shown as per rough alignment), next to MRI data, and clicking on the histology moves the MRI display to the corresponding coordinate (and vice versa)

@m-albert

Stories motivating the possibility to have transformations that

  • transform spatial coordinates (affine etc.)
  • but depend on / differ for non-spatial coordinates (e.g. t and c)

Story 19: Drift correction

  1. User has a timelapse of a drifting sample.
  2. User performs drift correction, obtaining one (linear) transformation per timepoint.
  3. User wants to save / visualize the drift corrected timelapse without resampling / transforming the timelapse.
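
A minimal sketch of step 2, assuming scikit-image and a (T, Y, X) array; the per-timepoint translations produced here are exactly the kind of t-dependent transformation these stories would like to store instead of resampling the data.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_drift(timelapse):
    """timelapse: (T, Y, X) array. Returns a (T, 2) array of per-timepoint
    translations that align each frame to frame 0."""
    shifts = [np.zeros(timelapse.ndim - 1)]
    for t in range(1, timelapse.shape[0]):
        shift, _, _ = phase_cross_correlation(timelapse[0], timelapse[t])
        shifts.append(shift)          # translation aligning frame t to frame 0
    return np.stack(shifts)           # one translation per timepoint
```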

Story 20: Registration / stitching / multi-view reconstruction of timelapses

  1. The user story fits with stories 5, 6, 16, and 17.
  2. Typically, resulting registration parameters vary over time (e.g. because of stage positioning errors among others).
  3. User wants to save / visualize the registered / reconstructed image dataset without resampling / transforming the timelapse.

Story 21: Chromatic correction

  1. User has an image dataset exhibiting chromatic misalignment.
  2. User performs chromatic correction obtaining one transformation per channel.
  3. User wants to save / visualize the channel aligned image without resampling / transforming the image.

@imagejan

Story 22: Registration of multiplexed imaging

Goal: creating multi-channel (3d or 2d) image from multiple rounds of staining/acquisition.

  1. Acquire multiple rounds of 3-channel images with various markers, e.g.:
    a. first round: markers A, B, C
    b. second round: markers A, D, E
    c. third round: markers A, F, G
  2. Register a, b and c via the common marker (A, A', A''), with elastic/non-linear or linear transformation as needed.
  3. Display/process the image as 7-channel image (CZYX) with channels A, B, C, D, E, F, G without re-saving the transformed pixel/voxel data.

@jmuhlich

Story 23: Stitching and registration of tiled and multiplexed imaging

Combination of Story 6 and Story 22.

  1. Collect multiple overlapping 2d / 3d image tiles that are subsets of a whole, with multiple rounds of imaging. Each round contains a common registration marker.
  2. Align tiles within each round of imaging (stitching) as well as between rounds (registration) using a tool such as Ashlar. Only corrected tile transformations are stored.
  3. Display/process combined image with all tiles and all channels without re-saving the original image data.

(This also requires illumination/shading correction to be applied in step 3, but this correction is an intensity transform rather than a spatial coordinate transform so is not really in scope here. Just thought I'd mention it to get people thinking about it.)

@coalsont

Just a comment - from an outside perspective (I work mostly with MRI), the prospect of having the transform and blending stage of stitching happen on the fly during display is a bit scary. Specifically, if the transforms and raw tile data are stored in a single "image file", I worry about the effects that may have on the library API:

  • What sample grid should the display logic use for such a file?
  • Does the library handle resampling rotated/shifted tiles to a common grid, or just give up and throw the raw tile captures and their transforms at the application?
  • How is lens aberration encoded or otherwise handled?
  • What resampling algorithms are available (and in what color space(s)), and how is multithreading handled?

and on the display performance:

  • How does the library determine which pieces are within a display region - a linear scan over potentially thousands?
  • How well does resampling every frame during panning perform, and does it preserve things like spline coefficients between frames (and if so, how do you control the memory usage)?
  • How do you handle the image pyramid?

Basically, stitching (which to my mind is just preprocessing to deal with a limitation of microscopes) isn't something I'd want to have to think about (or have the code spending time on) when just trying to view a histology image alongside other datatypes; I'd want the image file to be as simple as possible (so the file reading API and other considerations can be simple). This is one reason we generally keep registration-type transforms out of the image files themselves: if you don't want to rewrite the image data for IO reasons, writing the transform(s) to a separate file is a simple solution, and it doesn't complicate the image file internals. In practice, for MRI data, once we have all the transforms, we do a final resampling from the original data to the desired grid before processing/display.

If "raw tiles and stitching transforms" are a desired feature for the "basic image" file format, then perhaps there should be a file subtype that restricts the kinds of allowed complications (and perhaps even requires the presence of an image pyramid all the way to, say, 1000 samples or less along each dimension), which enables a set of simpler API functions that are well-suited to display purposes?

@bogovicj

Hi @coalsont

Thanks for your comment, I appreciate your feedback and point of view.

having the transform and blending stage of stitching happening on the fly during display is a bit scary.

I get it. What I can try to make clearer in the spec is that you are not required to write code for visualization that applies transformations to images on the fly.

That is, you are welcome to work the way you are:

  • Find transformations
  • Apply transformation to image / resample
  • Resave the result to disk

and the spec will not stop you. I.e. you're free to ignore the items on your list of very good questions and considerations. Viewers that try to support on-the-fly transformations will have to tackle those.

The downside is that a valid ome-zarr could look different in a viewer that supports on-the-fly transforms than in another viewer that doesn't. This is totally fine, because I'd expect the latter viewer to communicate "I found an extra transformation but can't apply it".

What I'm confident you will still find valuable:

Methods that produce transformations still have to store them somehow, and standardization there will improve tool interoperability. I've dumped lots of time into converting files storing affines from one format to another 😣

once we have all the transforms, we do a final resampling from the original data to the desired grid before processing/display.

Tracking the provenance of image data is very important. In this context, after you've done that final resampling, I sure would like to track 1) what the original image was that got resampled, and 2) what transformation(s), if any, were applied, in a structured, standard way (i.e. not in my brain, or even in a lab notebook). The spec tracks that information.

Re: your last paragraph, something that I'll clarify is how a user or consuming software can know whether a particular dataset is "basic" or not. Thanks again for posting here, and get in touch if I've not been clear or you're just not convinced.

p.s. I did human MRI work during grad school so was familiar with how analysis was done in that domain (well, a decade ago).

@oanegros commented Nov 22, 2024

Stories motivating bounded (projective) coordinate spaces
intended for a follow-up transform RFC, not expected to be implemented in RFC-5

Story 24: Cropping

  1. user generates an N-D image
  2. user defines a lower and upper bound to an axis

Story 25: Kymographs

  1. User has 2D+time data
  2. User defines a line in 2D space
  3. User displays/processes the data as a 2D image whose axes are time and position along the line
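
A minimal sketch of how Story 25 plays out computationally, assuming a (T, Y, X) movie and SciPy; the line endpoints and sample count are placeholders.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def kymograph(movie, start, end, n_samples=200):
    """movie: (T, Y, X) array; start, end: (y, x) endpoints of the line.
    Returns a (T, n_samples) kymograph sampled along the line."""
    t = np.linspace(0.0, 1.0, n_samples)
    line = np.outer(1 - t, start) + np.outer(t, end)          # (n_samples, 2) points
    rows = [map_coordinates(frame, line.T, order=1) for frame in movie]
    return np.stack(rows)                                      # time vs. position
```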

Story 26: Curvilinear tube projection

  1. User generates 3D + channel data
  2. User defines a spline in 3D space
  3. User processes the data into an axis-aligned bounded space transformed around the spline

Story 27: Mesh projection/surface quantification

  1. user has a 3D image
  2. user defines a 2D surface mesh in 3D space
  3. user gets image values at the mesh coordinates / projected onto the mesh coordinates
