affine bridge ITK-MONAI #6
base: master
Conversation
@ntatsisk thank you! Related: InsightSoftwareConsortium/ITKPythonPackage#244 @aylward @hjmjohnson @HastingsGreer @ebrahimebrahim @PranjalSahu @dzenanz @vicory please take a look |
I am wondering if this functionality should be integrated into MONAI? |
Looks super useful!
Nice docstrings
I guess the tests need the files
./input/ct_lung_downsampled_reversed.tif
./input/fixed_2mm.nii.gz
./input/moving_2mm.nii.gz
which are not included, so there should be some instructions for getting those.
Even if the work is not finalized, it's still an excellent reference in its current form. A lot of these seemingly minor things can consume a lot of time to figure out from scratch.
@wyli might have an opinion. |
yes, if it's mostly pythonic this could be part of MONAI, please feel free to send a pull request. (There's also a related feature request by @HastingsGreer Project-MONAI/MONAI#4117, could be addressed as well?) |
Hi! This is excellent- from reading the code it looks like it correctly handles the details of affine registration between these code bases. We do need to be able to run the code: could you upload the test data to data.kitware.com, or another appropriate file host? If you aren't sure you have the rights to upload these lung images, we could also switch to using the pair of lung images at https://data.kitware.com/#user/5e420e39af2e2eed35895211/folder/62a0efe5bddec9d0c4175c1f from the github.com/uncbiag/icon_registration test suite |
example code for using girder to store test data; https://github.com/uncbiag/ICON/blob/89ce87c537f96b017a2d5a48f3926ea1e7569c02/src/icon_registration/test_utils.py#L9 |
Thanks everyone for the comments and the positive feedback so far! @HastingsGreer, Very helpful code snippet to download the data. I just pushed a commit that makes use of it. Could you check if you are able to run the code now? I will start working on extending the functionality to work also with 2D images in the meantime. |
Ok! I extended the code to work for 2D images as well, and I pushed two 2D image examples from the ITKElastix repo here in order to run the tests. However, I am getting an inconsistency in the tests: I do consecutive runs of the code and notice that in some runs the object
while on another run,
Of course, the test fails for this latter case. Does this maybe ring any bells? Could it be some kind of garbage collection that alters |
This would be my first guess. @HastingsGreer and @thewtex might provide better advice/explanation. |
Yes, it could be a garbage collection issue. Perhaps:
to:
(generate new object) fixes the issue? Note: if needed, |
Thanks @thewtex, the tests pass consistently now! Hence, now it works both for 2D and 3D. Looking for any further comments to finalize this PR. I plan to create a related notebook in ITKElastix soon. |
@ntatsisk ammmmazing work! 👏 🥇 🎇 💯
LGTM overall!
A few requests inline.
Given that this is pure Python and depends on monai (this repo depends only on pytorch), please create a PR on the monai repo with most of the content. We will have testing on monai, catch any issues with metadata management as it evolves, and get more eyes from monai developers. The tests here are good, and more cases could help, e.g. images that differ only in spacing, etc.
At the same time, let's vendor this code in itk-elastix and start using it there.
Back in the olden days, we did not have the Python matrix-multiply operator -- lovely to see it used here 👴
Thank you @thewtex for your beautiful reply and review! I addressed your comments :)) I only had to add an additional function that converts the affine matrix from MONAI->ITK (up to now there was only ITK->MONAI). @wyli, @Shadow-Devil, this PR is ready to start getting integrated into MONAI as far as I am concerned. Let me know when you begin the process so that I can contribute as well. |
Hi Stephen, the case that you are describing is certainly very relevant. Unfortunately, the code does not take into account the non-zero indexes. An additional limitation is that when resampling using the coordinate space of a reference image, there is the underlying assumption that the pixel arrays have the same shape. For example, ITK/Elastix is often used to register images of different shape since the registration takes place in the physical space but this PR wouldn't work properly there. My take is that the closer that we approach emulating the full functionality of |
@Spenhouet For example you can try here: https://github.com/InsightSoftwareConsortium/itk-torch-transform-bridge/pull/6/files#diff-186fbf6339630a0c795f9c5426e0b50ff0823d2895d8c046f4073762bbdb484bR192 setting |
Damn. 🥺 And I thought this was finally solved. The amount of time I have already put into this and related investigations. Again, here are the files with this change:
But what is the alternative? A solution that sometimes works? |
Then I don't know what is meant by non-zero indexes. Could you give an example? |
As @ntatsisk noted, index is different than origin. The origin is the location in physical space of the center of the voxel having a 0,0,0 index (index space).

To support streaming images in blocks (and for other reasons), itk allows an image (block) to span a portion of index space that starts from a non-zero index. Think of breaking an image into quadrants for streaming. You could come up with a new data structure to support those sub-quadrants, or you could adapt the image data structure to represent those sub-quadrants. However, you don't want to have to compute/maintain a new physical space origin for each of those sub-quadrants - precision errors will happen, and reconstructing the full image from sub-quadrants on the receiving end of the streaming could result in unwanted errors/interpolations. So, itk allows an image to have an index which represents its offset from the original image's origin.

This concept ended up helping in numerous other ways too, e.g., it is ideal for when you want to process only a small ROI in a much larger image and then re-insert it back into the larger image very easily. So, at any time, an image may represent only a smaller portion of a larger image, by having a non-zero index.

Perhaps for now throw an exception if a non-zero image index is used? That is better than producing a potentially incorrect result. |
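A minimal sketch of the exception guard suggested here; the helper name and the way the start index reaches it are assumptions, not code from this PR:

```python
import numpy as np

def assert_zero_index(start_index) -> None:
    # Hypothetical guard: the ITK-MONAI bridge assumes images whose
    # largest possible region starts at index (0, 0, 0); reject anything
    # else rather than silently producing a wrong affine.
    if np.any(np.asarray(start_index) != 0):
        raise ValueError(
            "ITK-MONAI bridge: images with a non-zero start index are not supported"
        )

assert_zero_index([0, 0, 0])  # passes silently
```

The start index would come from `image.GetLargestPossibleRegion().GetIndex()` on the ITK side.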
Thanks for the explanation. This makes sense now, but it is not something relevant for our specific use-case of converting the affine of the ITK registration, right? Or does the ITK registration also use non-zero indexes? Differently sized images and non-diagonal affine matrices do seem relevant to us, though. I originally thought ITK already provides the affine transform, just in another space and doing some strange things. @ntatsisk Creating such a wrapper for ItkImage sounds like a worthwhile thing. The wrapper I'm currently working on is already going in this direction. Maybe it's worth adopting this and continuing from it?

"""
This module provides a wrapper for the ITK package.
It provides a python interface for the itkImageBase3 class.
It also includes conversion methods for ITK affine to standard Monai and Nibabel compatible affine.
"""
from typing import Optional, Tuple
import itk
from itk.itkImagePython import itkImageBase3
from monai.data.image_reader import ITKReader
from monai.data.image_writer import ITKWriter
from monai.data.meta_tensor import MetaTensor
from monai.utils.type_conversion import convert_to_dst_type
from nibabel import Nifti1Image
import numpy as np
from numpy.typing import ArrayLike
from numpy.typing import NDArray
NDArrayF64 = NDArray[np.float64]
class ItkImage():
"""The ItkImage class represents an image in ITK (Insight Segmentation and Registration Toolkit) format."""
def __init__(self, image: itkImageBase3):
"""Initialize an ItkImage object from the given itkImageBase3 object."""
self.image = image
@classmethod
def from_numpy(cls, image: ArrayLike, affine: ArrayLike) -> 'ItkImage':
"""
Create an ItkImage from a numpy array and affine matrix.
Args:
image: numpy array for image data
affine: affine matrix for image orientation and spacing
Returns:
The newly created ItkImage instance.
"""
return cls(
ITKWriter.create_backend_obj(
np.asarray(image, np.float64),
channel_dim=None,
affine=np.asarray(affine, np.float64),
affine_lps_to_ras=False, # False if the affine is in itk convention
))
@classmethod
def from_nibabel(cls, image: Nifti1Image) -> 'ItkImage':
"""
Create an ItkImage from a nibabel Nifti1Image instance.
Args:
image: the nibabel Nifti1Image instance.
Returns:
The newly created ItkImage instance.
"""
return cls.from_numpy(image.get_fdata(), image.affine)
@classmethod
def from_monai(cls, image: MetaTensor) -> 'ItkImage':
"""
Create an ItkImage from a Monai MetaTensor instance.
Args:
image: the Monai MetaTensor instance.
Returns:
The newly created ItkImage instance.
"""
return cls.from_numpy(image.array, np.asarray(image.affine))
def to_monai(self) -> MetaTensor:
"""
Convert an ITK image to a MetaTensor object.
Returns:
A MetaTensor object containing the image and affine.
"""
reader = ITKReader(affine_lps_to_ras=False)
image_array, meta_data = reader.get_data(self.image)
image_array = convert_to_dst_type(image_array, dst=image_array,
dtype=itk.D)[0] # type: ignore
return MetaTensor.ensure_torch_and_prune_meta(image_array, meta_data) # type: ignore
def to_numpy(self) -> Tuple[NDArrayF64, NDArrayF64]:
"""
Convert an ITK image to a tuple of numpy arrays.
Returns:
A tuple of numpy arrays containing the image and affine.
"""
image = self.to_monai()
return image.array, image.affine.numpy()
def to_nibabel(self, header=None) -> Nifti1Image:
"""
Convert this image to a nibabel Nifti1Image.
The returned image is a nibabel.Nifti1Image object,
which provides a convenient way to manipulate and visualize NIfTI image data.
Args:
header: Custom header information to be used for the returned Nifti1Image.
If not provided, default header information will be used.
Returns:
The equivalent image as a Nifti1Image object.
"""
image = self.to_monai()
return Nifti1Image(image.array, image.affine, header=header)
@property
def ndim(self) -> int:
"""Determine the number of dimensions in the image."""
return self.image.ndim # type: ignore
@property
def affine(self) -> NDArrayF64:
"""Affine matrix of the image in affine format."""
return self.to_monai().affine.numpy()
@property
def image_size(self) -> NDArrayF64:
"""Size of the image in numpy array format."""
image_size: ArrayLike = self.image.GetLargestPossibleRegion().GetSize() # type: ignore
return np.asarray(image_size, np.float64)
@property
def spacing(self) -> NDArrayF64:
"""Spacing between voxels in each dimension in numpy array format."""
spacing: ArrayLike = self.image.GetSpacing() # type: ignore
return np.asarray(spacing, np.float64)
@property
def spacing_matrix(self) -> NDArrayF64:
"""Spacing matrix in affine format."""
return build_affine(direction=np.diag(self.spacing))
@property
def origin(self) -> NDArrayF64:
"""Origin of the image in physical space in numpy array format."""
origin: ArrayLike = self.image.GetOrigin() # type: ignore
return np.asarray(origin, np.float64)
@property
def direction(self) -> NDArrayF64:
"""Direction matrix of the image in numpy array format."""
direction: ArrayLike = self.image.GetDirection() # type: ignore
return np.asarray(direction, np.float64)
@property
def direction_matrix(self) -> NDArrayF64:
"""Direction matrix in affine format."""
direction: ArrayLike = self.image.GetDirection() # type: ignore
return build_affine(direction=direction)
@property
def center(self) -> NDArrayF64:
"""
Calculate the center of the ITK image based on its origin, size, and spacing.
This center is equivalent to the implicit image center that MONAI uses.
Returns:
The center of the image as a list of coordinates.
"""
return self.direction @ (((self.image_size / 2.0) - 0.5) * self.spacing) + self.origin
def compute_reference_space_affine(self, ref_image: 'ItkImage'):
ndim = self.ndim
# Spacing and direction as matrices
spacing_matrix = self.spacing_matrix[:ndim, :ndim]
ref_spacing_matrix = ref_image.spacing_matrix[:ndim, :ndim]
direction_matrix = self.direction_matrix[:ndim, :ndim]
ref_direction_matrix = ref_image.direction_matrix[:ndim, :ndim]
# Matrix calculation
matrix = ref_direction_matrix @ ref_spacing_matrix @ np.linalg.inv(
direction_matrix) @ np.linalg.inv(spacing_matrix)
# Offset calculation
pixel_offset = -1
image_size = ref_image.image_size
translation = (ref_direction_matrix @ ref_spacing_matrix -
direction_matrix @ spacing_matrix) @ (image_size + pixel_offset) / 2
translation += ref_image.origin - self.origin
# Convert ITK matrix and translation to MONAI affine matrix
ref_affine_matrix = self.affine_to_monai(direction=matrix, translation=translation)
return ref_affine_matrix
def affine_to_monai(self,
direction: ArrayLike,
translation: ArrayLike,
center_of_rotation: Optional[ArrayLike] = None,
reference_image: Optional['ItkImage'] = None) -> NDArrayF64:
"""
Compute affine transformation matrix to affine format.
Args:
direction: An N-dimensional array representing rotation.
translation: An N-dimensional array representing translation.
center_of_rotation: The center of rotation. If provided, the affine
matrix will be adjusted to account for the difference
between the center of the image and the center of rotation.
reference_image: Reference image to transform to.
Returns:
The affine transformation matrix in affine format.
"""
# Create affine matrix that includes translation
ndim = self.ndim
transform_direction = build_affine(direction=direction, ndim=ndim)
transform_translation = build_affine(translation=translation, ndim=ndim)
transform_affine = transform_translation @ transform_direction
# Adjust offset when center of rotation is different from center of the image
if center_of_rotation is not None:
offset = self.center - np.asarray(center_of_rotation, np.float64)
offset_matrix = build_affine(translation=offset)
transform_affine = np.linalg.inv(offset_matrix) @ transform_affine @ offset_matrix
# Adjust based on spacing. It is required because MONAI does not update the
# pixel array according to the spacing after a transformation. For example,
# a rotation of 90deg for an image with different spacing along the two axis
# will just rotate the image array by 90deg without also scaling accordingly.
transform_affine = np.linalg.inv(
self.spacing_matrix) @ transform_affine @ self.spacing_matrix
# Adjust direction
transform_affine = np.linalg.inv(
self.direction_matrix) @ transform_affine @ self.direction_matrix
if reference_image:
transform_affine = transform_affine @ self.compute_reference_space_affine(
reference_image)
return transform_affine
def build_affine(direction: Optional[ArrayLike] = None,
translation: Optional[ArrayLike] = None,
ndim: int = 3) -> NDArray[np.float64]:
"""
Build an affine matrix for 3D data based on the provided direction and translation matrix.
The direction and translation parameters are written to the affine matrix in the following way:
[[dir00, dir01, dir02, tx],
[dir10, dir11, dir12, ty],
[dir20, dir21, dir22, tz],
[ 0, 0, 0, 1]]
Args:
direction: Direction matrix as array of 9 values or as matrix with shape 3x3.
translation: Translation matrix with 3 values for x, y and z.
Returns:
The affine matrix with shape 4x4.
Examples:
>>> build_affine()
array([[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[0., 0., 0., 1.]])
>>> build_affine(translation=[1, 2, 3])
array([[1., 0., 0., 1.],
[0., 1., 0., 2.],
[0., 0., 1., 3.],
[0., 0., 0., 1.]])
"""
affine = np.eye(ndim + 1, dtype=np.float64)
if direction is not None:
affine[:ndim, :ndim] = np.asarray(direction, dtype=np.float64).reshape(ndim, ndim)
if translation is not None:
affine[:ndim, ndim] = np.asarray(translation, dtype=np.float64)
return affine
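As a standalone sanity check of the implicit-center formula used in the `center` property above (`direction @ ((size / 2 - 0.5) * spacing) + origin`), with made-up image geometry rather than the PR's test data:

```python
import numpy as np

# Illustrative geometry (assumed values, not from the PR's test images)
size = np.array([100.0, 100.0, 50.0])
spacing = np.array([1.0, 1.0, 2.0])
origin = np.array([-50.0, -50.0, -50.0])
direction = np.eye(3)

# Physical-space center of the voxel grid, matching ItkImage.center:
# shift by half the grid, minus half a voxel because origin is at a voxel center
center = direction @ ((size / 2.0 - 0.5) * spacing) + origin
assert np.allclose(center, [-0.5, -0.5, -1.0])
```

With identity direction and isotropic spacing this reduces to the familiar `(size - 1) / 2 * spacing + origin` voxel-grid midpoint.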
2adb7fb to 7a253b2
Good news: the issue related to the non-diagonal direction cosines is fixed now! :) @Spenhouet, could you test? @aylward, great suggestion. I added some assertions to make sure that only zero indices and matching shapes are used. The checking can be further improved during integration. |
translation += np.asarray(ref_image.GetOrigin()) - np.asarray(image.GetOrigin())

# Convert ITK matrix and translation to MONAI affine matrix
ref_affine_matrix = itk_to_monai_affine(image, matrix=matrix, translation=translation)
The only thing this will do is put the direction and translation computed here into a single affine matrix. One can also just return the affine here (check the build_affine method of my shared code in my last comment on the thread). The only thing is that this method then needs to be applied before line 181 (# Adjust direction).
Nice! I tested it with the samples above and it looks good to me. Thanks for pushing out these improvements! I can also confirm that differently sized images do not work. I padded the reference image with an adjusted affine: the green image is the ITK output (where the registration also failed - not really sure why, this should have been easy), the gray image is the output when applying the returned affine. So for the meantime I added a check:

if reference_image and np.any(self.image_size != reference_image.image_size):
    raise NotImplementedError('Handling of non-matching image sizes is not implemented.')

Shouldn't this be something like getting the size difference, then shifting by that, and shifting back at the end? |
# reference image space.
if reference_image:
    assert_itk_regions_match_array(reference_image)
    assert image.shape == reference_image.shape, "ITK-MONAI bridge: shape mismatch between image and reference image"
@Spenhouet - you're right! It seems strange to me that the reference image (which provides a reference coordinate system) and the image providing the index-to-physical space transform have to have the same shape. Seems like all operations should be in terms of Index, Spacing, Origin, and Direction. Line 129 uses image size...but I'm not certain why offset of the origins isn't sufficient. Digging into it...
That it cannot handle images with different sizes still really bugs me. I tried experimenting in the direction of:
And then applying this to the transform affine. @ntatsisk You seem to have a deeper understanding of this. Do you think different image sizes are also something you are able to make work? |
I can't promise much right now. I could take a look at it at some point next week and at least provide some more clues. |
I also tested to convert an ITK rigid registration transform to perform it via a monai affine transform and found more oddities. For context, I defined the X, Y and Z rotation matrices based on the radians given by the ITK rigid registration transform like this:

cos = np.cos(radians, dtype=np.float64)
sin = np.sin(radians, dtype=np.float64)
x, y, z = [0, 1, 2] # Assign indices for better readability
Rx = np.asarray([
[1.0, 0.0, 0.0, 0.0],
[0.0, cos[x], -sin[x], 0.0],
[0.0, sin[x], cos[x], 0.0],
[0.0, 0.0, 0.0, 1.0],
], dtype=np.float64)
Ry = np.asarray([
[cos[y], 0.0, sin[y], 0.0],
[0.0, 1.0, 0.0, 0.0],
[-sin[y], 0.0, cos[y], 0.0],
[0.0, 0.0, 0.0, 1.0],
], dtype=np.float64)
Rz = np.asarray([
[cos[z], -sin[z], 0.0, 0.0],
[sin[z], cos[z], 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0],
], dtype=np.float64)

I did expect the transform parameters output by itk.elastix_registration_method for a rigid registration to be in the order [x_angle, y_angle, z_angle]. @aylward Can you confirm if this assumption is correct? Based on the Euler angles, I did expect the rotation matrix computation to be Rz @ Ry @ Rx, but monai's create_rotate function of monai.transforms.utils is computing Rx @ Ry @ Rz instead. @wyli Is this a bug in monai or is my understanding wrong? The thing is, neither Rz @ Ry @ Rx nor Rx @ Ry @ Rz give correct results (output images are slightly tilted in comparison to the direct ITK registration output). I did have to compute Rx @ Rz @ Ry to get the right results for my test data. This seems odd and suggests that something is wrong. It might be my assumption about the order of the angles output by the ITK registration. It might be that these angles actually need to be sorted some way, based on the orientation? I fear not actually having a general solution again and just something that works with my current test data. |
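The order-dependence discussed here is easy to verify numerically; a minimal sketch with arbitrary test angles (not from the registration above):

```python
import numpy as np

def rot(axis: str, theta: float) -> np.ndarray:
    # Elementary 3x3 rotation about a single coordinate axis
    c, s = np.cos(theta), np.sin(theta)
    if axis == 'x':
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]], float)
    if axis == 'y':
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]], float)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

ax, ay, az = np.deg2rad([10.0, 20.0, 30.0])  # arbitrary angles
zyx = rot('z', az) @ rot('y', ay) @ rot('x', ax)
xyz = rot('x', ax) @ rot('y', ay) @ rot('z', az)

# Both are valid rotations, but different composition orders give
# genuinely different results, which is why the convention matters:
assert not np.allclose(zyx, xyz)
```

So a set of Euler angles only defines a rotation once the composition order (ZYX, ZXY, ...) is fixed alongside it.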
The following may be helpful: "The Euler angle representation of a rotation is not unique and depends on the order of rotations. In general there are 12 options. This class supports two of them, ZXY and ZYX. The default is ZXY. These functions set and get the value which indicates whether the rotation is ZYX or ZXY." |
@mstaring Thanks for the doc reference. That clarifies it. I found no way to set this flag for the registration method, so I will have to assume ZXY as the default.

EDIT 1: I tried changing it like this (and similar):

euler_transform = itk.Euler3DTransform.New()
euler_transform.SetComputeZYX(True)
parameter_map['Transform'] = euler_transform

But this gives an exception, since I apparently cannot assign an instance here. I did not find any direct parameter for the parameter map to set the ZYX order.

EDIT 2: Okay, this really has some downstream effects in my code. Changing this order to ZYX would be really beneficial, or else I have to make all my code take the ITK behavior into consideration. |
Nice work!
Assuming you used the default Euler transform instead of specifying a
Versor Transform, and you're calling GetParameters(), then the first three
parameters are AngleX, AngleY, and AngleZ. This is from line 104 of
ITK/Modules/Transform/include/itkEuler3DTransform.hxx
https://github.com/InsightSoftwareConsortium/ITK/blob/f0d342e0b81e690194cd3fe993316bca7570402b/Modules/Core/Transform/include/itkEuler3DTransform.hxx#L104
Another way of getting those values is to call GetAngleX(), GetAngleY(),
and GetAngleZ(). That confirms that the transform is a Euler transform,
and the code is more transparent. The GetParameters() function is really
intended to be used by the optimizer.
However, did you confirm the center of rotation is zero? I'm not too
familiar with Elastix, so I don't know if it uses a center of rotation.
Most optimizers move the center of rotation to the center of the fixed
image or the center of mass of the data in the fixed image. This helps to
decouple rotation and translation in the practical/common case that the
problem is that the patient's head rotated in the scanner.
You can ask Elastix / ITK for the center of rotation by calling GetCenter()
or by calling GetFixedParameters(). Note how Parameters and
FixedParameters provide a way for optimizers to work efficiently. Their
duos are GetAngleX(), GetCenter(), and GetTranslation().
If you don't want to deal with center of rotation, you should use
GetMatrix() and GetOffset(). They have the center of rotation factored
out. They are perhaps the best way for passing a transform to MONAI.
--
Stephen R. Aylward, Ph.D.
Senior Director of Strategic Initiatives
Chair of MONAI Advisory Board
---
Kitware: *Delivering innovation.*
@aylward Thanks for the details and the link to the respective code. You are right, I also got confused a bit with the order of the angles in the transform parameter object and the order in which the rotation matrices are applied. So to summarize, the parameter object actually contains the angles in the order XYZ, but per default the Euler Transform in ITK applies the rotations in the ZXY order. I'm using the same methods / implementation as discussed above, so stuff like center of rotation and origin should already be taken care of by that. I'm just converting the angles into an affine matrix and then continuing with the existing code. Do you know if it is possible to set the SetComputeZYX(True) / m_ComputeZYX flag of the EulerTransform for the registration? Right now, I only provide a parameter object for the registration. I found no option in the parameter object to specify this behavior. I also found no way to directly provide an instance of the EulerTransform.

EDIT: For whoever this might be helpful to, I made my function account for radians order and rotation order:

from typing import List

import numpy as np
from numpy.typing import NDArray

def build_rotation_3d(radians: NDArray,
                      radians_order: str = 'XYZ',
                      rotation_order: str = 'ZYX',
                      dims: List[str] = ['X', 'Y', 'Z']) -> NDArray:
    x_rad, y_rad, z_rad = radians[(np.searchsorted(dims, list(radians_order)))]
    x_cos, y_cos, z_cos = np.cos([x_rad, y_rad, z_rad], dtype=np.float64)
    x_sin, y_sin, z_sin = np.sin([x_rad, y_rad, z_rad], dtype=np.float64)
    x_rot = np.asarray([
        [1.0, 0.0, 0.0, 0.0],
        [0.0, x_cos, -x_sin, 0.0],
        [0.0, x_sin, x_cos, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    y_rot = np.asarray([
        [y_cos, 0.0, y_sin, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [-y_sin, 0.0, y_cos, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    z_rot = np.asarray([
        [z_cos, -z_sin, 0.0, 0.0],
        [z_sin, z_cos, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
    rotations = np.asarray([x_rot, y_rot, z_rot])[(np.searchsorted(dims, list(rotation_order)))]
    return rotations[0] @ rotations[1] @ rotations[2]
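The `np.searchsorted` indexing used above to turn an order string into a permutation can be illustrated standalone:

```python
import numpy as np

dims = ['X', 'Y', 'Z']  # must be sorted for searchsorted to work
order = 'ZYX'

# searchsorted finds, for each letter, its position in the sorted dims list,
# which yields the permutation that reorders per-axis items accordingly.
perm = np.searchsorted(dims, list(order))
assert perm.tolist() == [2, 1, 0]

rotations = np.array(['x_rot', 'y_rot', 'z_rot'])
assert rotations[perm].tolist() == ['z_rot', 'y_rot', 'x_rot']
```

Note this relies on `dims` being sorted; an unsorted label list would silently produce a wrong permutation.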
Fixes #5708 Fixes #4117

### Description

This is a migration of the PR (by @ntatsisk https://github.com/ntatsisk) InsightSoftwareConsortium/itk-torch-transform-bridge#6 into MONAI.

### Types of changes

- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [x] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/` folder.

Signed-off-by: Felix Schnabel <f.schnabel@tum.de>
Yes - the ordering can be confusing, but ITK, Elastix, 3D Slicer, VTK,
ParaView, Osirix, and many other platforms use this ordering...and I think
it would not be good for the field to make using a different ordering very
easy. It is likely to cause such errors to become commonplace, and in
medicine avoiding that is paramount.
For MONAI, what about creating a converter? This will then allow MONAI to
work with 3D Slicer and the many other tools out there. The converter
would be part of MONAI core. It would take a transform matrix and offset
and convert it to whatever representation it wants. Then Get/SetMatrix
and Get/SetOffset become the basis/API of communication between MONAI and
the world? It would also work for a variety of the ITK transforms that
derive from the MatrixOffsetTransformBase class, e.g., versor, projection,
affine, ...
Hi @aylward, I agree, this kind of converter will be convenient... @Spenhouet I forgot to say that I also created a feature request for that utility earlier today: Project-MONAI/MONAI#6029. |
thanks @Spenhouet @ntatsisk @Shadow-Devil the itk <> monai bridge is now merged into MONAI core Project-MONAI/MONAI#5897, for the record I put the latest usage here (code example mostly implemented by @Spenhouet and @ntatsisk):

from pathlib import Path
import itk
import numpy as np
import torch
import monai
folder = Path("../MONAI-extra-test-data")
moving_path = folder / "real_moving.nii.gz"
fixed_path = folder / "real_fixed.nii.gz"
ndim = 3
# Perform registration with itk
# Load images with itk
moving_image = itk.imread(str(moving_path), itk.F)
fixed_image = itk.imread(str(fixed_path), itk.F)
# Import Default Parameter Map
parameter_object = itk.ParameterObject.New() # type: ignore
affine_parameter_map = parameter_object.GetDefaultParameterMap("affine", 4)
affine_parameter_map["ResampleInterpolator"] = ["FinalLinearInterpolator"]
parameter_object.AddParameterMap(affine_parameter_map)
# Call registration function
result_image, result_transform_parameters = itk.elastix_registration_method( # type: ignore
fixed_image, moving_image, parameter_object=parameter_object
)
itk.imwrite(result_image, "real_itk_reg_itk.nii.gz", compression=True)
parameter_map = result_transform_parameters.GetParameterMap(0)
transform_parameters = parameter_map["TransformParameters"]
direction = np.array(transform_parameters[: ndim * ndim], dtype=float).reshape((ndim, ndim))
translation = np.array(transform_parameters[-ndim:], dtype=float).tolist()
center_of_rotation = np.array(parameter_map["CenterOfRotationPoint"], dtype=float).tolist()
print(transform_parameters)
print(direction)
print(translation)
# Apply affine transform with MONAI
moving_image_mt = monai.data.itk_image_to_metatensor(moving_image)
fixed_image_mt = monai.data.itk_image_to_metatensor(fixed_image)
monai_affine_transform = monai.data.itk_to_monai_affine(
image=moving_image,
matrix=direction,
translation=translation,
center_of_rotation=center_of_rotation,
reference_image=fixed_image,
)
monai_transform = monai.transforms.Affine(affine=monai_affine_transform, padding_mode="zeros", dtype=torch.float64)
output_metatensor, _ = monai_transform(moving_image_mt, mode="bilinear")
output_metatensor.affine = fixed_image_mt.affine # as the resampled image is finally in the reference image space
# save the output
# option 1
filename = "itk_bridge.nii.gz"
print(f"itk writing {filename}")
itk_obj = monai.data.metatensor_to_itk_image(output_metatensor)
itk.imwrite(itk_obj, filename)
# option 2
monai_saver = monai.transforms.SaveImage(resample=False, writer="ITKWriter")
monai_saver.set_options(init_kwargs={"affine_lps_to_ras": False})
monai_saver(output_metatensor) |
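The conversion behind monai.data.itk_to_monai_affine can be sketched in plain NumPy. This is a hand-rolled illustration of the underlying math under simplifying assumptions, not the actual MONAI implementation (which also deals with LPS/RAS conventions and metadata): an ITK affine transform acts in physical space as T(x) = M(x - c) + t + c, and conjugating it with the image's index-to-physical matrix yields a voxel-space affine.

```python
import numpy as np

def itk_style_to_voxel_affine(matrix, translation, center, direction, spacing, origin):
    """Fold an ITK-style physical-space transform T(x) = M @ (x - c) + t + c
    into a single 4x4 voxel-space affine A_img^-1 @ T @ A_img,
    where A_img maps voxel indices to physical coordinates."""
    ndim = 3
    # 4x4 homogeneous form of the physical-space transform
    T = np.eye(ndim + 1)
    T[:ndim, :ndim] = matrix
    T[:ndim, ndim] = translation + center - matrix @ center
    # index -> physical mapping of the image grid
    A = np.eye(ndim + 1)
    A[:ndim, :ndim] = direction @ np.diag(spacing)
    A[:ndim, ndim] = origin
    return np.linalg.inv(A) @ T @ A

# toy example: 2x scaling about a center, on an axis-aligned unit grid
affine = itk_style_to_voxel_affine(
    matrix=np.diag([2.0, 2.0, 2.0]),
    translation=np.zeros(3),
    center=np.array([10.0, 0.0, 0.0]),
    direction=np.eye(3),
    spacing=np.array([1.0, 1.0, 1.0]),
    origin=np.zeros(3),
)
```

With an identity direction, unit spacing, and zero origin, the voxel-space affine equals the physical-space transform, which makes for an easy sanity check: here the center of rotation folds into an effective translation of t + c - M c = (-10, 0, 0).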
Thanks for that insight, I was not aware. To me it seemed that this is an ITK-specific thing. Coming from the other side (ML + Python development), ZYX ordering felt like the norm, also going by the Wikipedia definition for general 3D rotations, but I'm not deep into this, so this might just have been my wrong first impression. I did a quick search into some Python packages providing this or similar functionality to see what they are doing:
You are essentially saying "it would be good if the field could agree on one way to do it" and I agree, that would be best: less for end-users to worry about. |
@Spenhouet - great points! Sorry if my message implied otherwise, but I fully support the flexibility this pull request provides, and I am glad that @Wenqi is propagating it. Very important and helpful work! Also, the code @Wenqi cited (from you and others) is outstanding - using a matrix for communication between platforms instead of angles, to avoid ambiguity - sorry for not seeing it sooner - that seems like the right approach! I see now that you were asking about how to change the order of angles in Elastix, is that right? Regretfully, I don't use elastix, so I'm not the best one to answer that. |
elastix uses ITK under the hood, also for the EulerTransform. You can set the ComputeZYX parameter. See also the code: |
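For illustration, what that flag toggles can be shown in plain NumPy (a sketch of the two angle compositions, not ITK's actual implementation): with ComputeZYX enabled the rotation is composed as Rz · Ry · Rx, while the Euler3DTransform default composes Rz · Rx · Ry, and the two generally differ for the same angles.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

ax, ay, az = 0.1, 0.2, 0.3  # arbitrary Euler angles in radians

R_zyx = rot_z(az) @ rot_y(ay) @ rot_x(ax)  # ComputeZYX = true
R_zxy = rot_z(az) @ rot_x(ax) @ rot_y(ay)  # Euler3DTransform default

# Same angles, different rotation matrices, so resampled images differ too
different = not np.allclose(R_zyx, R_zxy)
```

Both results are valid rotation matrices; only the convention for interpreting the three angles changes, which is why exchanging raw angles between packages is ambiguous while exchanging matrices is not.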
I now see that this parameter is not documented in the header file. @ntatsisk could you add this to |
Ah, nice. Seems straightforward, but I cannot make it work right now. The rotations are still in ZXY order. The parameter object is set like this?
parameter_object = itk.ParameterObject.New()
default_rigid_parameter_map = parameter_object.GetDefaultParameterMap('rigid')
default_rigid_parameter_map['ComputeZYX'] = ('true',)
parameter_object.AddParameterMap(default_rigid_parameter_map)
result_image, result_transform_parameters = itk.elastix_registration_method(
fixed_image, moving_image,
parameter_object=parameter_object)
That is not my code, but my code should effectively be identical, and I checked the output for my |
Allows for conversion of the affine matrix between ITK and MONAI. From the limited tests done, it handles arbitrary affine matrices, origins, centers of rotation, and direction cosines. Currently it only works for 3D, though. Related to InsightSoftwareConsortium/ITKElastix#126, and a quick test showed it also addresses InsightSoftwareConsortium/ITKElastix#145.
I think it is a good start, but I understand that it needs more work to tidy it up, make it more generic, and cover it with tests. I am looking forward to your feedback/ideas @thewtex @HastingsGreer. What is the best way to move forward here?
cc. @mstaring, @N-Dekker
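As a starting point for the test coverage mentioned above, a matrix-level round-trip check can be sketched in NumPy (hedged: real tests would also go through ITK and MONAI resampling, and all names here are illustrative): draw random grid geometry and transform parameters, convert the ITK-style physical-space transform to a voxel-space affine, and check that it moves a voxel index to the same place as going index → physical → transform → index.

```python
import numpy as np

rng = np.random.default_rng(0)

# random but well-conditioned grid geometry and transform parameters
direction, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthonormal direction cosines
spacing = rng.uniform(0.5, 2.0, size=3)
origin = rng.normal(size=3)
matrix = np.eye(3) + 0.1 * rng.normal(size=(3, 3))    # near-identity affine matrix
translation = rng.normal(size=3)
center = rng.normal(size=3)

# candidate bridge output: voxel-space affine = A^-1 @ T @ A
A = np.eye(4)
A[:3, :3] = direction @ np.diag(spacing)
A[:3, 3] = origin
T = np.eye(4)
T[:3, :3] = matrix
T[:3, 3] = translation + center - matrix @ center     # fold center of rotation in
voxel_affine = np.linalg.inv(A) @ T @ A

# reference path: index -> physical point -> ITK-style transform -> index
idx = np.array([3.0, 4.0, 5.0])
physical = direction @ np.diag(spacing) @ idx + origin
moved = matrix @ (physical - center) + translation + center
expected_idx = np.linalg.solve(direction @ np.diag(spacing), moved - origin)

result_idx = (voxel_affine @ np.append(idx, 1.0))[:3]
```

Running this over many random draws (and over degenerate cases such as anisotropic spacing or flipped direction cosines) would exercise exactly the properties the description claims: arbitrary affine matrix, origin, center of rotation, and direction cosines.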