antsAI unexpected behavior #1529
Comments
A few comments as to what might be causing issues:
From the docs:
You're only sampling every 90 degrees of rotation around the three axes; this is way too coarse. Your convergence […]
Agreed, -s 90 will search rotations of -180, 0, +90 only. Reduce this to […]. I usually use very few iterations and a lot of start points; try […]. We normally downsample brain images before doing this, but given that this data is not very large and less detailed than a brain, I think it's fine to run at full resolution. You could maybe downsample both by a factor of 2 with […]. You can also try […]. In your antsRegistration command, you probably don't want to go down to a shrink factor of 8, or use so much smoothing. Internally, it won't downsample by that much anyway, because there would be hardly any voxels left.
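The effect of the search step is easy to see with a toy calculation (plain numpy, not ANTs code, and a 1-D caricature of the 3-D rotation search): with a 90-degree step, the closest sampled rotation to the 60-degree rotation in this issue is 30 degrees off, while a 15-degree step lands on it exactly. The function name and step values here are illustrative, not part of the antsAI interface.

```python
import numpy as np

def nearest_grid_error(true_angle_deg, step_deg, lo=-180, hi=180):
    """Angular distance from true_angle_deg to the closest angle on the
    search grid lo, lo+step, ..., hi (a 1-D sketch of a rotation search)."""
    grid = np.arange(lo, hi + 1, step_deg)
    return np.min(np.abs(grid - true_angle_deg))

print(nearest_grid_error(60, 90))  # 30: far from the true rotation
print(nearest_grid_error(60, 15))  # 0: the local optimizer starts at the answer
```

With a 30-degree starting error, the subsequent local optimization can easily fall into the wrong basin of attraction, which matches the "weird transformation" reported here.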
Valid points; I will try them. I also wonder: what is the difference between antsAI and antsAffineInitializer?
The transforms are defined in physical space. The voxel-to-physical-space transform of the input image is in the image header, so downsampling changes how a voxel index is mapped to physical space, but the same point in physical space will be transformed the same way. Example: […]
The downsampled image should overlap in physical space with the original image, and so should the deformed images. So you can use your downsampled images in antsAI for speed; if it's running acceptably fast, you can skip this. You can also smooth the images to improve robustness, either before downsampling or as a standalone step.
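The point above can be sketched with plain numpy (this is not ANTs code; the 0.5 mm spacing and voxel indices are made up for illustration): downsampling changes the voxel-to-physical affine stored in the header, but corresponding voxels in the two images still map to the same physical point, so a transform defined in physical space applies to both.

```python
import numpy as np

# Voxel-to-physical affine of a hypothetical image with 0.5 mm isotropic
# spacing (the kind of matrix stored in a NIfTI header).
spacing = 0.5
A_full = np.diag([spacing, spacing, spacing, 1.0])

# Downsampling by a factor of 2 doubles the spacing in the header.
A_down = np.diag([2 * spacing, 2 * spacing, 2 * spacing, 1.0])

# Voxel (10, 10, 10) in the full-res image and voxel (5, 5, 5) in the
# downsampled image map to the same physical point:
p_full = A_full @ np.array([10, 10, 10, 1.0])
p_down = A_down @ np.array([5, 5, 5, 1.0])
assert np.allclose(p_full, p_down)  # both are (5 mm, 5 mm, 5 mm)
```

This is why a transform estimated on downsampled images can be applied directly to the full-resolution images.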
Exactly. It was originally an attempt to make the interface a bit nicer, in part by using the ants command line options.
I find […]
Sure. Fine with me. |
So it works!? I will reproduce it tomorrow with these parameters. An off-topic, general question: I see ANTs is capable of many things, including registration of images of different modalities. But I have come across numerous deep-learning solutions for registration. Why are they developed, and what specific tasks do they solve that classic algorithms like ANTs can't?
I don't think this format lends itself to answering this type of question with sufficient clarity, especially since it involves ongoing research. I would recommend looking at the various review articles that have been written on the topic. If I were forced to answer, I would respond briefly with:
Why are they developed? Because of the significant potential for speed-up and/or accuracy.
What tasks can't classic algorithms solve? Off the top of my head, not many. However, this case, where there are significant angular differences, is a definite possibility.
I've reproduced it, and it works like a charm! May I clarify a bit more to build on this success? Ultimately I need something like a templateCommand.. script to make an average template from a sample collection, but currently it is too complicated for me and doesn't run smoothly, so I will file another issue later to resolve those errors.
My understanding is that it's complicated to extract rotation from a general affine matrix. If there's no shear component it gets easier, because SVD can represent the rotation and scaling in its component matrices. The ANTs template scripts are not designed to handle large rotations between the input images; such variations, if they exist, would need to be dealt with in preprocessing. Even if the template could be constructed from a collection of randomly oriented images, knowing the average rotation of an image with respect to some coordinate frame doesn't seem that useful: you'd still need to test a wide range of initializations to do the pairwise registrations.
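The SVD remark above can be sketched in a few lines of numpy (illustrative only, not the implementation referenced in the next comment): U @ Vt from the SVD of the 3x3 affine block is the polar-decomposition rotation, which recovers the rotation exactly when the matrix is rotation times scaling, and otherwise gives the nearest rotation in the least-squares sense.

```python
import numpy as np

def extract_rotation(affine3x3):
    """Nearest rotation matrix to a 3x3 linear map, via SVD (polar
    decomposition). Exact when affine3x3 = rotation @ scaling (no shear)."""
    U, _, Vt = np.linalg.svd(affine3x3)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # guard against a reflection
        U[:, -1] *= -1
        R = U @ Vt
    return R

# A 60-degree rotation about z, composed with anisotropic scaling:
t = np.radians(60)
Rz = np.array([[np.cos(t), -np.sin(t), 0],
               [np.sin(t),  np.cos(t), 0],
               [0,          0,         1]])
A = Rz @ np.diag([2.0, 1.0, 0.5])

R = extract_rotation(A)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])))  # 60.0 (up to float rounding)
```

With shear present the result is only an approximation, which is one reason this extraction is "complicated" for a general affine.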
Yes, see my implementation (with references) used for another application.
Dear fellows, some remarks on my last comments.
I'm glad it worked. You can open new issues if there are problems with the tools specifically, or discussions for more general topics.
Describe the problem
Greetings! I'm trying to apply ANTs for pairwise registration. My objects are 3D NMR images of plant seeds; they can have different orientations, and I understand that antsRegistration is not designed to handle drastic affine differences.
So I tried antsAI to produce a rough affine transformation, but it seems it doesn't work as expected.
I created a test pair: the second object is rotated 60 degrees about one of the 3D axes.
antsRegistration manages to find a rigid transformation (for larger rotations it doesn't), but antsAI gives a weird transformation.
When I apply antsApplyTransforms, the rotation is wrong. (It is fine with the matrix from antsRegistration when the rotation is small.)
I am not sure how to debug and what I am missing.
Please advise. I spent a lot of time combing through the documentation, but I am still a beginner with complex registration techniques and terminology, so I tried to follow the existing examples scattered across discussions and manuals.
To Reproduce
There are two files in the attachment nmr.zip: the source file and the rotated file (I used monai.transforms.Rotate in Python to do the rotation). Two commands to produce the transformation matrices: […]
The resulting matrix Rigid_antsAI.mat seems to be wrong, but antsReg0GenericAffine.mat looks fine.
And they are quite different according to antsTransformInfo.
When I apply the transformations, only the object registered with antsReg0GenericAffine.mat rotates back correctly: […]
System information (please complete the following information)
ANTs version information
ANTs code version: ANTs Version: v2.4.3
ANTs installation type: ants-2.4.3-centos7-X64-gcc.zip downloaded from https://github.com/ANTsX/ANTs/releases
Additional information