Failed resampling of BOLD into MNI space - starting with v20.1.1 #2307
Comments
Hi @eburkevt, your report does look good overall (there are little things one could pick up on, but they definitely don't explain the spatial normalization issue you are seeing). To debug this, I think there are a couple of useful things to do on your end:
|
I'm seeing this issue also with all the spatially normalized outputs for the functional images. I'll rerun with … Thanks for your help. |
Hi @oesteban, I retained the former FreeSurfer and work directory contents to shorten the processing time a bit, so I'm not sure whether that would have had any effect. |
This should be fine. Can I ask you to share the full report with us? I want to check whether it contains any clue about what is going on underneath. |
I shared the full report on my Google Drive. I just preprocessed some recently acquired fMRI data (a multiband image sequence) with fMRIPrep v20.2.0 LTS. This fMRI data has isotropic 2.1 x 2.1 x 2.1 mm voxels (the problem dataset from UCLA has 3 x 3 x 4 mm voxels). The spatial normalization here looks fine ... is it possible that fMRI data with non-isotropic voxels are the issue? |
@eburkevt thanks for the updated reports. I think we can say normalization is pretty good for both MNI152NLin6Asym and MNI152NLin2009cAsym based on the reports. That means the problem is in the resampling to MNI space. The fact that you don't experience any problems with another dataset also indicates there's something particular about the orientation headers of the failing dataset. Anisotropy of voxels should not play any role here (and fMRIPrep processes anisotropic datasets without issues). Could you share one subject of the failing dataset for me to debug more thoroughly? |
If we could get the headers of a good and bad file from the same study, that would simplify diagnosis. |
I attached the first Nifti rsfMRI volume (gzip) from subject 10316 from the UCLA fMRI dataset in an e-mail.
Here's an example of the s/q-form header info (please see e-mail reply)
![image](https://user-images.githubusercontent.com/5192968/97096101-59c39600-1635-11eb-8ace-4e0109b439ce.png)
Thanks,
Chris
|
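As an aside for readers following along: the same s/q-form information can be printed locally with a short nibabel sketch like the one below (illustrative only, not the exact commands used in this thread; it assumes nibabel is installed and takes NIfTI filenames as command-line arguments):

```python
#!/usr/bin/env python
"""Dump voxel sizes and s/q-form affines of NIfTI files passed on the command line."""
import sys

import nibabel as nb

for fname in sys.argv[1:]:
    hdr = nb.load(fname).header
    sform, scode = hdr.get_sform(coded=True)  # affine is None if the code is 0
    qform, qcode = hdr.get_qform(coded=True)
    print(f"=== {fname} ===")
    print("zooms (mm):", hdr.get_zooms()[:3])
    print(f"sform (code {scode}):\n{sform}")
    print(f"qform (code {qcode}):\n{qform}")
```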
Hi @eburkevt, this is ds030, so it is available on OpenNeuro. That's great news, as we have tons of data for testing. |
Hi @oesteban, that's good news. I was running some tests using fMRIPrep v20.2.0 LTS and did find that if I turned off FreeSurfer reconstruction (--fs-no-reconall), the MNI-normalized images for the subject above (sub-10316 from ds030) appear to be okay. I don't know if this is diagnostic or not, but thought I would pass it along. Here was my fMRIPrep call: singularity run --cleanenv ${FMRIPREP}/fmriprep-20.2.0.lts.simg … |
Was this issue fixed in the latest release of fMRIPrep (version 20.2.1, released November 6, 2020)? |
No, sorry, we haven't gotten to this yet. I'm not sure about anybody else's schedule, but I for one won't have a chance to dig into this until the new year. |
Okay, thanks for the update.
Is there a way to tell if an fMRI dataset will be affected by this bug other than by visual examination of the MNI normalization? For example, is there anything in the qform/sform headers that is diagnostic? The latest versions of fMRIPrep seem to work okay with most of our datasets, and only the UCLA and COBRE data sets seem to be affected so far.
|
This dataset is added to lay the groundwork for testing the new reference calculation workflow (#601). Additionally, I will upload derivatives from fMRIPrep with the objective of setting up some regression tests for nipreps/fmriprep#2307. Related: #601, nipreps/fmriprep#2307.
Hi @eburkevt, I am currently using the same UCLA dataset as part of my bachelor's thesis, and I'm experiencing a very similar problem with the normalization of BOLD into MNI space... fMRIPrep runs with no errors, but the preprocessed BOLD image looks highly distorted (see attachment). I'm wondering if you found a solution to the problem. |
I do not have a solution to this problem. I would suggest using an older version of fMRIPrep (20.0.7 or earlier) to preprocess the UCLA (and similar) datasets. |
Thanks for the posts, all. Just a note that I'm collecting bugs to prepare for a new push, and I hope to diagnose this one in the next week or so. Please feel free to continue adding any information you think would be helpful. |
The structarray was used directly, and the extra axis actually changed the order of operations when the direction cosines were scaled by the voxel sizes. A new test case has been added for which this error was apparent. This bug caused nipreps/fmriprep#2307, nipreps/fmriprep#2393, and nipreps/fmriprep#2410. nipreps/fmriprep#2444 just works around the problem by using ``lta_convert`` instead of *NiTransforms*. The ``lta_convert`` tool can now be dropped. Resolves: #125
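The broadcasting pitfall that the commit message describes can be illustrated with a small NumPy sketch (the rotation and voxel sizes are invented, and this is not the actual NiTransforms code; it only shows how an unexpected extra axis on the zooms changes the result of scaling direction cosines when voxels are anisotropic):

```python
import numpy as np

# Orthonormal direction cosines: a 10-degree rotation about the x axis.
theta = np.radians(10)
cosines = np.array([
    [1.0, 0.0, 0.0],
    [0.0, np.cos(theta), -np.sin(theta)],
    [0.0, np.sin(theta),  np.cos(theta)],
])
zooms = np.array([3.0, 3.0, 4.0])  # anisotropic voxel sizes (mm)

good = cosines * zooms           # scales column j by zooms[j] (intended)
bad = cosines * zooms[:, None]   # extra axis: scales row i instead

print(np.max(np.abs(good - bad)))  # ~0.17 here; would be 0 for isotropic zooms
```

For isotropic voxels, or for purely axis-aligned direction cosines, the two expressions coincide, which is consistent with only some datasets in this thread being affected.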
Okay, so the specific error condition is anisotropic voxels, but there's a class of images with anisotropic voxels that would not be affected:
Edit: That wasn't entirely true. Adding a new comment to ensure it gets seen. |
Here's a simple script to test:

```python
#!/usr/bin/env python
import sys
import numpy as np
import nibabel as nb


def test(img):
    zooms = np.array([img.header.get_zooms()[:3]])
    A = img.affine[:3, :3]
    cosines = A / zooms
    diff = A - cosines * zooms.T
    return not np.allclose(diff, 0), np.max(np.abs(diff))


if __name__ == "__main__":
    for fname in sys.argv[1:]:
        affected, severity = test(nb.load(fname))
        if affected:
            print(f"{fname} is affected ({severity=})")
```

Anything below 1e-7 seems safe. Up to 1e-2 seems likely to be fine, but I'd check. Above that, I would not be surprised to see visible artifacts, but again would check. |
For reference, the image Oscar was testing with had severity ~0.728. |
I ran with 20.2.0 and recently saw this warning. I have not seen anything like the errors shown above when spot-checking a few subjects' outputs in MNI space and in an MNI-space pediatric template, and I also ran the script on all my EPI inputs, giving all … |
Correct, that should be fine. Though I'd be interested in a file that showed both affected and a severity of 0. Could you share the first kilobyte of such a file? |
Hi, I'm having issues running the script above. First, I'm getting an invalid syntax error on the last print() call. Is the script as posted above correct? Also (and apologies for being simplistic), should we simply run the script as is, or change the (img) to the preproc_bold.nii file? |
The script should work on Python 3.6 and higher. The syntax error is probably due to the f-string. It's intended to be run with |
Had the same issue earlier (servers running python 3.6). Apparently the |
Ah, thanks. You can remove the =, it just won't show the word severity. |
Are you simply after a file with a severity of 0.0? If so, I have examples of that from publicly accessible data. Or have I misunderstood? |
Hello, I'm still unable to run the script (using Python 3.8.5). Could someone please be explicit about which file(s) need to be checked with this approach and the syntax used to run the script? I'm a newbie with Python. |
@effigies I don't have an affected file with a severity of 0.0, sorry if what I posted was unclear! I was trying to check for any way my dataset might have been affected despite not seeing any of the artifacts above. And @CFGEIER, I changed the last 4 lines of the script to the below, which will print the severity for each file and should work with Python versions earlier than 3.8, and saved it as …
Loop:
|
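Based on that description (print the severity for every file, and avoid the `{severity=}` syntax that requires Python 3.8), the replacement for the last four lines of the script would look roughly like this sketch (a reconstruction, not the exact code that was posted):

```python
if __name__ == "__main__":
    for fname in sys.argv[1:]:
        affected, severity = test(nb.load(fname))
        # Print the severity for every file and avoid f-string "=" (Python 3.8+ only).
        print("{}: affected={}, severity={}".format(fname, affected, severity))
```

Saved as a script (the name below is a placeholder), the loop would then just run it over the BOLD files, e.g. `python check_affines.py sub-*/func/*_bold.nii.gz`.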
@utooley - thank you so much! Got it working now :) |
I have multiple images from ABIDE studies that seem to be affected. I understand that I should use 20.2.3 and rerun my analysis, but it's not clear to me if I can use anything from the previous work directory (run with fMRIPrep 20.2.1). |
In principle it should be safe to reuse the working directory because the nipype graph has changed.
|
Generally, reusing within the same minor version should be okay |
You should be able to reuse working directories within a minor release series. |
Hi, we processed a few commonly used datasets, and after we saw the warning for version 20.2.0, I went back to check how many subjects were affected. So, I am posting the information below as it might be useful for others. All the information is for resting state fMRI only.
For the datasets that were affected, these are some statistics for their severity:
- UK Biobank:
- HCP:
- COBRE:
- fBIRN II:
I hope it helps! |
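For anyone who wants to produce similar per-dataset numbers, here is a rough sketch that aggregates severities across a set of BOLD runs (the BIDS root and glob pattern are placeholders; the severity computation repeats the check from the script earlier in the thread, and the 1e-7 threshold is the "seems safe" level quoted above):

```python
from pathlib import Path

import numpy as np
import nibabel as nb


def severity(img):
    # Same quantity reported by the check script earlier in the thread.
    zooms = np.array([img.header.get_zooms()[:3]])
    A = img.affine[:3, :3]
    return np.max(np.abs(A - (A / zooms) * zooms.T))


bids_root = Path("/data/my_dataset")  # placeholder path
severities = {
    str(p): severity(nb.load(p))
    for p in sorted(bids_root.glob("sub-*/func/*_bold.nii.gz"))
}

affected = {f: s for f, s in severities.items() if s > 1e-7}
print(f"checked {len(severities)} runs, {len(affected)} above threshold")
if affected:
    vals = np.array(sorted(affected.values()))
    print(f"severity min={vals.min():.3g} median={np.median(vals):.3g} max={vals.max():.3g}")
```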
We are getting poor MNI normalizations (see images below) using fMRIPrep versions 20.1.1 and 20.2.0 LTS. We had good normalizations with version 20.0.6. I have not tested any other versions.
What version of fMRIPrep are you using?
versions 20.2.0 LTS and 20.1.1
The top image is an example of the expected normalization from SPM12, the middle image is the normalization obtained using fMRIPrep v20.1.1 (similar issues with v20.2.0), and the bottom image is the normalization obtained using fMRIPrep v20.0.6 (no issues).
What kind of installation are you using? Containers (Singularity, Docker), or "bare-metal"?
Singularity
What is the exact command-line you used?
Have you checked that your inputs are BIDS valid?
Yes, using bids-validator
Did fMRIPrep generate the visual report for this particular subject? If yes, could you share it?
Shared with nipreps@gmail.com on my Google Drive under 'fMRIPrep'
Can you find some traces of the error reported in the visual report (at the bottom) or in crashfiles?
fMRIPrep finished without any errors but produced a poor MNI normalization
Are you reusing previously computed results (e.g., FreeSurfer, Anatomical derivatives, work directory of previous run)?
In the last run (using v20.2.0) I used prior FreeSurfer and work output files, but in past runs I cleared both the FreeSurfer derivatives and work directories before running.
fMRIPrep log
If you have access to the output logged by fMRIPrep, please make sure to attach it as a text file to this issue.
fmriprep.o1057033.txt