Consider "EPImean": using the mean across all tasks/runs to do EPI -> T1 coregistrations #253
Does this still seem like a good idea, and, just to be clear, the mechanics of this would be to move to an (EPI -> mean EPI) FLIRT alignment + (mean EPI -> T1w)?
Correct.
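To make the mechanics concrete, here is a minimal sketch of that flow using FSL command-line tools (FLIRT is the only tool named in the thread; fslmerge, fslmaths, and convert_xfm, the rigid 6-DOF choice, the file names, and the `run` helper are assumptions for illustration, not fMRIPrep's actual implementation).

```python
# Hypothetical sketch of the proposed "EPImean" coregistration flow.
# Paths, filenames, and parameter choices are illustrative only.
import subprocess
from pathlib import Path

def run(cmd):
    """Run an external FSL command, raising on failure."""
    subprocess.run(cmd, check=True)

# One single-volume reference image per BOLD run (assumed to already exist).
run_refs = sorted(Path("work").glob("sub-01_task-*_boldref.nii.gz"))
t1w = Path("work/sub-01_T1w.nii.gz")

# 1. Build the mean EPI across all runs/tasks.
run(["fslmerge", "-t", "work/all_boldrefs", *map(str, run_refs)])
run(["fslmaths", "work/all_boldrefs", "-Tmean", "work/epi_mean"])

# 2. Rigidly register each run's reference to the mean EPI (EPI -> mean EPI).
for ref in run_refs:
    stem = ref.name.replace(".nii.gz", "")
    run(["flirt", "-in", str(ref), "-ref", "work/epi_mean",
         "-dof", "6", "-omat", f"work/{stem}_to_mean.mat"])

# 3. Register the mean EPI to the T1w once (mean EPI -> T1w).
run(["flirt", "-in", "work/epi_mean", "-ref", str(t1w),
     "-dof", "6", "-omat", "work/mean_to_T1w.mat"])

# 4. Compose the two affines so each run still ends up with a single
#    EPI -> T1w transform (convert_xfm applies the -concat matrix second).
for ref in run_refs:
    stem = ref.name.replace(".nii.gz", "")
    run(["convert_xfm", "-omat", f"work/{stem}_to_T1w.mat",
         "-concat", "work/mean_to_T1w.mat", f"work/{stem}_to_mean.mat"])
```

The point of the composition step is that downstream consumers still see one transform per run, so only the way that transform is estimated changes.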
Thinking about what's involved in this:
This will involve a fairly major refactor from the current, isolated processing of each BOLD series. One problem with the EPImean approach is that you can get different results depending on whether you analyze all tasks or just a subset in the same process. However, if we took an EPInorm approach, then there's a common space to which all EPIs are registered that doesn't depend on which subset of tasks has been specified. WDYT?
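For contrast, a short sketch of the EPInorm variant under the same assumptions as the snippet above (reusing its `run` helper and `run_refs` list): the only change is that the registration target is a fixed EPI template rather than a subject- and subset-dependent mean, so the per-run transforms are identical whether one task or all of them are processed. The template path is a placeholder.

```python
# Hypothetical EPInorm variant: register every run reference to a fixed
# EPI template instead of the subject's mean EPI. Because the target does
# not depend on which runs are in the current job, the resulting matrices
# do not change when runs are added or dropped.
EPI_TEMPLATE = "epi_template.nii.gz"  # placeholder path to a common EPI template

for ref in run_refs:
    stem = ref.name.replace(".nii.gz", "")
    run(["flirt", "-in", str(ref), "-ref", EPI_TEMPLATE,
         "-omat", f"work/{stem}_to_epitpl.mat"])
```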
+1 to the EPInorm approach. And we have the paper to support the decision. I would leave that for the future though, beyond 1.0.0.
Opened an issue for EPInorm, if that's the route we decide to go.
This would help with cases such as rewardBeast sub-317, where half of the runs have bad distortion and signal dropout but the other half are fine. Such cases are rare, though.