Volume-based QC check after segmentation #201
Conversation
Allows importing check_volume within run_prediction.py in FastSurferCNN/.
If the QC check fails for a single subject: exit with an error. If it fails for a batch of subjects: report the number of failures.
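A minimal sketch of that intended behavior, assuming check_volume is imported from quick_qc.py and returns a boolean (the exact signature, import path, and the get_segmentation helper are assumptions, not taken from the PR):

```python
import logging
import sys

# Assumed import path; the PR only states that check_volume from quick_qc.py
# becomes importable within run_prediction.py.
from FastSurferCNN.quick_qc import check_volume

logger = logging.getLogger(__name__)


def run_volume_qc(subjects, get_segmentation):
    """Hypothetical wrapper: `get_segmentation` returns (seg_array, voxel_size)."""
    failed = []
    for subject in subjects:
        seg, vox_size = get_segmentation(subject)
        if not check_volume(seg, vox_size):  # boolean return is an assumption
            logger.warning("Subject %s failed the volume-based QC check.", subject)
            failed.append(subject)

    if len(subjects) == 1 and failed:
        # Single subject: exit with an error so run_fastsurfer.sh stops before recon-surf.
        sys.exit(1)
    if failed:
        # Batch run: only report the number of failures at the end.
        logger.info("%d of %d subjects failed the QC check.", len(failed), len(subjects))
    return failed
```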
I discussed with @LeHenschel that we should probably create directories for the outputs and write the logfile there.
We could:
In either case, the sub-directory could be something other than …
If I understood it correctly, running a single input stops the process anyway, and there is no output written if the QC check does not pass, right? Do we need a separate log-file in this case? We could add the failure message to the log-file that is generated anyway if the user specifies it, or print it to stdout. For multi-subject runs, we have the scripts directory anyhow and can either add a log-file there or again add the output to deep-seg.log (because the segmentation failed = related to the network/input data)?
For this I would honestly create a log-file in a user-specified location and just return the list with all failed subjects. This seems more useful than having a qc.txt in each failed case which just contains "failed"? Unless there is some more subject-specific information explaining what failed.
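A rough sketch of that variant, with a hypothetical, user-specified qc_log_path and the failed-subject list as the return value:

```python
from pathlib import Path


def report_failed_subjects(failed, qc_log_path=None):
    """Write the failed subjects to a user-specified log file and return the list.

    `qc_log_path` is a hypothetical optional argument; without it, the failures
    are only returned to the caller.
    """
    if failed and qc_log_path is not None:
        log_file = Path(qc_log_path)
        log_file.parent.mkdir(parents=True, exist_ok=True)
        log_file.write_text("\n".join(failed) + "\n")
    return failed
```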
Implementation looks ok to me. Do you have test cases somewhere to run the code on?
I think the multi-processing might benefit from writing the failed cases to a file directly instead of just appending them to a list. Otherwise the list will be lost if the run_prediction.py script fails on a subject (for reasons other than the volume check).
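A sketch of that suggestion: append each failure to a file immediately and flush, so earlier entries survive even if run_prediction.py later crashes on another subject (the file name is illustrative):

```python
def record_failure(subject_id, qc_failed_file="qc_failed_subjects.txt"):
    """Append a failed subject to the list file as soon as the check fails.

    Append mode plus an explicit flush keeps earlier failures on disk even if
    the script dies later for reasons unrelated to the volume check.
    """
    with open(qc_failed_file, "a") as f:
        f.write(subject_id + "\n")
        f.flush()
```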
At the moment, for a single input, if the segmentation fails the QC check, the process is not stopped before saving outputs; it just returns an error code once it's done and logs a warning message.
I think the idea is that
There are some test cases in
Noted. It would be better to write directly to a file for a subject-list-style run.

Ok, so the current suggestion is:
Implemented as described above, with a single, optional failed-subject-list file in the batch-processing case.
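One way such an optional flag could be wired up with argparse; the flag name --qc_log is illustrative and not necessarily what the PR uses:

```python
import argparse


def make_parser():
    parser = argparse.ArgumentParser(
        description="FastSurferCNN inference with a volume-based QC check"
    )
    # Hypothetical option: where to write the list of subjects that failed the
    # QC check in a batch run. If omitted, failures are only counted and logged.
    parser.add_argument(
        "--qc_log",
        type=str,
        default=None,
        help="Optional file listing all subjects that failed the QC check.",
    )
    return parser
```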
Description
This adds the existing QC check from quick_qc.py in run_prediction.py to evaluate segmentations, particularly to halt processing of cases that fail the test.

If a single subject is processed, the inference script exits with an error if the segmentation fails the QC check. This is propagated to run_fastsurfer.sh, which will also fail before running recon-surf (unless surf_only).

If multiple subjects are processed, only a warning message is logged for every failing subject, in addition to a count of failures at the end. The list of failed subjects is maintained. This can be used to write a qc.txt for every subject, indicating that this case failed this QC check (and possibly others to be added in the future), or alternatively a list of all failed cases.

An open question is where qc.txt could be located. (@m-reuter)
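The two options mentioned above could look roughly like this; paths and file names are illustrative, and where qc.txt should live is exactly the open question:

```python
from pathlib import Path


def write_per_subject_qc(failed, output_dir):
    """Option 1: drop a qc.txt into each failed subject's output directory."""
    for subject in failed:
        qc_file = Path(output_dir) / subject / "qc.txt"  # location still to be decided
        qc_file.parent.mkdir(parents=True, exist_ok=True)
        qc_file.write_text("Volume-based QC check failed.\n")


def write_failed_list(failed, list_path):
    """Option 2: one file listing all failed cases."""
    Path(list_path).write_text("\n".join(failed) + "\n")
```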