
Volume-based QC check after segmentation #201

Merged · 9 commits · Nov 21, 2022

Conversation

af-a
Contributor

@af-a af-a commented Oct 24, 2022

Description

This adds the existing QC check from quick_qc.py to run_prediction.py to evaluate segmentations, in particular to halt processing of cases that fail the check.
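For reference, the spirit of such a volume-based plausibility check can be sketched as follows. This is a hypothetical re-implementation, not the actual check_volume from quick_qc.py; the signature and the threshold value are assumptions:

```python
import numpy as np

def check_volume(seg_data, voxel_vol_mm3, min_volume_ml=500.0):
    """Return True if the total labeled volume exceeds a plausibility
    threshold (hypothetical sketch; threshold value is an assumption).

    seg_data      : label map as a NumPy array, 0 = background
    voxel_vol_mm3 : volume of one voxel in cubic millimeters
    min_volume_ml : minimum plausible segmented volume in milliliters
    """
    total_ml = np.count_nonzero(seg_data) * voxel_vol_mm3 / 1000.0
    return total_ml >= min_volume_ml
```

A segmentation that is mostly empty (e.g., a failed network output) falls far below any plausible brain volume, so a simple total-volume threshold already catches gross failures.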

If a single subject is processed, the inference script exits with an error if the segmentation fails the QC check. This is propagated to run_fastsurfer.sh, which will then also fail before running recon-surf (unless surf_only is set).

If multiple subjects are processed, only a warning message is logged for every failing subject, in addition to a count of failures at the end.

The list of failed subjects is maintained. This can be used to write a qc.txt for every subject, indicating that this case failed this QC check (and possibly others to be added in the future), or alternatively a list of all failed cases.

An open question is where qc.txt could be located.

(@m-reuter)

Allows importing check_volume within run_prediction.py in FastSurferCNN/.
If QC check fails for a single subject: exit with error.
If it fails for a batch of subjects: report the number of failures.
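The single- vs. multi-subject behaviour described above could be sketched roughly like this (function and parameter names are illustrative, not the actual run_prediction.py code; qc_passed stands in for the volume check):

```python
import sys

def run_qc(subjects, qc_passed):
    """Hypothetical sketch of the QC control flow.

    subjects  : list of subject IDs
    qc_passed : callable returning True if a subject's segmentation
                passes the volume check (stand-in for check_volume)
    """
    failed = [s for s in subjects if not qc_passed(s)]
    if len(subjects) == 1 and failed:
        # Single subject: exit non-zero so the calling shell script
        # (run_fastsurfer.sh) can stop before recon-surf.
        sys.exit(1)
    # Batch: warn per failing subject and report a final count.
    for s in failed:
        print(f"WARNING: QC check failed for subject {s}")
    print(f"{len(failed)} of {len(subjects)} subjects failed the QC check.")
    return failed
```

The non-zero exit status is what lets run_fastsurfer.sh detect the failure without parsing any log output.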
@m-reuter
Member

I discussed with @LeHenschel that probably we should create directories for the outputs and write the logfile there.

@af-a
Contributor Author

af-a commented Oct 26, 2022

We could:

  1. If batch processing (a Freesurfer directory structure is assumed), add $SUBJECT_DIR/scripts/qc.txt.
  2. If single processing (a Freesurfer directory structure is not guaranteed), create $SEG_PATH/scripts/qc.txt OR always output a pre-defined directory structure (e.g., Freesurfer's) in run_prediction.py and save to a pre-defined location ($SUBJECT_DIR/scripts/qc.txt).

In either case, the sub-directory could be something other than scripts/.
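Both options could share a small path helper; a minimal sketch, assuming the scripts/ sub-directory name (the helper name and default are hypothetical, not part of the actual code):

```python
import os

def qc_file_path(subject_dir, subdir="scripts"):
    """Resolve the location of qc.txt inside a FreeSurfer-style subject
    directory, creating the sub-directory if needed.

    Hypothetical helper: name, default sub-directory, and file name
    are assumptions for illustration.
    """
    out_dir = os.path.join(subject_dir, subdir)
    os.makedirs(out_dir, exist_ok=True)
    return os.path.join(out_dir, "qc.txt")
```

Keeping the sub-directory a parameter means the scripts/ choice stays a one-line change if a different location is preferred later.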

@LeHenschel
Member

LeHenschel commented Oct 26, 2022

If I understood it correctly, running a single input stops the process anyway, and no output is written, if the qc-check fails, right? Do we need a separate log-file in this case? We could add the failure message to the log-file that is generated anyway if the user specifies it, or print it to stdout.

For multi-subject, we have the scripts directory anyway and can either add a log-file there or again add the output to deep-seg.log (since a failed segmentation is related to the network/input data)?

The list of failed subjects is maintained. This can be used to write a qc.txt for every subject, indicating that this case failed this QC check (and possibly others to be added in the future), or alternatively a list of all failed cases.

For this I would honestly create a log-file in a user-specified location and just return the list of all failed subjects. This seems more useful than having a qc.txt in each failed case that just contains "failed"? Unless there is more subject-specific information explaining what failed.

Member

@LeHenschel LeHenschel left a comment


Implementation looks ok to me. Do you have test cases somewhere to run the code on?

I think the multi-subject processing might benefit from writing the failed cases to a file directly instead of just appending them to a list. The list will be lost if the run_prediction.py script fails on a subject (for reasons other than the volume check).
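Appending each failure to the file as it occurs would make earlier entries survive a later crash; a minimal sketch (the helper name is hypothetical):

```python
def record_failure(qc_list_file, subject_id):
    """Append a failed subject to the QC list file immediately.

    Hypothetical helper: opening in append mode and closing right away
    means the entry is on disk even if a later subject crashes
    run_prediction.py for an unrelated reason.
    """
    with open(qc_list_file, "a") as f:
        f.write(subject_id + "\n")
```

The trade-off versus an in-memory list is one extra file open per failure, which is negligible next to the cost of segmenting a subject.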

FastSurferCNN/run_prediction.py (review comment, resolved)
@af-a
Contributor Author

af-a commented Oct 28, 2022

If I understood it correctly, running a single input stops the process anyway, and no output is written, if the qc-check fails, right?

At the moment, for single input, if the segmentation fails the QC check, the process is not stopped before saving outputs; it just returns an error code once it's done and logs a warning message.

For this I would honestly create a log-file in a user specified location and just return the list with all failed subjects. This seems more useful than having a qc.txt in each failed case which just contains "failed"? Unless there is some more subject-specific information explaining what failed.

I think the idea is that qc.txt will show failed/success for a list of future qc checks.

Implementation looks ok to me. Do you have test cases somewhere to run the code on?

There are some test cases in /groups/ag-reuter/projects/ahmed/test_set_la5c_adni.

I think the multi-subject processing might benefit from writing the failed cases to a file directly instead of just appending them to a list.

Noted. It would be better to write directly to file for a subject-list-style qc.txt.


Ok, so the current suggestion is:

  1. for single processing: no log file.
  2. for batch processing: i) if we want to always create a file listing QC results from now on, a log file in each $SUBJECT/scripts/; ii) if we don't want that, a single log file just listing failed subjects (location specified through an argument).
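Option ii) could be exposed through an optional command-line argument; a sketch assuming a hypothetical --qc_log flag (not the actual run_prediction.py interface):

```python
import argparse

# Hypothetical CLI wiring: flag name, help text, and empty-string
# default (meaning "no QC log file") are all assumptions.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--qc_log",
    type=str,
    default="",
    help="Optional file to which failed subject IDs are written "
         "during batch processing (hypothetical flag).",
)
args = parser.parse_args(["--qc_log", "failed_subjects.txt"])
```

An empty default keeps the file optional: the batch loop only writes entries when the user actually passed a path.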

@af-a af-a changed the title Volume-based QC check after segmentation [WIP] Volume-based QC check after segmentation Nov 2, 2022
@af-a
Contributor Author

af-a commented Nov 15, 2022

Implemented as described above, with a single, optional failed subject-list file in the batch processing case.

@af-a af-a changed the title [WIP] Volume-based QC check after segmentation Volume-based QC check after segmentation Nov 15, 2022
@m-reuter m-reuter merged commit 54f6e64 into Deep-MI:feature/vinn Nov 21, 2022