diff --git a/EDITING.md b/EDITING.md index 6b9b2e4e..bf7203c2 100644 --- a/EDITING.md +++ b/EDITING.md @@ -30,8 +30,8 @@ Therefore, currently three types of edits are possible: Instead of using the original scan as input, you can perform a bias field correction as a pre-processing step. This can also be achieved by running the *asegdkt module* twice (using different subject ids). The second time you input the bias field corrected image ```orig_nu.mgz``` that was provided from the first run. This can help brighten up some regions and improve segmentation quality for some difficult cases. -- Step_1: In first iteration of field correction method run full pipeline as follows: - ``` +- Step 1: In the first iteration of the bias field correction method, run the full pipeline as follows: + ```bash # Source FreeSurfer export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh @@ -43,18 +43,18 @@ that was provided from the first run. This can help brighten up some regions and # Run FastSurfer ./run_fastsurfer.sh --t1 $datadir/subjectX/t1-weighted-nii.gz \ --sid subjectX --sd $fastsurferdir \ - --parallel --threads 4 - ``` -- Step_2: Run pipeline again for the second time, however this time input the bias field corrected image i.e ```orig_nu.mgz``` instead of original input image which was produced in first iteration. The file ```orig_nu.mgz``` can be found in output directory in mri subfolder. The output produced from the second iteration can be saved in a different output directory for comparative analysis with the output produced in first iteration. + --parallel --threads 4 --3T ``` +- Step 2: Run the pipeline a second time, but now use the bias field corrected image ```orig_nu.mgz``` produced in the first iteration as input instead of the original image. The file ```orig_nu.mgz``` can be found in the mri subfolder of the output directory. The output of the second iteration can be saved in a different output directory for comparison with the output of the first iteration. + ```bash # Run FastSurfer ./run_fastsurfer.sh --t1 $datadir/subjectX/t1-weighted-nii.gz \ --sid subjectX --sd $fastsurferdir \ - --parallel --threads 4 + --parallel --threads 4 --3T ``` -- Step_3: Run freeview or visualization - ``` +- Step 3: Run freeview for visualization + ```bash freeview /path/to/output_directory/orig_nu.mgz ``` Note: ```orig_nu.mgz``` file is not a segmented file, for segmentation load ```aparc.DKTatlas+aseg.deep.edited.mgz``` in freeview. @@ -64,20 +64,20 @@ that was provided from the first run. This can help brighten up some regions and You can manually edit ```aparc.DKTatlas+aseg.deep.mgz```. This is similar to aseg edits in FreeSurfer. You can fill-in undersegmented regions (with the correct segmentation ID).
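For the manual edits themselves, you can overlay the segmentation on the conformed image in freeview with the FreeSurfer color lookup table. A minimal sketch (paths are illustrative and assume the standard FastSurfer output layout for a subject ```sid```):
```bash
# Open the conformed image with the segmentation overlaid as a color-coded label map
freeview -v sid/mri/orig.mgz \
         -v sid/mri/aparc.DKTatlas+aseg.deep.mgz:colormap=lut:opacity=0.4
# After correcting labels with the voxel edit tool, save the edited volume as
# sid/mri/aparc.DKTatlas+aseg.deep.edited.mgz so the re-creation step below can use it
```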
To re-create the aseg and mask run the following command before continuing with other modules: -- Step_1: Assuming that you have run the full fastsurfer pipeline once as described in method_1 and succesfully produced segmentations and surfaces -- Step_2: Execute this command where reduce_to_aseg.py is located - ``` - python3 reduce_to_aseg.py -i sid/mri/aparc.DKTatlas+aseg.deep.edited.mgz - -o sid/mri/aseg.auto_noCCseg.mgz - --outmask sid/mri/mask.mgz +- Step 1: Assuming that you have run the full FastSurfer pipeline once as described in method 1 and successfully produced segmentations and surfaces +- Step 2: Execute the following command from the directory where reduce_to_aseg.py is located + ```bash + python3 reduce_to_aseg.py -i sid/mri/aparc.DKTatlas+aseg.deep.edited.mgz \ + -o sid/mri/aseg.auto_noCCseg.mgz \ + --outmask sid/mri/mask.mgz \ --fixwm ``` Assuming you have edited ```aparc.DKTatlas+aseg.deep.edited.mgz``` in freeview, step_2 will produce two files i.e ```aseg.auto_noCCseg.mgz``` and ```mask.mgz ``` in the specified output folder. The ouput files can be loaded in freeview as a load volume. Edit-->load volume -- Step_3: For this step you would have to copy segmentation files produced in step_1, edited file ```aparc.DKTatlas+aseg.deep.edited.mgz``` and re-created file produced in step_2 in new output directory beforehand. +- Step 3: For this step, copy the segmentation files produced in step 1, the edited file ```aparc.DKTatlas+aseg.deep.edited.mgz```, and the files re-created in step 2 into a new output directory beforehand. In this step you can then run surface module as follows: - ``` + ```bash # Source FreeSurfer export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh @@ -89,7 +89,7 @@ You can manually edit ```aparc.DKTatlas+aseg.deep.mgz```. This is similar to ase # Run FastSurfer ./run_fastsurfer.sh --t1 $datadir/subjectX/t1-weighted-nii.gz \ --sid subjectX --sd $fastsurferdir \ - --parallel --threads 4 + --parallel --threads 4 \ --surf_only ``` Note: ```t1-weighted-nii.gz``` would be the original input mri image. @@ -99,11 +99,11 @@ You can manually edit ```aparc.DKTatlas+aseg.deep.mgz```. This is similar to ase ## 3. Brainmask Edits: When surfaces go out too far, e.g. they grab dura, you can tighten the mask directly, just edit ```mask.mgz```and start the *surface module*. -- Step_1: Assuming that you have run the full fastsurfer pipeline once as described in method_1 and succesfully produced segmentations and surfaces -- Step_2: Edit ```mask.mgz``` file in freeview -- Step_3: Run the pipeline again in order to get the surfaces but before running the pipeline again do not forget to copy all the segmented files in to new input and output directory. +- Step 1: Assuming that you have run the full FastSurfer pipeline once as described in method 1 and successfully produced segmentations and surfaces +- Step 2: Edit the ```mask.mgz``` file in freeview +- Step 3: Run the pipeline again to re-create the surfaces, but before doing so do not forget to copy all the segmentation files into the new input and output directories. Note: The files in output folder should be pasted in the subjectX folder, the name of subjectX should be the same as it was used in step_1 otherwise it would raise an error of missing files even though the segmentation files exists in output folder. - ``` + ```bash # Source FreeSurfer export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh @@ -115,7 +115,7 @@ When surfaces go out too far, e.g.
they grab dura, you can tighten the mask dire # Run FastSurfer ./run_fastsurfer.sh --t1 $datadir/subjectX/t1-weighted-nii.gz \ --sid subjectX --sd $fastsurferdir \ - --parallel --threads 4 + --parallel --threads 4 \ --surf_only ``` diff --git a/INSTALL.md b/INSTALL.md index dfc41295..55887ec4 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -21,7 +21,7 @@ Non-NVIDIA GPU architectures (Apple M1, AMD) are not officially supported, but e Assuming you have singularity installed already (by a system admin), you can build an image easily from our Dockerhub images. Run this command from a directory where you want to store singularity images: -``` +```bash singularity build fastsurfer-gpu.sif docker://deepmi/fastsurfer:latest ``` Additionally, [the Singularity README](Singularity/README.md) contains detailed directions for building your own Singularity images from Docker. @@ -33,7 +33,7 @@ Our [README](README.md#example-2--fastsurfer-singularity) explains how to run Fa This is very similar to Singularity. Assuming you have Docker installed (by a system admin) you just need to pull one of our pre-build Docker images from dockerhub: -``` +```bash docker pull deepmi/fastsurfer:latest ``` @@ -48,7 +48,7 @@ In a native install you need to install all dependencies (distro packages, FreeS You will need a few additional packages that may be missing on your system (for this you need sudo access or ask a system admin): -``` +```bash sudo apt-get update && apt-get install -y --no-install-recommends \ wget \ git \ @@ -58,7 +58,7 @@ sudo apt-get update && apt-get install -y --no-install-recommends \ If you are using **Ubuntu 20.04**, you will need to upgrade to a newer version of libstdc++, as some 'newer' python packages need GLIBCXX 3.4.29, which is not distributed with Ubuntu 20.04 by default. -``` +```bash sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test sudo apt install -y g++-11 ``` @@ -73,7 +73,7 @@ If you are using pip, make sure pip is updated as older versions will fail. We recommend to install conda as your python environment. If you don't have conda on your system, an admin needs to install it: -``` +```bash wget --no-check-certificate -qO ~/miniconda.sh https://repo.continuum.io/miniconda/Miniconda3-py38_4.11.0-Linux-x86_64.sh chmod +x ~/miniconda.sh sudo ~/miniconda.sh -b -p /opt/conda && \ @@ -83,7 +83,7 @@ rm ~/miniconda.sh #### 3. FastSurfer Get FastSurfer from GitHub. Here you can decide if you want to install the current experimental "dev" version (which can be broken) or the "stable" branch (that has been tested thoroughly): -``` +```bash git clone --branch stable https://github.com/Deep-MI/FastSurfer.git cd FastSurfer ``` @@ -92,7 +92,7 @@ cd FastSurfer Create a new environment and install FastSurfer dependencies: -``` +```bash conda env create -f ./fastsurfer_env_gpu.yml conda activate fastsurfer_gpu ``` @@ -101,17 +101,17 @@ If you do not have an NVIDIA GPU, replace `./fastsurfer_env_gpu.yml` with the c If you only want to run the surface pipeline, replace `./fastsurfer_env_gpu.yml` with the reconsurf-only environment file `./fastsurfer_env_reconsurf.yml`. 
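For example, on a CPU-only machine the same two commands apply with the corresponding environment file; a minimal sketch (the file name and environment name below are assumptions, check the repository for the exact names):
```bash
# Hypothetical CPU-only setup; adjust names to the files shipped with your FastSurfer version
conda env create -f ./fastsurfer_env_cpu.yml
conda activate fastsurfer_cpu
```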
Next, add the fastsurfer directory to the python path (make sure you have changed into it already): -``` +```bash export PYTHONPATH="${PYTHONPATH}:$PWD" ``` This will need to be done every time you want to run FastSurfer, or you need to add this line to your `~/.bashrc` if you are using bash, for example: -``` +```bash echo "export PYTHONPATH=\"\${PYTHONPATH}:$PWD\"" >> ~/.bashrc ``` You can also download all network checkpoint files (this should be done if you are installing for multiple users): -``` +```bash python3 FastSurferCNN/download_checkpoints.py --all ``` @@ -128,15 +128,15 @@ We have successfully run the segmentation on an AMD GPU (Radeon Pro W6600) using Build the Docker container with ROCm support. -``` -docker build --rm=true -t deepmi/fastsurfer:amd -f ./Docker/Dockerfile_FastSurferCNN_AMD . +```bash +python Docker/build.py --device rocm --tag my_fastsurfer:rocm ``` You will need to add a couple of flags to your docker run command for AMD, see [the Readme](README.md#example-1--fastsurfer-docker) for `**other-docker-flags**` or `**fastsurfer-flags**`: -``` +```bash docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd \ --device=/dev/dri --group-add video --ipc=host --shm-size 8G \ - **other-docker-flags** deepmi/fastsurfer:amd \ + **other-docker-flags** my_fastsurfer:rocm \ **fastsurfer-flags** ``` Note, that this docker image is experimental, uses a different Python version and python packages, so results can differ from our validation results. Please do visual QC. @@ -158,7 +158,7 @@ Start it and set Memory to 15 GB under Preferences -> Resources (or the largest Second, pull one of our Docker containers. Open a terminal window and run: -``` +```sh docker pull deepmi/fastsurfer:latest ``` @@ -173,7 +173,7 @@ On modern Macs with the Apple Silicon M1 or M2 ARM-based chips, we recommend a n If you do not have git and a recent bash (version > 4.0 required!) installed, install them via the packet manager, e.g. brew. This installs brew and then bash: -``` +```sh /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" brew install bash ``` @@ -183,7 +183,7 @@ Make sure you use this bash and not the older one provided with MacOS! #### 2. Python Create a python environment, activate it, and upgrade pip. Here we use pip, but you should also be able to use [conda](#2-conda--for-python-): -``` +```sh python3 -m venv $HOME/python-envs/fastsurfer source $HOME/python-envs/fastsurfer/bin/activate python3 -m pip install --upgrade pip @@ -191,7 +191,7 @@ python3 -m pip install --upgrade pip #### 3. FastSurfer and Requirements Clone FastSurfer: -``` +```sh git clone --branch stable https://github.com/Deep-MI/FastSurfer.git cd FastSurfer export PYTHONPATH="${PYTHONPATH}:$PWD" @@ -199,21 +199,21 @@ export PYTHONPATH="${PYTHONPATH}:$PWD" Install the FastSurfer requirements -``` +```sh python3 -m pip install -r requirements.mac.txt ``` If this step fails, you may need to edit ```requirements.mac.txt``` and adjust version number to what is available. 
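If pip reports that a pinned version cannot be found, one option is to relax that single pin and retry; a minimal sketch (the package and versions are purely illustrative, and `sed -i ''` is the macOS/BSD form of in-place editing):
```sh
# Replace an unavailable exact pin with a minimum-version requirement,
# then re-run the install (adjust to whatever package pip complained about)
sed -i '' 's/^numpy==1\.23\.1$/numpy>=1.23/' requirements.mac.txt
python3 -m pip install -r requirements.mac.txt
```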
On newer M1 Macs, we also had issues with the h5py package, which could be solved by using brew for help (not sure this is needed any longer): -``` +```sh brew install hdf5 export HDF5_DIR="$(brew --prefix hdf5)" pip3 install --no-binary=h5py h5py ``` You can also download all network checkpoint files (this should be done if you are installing for multiple users): -``` +```sh python3 FastSurferCNN/download_checkpoints.py --all ``` @@ -226,7 +226,7 @@ To run the full pipeline, install and source also the supported FreeSurfer versi #### 4. Apple AI Accelerator support You can also try the experimental support for the Apple Silicon AI Accelerator by setting `PYTORCH_ENABLE_MPS_FALLBACK` and passing `--device mps`: -``` +```sh export PYTORCH_ENABLE_MPS_FALLBACK=1 ./run_fastsurfer.sh --seg_only --device mps .... ``` diff --git a/README.md b/README.md index 6b49a485..499710cc 100644 --- a/README.md +++ b/README.md @@ -97,9 +97,10 @@ Next, you will need to select the `*fastsurfer-flags*` and replace `*fastsurfer- The `*fastsurfer-flags*` will usually include the subject directory (`--sd`; Note, this will be the mounted path - `/output` - for containers), the subject name/id (`--sid`) and the path to the input image (`--t1`). For example: ```bash -... --sd /output --sid test_subject --t1 /data/test_subject_t1.nii.gz +... --sd /output --sid test_subject --t1 /data/test_subject_t1.nii.gz --3T ``` Additionally, you can use `--seg_only` or `--surf_only` to only run a part of the pipeline or `--no_biasfield`, `--no_cereb` and `--no_asegdkt` to switch off some segmentation modules (see above). +Here, we have also added the `--3T` flag, which tells fastsurfer to register against the 3T atlas for ICV estimation (eTIV). In the following, we give an overview of the most important options, but you can view a full list of options with @@ -128,6 +129,7 @@ In the following, we give an overview of the most important options, but you can #### Surface pipeline arguments (optional) * `--surf_only`: only run the surface pipeline recon_surf. The segmentation created by FastSurferCNN must already exist in this case. +* `--3T`: for Talairach registration, use the 3T atlas instead of the 1.5T atlas (which is used if the flag is not provided). This gives better (more consistent with FreeSurfer) ICV estimates (eTIV) for 3T and better Talairach registration matrices, but has little impact on standard volume or surface stats. * `--fstess`: Use mri_tesselate instead of marching cube (default) for surface creation * `--fsqsphere`: Use FreeSurfer default instead of novel spectral spherical projection for qsphere * `--fsaparc`: Use FS aparc segmentations in addition to DL prediction (slower in this case and usually the mapped ones from the DL prediction are fine) @@ -136,7 +138,7 @@ In the following, we give an overview of the most important options, but you can * `--no_surfreg`: Skip the surface registration (`sphere.reg`), which is generated automatically by default. To safe time, use this flag to turn this off. #### Other -* `--threads`: Target number of threads for all modules (segmentation, surface pipeline), select `1` to force FastSurfer to only really use one core. +* `--threads`: Target number of threads for all modules (segmentation, surface pipeline), `1` (default) forces FastSurfer to only really use one core. Note, that the default value may change in the future for better performance on multi-core architectures. * `--vox_size`: Forces processing at a specific voxel size. 
If a number between 0.7 and 1 is specified (below is experimental) the T1w image is conformed to that isotropic voxel size and processed. If "min" is specified (default), the voxel size is read from the size of the minimal voxel size (smallest per-direction voxel size) in the T1w image: If the minimal voxel size is bigger than 0.98mm, the image is conformed to 1mm isometric. @@ -164,7 +166,7 @@ docker run --gpus all -v /home/user/my_mri_data:/data \ --fs_license /fs_license/license.txt \ --t1 /data/subjectX/t1-weighted.nii.gz \ --sid subjectX --sd /output \ - --parallel + --parallel --3T ``` Docker Flags: @@ -179,6 +181,7 @@ FastSurfer Flags: * The `--sid` is the subject ID name (output folder name) * The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output) * The `--parallel` activates processing left and right hemisphere in parallel +* The `--3T` changes the atlas for registration to the 3T atlas for better Talairach transforms and ICV estimates (eTIV) Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-v` arguments (part after colon). @@ -207,7 +210,7 @@ singularity exec --nv \ --fs_license /fs_license/license.txt \ --t1 /data/subjectX/t1-weighted.nii.gz \ --sid subjectX --sd /output \ - --parallel + --parallel --3T ``` #### Singularity Flags @@ -221,6 +224,7 @@ singularity exec --nv \ * The `--sid` is the subject ID name (output folder name) * The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output) * The `--parallel` activates processing left and right hemisphere in parallel +* The `--3T` changes the atlas for registration to the 3T atlas for better Talairach transforms and ICV estimates (eTIV) Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-v` arguments (part after colon). @@ -252,23 +256,53 @@ fastsurferdir=/home/user/my_fastsurfer_analysis # Run FastSurfer ./run_fastsurfer.sh --t1 $datadir/subjectX/t1-weighted-nii.gz \ --sid subjectX --sd $fastsurferdir \ - --parallel --threads 4 + --parallel --threads 4 --3T ``` The output will be stored in the $fastsurferdir (including the aparc.DKTatlas+aseg.deep.mgz segmentation under $fastsurferdir/subjectX/mri (default location)). Processing of the hemispheres will be run in parallel (--parallel flag) to significantly speed-up surface creation. Omit this flag to run the processing sequentially, e.g. if you want to save resources on a compute cluster. -### Example 4: Native FastSurfer on multiple subjects +### Example 4: FastSurfer on multiple subjects -In order to run FastSurfer on multiple cases which are stored in the same directory, prepare a subjects_list.txt file listing the names line by line: -subject1\n -subject2\n -subject3\n +In order to run FastSurfer on multiple cases, you may use the helper script `brun_fastsurfer.sh`. This script accepts multiple ways to define the subjects, for example via a subjects_list file. +Prepare the subjects_list file as follows: +``` +subject1=path_to_t1 +subject2=path_to_t1 +subject3=path_to_t1 ...
-subject10\n +subject10=path_to_t1 +``` +Note that all paths (`path_to_t1`) are interpreted as if they were passed to the `run_fastsurfer.sh` script via `--t1`, so inside a Singularity or Docker container they must be given with respect to the container file system. Absolute paths are recommended. -And invoke the following command (make sure you have enough resources to run the given number of subjects in parallel!): +The `brun_fastsurfer.sh` script can then be invoked in Docker, Singularity, or on the native platform as follows: +#### Docker +```bash +docker run --gpus all -v /home/user/my_mri_data:/data \ + -v /home/user/my_fastsurfer_analysis:/output \ + -v /home/user/my_fs_license_dir:/fs_license \ + --entrypoint "/fastsurfer/brun_fastsurfer.sh" \ + --rm --user $(id -u):$(id -g) deepmi/fastsurfer:latest \ + --fs_license /fs_license/license.txt \ + --sd /output --subjects_list /data/subjects_list.txt \ + --parallel --3T +``` +#### Singularity +```bash +singularity exec --nv \ + --no-home \ + -B /home/user/my_mri_data:/data \ + -B /home/user/my_fastsurfer_analysis:/output \ + -B /home/user/my_fs_license_dir:/fs_license \ + ./fastsurfer-gpu.sif \ + /fastsurfer/brun_fastsurfer.sh \ + --fs_license /fs_license/license.txt \ + --sd /output \ + --subjects_list /data/subjects_list.txt \ + --parallel --3T +``` +#### Native ```bash export FREESURFER_HOME=/path/to/freesurfer source $FREESURFER_HOME/SetUpFreeSurfer.sh @@ -276,16 +310,19 @@ source $FREESURFER_HOME/SetUpFreeSurfer.sh cd /home/user/FastSurfer datadir=/home/user/my_mri_data fastsurferdir=/home/user/my_fastsurfer_analysis -mkdir -p $fastsurferdir/logs # create log dir for storing nohup output log (optional) - -while read p ; do - echo $p - nohup ./run_fastsurfer.sh --t1 $datadir/$p/t1-weighted.nii.gz - --sid $p --sd $fastsurferdir > $fastsurferdir/logs/out-${p}.log & - sleep 90s -done < ./data/subjects_list.txt + +# Run FastSurfer +./brun_fastsurfer.sh --subjects_list $datadir/subjects_list.txt \ + --sd $fastsurferdir \ + --parallel --threads 4 --3T ``` +#### Flags +The `brun_fastsurfer.sh` script accepts almost all `run_fastsurfer.sh` flags (exceptions are `--t1` and `--sid`). In addition, +* the `--parallel_subjects` flag runs all subjects in parallel (experimental, this parameter may change in future releases). This is particularly useful for the surface computation (`--surf_only`). +* to run segmentation in series, but surfaces in parallel, you may use `--parallel_subjects surf`. +* these options are in contrast (and in addition) to `--parallel`, which only parallelizes the two hemispheres of one case. + ### Example 5: Quick Segmentation For many applications you won't need the surfaces. You can run only the aparc+DKT segmentation (in 1 minute on a GPU) via @@ -313,7 +350,7 @@ docker run --gpus all -v $datadir:/data \ --t1 /data/subject1/t1-weighted.nii.gz \ --asegdkt_segfile /output/subject1/aparc.DKTatlas+aseg.deep.mgz \ --conformed_name $outputdir/subject1/conformed.mgz \ - --threads 4 --seg_only + --threads 4 --seg_only --3T ``` ### Example 6: Running FastSurfer on a SLURM cluster via Singularity diff --git a/Singularity/README.md b/Singularity/README.md index 23aa4e34..51e7267e 100644 --- a/Singularity/README.md +++ b/Singularity/README.md @@ -40,7 +40,7 @@ singularity exec --nv \ --fs_license /fs/license.txt \ --t1 /data/subjectX/orig.mgz \ --sid subjectX --sd /output \ - --parallel + --parallel --3T ``` ### Singularity Flags * `--nv`: This flag is used to access GPU resources.
It should be excluded if you intend to use the CPU version of FastSurfer @@ -53,6 +53,7 @@ singularity exec --nv \ * The `--sid` is the subject ID name (output folder name) * The `--sd` points to the output directory (its mounted name inside docker: /home/user/my_fastsurfer_analysis => /output) * The `--parallel` activates processing left and right hemisphere in parallel +* The `--3T` switches to the 3T atlas instead of the 1.5T atlas for Talairach registration. Note, that the paths following `--fs_license`, `--t1`, and `--sd` are __inside__ the container, not global paths on your system, so they should point to the places where you mapped these paths above with the `-B` arguments. @@ -73,7 +74,7 @@ singularity exec --no-home \ --fs_license /fs/license.txt \ --t1 /data/subjectX/orig.mgz \ --sid subjectX --sd /output \ - --parallel + --parallel --3T ``` # Singularity Best Practice diff --git a/Tutorial/Complete_FastSurfer_Tutorial.ipynb b/Tutorial/Complete_FastSurfer_Tutorial.ipynb index 096a0b27..ca880abe 100644 --- a/Tutorial/Complete_FastSurfer_Tutorial.ipynb +++ b/Tutorial/Complete_FastSurfer_Tutorial.ipynb @@ -433,6 +433,8 @@ "cell_type": "code", "source": [ "#@title The first part of FastSurfer creates a whole-brain segmentation into 95 classes. Here, we use the pretrained deep-learning network FastSurferCNN using the checkpoints stored at the open source project deep-mi/fastsurfer to to run the model inference on a single image.\n", + "\n", + "# Note, you should also add --3T, if you are processing data from a 3T scanner.\n", "! FASTSURFER_HOME=$FASTSURFER_HOME \\\n", " $FASTSURFER_HOME/run_fastsurfer.sh --t1 $img \\\n", " --sd \"{SETUP_DIR}fastsurfer_seg\" \\\n", diff --git a/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb b/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb index 402e937e..4fb0696e 100644 --- a/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb +++ b/Tutorial/Tutorial_FastSurferCNN_QuickSeg.ipynb @@ -409,6 +409,8 @@ ], "source": [ "#@title The first part of FastSurfer creates a whole-brain segmentation into 95 classes. Here, we use the pretrained deep-learning network FastSurferCNN using the checkpoints stored at the open source project deep-mi/fastsurfer to to run the model inference on a single image.\n", + "\n", + "# Note, you should also add --3T, if you are processing data from a 3T scanner.\n", "! FASTSURFER_HOME=$FASTSURFER_HOME \\\n", " $FASTSURFER_HOME/run_fastsurfer.sh --t1 $img \\\n", " --sd \"{SETUP_DIR}fastsurfer_seg\" \\\n", diff --git a/brun_fastsurfer.sh b/brun_fastsurfer.sh index 2771482b..2422589e 100755 --- a/brun_fastsurfer.sh +++ b/brun_fastsurfer.sh @@ -25,7 +25,8 @@ surf_only="false" seg_only="false" debug="false" run_fastsurfer="default" -parallel_subjects="false" +parallel_subjects="1" +parallel_surf="false" statusfile="" function usage() @@ -41,8 +42,8 @@ OR brun_fastsurfer.sh [other options] Other options: -brun_fastsurfer.sh [...] [--batch "/"] [--parallel_subjects] [--run_fastsurfer