
Freesurfer 7 Installation

neured edited this page May 2, 2020 · 2 revisions

First, download the appropriate package from FreeSurfer 7 Downloads.

On Ubuntu 18 or 20, download the CentOS7 package, then follow these instructions for installation. On a non-shared Linux computer, I like to install FreeSurfer in a sub-directory of my home directory.

You will need a license file (license.txt). You can get one by clicking this link.
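Once the package is unpacked and you have a license file, the environment setup typically looks like the sketch below. This is a hedged example: the `$HOME/freesurfer` install path reflects the home sub-directory setup described above, and placing `license.txt` in `$FREESURFER_HOME` (or pointing `FS_LICENSE` at it) is the usual convention.

```shell
# Minimal sketch of FreeSurfer environment setup, assuming the package was
# unpacked to $HOME/freesurfer; adjust FREESURFER_HOME to your install path.
export FREESURFER_HOME="$HOME/freesurfer"
export SUBJECTS_DIR="$FREESURFER_HOME/subjects"
# license.txt normally lives in $FREESURFER_HOME; FS_LICENSE can
# alternatively point at a copy kept elsewhere.
export FS_LICENSE="$FREESURFER_HOME/license.txt"
# SetUpFreeSurfer.sh ships with FreeSurfer; the guard keeps this sketch
# from failing on a machine without an install.
[ -f "$FREESURFER_HOME/SetUpFreeSurfer.sh" ] && . "$FREESURFER_HOME/SetUpFreeSurfer.sh"
```

Putting these lines in your `~/.bashrc` saves re-typing them in every new terminal.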

If you want to run the hippocampal and amygdala subfield/nuclei segmentations, first install the MATLAB runtime (unless you already have the appropriate version of MATLAB, e.g., R2014b, installed).

Do this by running:

```
fs_install_mcr R2014b
```

On my computer running Ubuntu 20.04 LTS, when I tried to run a basic recon-all (not segmentHA_T1.sh), I received the following error (edited to remove paths and other identifying information):

```
Subject Stamp: freesurfer-linux-centos7_x86_64-7.0.0-20200429-3a03ebd
Current Stamp: freesurfer-linux-centos7_x86_64-7.0.0-20200429-3a03ebd
INFO: SUBJECTS_DIR is /[path]/[to]/freesurfer/subjects
Actual FREESURFER_HOME /[path]/[to]/freesurfer
Linux [computer name] 5.4.0-28-generic #32-Ubuntu SMP Wed Apr 22 17:40:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
lta_convert: /[path]/[to]/freesurfer/MCRv84/sys/os/glnxa64/libstdc++.so.6: version 'GLIBCXX_3.4.19' not found (required by lta_convert)
ERROR: Executable 'lta_convert -all-info failed! Is it missing from the distribution?'
```

I fixed this error by first removing the offending file:

```
rm /[path]/[to]/freesurfer/MCRv84/sys/os/glnxa64/libstdc++.so.6
```

and then creating a symbolic link to the system copy of the library:

```
ln -s /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /[path]/[to]/freesurfer/MCRv84/sys/os/glnxa64/libstdc++.so.6
```
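Before deleting the bundled library, it is worth confirming that your system's libstdc++ actually exports the symbol version the FreeSurfer binaries need. The sketch below is one way to check (locating the library through the linker cache rather than hard-coding Ubuntu's multiarch path is my own choice, not anything FreeSurfer prescribes):

```shell
# Check whether the system libstdc++ exports GLIBCXX_3.4.19 before pointing
# FreeSurfer's MCR at it. ldconfig -p lists the linker's library cache;
# grep -a searches the binary directly for the version string.
syslib=$(ldconfig -p 2>/dev/null | awk '/libstdc\+\+\.so\.6 /{print $NF; exit}')
glibcxx_ok=no
[ -n "$syslib" ] && grep -aq 'GLIBCXX_3.4.19' "$syslib" && glibcxx_ok=yes
echo "system libstdc++: ${syslib:-not found} (GLIBCXX_3.4.19: $glibcxx_ok)"
```

If this reports `no`, do not remove the bundled copy; the symlink fix only works when the system library is newer than the one FreeSurfer ships.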

Then I re-ran recon-all (this example uses 4 cores for some steps via OpenMP):

```
recon-all -s [subject] -i /[path]/[to]/[input]/[dicom or nifti] -all -parallel
```

Note: to change the number of threads (which will depend on how many cores your computer has), modify the command like this (here using 2 cores; change the number as needed):

```
recon-all -s [subject] -i /[path]/[to]/[input]/[dicom or nifti] -all -parallel -openmp 2
```
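One way to pick the `-openmp` value is to query the machine's core count. This is a small sketch of my own (not a FreeSurfer utility); `nproc` is from GNU coreutils, and the cap of 4 reflects the diminishing returns discussed later on this page:

```shell
# Pick a thread count for -openmp: use all available cores, capped at 4
# (speed-ups beyond 3-4 threads tend to be small for recon-all).
cores=$(nproc)
threads=$(( cores < 4 ? cores : 4 ))
# Dry run: print the command you would actually execute.
echo "recon-all -s [subject] -i [input] -all -parallel -openmp $threads"
```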

The -parallel flag enables two forms of compute parallelization that significantly reduce the runtime. As a point of reference, on a new-ish workstation (2015+), the recon-all -all runtime is just under 3 hours.

When the -parallel flag is specified at the end of the recon-all command line, it enables 'fine-grained' parallelized code, making use of OpenMP, embedded in many of the binaries, chiefly affecting mri_em_register and mri_ca_register. By default, it instructs the binaries to use 4 processors (cores), meaning 4 threads will run in parallel in some operations (visible in 'top' as, for example, mri_ca_register showing 400% CPU utilization). This can be overridden by adding -openmp <num> after -parallel, where <num> is the number of processors you'd like to use (e.g., 8 if you have an 8-core machine). Note that this parallelization was introduced in v5.3, but many new routines were OpenMP-parallelized in v6.

The other, 'coarse' form of parallelization, also enabled by the -parallel flag, is that during the stages where the left and right hemispheric data are processed, each hemi binary is run separately (and in parallel, visible in 'top' as two instances of mris_sphere, for example). A couple of the hemi stages (e.g., mris_sphere) make use of a small amount of OpenMP code, which means that for brief periods as many as 8 cores are utilized (2 binaries each running code that uses 4 threads). In general, though, a 4-core machine can easily handle those periods.

Be aware that if you enable the -parallel flag on instances of recon-all running through a job scheduler (such as on a cluster), your system administrator will not be happy if you fail to pre-allocate a sufficient number of cores for your job, as you will be taking cycles from cores that may be running other users' jobs.

I need to run more tests, but I haven't seen much speed benefit from using more than 3 or 4 cores. If you have a processor with 8 or more physical cores (plus hyperthreading), you might see additional benefit from running with more than 4. Typically I've seen the runtime cut roughly in half by using -parallel. However, if you have multiple brains to process and sufficient RAM and cores, it is usually faster to run 4 subjects at once than to spend those 4 cores on a single subject.
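The several-subjects-at-once approach can be sketched with `xargs -P`, which caps how many jobs run concurrently. This is my own illustration, not a FreeSurfer tool: the subject IDs are placeholders, and the `echo` makes it a dry run (remove it to actually launch recon-all, and add each subject's `-i` input on the first run):

```shell
# Hypothetical batch: process up to 4 subjects concurrently, one plain
# (non -parallel) recon-all per subject, so 4 cores serve 4 subjects.
subjects="sub01 sub02 sub03 sub04"
cmds=$(printf '%s\n' $subjects | xargs -n 1 -P 4 -I{} \
    echo recon-all -s {} -all)
printf '%s\n' "$cmds"
```

Omitting -parallel inside the loop is deliberate: with 4 concurrent subjects on a 4-core machine, per-subject multithreading would just oversubscribe the CPU.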