
Using Freesurfer on the FMRIB cluster

Freesurfer (https://surfer.nmr.mgh.harvard.edu/) is available for use on the ODD cluster, with various versions installed, selectable via environment modules.

Freesurfer requires a license file to operate - you can register for use at:

https://surfer.nmr.mgh.harvard.edu/fswiki/License

Once you have a license file, put it in your home folder, name it .freesurfer-license, and then run:

module add freesurfer-license
freesurfer-license ~/.freesurfer-license

Version specific instructions

Freesurfer 8.1

This version is available in three different configurations:

  • freesurfer/8.1.0-MCR-R2019b - the standard install of Freesurfer; it may not be able to run the Freeview GUI due to incompatibilities with the MATLAB runtime that other components require.
  • freesurfer/8.1.0-CUDA-MCR-R2019b - this version includes CUDA-enabled Python components for use on GPU nodes.
  • freesurfer-freeview/8.1.0 - this version omits the MATLAB runtime and should be used to run Freeview.
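As a rough sketch, selecting the right configuration in a job script might look like the following (the module names are taken from the list above; the "task" variable and its labels are illustrative, not a cluster convention):

```shell
# Sketch: map a task to the matching environment module from the list above.
# The "task" variable and its labels are illustrative only.
task="${task:-standard}"   # one of: standard, gpu, freeview
case "$task" in
  standard) mod="freesurfer/8.1.0-MCR-R2019b" ;;
  gpu)      mod="freesurfer/8.1.0-CUDA-MCR-R2019b" ;;
  freeview) mod="freesurfer-freeview/8.1.0" ;;
esac
echo "module add $mod"
```

Keeping the choice in one place like this makes it easy to switch between the GPU and CPU builds without editing the rest of the job script.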

High-speed recon-all

Version 8 of Freesurfer includes a multi-threaded version of recon-all that can drastically reduce the runtime of this processing step. It requires a significant amount of memory (approximately 85GB), so you will need to request an appropriate number of threads and amount of memory for it to complete successfully. This mode is enabled with an environment variable.

An example submission command would be:

fsl_sub -T 240 -R 90 -s 8 --export FS_V8_XOPTS=1 recon-all <my options>

This requests 90GB of RAM, 8 threads, and a runtime of 4 hours (240 minutes).

We recommend running this once, noting how much RAM and time it actually used, and adjusting subsequent submissions accordingly - leave at least 1GB of overhead on the memory request.
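For example, a resubmission request could be derived from the first run's observed usage like this (the observed figures below are made-up placeholders, not measurements, and the 30-minute time margin is just a suggestion):

```shell
# Sketch: build a follow-up fsl_sub request from a first run's observed usage.
# OBSERVED_* values are placeholders - substitute what your first run reported.
OBSERVED_RAM_GB=62     # peak resident memory of the first run, in GB
OBSERVED_MINS=150      # wall-clock runtime of the first run, in minutes

REQUEST_RAM=$((OBSERVED_RAM_GB + 1))     # keep at least 1GB of headroom
REQUEST_MINS=$((OBSERVED_MINS + 30))     # modest margin on the time limit

echo "fsl_sub -T ${REQUEST_MINS} -R ${REQUEST_RAM} -s 8 --export FS_V8_XOPTS=1 recon-all <my options>"
```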