NEW Slurm Cluster Usage
The BMRC cluster has transitioned to the SLURM scheduling software, and the fsl_sub module now submits jobs to the SLURM cluster.
SLURM is significantly different from Grid Engine; in particular, there are no RAM limits for jobs. We STRONGLY recommend that you specify RAM (with fsl_sub's -R option) to ensure efficient use of the cluster; without it, all jobs will default to requesting 15GB of RAM. This also means that the -S/--noramsplit option is meaningless.
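For example, a job needing roughly 32GB of RAM might be submitted as follows (a minimal sketch; the script name is a placeholder):

    fsl_sub -R 32 ./myscript.sh    # -R takes the requested RAM in GB
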
fsl_sub's native options remain the same. Of note, SLURM does not support parallel environments, so when requesting slots for multi-threaded jobs you can simply use -s <number>. If you provide a parallel environment name it will be discarded, so existing scripts should continue to work as-is; both forms are sketched below.
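For example, both of the following request an 8-thread job (script and parallel environment names are placeholders for illustration):

    fsl_sub -s 8 -R 32 ./my_threaded_script.sh           # request 8 slots/threads
    fsl_sub -s openmp,8 -R 32 ./my_threaded_script.sh    # PE name 'openmp' is discarded; 8 threads are still requested
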
Interactive tasks are started in a completely different manner - see BMRC's documentation: https://www.medsci.ox.ac.uk/divisional-services/support-services-1/bmrc/using-the-bmrc-cluster-with-slurm
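As a rough illustration only (partition names and resource options are site-specific, so follow the linked documentation), a SLURM interactive session is typically started with srun rather than via fsl_sub:

    srun --pty bash    # interactive shell on a compute node; add the partition and memory options BMRC requires
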
GPU hardware information and usage is available at: https://www.medsci.ox.ac.uk/for-staff/resources/bmrc/gpu-resources-slurm
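As a hedged sketch (the coprocessor name, class and available partitions are defined by the local fsl_sub configuration, so check the page above), a CUDA job can be requested through fsl_sub's coprocessor options:

    fsl_sub --coprocessor cuda -R 32 ./my_gpu_script.sh    # request a CUDA-capable node; script name is a placeholder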