The BMRC cluster has transitioned to the SLURM cluster software, and the fsl_sub module now submits jobs via SLURM.

SLURM is significantly different from Grid Engine; in particular, there are no RAM limits for jobs. We STRONGLY recommend that you specify RAM (with fsl_sub's -R option) to ensure efficient use of the cluster; without it, all jobs will default to requesting 15GB of RAM. This also means that the -S/--noramsplit option has no effect.
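For example, a job known to need around 32GB of RAM could be submitted as below (the script name is illustrative; -R takes an integer number of gigabytes):

    # Request 32GB of RAM for this job
    fsl_sub -R 32 ./my_analysis.sh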

fsl_sub's native options remain the same, but note that SLURM does not support parallel environments, so when requesting slots for multi-threaded jobs you can use -s <number> on its own. If you provide a parallel environment name it will be discarded, so existing scripts should continue to work as-is.
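For example, both of the following request an 8-thread job (the parallel environment name 'shmem' in the second form is illustrative and, under SLURM, is simply discarded):

    # Request 8 threads for a multi-threaded job
    fsl_sub -s 8 ./my_threaded_task.sh

    # Legacy Grid Engine style still works; the PE name is ignored
    fsl_sub -s shmem,8 ./my_threaded_task.sh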

Interactive tasks are started in a completely different manner under SLURM - see BMRC's documentation: https://www.medsci.ox.ac.uk/divisional-services/support-services-1/bmrc/using-the-bmrc-cluster-with-slurm
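As a rough sketch, a generic SLURM interactive session is typically requested with srun, for example (the options shown are assumptions, not BMRC-specific settings; follow the BMRC documentation for the correct invocation on this cluster):

    # Request an interactive shell on a compute node (generic SLURM)
    srun --pty bash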

Information on GPU hardware and usage is available at: https://www.medsci.ox.ac.uk/for-staff/resources/bmrc/gpu-resources-slurm