FMRIB OOD Compute Cluster (New cluster)
Information on the new compute facilities hosted by FMRIB
FMRIB Facilities
Where the HPC facilities provided by the BioMedical Research Computing (BMRC) team are not suitable for your research, FMRIB hosts a small compute cluster with a comparable feature set.
Server Resources
The FMRIB cluster comprises four general-purpose compute nodes, four machines dedicated to interactive tasks and four GPU hosts.
Interactive/GUI Access
Running graphical software is achieved using the cluster's Virtual Desktop Infrastructure service.
Introduction to OxCIN's OOD Compute Facilities
In addition to other Medical Sciences Division or wider Oxford compute facilities, FMRIB hosts the OxCIN compute facilities, commonly known as OOD. Charges apply for this service - see our charges page for details.
OOD is a compute cluster formed from rack-mounted multi-core computers. To ensure efficient use of the hardware, tasks are distributed amongst these computers by queuing software, which places each job according to its characteristics (e.g. length, memory requirements, GPU (CUDA) needs), with available time shared out amongst all users. This means you usually have to wait a while for your jobs to complete, but you can potentially utilise significantly more compute resource than would be available to you on a single computer such as your laptop or a desktop.
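For example, with the Slurm scheduler used on this cluster, these characteristics are declared when a job is submitted. The values below are purely illustrative, not site defaults, and `myjob.sh` is a hypothetical script (see the job submission pages for the real limits):

```bash
# Declare expected run length, memory and CPU needs so the
# scheduler can place the job appropriately (illustrative values).
sbatch --time=02:00:00 --mem=16G --cpus-per-task=4 myjob.sh
```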
For interactive discovery tasks, OOD includes a web interface (using the Open OnDemand software) at https://ood.fmrib.ox.ac.uk [Oxford Only] that allows users to launch Linux desktop, Jupyter Notebook, RStudio and MathWorks MATLAB sessions, all utilising the underlying compute cluster infrastructure with its high-memory, high core-count and CUDA-capable hardware.
Portions of this service were funded by Wellcome grants - when publishing work that used these facilities, please acknowledge this funding as per our acknowledgment page.
Details of the hardware used:
Period | Hardware Details |
---|---|
April 2024 - September 2024 | FMRIB Cluster Nodes (4-2024)
October 2024 - February 2025 | FMRIB Cluster Nodes (10-2024)
March 2025 onwards | FMRIB Cluster Nodes (2025)
For details on how to use this facility - see our OOD documentation and cluster job submission pages.
Initial setup of your account
Getting started
Access to our cluster is primarily via our Open OnDemand web interface (VDI), but for short-term non-graphical access you can SSH to a redundant pair of machines at clint.fmrib.ox.ac.uk.
The clint.fmrib.ox.ac.uk machines must not be used for any processing; they are purely for manipulating your files, uploading/downloading data, submitting tasks to the queues and gaining access to other resources. They enforce limits on CPU and memory, so significant tasks are likely to take a long time or fail due to lack of RAM. You may use the VDI hosts for any compute purpose.
On first connecting, your SSH client will ask if you trust the remote server - check the 'fingerprint' it reports against those published on our SSH page.
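As a minimal sketch, a first connection might look like this (the username and fingerprint below are placeholders; always compare the reported fingerprint with the values on our SSH page):

```bash
# Replace <username> with your cluster username.
ssh <username>@clint.fmrib.ox.ac.uk
# On first connection OpenSSH prints something like:
#   The authenticity of host 'clint.fmrib.ox.ac.uk' can't be established.
#   ED25519 key fingerprint is SHA256:... (placeholder - compare with our SSH page)
# Only answer 'yes' if the fingerprint matches a published value.
```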
Using Software
Software is managed using the Environment Modules tools, so to access most neuroimaging software you will need to load the appropriate module.
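For example (a sketch only - the module name `fsl` is illustrative; run `module avail` to see what is actually installed):

```bash
module avail        # list the software modules on offer
module load fsl     # load a package, e.g. FSL (name is illustrative)
module list         # confirm which modules are currently loaded
```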
Storage locations
User home folders (at /home/fs0/<username>) are intentionally limited in size, so do not store any project data there. Users have a dedicated scratch folder at /vols/Scratch/<username>, and you may also have access to a research group storage area. Please see our storage locations page for details.
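A minimal sketch of the intended workflow, assuming a hypothetical dataset and project folder:

```bash
# Check how much of your size-limited home folder you are using.
du -sh /home/fs0/$USER
# Keep bulky data on scratch instead ('myproject' and 'large_dataset'
# are hypothetical names):
mkdir -p /vols/Scratch/$USER/myproject
mv ~/large_dataset /vols/Scratch/$USER/myproject/
```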
Graphical Programs
WIN Centre users have access to a web-based Virtual Desktop Infrastructure (VDI) that can be used to run graphical software on any of our interactive nodes - these hosts are not accessible by any other route (no direct SSH or VNC access is possible).
Submitting to the Cluster
For information on submitting jobs to the CPU or GPU cluster nodes see our job submission pages.
Submitting jobs to the FMRIB SLURM compute cluster
Please see our job submission and monitoring section.
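As a minimal sketch, a batch script might look like the following. The partition names match the queue names in the hardware tables below, but the resource values, module name and analysis script are illustrative; consult the job submission pages for the real defaults and limits:

```bash
#!/bin/bash
# Minimal illustrative batch script - values are examples, not site defaults.
#SBATCH --job-name=example
#SBATCH --partition=short
#SBATCH --time=01:00:00
#SBATCH --mem=8G
#SBATCH --cpus-per-task=2

# Load required software (module name is illustrative):
module load fsl
# Run the (hypothetical) analysis script:
./my_analysis.sh
```

Submit with `sbatch myscript.sh` and monitor with `squeue -u $USER`.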
FMRIB Cluster Servers - Hardware Overview
Interactive Machines
To run interactive tasks (e.g. MATLAB or other non-queued software), OxCIN has four machines available for use, as detailed in the table below. These machines are accessed via the Virtual Desktop Infrastructure.
Hostname | Queues | Access via | Memory | CPU cores/model |
---|---|---|---|---|
clcpu01-02 | interactive | VDI | 1TB | 48-Core AMD EPYC 7643 @ 2.3GHz |
clcpu05 | interactive | VDI | 2.3TB | 2x 24-Core AMD EPYC 9224 @ 2.5GHz |
clcpu06 | interactive | VDI | 2.3TB | 2x 24-Core AMD EPYC 9274F @ 4GHz |
Compute Nodes
Hostname | Queues | Access via | Memory | CPU cores/model |
---|---|---|---|---|
clcpu03-04 | long, short | Slurm submission | 1TB | 48-Core AMD EPYC 7643 @ 2.3GHz |
clcpu07-08 | long, short | Slurm submission | 1.1TB | 48-Core AMD EPYC 9474F @ 3.6GHz |
GPU Machines
Hostname | Queues | GPU Card and Quantity | Memory | CPU cores/model |
---|---|---|---|---|
clgpu01 | gpu_short, gpu_long | A30 (24GB) x 2 split into four 12 GB units (MIG) | 384GB | 24-Core AMD EPYC 9254 @ 2.9GHz |
clgpu02 | gpu_short, gpu_long | A30 (24GB) x 2 split into four 12 GB units (MIG) | 384GB | 24-Core AMD EPYC 9254 @ 2.9GHz |
clgpu03 | gpu_short, gpu_long | A30 (24GB) x 2 split into four 12 GB units (MIG) | 384GB | 24-Core AMD EPYC 9254 @ 2.9GHz |
clgpu04 | gpu_short, gpu_long | H100 (80GB) x 1, plus H100 (80GB) x 1 split into seven 10GB units (MIG) | 384GB | 24-Core AMD EPYC 9254 @ 2.9GHz
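As a hedged sketch, a GPU job request might look like the following; the generic `gpu:1` resource name is an assumption (MIG slices often require a more specific gres name), so confirm the exact syntax on the job submission pages:

```bash
# Illustrative GPU request on the gpu_short queue; 'gpu:1' is an
# assumed gres name and gpu_job.sh a hypothetical script.
sbatch --partition=gpu_short --gres=gpu:1 --time=00:30:00 gpu_job.sh
```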