Information on the new compute facilities hosted by FMRIB

FMRIB Facilities

Where the HPC facilities provided by the BioMedical Research Computing (BMRC) team are not suitable for your research, FMRIB hosts a small compute cluster with a comparable feature set.

Server Resources

The FMRIB cluster comprises four general-purpose computers (two of which are dedicated to interactive tasks) and a small number of GPU hosts.

Interactive/GUI Access

Graphical software is run using the cluster's Virtual Desktop Infrastructure (VDI) service.

Introduction to the FMRIB Cluster

Introduction

WIN@FMRIB operates a compute cluster formed from rack-mounted multi-core computers. To ensure efficient use of the hardware, tasks are distributed across these computers by queuing software, which shares the available time out amongst all users. This means you usually have to wait a while for your jobs to complete, but you can potentially use vastly more compute resources than would be available on a single computer such as your laptop or a workstation.



Initial setup of your account

Getting started

Access to our cluster is primarily via our Open OnDemand web interface (VDI), but for short-term non-graphical access you can SSH to a redundant pair of machines, clint.fmrib.ox.ac.uk.

The clint.fmrib.ox.ac.uk machines must not be used for any processing; they are purely for manipulating your files, uploading/downloading data, submitting tasks to the queues and gaining access to other resources (they enforce limits on CPU and memory, so significant tasks are likely to take a long time or fail due to lack of RAM). You may use the VDI hosts for any compute purpose.

On first connecting, your SSH client will ask if you trust the remote server - check the 'fingerprint' against those published on our SSH page.
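For example, a first connection from a terminal on your own machine might look like this (replace <username> with your FMRIB username):

    # Connect to the login pair (file management and job submission only)
    ssh <username>@clint.fmrib.ox.ac.uk
    # At the first-connection prompt, compare the displayed host key
    # fingerprint with those published on our SSH page before answering 'yes'.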

Using Software

Software is managed using the Environment Modules tools, so to access most neuroimaging software you will need to load the appropriate module.
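For example (the module name below is purely illustrative; use 'module avail' to see the exact names offered on the cluster):

    # List the software modules available on the cluster
    module avail
    # Load a module before running its software (name shown is an example)
    module load fsl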

Storage locations

User home folders (at /home/fs0/<username>) are intentionally limited in size, so do not store any project data in this location. Each user has a dedicated scratch folder at /vols/Scratch/<username>, and you may also have access to a research group storage area. Please see our storage locations page for details.
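As a quick illustration, using the paths described above (with <username> standing in for your own account name):

    # Keep project data out of your home folder; work in your scratch area
    cd /vols/Scratch/<username>
    # Check how much space your home folder is currently using
    du -sh /home/fs0/<username>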

Graphical Programs

WIN Centre users have access to a web-based Virtual Desktop Infrastructure that can be used to run graphical software on any of our interactive nodes - these hosts are not accessible by any other route (no direct SSH or VNC access is possible).

Submitting to the Cluster

For information on submitting jobs to the CPU or GPU cluster nodes, see our job submission pages.

Submitting jobs to the FMRIB SLURM compute cluster

Please see our job submission and monitoring section.
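As a minimal sketch of a SLURM batch script (the queue names are taken from the hardware tables below; the exact options supported on the FMRIB cluster are described on the job submission pages):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --partition=short        # CPU queue from the tables below; 'long' is also available
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00

    module load fsl                  # example module name; check 'module avail'
    ./my_analysis.sh                 # placeholder for your own command

Submit the script with 'sbatch myjob.sh' and check its progress with 'squeue -u <username>' (myjob.sh and my_analysis.sh are placeholder names).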

FMRIB Cluster Servers - Hardware Overview

Interactive Machines

To run interactive tasks (e.g. MATLAB or other non-queued software), WIN has two machines available for use, as detailed in the table below. These machines are accessed via the Virtual Desktop Infrastructure.

Hostname   | Queues      | Access via | Memory | CPU cores / model
clcpu01-02 | interactive | VDI        | 1TB    | 48 / AMD EPYC 7643 @ 2.3GHz

Compute Nodes

Hostname   | Queues      | Access via       | Memory | CPU cores / model
clcpu03-04 | long, short | Slurm submission | 1TB    | 48 / AMD EPYC 7643 @ 2.3GHz

GPU Machines

Whilst we switch over from the old cluster, there are limited GPU resources available. New hardware is on order.

Hostname              | Queues              | GPU card and quantity                      | Memory | CPU cores / model
clgpu01               | gpu_short, gpu_long | A30 (24GB) x 2, split into four 12GB units | 384GB  | 24 / AMD EPYC 9254 @ 2.9GHz
clgpu02 (coming soon) | gpu_short, gpu_long | A30 (24GB) x 2, split into four 12GB units | 384GB  | 24 / AMD EPYC 9254 @ 2.9GHz
clgpu03 (coming soon) | gpu_short, gpu_long | A30 (24GB) x 2, split into four 12GB units | 384GB  | 24 / AMD EPYC 9254 @ 2.9GHz
clgpu04 (due June)    | gpu_short, gpu_long | H100 (80GB) x 2                            | 384GB  | 24 / AMD EPYC 9254 @ 2.9GHz
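A GPU job is requested in a similar way to a CPU job. As an illustrative sketch only (the GPU queue names are taken from the table above; the exact GPU request syntax for this cluster is covered on the job submission pages):

    #!/bin/bash
    #SBATCH --partition=gpu_short    # GPU queue from the table above; 'gpu_long' is also available
    #SBATCH --gres=gpu:1             # request one GPU; the exact gres specification may differ on this cluster
    #SBATCH --mem=32G
    #SBATCH --time=01:00:00

    module load cuda                 # example module name; check 'module avail'
    ./my_gpu_program                 # placeholder for your own command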