How to request a GPU for your job

Whilst GPU tasks can simply be submitted to the cuda.q queue, fsl_sub also provides helper options which can automatically select a GPU queue and the appropriate CUDA toolkit for you.
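
For example, the two submissions below should be broadly equivalent; this is a minimal sketch that assumes the CUDA coprocessor is named cuda (check fsl_sub --help) and uses my_gpu_task.sh as a placeholder script:

  fsl_sub -q cuda.q ./my_gpu_task.sh
  fsl_sub -c cuda ./my_gpu_task.sh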

If the cluster had GPUs with differing hardware capabilities (FMRIB's does not), these options could also select specific card types. The options of interest all begin with --coprocessor (a combined example follows the list):

  • -c|--coprocessor <coprocessor name>: This selects the coprocessor with the given name (see fsl_sub --help for details of available coprocessors)
  • --coprocessor_multi <number>: This allows you to request multiple GPUs. On the FMRIB cluster you can request no more than two GPUs; you will automatically be given a two-slot openmp parallel environment
  • --coprocessor_class <class>: (Not relevant at FMRIB) This would allow you to select which GPU hardware model you require, e.g. V for Volta cards
  • --coprocessor_class_strict: If a class is requested you will normally be allocated a card at least as capable as the model requested. By adding this option you ensure that you only get the GPU model you asked for
  • --coprocessor_toolkit <toolkit version>: This allows you to select the API toolkit your software needs. This will automatically make available the requested CUDA libraries where these haven't been compiled into the software
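
Putting these options together, a hypothetical submission requesting two GPUs and a particular CUDA toolkit might look like the sketch below; the coprocessor name cuda, the toolkit version 10.2, and the script name my_gpu_task.sh are illustrative placeholders and should be checked against fsl_sub --help on your cluster:

  fsl_sub -c cuda --coprocessor_multi 2 --coprocessor_toolkit 10.2 ./my_gpu_task.sh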