
The storage locations available to you on WIN servers/desktops

Introduction

There are several file servers in operation at FMRIB and OHBA providing access to storage locations resilient to hardware failure. Each location has differing performance/protection tradeoffs and usage costs.

Access to the file locations is provided through interactive servers and select Linux desktops.

Data storage facilities are split into several blocks: a general-purpose cluster home directory that holds each user's configuration files and scripts; a larger, high-performance, un-backed-up scratch area for analysis runs in progress (group folders are also available); and group/project data storage areas for more static data, e.g. results. A summary of these locations is given below.

Charging rates for each area are reviewed annually and, where applicable, are charged per month (pro rata). Details of the current charges are available on the Computing Charges page.

Summary of storage locations

* sftp.fmrib.ox.ac.uk is a new service, launched in spring 2024. If you discover any issues with its use, please advise computing-help@win.ox.ac.uk.

FMRIB: /home/fs0/username
  Available from: Linux hosts; SCP/SFTP via sftp.fmrib.ox.ac.uk* or jalapeno.fmrib.ox.ac.uk
  Purpose: FMRIB cluster profiles, scripts etc.
  Default quota: 10GB
  Speed: Low; Capacity: Low
  Data protection/encryption: High security; no encryption
  Backup regularity: Local: 3-hourly snapshots. On-site server: bihourly (encrypted at rest). Off-site server: daily (encrypted at rest)

FMRIB: Home folder Scratch link
  Available from: Linux hosts; SCP/SFTP via sftp.fmrib.ox.ac.uk* or jalapeno.fmrib.ox.ac.uk
  Purpose: Analysis in progress
  Default quota: 100GB
  Speed: High; Capacity: Medium
  Data protection/encryption: Medium security; encrypted at rest
  Backup regularity: No backup

FMRIB: /vols/Data/<project>
  Available from: Linux hosts; SCP/SFTP via sftp.fmrib.ox.ac.uk* or jalapeno.fmrib.ox.ac.uk
  Purpose: Medium-term online dataset storage
  Default quota: As requested
  Speed: Low; Capacity: High
  Data protection/encryption: High security; no encryption
  Backup regularity: Local: 3-hourly snapshots. On-site server: bihourly (encrypted at rest). Off-site server: daily (encrypted at rest)

OHBA: /ohba/pi/<pi shortname>
  Available from: Linux hosts; SCP/SFTP via hbafs_ssh_gw.ohba.ox.ac.uk
  Purpose: Individual research group storage
  Default quota: As requested
  Speed: Low; Capacity: Medium
  Data protection/encryption: High security; encrypted at rest
  Backup regularity: Local: 3-hourly snapshots. Off-site server: daily (encrypted at rest)

OHBA: /ohba/projects/<projectname>
  Available from: Linux hosts; SCP/SFTP via hbafs_ssh_gw.ohba.ox.ac.uk
  Purpose: Project-based shared storage
  Default quota: As requested
  Speed: Low; Capacity: Medium
  Data protection/encryption: High security; encrypted at rest
  Backup regularity: Local: 3-hourly snapshots. Off-site server: daily (encrypted at rest)

How usage is controlled

Introduction

To ensure fair-share usage and to enable capacity-based charging of the shared resources, we limit your file space via one of two mechanisms, depending on where the data is housed.

/vols/Data, group /vols/Scratch and /ohba/pi|projects folders are controlled by the size of the file system itself; user /vols/Scratch and home folders are controlled by user quotas.

Checking your disk usage

Per-share Limits

/vols/Data, /ohba/pi|projects and group scratch folders (/vols/Scratch/groupname) exist as individual file systems, so you can easily check the current disk usage with the df command. Before you use this, ensure the file system is mounted and available for interrogation by changing into the share first:

jbloggs@jalapeno $ cd /vols/Data/jbloggs
jbloggs@jalapeno $ df -h .
Filesystem           Size      Used    Avail Use% Mounted on
fs2data-10g.data.fmrib:/volumes/Data/Shared/jbloggs
                      20G      612M      20G   3% /vols/Data/jbloggs

To get a breakdown of how your folders contribute to this disk usage, use the du command, e.g.:

du -sk .

will report the size of the current folder in KB

du -sh *

will report the size of all objects in the current folder individually, in appropriate units.

du -sk * | sort -n

will report the size of all objects sorted such that the biggest item is last in the list.

Where your group head has asked for quotas to be enforced on /vols/Data/*, you can check your quota usage using the quota command (see below).

Exceeding your disk space

The file system size controls enforced for these areas cannot be exceeded, and once the limit is reached it can be somewhat difficult to free up space again (a side effect of copy-on-write file systems). Should you fill your area, you have two options:

  1. Contact us to request additional space - please indicate whether you need a small increment to allow deletion or a permanent increase in space.

  2. Find a large file you no longer require and issue the command:
    cat /dev/null > thelargefile
    

This will truncate the file, bypassing the copy-on-write mechanism and immediately freeing its space, allowing you to begin deletion in earnest (a combined sketch follows below).
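As an illustrative sketch (the file and folder names here are hypothetical), you might first identify the largest items with du, truncate one of them to regain headroom, and then delete normally:

    du -sk * | sort -n | tail -5                # the five largest items are listed last
    cat /dev/null > old_run/large_output.nii    # hypothetical large file: truncate it to free its blocks
    rm -rf old_run                              # with space freed, deletion now proceeds normally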

User/grou​​p Quotas

Usage in home and user scratch folders is controlled using a quota system which is more relaxed in its enforcement than the system used for /vols/Data - it is perfectly possible to slightly exceed your allocated space if writing occurs fast enough. To check your current usage, use the quota command (on jalapeno):

jbloggs@jalapeno $ quota
Disk quotas for user jbloggs (uid 1234):
    File Space         Used  Available      Total
    /home/fs0      26.1 GiB    5.9 GiB     32 GiB
    /vols/Scratch  14.1 GiB   10.9 GiB     25 GiB

    N.B. Shared storage (/vols/Data or group scratch folders) is not
    included in the above report - use 'df -h <foldername>'.

If you receive an error when running the quota command, please try again later; the server is most probably busy with other tasks.

Mismatches between reported quota and disk usage

The quota system records the size of all files and folders belonging to your account, no matter where they reside (for example, in someone else's home folder), so it is perfectly feasible for your reported quota usage to exceed the contents of your home folder. If du output does not match your quota report, look in locations where you may have shared files with others; for example, if you have a shared analysis project whose files are stored in someone else's scratch folder, any files you wrote there will still count against your quota.
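To track such files down, a minimal sketch (the colleague's folder name is illustrative; -printf requires GNU find, as available on the Linux hosts):

    find /vols/Scratch/somecolleague -user jbloggs -type f -printf '%s\t%p\n' 2>/dev/null | sort -n | tail
    # lists files you own under that folder, largest last (sizes in bytes)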

How to recover an accidentally deleted file/folder and back up data on scratch

/home/fs0, /vols/Data and /ohba/* snapshots

The local snapshots on the home, /vols/Data and /ohba/* file systems are browsable on the access computer (e.g. jalapeno.fmrib.ox.ac.uk for home and /vols/Data, and hbafs0-gw.ohba.ox.ac.uk at OHBA). To access these snapshots, visit the hidden .zfs folder in the root of the share; for /vols/Data and /ohba/* shares this is the shared folder itself, e.g. /vols/Data/myproject or /ohba/projects/myproject. The snapshots of the home folders are in /home/fs0/.zfs.

Within this hidden folder you will find a sub-folder named snapshot, and within that, date-stamped snapshot folders, e.g.

/home/fs0/.zfs/snapshot/autosnap_2023-02-26_19:21:00_hourly

NB The timestamp is in ISO format, i.e. YYYY-MM-DD

In the case of home folders you will need to cd into <snapshotname>/<username>/ to see your files.
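For example, to browse the home folder snapshots (the snapshot name is illustrative):

    ls /home/fs0/.zfs/snapshot/                                                # list available snapshots
    cd /home/fs0/.zfs/snapshot/autosnap_2023-02-26_19:21:00_hourly/jbloggs/    # your files as they were then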

To recover files, simply copy them out of the appropriate snapshot folder back to the original location (or another location as appropriate), as sketched below.
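A minimal sketch, using a hypothetical project, snapshot and file name:

    cp -a /vols/Data/myproject/.zfs/snapshot/autosnap_2023-02-26_19:21:00_hourly/results/stats.txt \
          /vols/Data/myproject/results/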

Older backups

For backups from earlier time points, please contact computing-help@win.ox.ac.uk for advice.

Limitations

The snapshot system remembers the state of the file system at the point in time the snapshot is taken. If your file/folder did not exist when the last snapshot was taken, it will not have been backed up.

Due to bandwidth limitations and operational constraints we do not guarantee that the snapshot regime is always adhered to.

Protecting Data on Scratch

Scratch is not intended to provide any significant protection against file deletion or overwrites. Any important, unchanging (or rarely changing) data should ideally be moved to /vols/Data, but you can also use the tape archive facility to make user-driven, point-in-time backups of a folder. Please note that we do not routinely delete archives from tape, so please be considerate in your use of this option so as not to fill the resource we have.

To take an archive of a folder without removing it from disk, you may use the -k option to the archive command - see Archiving Data.
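A hypothetical invocation (the folder name is illustrative; consult the Archiving Data page for the exact syntax):

    archive -k /vols/Scratch/jbloggs/completed_study    # -k keeps the on-disk copy after archiving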

Technical information on how data is stored - may be used when preparing Data Management Plans or Data Privacy Assessments

Technical information

FMRIB and OHBA's file stores use the ZFS file system to store data. This is a check-summed, copy-on-write file system that verifies data blocks whenever they are accessed (and on a regular schedule for rarely accessed files). Data is distributed over multiple disks with sufficient parity data to ensure that disk failures do not cause data loss. Any data failing its checksum is, where possible, automatically repaired; otherwise an administrator is notified of the corruption, minimising the likelihood of silent data rot. The copy-on-write method of operation ensures that the file system always remains in a consistent state.

/home/fs0, /vols/Data and /ohba/* folders are located in triple-disk redundant data stores. This allows the system to survive up to three disk failures in any one set of disks (there is more than one set). This level of data security comes at the cost of lower overall performance, especially with respect to cluster tasks; SSD-based storage is used to improve write performance.

/vols/Scratch is located on a mirror-pair redundant data store. Each mirror pair (of which there are many) can survive the loss of one of its disks. This means it is less secure against hardware failure than the other stores, but performance is much higher when multiple computers access the store at once (e.g. the cluster). The speed increase is, however, traded for overall capacity, so this share is smaller than the /vols/Data store.

/vols/Data and /vols/Scratch are shared from high-availability servers allowing some system maintenance tasks to take place with minimal disruption to service.

/vols/Scratch and /ohba/* folders utilise at-rest encryption to ensure that data from failed disks is not recoverable.