Storage and data transfer
Once you can log in to the cluster, you will want to set up your computations, and for that you need your data. Two questions therefore arise:
1. Where to store data?
2. How to transfer data between your local computer and the cluster?
In this document, we explain the storage system and give examples of how to transfer your data between your local computer and the cluster.
Upload a directory from your local computer to /cluster/scratch/username ($SCRATCH) on Euler
$ scp -r dummy_dir username@euler.ethz.ch:/cluster/scratch/username/
Log in to the cluster and check your disk space quota
$ cd $HOME
$ pwd
/cluster/home/username
$HOME is a safe, long-term storage for critical data (program source, scripts, etc.) and is accessible only by the user (owner). This means other people cannot read its contents.
There is a disk quota of 16/20 GB and a maximum of 80’000/100’000 files (soft/hard quota). You can check the quota with the command lquota.
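Alongside lquota, standard tools give a quick, portable view of your usage; a minimal sketch (lquota on Euler reports the quota directly):

```shell
# Inspect usage in $HOME with standard tools
ls -ld "$HOME"                             # on Euler, home directories are owner-only (drwx------)
du -sh "$HOME"                             # total space used
find "$HOME" -type f 2>/dev/null | wc -l   # file count, to compare against the 80'000/100'000 limit
```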
Its content is saved every hour/day/week using snapshots, which are stored in the hidden .snapshot directory.
$ cd $SCRATCH
$ pwd
/cluster/scratch/username
$SCRATCH is a fast, short-term storage for computations running on the cluster. It is created automatically upon first access (cd $SCRATCH) and visible (mounted) only when accessed.
It has strict usage rules (see $SCRATCH/__USAGE_RULES__ for details) and has no backup.
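Since files on $SCRATCH are purged after their short life span, it can be useful to preview which of your files are old enough to be purge candidates. A sketch, assuming a 14-day cutoff derived from the roughly two-week life span (the authoritative policy is in $SCRATCH/__USAGE_RULES__):

```shell
# List files in $SCRATCH not modified for more than 14 days;
# these are likely candidates for the automatic purge
find "$SCRATCH" -type f -mtime +14
```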
/scratch on each compute node ($TMPDIR)
The local scratch is intended for serial, I/O-intensive applications. Because it lives on the compute node, it has a very short life span: data are deleted automatically when the job ends.
Scratch space must be requested by the job and has no backup.
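A sketch of how a batch job might request local scratch and use $TMPDIR, assuming the Slurm batch system; the program name, file names, and requested sizes are placeholders:

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --tmp=20G                  # request 20 GB of local scratch ($TMPDIR)
cp "$SCRATCH/input.dat" "$TMPDIR"  # stage input onto the fast node-local disk
cd "$TMPDIR"
./my_program input.dat             # hypothetical I/O-intensive application
cp output.dat "$SCRATCH/"          # copy results back before the job ends and $TMPDIR is wiped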
Shareholders can buy as much space as they need on the Project and Work file systems and manage the access rights themselves. Quota can be checked with lquota. The content is backed up multiple times per week.
$ cd /cluster/project/groupname
Similar to $HOME, but for groups: a safe, long-term storage for critical data.
$ cd /cluster/work/groupname
Similar to the global scratch, but without automatic purge: a fast, short- or medium-term storage for large computations.
The folder is visible only when accessed.
Groups who have purchased storage on the central NAS of ETH, provided by ID Systemdienste, can access it on our clusters.
Groups who operate their own NAS can export a shared file system via NFS to Euler. The user and group IDs on the NAS need to be consistent with ETH user names and groups.
The NAS share needs to be mountable via NFSv3 (shares that only support CIFS cannot be mounted on the HPC clusters), and exported to the subnet of our HPC clusters. The NAS is then mounted automatically on our clusters under
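For illustration, an NFSv3 export on the group's NAS might look like the following /etc/exports entry; the share path and subnet are placeholders, not the actual cluster subnet:

```
# /etc/exports on the group NAS (hypothetical values)
/export/groupdata  10.0.0.0/24(rw,sync,no_subtree_check)
```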
File system comparison
|File system|Life span|Max size|Snapshots|Backup|Small files|Large files|
|---|---|---|---|---|---|---|
|$SCRATCH|2 weeks|2.5 TB|-|-|o|✓✓|
|Local /scratch|duration of job|800 GB|-|-|✓✓|o|
- Snapshots: up to 3 weeks
- Backup: up to 90 days
Data transfer with command line tools
Upload a file to the cluster
Upload dummy_file from your workstation to your home directory on Euler
scp dummy_file username@euler.ethz.ch:
Download a file from the cluster
Download dummy_file from Euler to the current directory on your workstation
scp username@euler.ethz.ch:dummy_file .
Upload a directory to the cluster
Copy a directory to Euler
scp -r dummy_dir username@euler.ethz.ch:
Exercise: upload a directory with rsync
Create two files in the dummy directory and use rsync to transfer the folder
mkdir dummy_dir
touch dummy_dir/dummy_file1 dummy_dir/dummy_file2
rsync -av dummy_dir username@euler.ethz.ch: