Storage and data transfer
Revision as of 09:20, 14 January 2021
Personal storage ($HOME)

$ cd $HOME
$ pwd
/cluster/home/username

$HOME is a safe, long-term storage for critical data (program source, scripts, etc.) and is accessible only by the user (owner), so other people cannot read its contents. There is a disk quota of 16/20 GB and a maximum of 80’000/100’000 files (soft/hard quota). You can check the quota with the command lquota. Its content is saved every hour/day/week using snapshots, which are stored in the hidden .snapshot directory.
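A typical use of the snapshots is recovering a file that was deleted by accident. The sketch below simulates the .snapshot layout in a temporary directory so the commands are runnable anywhere; the snapshot name "hourly.0" and the file name "analysis.sh" are assumptions for illustration — on the cluster you would list $HOME/.snapshot to see the actual names.

```shell
set -eu
# On the cluster, $HOME/.snapshot is provided by the file system;
# here we simulate that layout in a temporary directory.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.snapshot/hourly.0"                      # hypothetical snapshot name
echo '#!/bin/sh' > "$demo_home/.snapshot/hourly.0/analysis.sh" # hypothetical deleted file
# The restore itself is just a copy out of the read-only snapshot:
cp "$demo_home/.snapshot/hourly.0/analysis.sh" "$demo_home/analysis.sh"
ls "$demo_home"
```

Snapshots are read-only, so restoring never risks overwriting the snapshot contents themselves.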
Global scratch ($SCRATCH)

$ cd $SCRATCH
$ pwd
/cluster/scratch/username

$SCRATCH is a fast, short-term storage for computations running on the cluster. It is created automatically upon first access (cd $SCRATCH) and is visible (mounted) only when accessed. It has strict usage rules (see $SCRATCH/__USAGE_RULES__ for details) and no backup.
Local scratch (/scratch on each compute node, $TMPDIR)

The local scratch is intended for serial, I/O-intensive applications. It has a very short life span: data are deleted automatically when the job ends. Scratch space must be requested by the job (see "Batch system") and has no backup.
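Because local scratch disappears when the job ends, jobs normally stage input into $TMPDIR, compute there, and copy the results back before finishing. The sketch below shows that pattern; the file names and the tr command (standing in for a real application) are illustrative, and we fall back to /tmp when $TMPDIR is unset so the sketch also runs outside a job.

```shell
set -eu
# Stage into node-local scratch; the batch system points $TMPDIR at it.
stage="${TMPDIR:-/tmp}/stage.$$"
mkdir -p "$stage"
echo "input data" > "$stage/input.txt"        # in a real job: cp from $HOME or $SCRATCH
# Stand-in for the real I/O-intensive application:
tr 'a-z' 'A-Z' < "$stage/input.txt" > "$stage/output.txt"
# Copy results off the node before the job ends (local scratch is purged):
cp "$stage/output.txt" ./output.txt
```

Copying results back is the critical step: anything left in local scratch is lost when the job terminates.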
Project storage

$ cd /cluster/project/groupname

Similar to $HOME, but for groups: a safe, long-term storage for critical data. Shareholders can buy as much space as they need and manage access rights themselves. The quota can be checked with lquota. The content is backed up multiple times per week.
Work storage

$ cd /cluster/work/groupname

Similar to the global scratch, but without automatic purge: a fast, short- or medium-term storage for large computations. Shareholders can buy as much space as they need and manage access rights themselves. The folder is visible (mounted) only when accessed. The quota can be checked with lquota. The content is backed up multiple times per week.
Central NAS

Groups who have purchased storage on the central NAS of ETH can access it on our clusters. The NAS share needs to be mountable via NFS (shares that only support CIFS cannot be mounted on the HPC clusters) and exported to the subnet of our HPC clusters (please contact ID Systemdienste and ask them for an NFS export of your NAS share). The NAS share is then mounted automatically on our clusters under

/nfs/servername/sharename
Other NAS

Groups who operate their own NAS can export a shared file system via NFS to Euler. The user and group IDs on the NAS need to be consistent with ETH user names and groups. The NAS needs to support NFSv3 (currently the only NFS version supported on our side) and must be exported to the subnet of our HPC clusters. The NAS is then mounted automatically on our clusters under

/nfs/servername/sharename