Storage and data transfer



[Image: storage.png]

Once you can log in to the cluster, you can start setting up your computations. For that you need your data, so two questions arise:

1. Where to store data?
2. How to transfer data?

Here, we explain the storage system on the cluster and give examples of how to transfer data between your local computer and the cluster.

Quick examples

Upload a directory from your local computer to /cluster/scratch/username ($SCRATCH) on Euler

$ scp -r dummy_dir username@euler.ethz.ch:/cluster/scratch/username/

Log in to the cluster and check your disk space quota

$ lquota
+-----------------------+-------------+------------+---------------+---------------+
| Storage location:     | Quota type: | Used:      | Soft quota:   | Hard quota:   |
+-----------------------+-------------+------------+---------------+---------------+
| /cluster/home/sfux    | space       |    8.85 GB |      17.18 GB |      21.47 GB |
| /cluster/home/sfux    | files       |      25610 |         80000 |        100000 |
+-----------------------+-------------+------------+---------------+---------------+
| /cluster/shadow       | space       |    4.10 kB |       2.15 GB |       2.15 GB |
| /cluster/shadow       | files       |          2 |         50000 |         50000 |
+-----------------------+-------------+------------+---------------+---------------+
| /cluster/scratch/sfux | space       |  237.57 kB |       2.50 TB |       2.70 TB |
| /cluster/scratch/sfux | files       |         29 |       1000000 |       1500000 |
+-----------------------+-------------+------------+---------------+---------------+
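
If you exceed a quota, the standard du and find tools can show where the space and files go (the paths below are examples):

$ du -sh $HOME/*               # size of each item in your home directory
$ find $HOME -type f | wc -l   # count files, relevant for the file-number quota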

Personal storage for all users

$HOME

$ cd $HOME
$ pwd
/cluster/home/username 
  • $HOME is safe, long-term storage for critical data (program sources, scripts, etc.) and is accessible only by the user (owner); other people cannot read its contents.
  • There is a disk quota of 16/20 GB and a maximum of 80’000/100’000 files (soft/hard quota). You can check the quota with the command lquota.
  • Its content is saved every hour/day as snapshots, which are stored in the hidden .snapshot directory (see the example below).
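
For example, to recover an accidentally deleted file from a snapshot (the snapshot and file names below are hypothetical; list the .snapshot directory to see what exists on your system):

$ ls $HOME/.snapshot                               # list available snapshots
$ cp $HOME/.snapshot/hourly.0/my_script.sh $HOME/  # restore one file (names are examples)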

Global Scratch

$ cd $SCRATCH
$ pwd
/cluster/scratch/username
  • $SCRATCH is fast, short-term storage for computations running on the cluster. It is created automatically upon first access (cd $SCRATCH) and is visible (mounted) only when accessed.
  • It has strict usage rules (see $SCRATCH/__USAGE_RULES__ for details, or read the file directly as shown below) and has no backup.
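
To read the usage rules directly on the cluster:

$ cat $SCRATCH/__USAGE_RULES__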

Local Scratch

/scratch on each compute node ($TMPDIR)

  • The local scratch is intended for serial, I/O-intensive applications. It has a very short life span: data are deleted automatically when the job ends.
  • Scratch space must be requested by the job (see the sketch below) and has no backup.

See how to use local scratch
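
A minimal sketch of using local scratch, assuming the LSF batch system (bsub) in use on Euler at the time; the scratch reservation is specified in MB per core, and the program and file names are placeholders:

$ bsub -R "rusage[scratch=10000]" < job_script.sh   # request ~10 GB of local scratch per core

Inside job_script.sh, stage data through $TMPDIR and copy results back before the job ends:

cp $HOME/input.dat $TMPDIR/           # stage input onto the fast local disk
cd $TMPDIR
./my_program input.dat > output.dat   # run the I/O-intensive part locally
cp output.dat $HOME/results/          # save results; $TMPDIR is wiped when the job ends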

Group storage for shareholders

Shareholders can buy as much space on Project and Work as they need and manage the access rights themselves. Quota can be checked with lquota. The content is backed up multiple times per week.

Project

$ cd /cluster/project/groupname

Similar to $HOME, but for groups: safe, long-term storage for critical data.
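
For example, to preserve final results from your personal scratch on the group's backed-up project storage (the paths are placeholders):

$ cp -r $SCRATCH/results /cluster/project/groupname/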

Work

$ cd /cluster/work/groupname

Similar to global scratch, but without automatic purging: fast, short- or medium-term storage for large computations.

The folder is visible only when accessed.

External Storage

Central NAS/CDS

Groups who have purchased storage on the central NAS/CDS of ETH provided by ID Systemdienste can access it on our clusters.

Other NAS

Groups who are operating their own NAS can export a shared file system via NFS to Euler. The user and group IDs on the NAS need to be consistent with ETH user names and groups.

The NAS share needs to be mountable via NFSv3 (shares that only support CIFS cannot be mounted on the HPC clusters) and must be exported to the subnet of our HPC clusters. The NAS is then mounted automatically on our clusters under

/nfs/servername/sharename
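
To check what a NAS exports and to trigger the automatic mount (showmount is a standard NFS client tool; servername and sharename are placeholders as above):

$ showmount -e servername        # list file systems exported by the NAS
$ ls /nfs/servername/sharename   # accessing the path mounts it automatically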

File system comparison

File system      | Life span       | Max size | Snapshots | Backup   | Small files | Large files
-----------------+-----------------+----------+-----------+----------+-------------+------------
$HOME            | permanent       | 16 GB    | ✓         | ✓        | ✓           | o
$SCRATCH         | 2 weeks         | 2.5 TB   | -         | -        | o           | ✓✓
/cluster/project | 4 years         | flexible | ✓         | optional | ✓           | ✓
/cluster/work    | 4 years         | flexible | -         | ✓        | o           | ✓✓
Local /scratch   | duration of job | 800 GB   | -         | -        | ✓✓          | o
Central NAS      | flexible        | flexible | ✓         | ✓        | ✓           | ✓

Retention time

  • Snapshots: up to 7 days
  • Backup: up to 90 days

Data transfer with command line tools

Using scp command

Upload dummy_file from your workstation to your home directory on Euler

$ scp dummy_file username@euler.ethz.ch:

Download dummy_file from Euler to the current directory on your workstation

$ scp username@euler.ethz.ch:dummy_file .

Copy a directory to Euler

$ scp -r dummy_dir username@euler.ethz.ch:
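
Since scp starts a new transfer for every file, a directory containing many small files often copies faster as a single archive (a generic sketch reusing dummy_dir):

$ tar czf dummy_dir.tar.gz dummy_dir                      # pack the directory into one compressed archive
$ scp dummy_dir.tar.gz username@euler.ethz.ch:            # copy the archive to your home directory on Euler
$ ssh username@euler.ethz.ch "tar xzf dummy_dir.tar.gz"   # unpack it there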

Example: upload a directory with rsync

Create two files in the dummy directory and use rsync to transfer the folder

$ mkdir dummy_dir
$ touch dummy_dir/dummy_file1 dummy_dir/dummy_file2
$ rsync -av dummy_dir username@euler.ethz.ch:dummy_dir
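
Note that rsync treats a trailing slash on the source specially, and re-running the same command transfers only what changed, so interrupted copies can be resumed:

$ rsync -av dummy_dir/ username@euler.ethz.ch:dummy_dir   # trailing slash: copy the contents of dummy_dir
$ rsync -av --progress dummy_dir username@euler.ethz.ch:  # show progress; re-run to resume or update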


Data transfer with graphical tools

Table: Graphical file transfer programs

Linux     | macOS     | Windows
----------+-----------+----------
FileZilla | FileZilla | WinSCP
          | Cyberduck | PSCP
          |           | FileZilla
          |           | Cyberduck

WinSCP

[Images: Winscp1.png, Winscp2.png]

Further reading

  • User guide: Storage systems
  • Unified quota wrapper
  • Too much space is used by your output files
  • Best practices guide for Lustre file system