Storage and data transfer

[[Image:Storage.png]]

== Personal storage ==

=== $HOME ===

$ cd $HOME
$ pwd
/cluster/home/username 

$HOME is safe, long-term storage for critical data (program sources, scripts, etc.) and is accessible only by its owner, so other users cannot read its contents.

There is a disk quota of 16/20 GB and a maximum of 80’000/100’000 files (soft/hard quota). You can check the quota with the command lquota.

Its content is saved every hour/day/week using snapshots, which are stored in the hidden .snapshot directory.
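For example, to list the available snapshots and restore a file from one of them (a minimal sketch; the snapshot directory names vary, and myscript.sh is a placeholder for the file you want to restore):

$ ls $HOME/.snapshot
$ cp $HOME/.snapshot/<snapshot_name>/myscript.sh $HOME/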

=== Global Scratch ===

$ cd $SCRATCH
$ pwd
/cluster/scratch/username

$SCRATCH is fast, short-term storage for computations running on the cluster. It is created automatically upon first access (cd $SCRATCH) and is visible (mounted) only when accessed.

It has strict usage rules (see $SCRATCH/__USAGE_RULES__ for details) and has no backup.
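You can display the usage rules directly on the cluster:

$ cat $SCRATCH/__USAGE_RULES__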

=== Local Scratch ===

/scratch on each compute node ($TMPDIR)

The local scratch is intended for serial, I/O-intensive applications. It has a very short life span: data are deleted automatically when the job ends.

Scratch space must be requested by the job and has no backup.

See how to use local scratch
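As an illustration, a batch job could stage its data through the local scratch (a minimal sketch, assuming the LSF batch system; job.sh, my_program, input.dat and output.dat are placeholders, and the scratch value is given in MB per core):

# job.sh: copy the input to the node-local scratch, run there, copy the result back
cp $HOME/input.dat $TMPDIR/
./my_program $TMPDIR/input.dat > $TMPDIR/output.dat
cp $TMPDIR/output.dat $SCRATCH/

# submit the job and request 5000 MB of local scratch per core
$ bsub -n 4 -R "rusage[scratch=5000]" < job.sh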

== Group storage ==

Shareholders can buy as much space on Project and Work as they need and manage the access rights themselves. The quota can be checked with lquota. The content is backed up multiple times per week.
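For example, to check the quota of a group share (an assumption here: lquota may also accept the path of the share as an argument; groupname is a placeholder):

$ lquota /cluster/project/groupname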

=== Project ===

$ cd /cluster/project/groupname

Similar to $HOME, but for groups: safe, long-term storage for critical data.

=== Work ===

$ cd /cluster/work/groupname

Similar to the global scratch, but without automatic purging: fast, short- or medium-term storage for large computations.

The folder is visible only when accessed.
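Because of the automount, the group folder may not show up in a plain directory listing until it has been accessed (groupname is a placeholder):

$ ls /cluster/work             # the group folder may not be listed yet
$ cd /cluster/work/groupname   # accessing the full path mounts it
$ pwd
/cluster/work/groupname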

== External Storage ==

=== Central NAS ===

Groups who have purchased storage on the central NAS of ETH can access it on our clusters.

=== Other NAS ===

Groups who operate their own NAS can export a shared file system via NFS to Euler. The user and group IDs on the NAS need to be consistent with ETH user names and groups.

The NAS share needs to be mountable via NFSv3 (shares that only support CIFS cannot be mounted on the HPC clusters) and must be exported to the subnet of our HPC clusters. The NAS is then mounted automatically on our clusters under

/nfs/servername/sharename
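Once the export is in place, the share can simply be accessed under that path, which triggers the automount (servername and sharename are placeholders):

$ cd /nfs/servername/sharename
$ ls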

== Large versus small file systems ==

[[Image:storage_summarized_table.png|900px]]

== Check your quota ==

Check your quota with the command

$ lquota

Screen output

+-----------------------------+-------------+------------------+------------------+------------------+
| Storage location:           | Quota type: | Used:            | Soft quota:      | Hard quota:      |
+-----------------------------+-------------+------------------+------------------+------------------+
| /cluster/home/jarunanp      | space       |         10.23 GB |         17.18 GB |         21.47 GB |
| /cluster/home/jarunanp      | files       |            66386 |            80000 |           100000 |
+-----------------------------+-------------+------------------+------------------+------------------+ 
| /cluster/shadow             | space       |          4.10 kB |          2.15 GB |          2.15 GB |
| /cluster/shadow             | files       |                2 |            50000 |            50000 |
+-----------------------------+-------------+------------------+------------------+------------------+
| /cluster/scratch/jarunanp   | space       |          5.99 GB |          2.50 TB |          2.70 TB |
| /cluster/scratch/jarunanp   | files       |            11176 |          1000000 |          1500000 |
+-----------------------------+-------------+------------------+------------------+------------------+

== Further reading ==