Sandbox Euler

From ScientificComputing
Revision as of 11:30, 22 December 2021 by Aneeser (talk | contribs) (Storage)


Compute Nodes

Cluster   | Nodes | Cores/node | CPU                 | Clock speed                       | Memory | Memory clock | Local scratch [MB] | Server model / supplier         | Network
----------|-------|------------|---------------------|-----------------------------------|--------|--------------|--------------------|---------------------------------|--------
Euler III | 1215  | 4          | Intel Xeon E3-1585Lv5 | 3.0-3.7 GHz                     | 32 GB  | 2133 MHz     | 211,293.0          | Hewlett-Packard m710x           | 10G/40G Ethernet
Euler IV  | 288   | 36         | Intel Xeon Gold 6150  | 2.7-3.7 GHz                     | 192 GB | 2666 MHz     | 348,582.0          | Hewlett-Packard XL230k Gen10    | 100 Gb/s InfiniBand EDR
Euler V   | 352   | 24         | Intel Xeon Gold 5118  | 2.3 GHz nominal, 3.2 GHz peak   | 96 GB  | 2400 MHz     | 348,582.0          | Hewlett-Packard BL460c Gen10    |
Euler VI  | 216   | 128        | AMD EPYC 7742         | 2.25 GHz nominal, 3.4 GHz peak  | 512 GB | 3200 MHz     | 920,618.0          | Dalco AG (Swiss company)        | 100 Gb/s InfiniBand HDR
Euler VII | 292   | 128        | AMD EPYC 7H12         | 2.6 GHz nominal, 3.3 GHz peak   | 256 GB | 3200 MHz     | 348,582.0          | HPE ProLiant XL225n Gen10 Plus  | 100 Gb/s InfiniBand HDR
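The node and core counts in the table imply the aggregate core count of each cluster generation. A short sketch (values copied from the table above, purely a sanity check):

```python
# Node counts and cores/node per cluster, as listed in the table above.
clusters = {
    "Euler III": (1215, 4),
    "Euler IV": (288, 36),
    "Euler V": (352, 24),
    "Euler VI": (216, 128),
    "Euler VII": (292, 128),
}

# Total cores per cluster = nodes * cores per node.
totals = {name: nodes * cores for name, (nodes, cores) in clusters.items()}
for name, total in totals.items():
    print(f"{name}: {total} cores")
print(f"All clusters combined: {sum(totals.values())} cores")
```

For example, the two AMD EPYC generations (Euler VI and VII) alone contribute 27,648 and 37,376 cores respectively.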

Storage

Euler contains two types of storage system:

  • An enterprise-class NAS system (NetApp FAS 9000 & AFF A800) for long-term storage, such as home directories, applications, virtual machines, project data, etc.
  • A high-performance Lustre parallel file system (DDN ES14KX) for short- and medium-term storage, such as scratch and work file systems

Home directories and other critical data are backed up daily; all other data (except scratch) are backed up at least once per week for disaster recovery.
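The two-tier split above can be summarized as a small lookup table. This is only a summary aid for the description on this page, not an official mapping; the category names are assumptions:

```python
# Which storage system serves which kind of data, per the description above.
STORAGE_TIER = {
    "home": "NAS (NetApp)",          # long-term, backed up daily
    "applications": "NAS (NetApp)",  # long-term
    "project": "NAS (NetApp)",       # long-term, backed up weekly
    "scratch": "Lustre (DDN)",       # short-term, not backed up
    "work": "Lustre (DDN)",          # medium-term
}

def tier_for(kind: str) -> str:
    """Return the storage system responsible for a given data category."""
    return STORAGE_TIER[kind]
```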

Networks

Euler contains multiple networks:

  • A common 10 Gb/s Ethernet network for data transfer between the storage systems and the cluster's compute and login nodes
  • Three separate 56 Gb/s InfiniBand FDR networks for data transfer between the compute nodes themselves (e.g. MPI)
  • A 100 Gb/s InfiniBand EDR network for data transfer within Euler IV (MPI) and between Euler IV and the new Lustre high-performance storage system
  • A 200/100 Gb/s InfiniBand HDR network for data transfer within Euler VI, with 100 Gb/s from compute nodes to switches and multiple 200 Gb/s links between switches
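To put these line rates in perspective, a nominal rate in Gb/s converts to GB/s by dividing by 8. The figures below are theoretical peaks and ignore link encoding and protocol overhead:

```python
def gbits_to_gbytes(gbits_per_s: float) -> float:
    """Convert a nominal line rate in Gb/s to GB/s (8 bits per byte)."""
    return gbits_per_s / 8.0

# Theoretical peak throughput of the networks listed above.
for name, rate in [("10G Ethernet", 10), ("FDR", 56), ("EDR", 100), ("HDR", 200)]:
    print(f"{name}: {gbits_to_gbytes(rate):.1f} GB/s peak")
```

So a single 100 Gb/s EDR link tops out at 12.5 GB/s in theory; sustained application bandwidth will be lower.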