Euler

== Introduction ==
 
'''Euler''' stands for '''''Erweiterbarer, Umweltfreundlicher, Leistungsfähiger ETH-Rechner''''' ("Extensible, Environmentally Friendly, High-Performance ETH Computer"). It is an evolution of the [[Brutus]] concept. Euler also incorporates new ideas from the ''Academic Compute Cloud'' project in 2012–2013 as well as the ''Calculus'' prototype in 2013.
 
The Euler cluster is not intended to ''replace'' Brutus, at least not in the near future, but to ''complement'' it. Whereas Brutus is optimized for high throughput, Euler is designed squarely for speed.

Euler has been regularly expanded since its conception in 2013. The first phase (Euler I) was purchased at the end of 2013 and has been in operation since the beginning of 2014. The second phase (Euler II) was purchased at the end of 2014 and has been in operation since the beginning of 2015; additional compute nodes were added at the end of 2015. The third phase (Euler III) has just been ordered and is expected to become operational at the beginning of 2017.
  
== Euler I ==

Euler I contains '''448''' compute nodes (Hewlett-Packard BL460c Gen8), each equipped with two '''12-core''' [http://ark.intel.com/products/75283/ Intel Xeon E5-2697v2] processors (2.7 GHz nominal, '''3.0–3.5 GHz''' peak). All nodes are equipped with DDR3 memory clocked at 1866 MHz (64 × 256 GB; 32 × 128 GB; 352 × 64 GB) and are connected to two '''high-speed networks''' (10 Gb/s Ethernet for file access; 56 Gb/s InfiniBand FDR for parallel computations).

Compared to Brutus, Euler I offers:

* '''3x''' more performance per core (28 vs 8.8 GF peak)
* '''36%''' more performance per node (576 vs 422 GF peak)
* '''30%''' more computing capacity overall (260 vs 200 TF)
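
These peak figures follow directly from clock rate × FLOPs per cycle × core count. Below is a back-of-the-envelope sketch, not an official benchmark, assuming the AVX rate of 8 double-precision FLOPs per cycle per core on these Ivy Bridge processors, the 3.5 GHz maximum turbo for the per-core figure, and the 3.0 GHz all-core peak for the per-node figure:

<pre>
# Peak performance estimate: clock (GHz) x FLOPs/cycle x cores = GFLOPS.
FLOPS_PER_CYCLE = 8  # AVX on Ivy Bridge: 4-wide add + 4-wide multiply

def peak_gflops(clock_ghz, cores=1):
    return clock_ghz * FLOPS_PER_CYCLE * cores

print(peak_gflops(3.5))                   # 28.0 GF per core (max turbo)
print(peak_gflops(3.0, cores=2 * 12))     # 576.0 GF per node (all-core peak)
print(448 * peak_gflops(3.0, 24) / 1000)  # ~258 TF in total, i.e. the ~260 TF quoted above
</pre>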
  
== Euler II ==

Euler II contains '''768''' compute nodes of a newer generation (BL460c Gen9), each equipped with two 12-core Intel Xeon E5-2680v3 processors (2.5 GHz). All nodes are equipped with DDR4 memory clocked at 2133 MHz (32 × 512 GB; 32 × 256 GB; 32 × 128 GB; 672 × 64 GB) and are connected to two '''high-speed networks''' (10 Gb/s Ethernet for file access; 56 Gb/s InfiniBand FDR for parallel computations). The first '''320''' of these nodes, in production since March 2015, increased the cluster's overall computing capacity to approximately '''570 TF'''.
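
The 570 TF figure can be sanity-checked the same way. A minimal sketch, assuming the Haswell FMA rate of 16 double-precision FLOPs per cycle per core at the 2.5 GHz base clock:

<pre>
# Peak of one Euler II node: 2.5 GHz x 16 FLOPs/cycle (FMA) x 24 cores.
node_gf = 2.5 * 16 * 24          # 960 GF per node
phase_tf = 320 * node_gf / 1000  # ~307 TF for the first 320 nodes
print(phase_tf + 258)            # ~565 TF with Euler I's ~258 TF, i.e. the ~570 TF above
</pre>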
  
== Euler III ==

Euler III will consist of '''1215''' compute nodes (Hewlett-Packard m710x), each equipped with a quad-core Xeon E3-1285Lv5 processor (3.0–3.7 GHz) and 32 GB of DDR4 memory (2400 MHz). All these nodes are connected to the rest of the cluster via 10/40 Gb/s Ethernet.
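
The Euler III nodes trade core count for memory per core. A quick comparison against a standard 64 GB node of the earlier phases (an illustrative calculation from the figures above, not an official specification):

<pre>
# Memory per core for a standard node of each phase.
print(64 / 24)  # Euler I/II: 64 GB across 2 x 12 cores, ~2.7 GB per core
print(32 / 4)   # Euler III: 32 GB across 4 cores, 8.0 GB per core
</pre>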
 
 
== External Links ==

* [http://brutuswiki.ethz.ch/brutus/Getting_started_with_Euler Getting started with Euler]
* https://www.ethz.ch/de/news-und-veranstaltungen/eth-news/news/2014/05/euler-mehr-power-fuer-die-forschung.html
* https://blogs.ethz.ch/id/2014/05/09/der-neue-hpc-cluster-euler-ist-verfugbar
