From ScientificComputing
Revision as of 15:53, 24 January 2018 by Byrdeo (talk | contribs) (Compute nodes)



Leonhard is a cluster designed for “big data” applications. This cluster is financed jointly by a small group of shareholders and IT Services, and by a special grant from the Executive Board of ETH.

As the name implies, Leonhard is closely related to Euler. It is operated by the same group and offers the same software environment. One part of Leonhard is a normal system intended for shareholders dealing with open (public) research data; the other part is a secure system intended for confidential data, such as personalized medicine data. In general, the name Leonhard refers to the whole cluster, and its two parts are called Leonhard Open and Leonhard Med, respectively. For security reasons, each part has its own dedicated storage, network and login nodes.

Unlike Euler, which is open to all members of ETH without restriction, Leonhard is reserved exclusively for the groups who have invested in it (the so-called shareholders). Temporary guest accounts can be created on demand for prospective shareholders who want to try it out before buying in.



Each part of Leonhard is attached to its own dedicated high-performance parallel file system with a capacity of 2.0 PB (1 PB = 10^15 bytes).

Compute nodes

Leonhard contains four types of compute nodes:

  • 36 Standard nodes equipped with two 18-core Intel Xeon E5-2697v4 processors and 128 or 512 GB of memory
  • 48 Big data nodes equipped with two 18-core Intel Xeon Gold 6140 processors and 128, 512 or 768 GB of memory
  • 22 GPU nodes equipped with two 10-core Xeon E5-2650v4 processors, 256 GB of memory and 8 Nvidia GTX 1080 GPUs
  • 50 GPU nodes equipped with two 10-core Xeon E5-2650v4 processors, 256 GB of memory and 8 Nvidia GTX 1080 Ti GPUs

In total, Leonhard contains 4,464 CPU cores and 1,884,160 CUDA cores. The GPUs alone have a peak performance (single precision) of 5.7 PF (1 PF = 10^15 floating-point operations per second).
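As a sanity check, the totals above can be reproduced from the node list together with the published per-GPU core counts (a GeForce GTX 1080 has 2,560 CUDA cores, a GTX 1080 Ti has 3,584). A minimal sketch:

```python
# Node inventory from the list above:
# (node count, CPU cores per node, GPUs per node, CUDA cores per GPU)
nodes = [
    (36, 2 * 18, 0, 0),     # standard nodes, Xeon E5-2697v4
    (48, 2 * 18, 0, 0),     # big data nodes, Xeon Gold 6140
    (22, 2 * 10, 8, 2560),  # GPU nodes, 8x GTX 1080 each
    (50, 2 * 10, 8, 3584),  # GPU nodes, 8x GTX 1080 Ti each
]

cpu_cores = sum(n * cores for n, cores, gpus, cc in nodes)
cuda_cores = sum(n * gpus * cc for n, cores, gpus, cc in nodes)

print(cpu_cores)   # 4464
print(cuda_cores)  # 1884160
```

Both figures match the totals quoted above.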


All compute nodes are connected together and to the cluster's parallel file systems via a 100 Gb/s InfiniBand EDR network.

Service description

The official service description and the current price list are available on the IT service catalogue.