Leonhard is a cluster designed for “big data” applications. This cluster is financed jointly by a small group of shareholders and IT Services, and by a special grant from the Executive Board of ETH.
As the name implies, Leonhard is closely related to Euler. It is operated by the same group and offers the same software environment. One part of Leonhard is a normal system intended for shareholders working with open (public) research data; the other is a secure system intended for confidential data, such as data from personalized-medicine research. In general, the name Leonhard refers to the whole cluster; its two parts are called Leonhard Open and Leonhard Med, respectively. For security reasons, each part has its own dedicated storage, network and login nodes.
Unlike Euler, which is open to all members of ETH without restriction, Leonhard is reserved exclusively for the groups that have invested in it (the so-called shareholders). Temporary guest accounts can be created on demand for prospective shareholders who want to try it out before buying in.
Each part of Leonhard is attached to its own dedicated high-performance parallel file system with a capacity of 2.0 PB (1 PB = 10^15 bytes).
Leonhard contains several types of compute nodes:
- 36 compute nodes equipped with two 18-core Intel Xeon E5-2697v4 processors and 128 or 512 GB of memory
- 48 compute nodes equipped with two 18-core Intel Xeon Gold 6140 processors and 96, 384 or 768 GB of memory
- 5 huge-memory nodes equipped with two 24-core AMD Epyc 7451 processors and 2048 GB of memory
- 22 GPU nodes equipped with two 10-core Xeon E5-2630v4 processors, 256 GB of memory and 8 Nvidia GTX 1080 GPUs
- 70 GPU nodes equipped with two 10-core Xeon E5-2630v4 processors, 256 GB of memory and 8 Nvidia GTX 1080 Ti GPUs
- 4 high-end GPU nodes (Nvidia DGX-1) equipped with two 20-core Xeon E5-2698v4 processors, 512 GB of memory and 8 Nvidia V100 GPUs
In total, Leonhard contains 5,128 CPU cores and 2,334,720 CUDA cores. The GPUs alone have a peak performance (single precision) of 7.0 PF (1 PF = 10^15 floating-point operations per second).
All compute nodes are connected to each other and to the cluster's parallel file systems via a 100 Gb/s Ethernet network.
The official service description and the current price list are available in the IT service catalogue.