Euler III Beta Testing

The Euler III extension to the Euler cluster is available for all interested beta testers.

Serial jobs and single-node parallel jobs that use 1 to 4 cores, need up to 30 GB of total memory, and run for up to 24 hours are good candidates for these nodes.

The new nodes run CentOS 7 exclusively, while all other production nodes in Euler currently run CentOS 6. Some programs may therefore fail to run due to missing libraries; if this happens, please let us know. Jobs that rely on Infiniband will not run on these nodes. Open MPI jobs will run, but MVAPICH2 jobs will not run without special care (see below).
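
If you suspect that a program fails because of missing shared libraries, a quick check (a suggestion, not an official procedure) is to run ldd on the executable from an Euler III node and look for unresolved entries:

ldd ./my_command | grep "not found"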

Known issues

Missing libraries
Euler III nodes run CentOS 7, unlike the rest of Euler, which runs CentOS 6. Some libraries may be missing. Let us know if you encounter this problem.
Infiniband and MPI
Euler III nodes do not have an Infiniband network, but they do have a fast, low-latency Ethernet interconnect. See below for details.
NAS NFS mounts
Euler III nodes are in a different IP range than the rest of the Euler nodes. You need to change your NAS export and/or update your firewall to include the new IP addresses.

Submitting beta jobs

To submit a job to run on the beta Euler III nodes, you must request the beta resource, e.g.,

bsub -R beta [other bsub options] ./my_command
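
For example, a job that fits within the limits mentioned above (up to 4 cores, 30 GB of total memory and 24 hours of run time) could be submitted as in the sketch below; it assumes that memory is requested per core in MB via the rusage resource, as is customary on Euler:

bsub -R beta -n 4 -W 24:00 -R "rusage[mem=7500]" ./my_command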

Submitting parallel jobs

While the Euler III nodes are targeted at serial and shared-memory parallel jobs, multi-node parallel jobs are still accepted. In that case, you need to request at most four cores per node:

bsub -R beta -R "span[ptile=4]" [other bsub options] ./my_command
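
As a sketch, a 16-core Open MPI job spread across four nodes (4 cores per node) might be submitted as follows; the module name open_mpi/1.6.5 is an assumption based on the version listed below and may differ:

module load open_mpi/1.6.5
bsub -R beta -R "span[ptile=4]" -n 16 mpirun ./my_command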

For MVAPICH2, you need to tell the system that Infiniband is not available,

module load interconnect/ethernet

before loading the MPI module.
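
Put together, an MVAPICH2 submission could look like the sketch below; the module name mvapich2/2.1 is assumed from the version listed below, and the important point is that interconnect/ethernet is loaded first:

module load interconnect/ethernet
module load mvapich2/2.1
bsub -R beta -R "span[ptile=4]" -n 16 mpirun ./my_command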

Open MPI
Open MPI 1.6.5 has been tested to work with acceptable performance.
MVAPICH2
MVAPICH2 2.1 works, but preliminary results show low scalability. You need to load the interconnect/ethernet module (see above).
Intel MPI
Intel MPI 5.1.3 has been tested.