Cluster user survey 2019
Contents
- 1 Introduction
- 2 Summary
- 3 Changes based on user feedback
- 4 Results
- 4.1 About participants
- 4.1.1 What is your role?
- 4.1.2 In which field are you working?
- 4.1.3 How important are computations for your work?
- 4.1.4 What kind of Euler/Leonhard user are you?
- 4.1.5 How many years of experience do you have in using a cluster/supercomputer?
- 4.1.6 How often do you use the Euler/Leonhard cluster?
- 4.1.7 What is your account status?
- 4.2 Application/software requirements
- 4.2.1 Which type of software do you use?
- 4.2.2 Which are the top 3 centrally installed software that you are using?
- 4.2.3 Are you happy with the centrally provided software?
- 4.2.4 Is it important for you to keep the older versions of a software?
- 4.2.5 Do you need any software that is not provided centrally?
- 4.2.6 Do you plan to use Singularity containers on Euler/Leonhard?
- 4.2.7 Are you using other container technologies?
- 4.3 Code development
- 4.3.1 How important are the following programming languages for your work?
- 4.3.2 Are you happy with the centrally provided tools for the code development?
- 4.3.3 Which parallelization libraries are you using?
- 4.3.4 Which scientific libraries are you using?
- 4.3.5 Which machine learning libraries/frameworks are you using?
- 4.4 Resource requirements
- 4.4.1 What type of computation are you running?
- 4.4.2 How many cores do you typically use per job?
- 4.4.3 What is the maximum memory that you need per node (GB)?
- 4.4.4 What is the typical runtime of your computations?
- 4.4.5 How happy are you with the performance of the cluster?
- 4.4.6 What kind of GPU computations do you typically run (if applicable)?
- 4.4.7 What model of GPU are you using on Leonhard (if applicable)?
- 4.4.8 How many GPUs per job do you typically use (if applicable)?
- 4.4.9 How do you see your future needs?
- 4.5 Using the batch system
- 4.5.1 How do you use the batch system on Euler/Leonhard?
- 4.5.2 Can you split long computations into shorter jobs?
- 4.5.3 Are you happy with the current configuration of the batch system?
- 4.5.4 Do you use the following features of the batch system?
- 4.6 System stability and availability
- 4.6.1 How happy are you with the stability and availability of the cluster?
- 4.6.2 How happy are you with the planned cluster maintenance?
- 4.6.3 How often have you lost jobs due to a problem on the cluster, e.g. node failure or file system issue?
- 4.7 Documentation and support
- 4.7.1 How often do you search for information on the wiki scicomp.ethz.ch?
- 4.7.2 How satisfied are you with the wiki?
- 4.7.3 How often did you use the following support channels? (times per year)
- 4.7.4 Are you happy with the support?
- 4.7.5 Would you like to have regular trainings on the following topics?
- 4.8 Shareholder model
- 4.8.1 If you had the choice, how would you prefer to get resources on Euler/Leonhard?
- 4.9 Central cluster and computing infrastructures
- 4.9.1 Which of the following computing infrastructures are the most important for your research?
- 4.9.2 How do you rate Euler/Leonhard in comparison with the other computing infrastructures used most frequently?
- 4.10 Additional services offered by Scientific IT Services (SIS)
Introduction
In September 2019, the HPC Group of Scientific IT Services launched a cluster user survey in order to learn about individual needs, assess user satisfaction, and further tailor our service portfolio to our customers and users. Moreover, we want to better understand how our customers differentiate us from other computing infrastructure providers, and to address potential gaps.
All cluster users who had used at least 1 CPU core on average over the past 6 months were invited to fill out a (very long) questionnaire. Out of these 954 users, 234 participated in the survey. This corresponds to a return rate of 24.53%, which is very impressive for such a large survey with many questions.
We would like to take this opportunity to thank all these users for taking the time to help us improve our service.
Summary
- Participants
  - We have many experienced (2+ years) cluster users
  - Access to computing capacity is important for the entire ETH research community, ranging from traditional quantitative disciplines such as physics and engineering through the life sciences to the social sciences
- Application/software requirements
  - The wide range of centrally provided software is appreciated
  - There should be more documentation about the centrally provided applications and libraries
  - Containers are not yet as widely used as we thought
- Code development
  - Python is the most used programming language on our clusters
  - Availability of tools for code development is good, but documentation could be better
- Resource requirements
  - Next to parallel jobs, there is a large number of serial computations
  - Most parallel jobs use a single full compute node
  - Users appreciate the possibility to choose among compute nodes with various memory sizes
  - Most jobs do not use a large amount of memory; however, the availability of nodes with more than 1 TB of memory is important to some users
  - The different run-time limits are justified; interestingly, a large fraction of users needs run-time limits beyond 24 hours
  - We observe very high user satisfaction with regard to the overall performance of the clusters
  - Most computations that use GPU acceleration use a single GPU
  - Half of the survey participants expect their need for computational resources to grow
- Batch system
  - Most users submit their jobs directly with bsub (a minimal submission sketch follows this summary)
  - Users are mostly happy with the configuration of the batch system; run-time limits and waiting times could be improved
  - Many additional features of the batch system (X11, interactive jobs, bjob_connect, etc.) are not well known; bbjobs is the exception
- System stability and availability
  - The perceived stability and availability of both the Euler and the Leonhard clusters are very good
  - Cluster maintenance windows are perceived as well organized, transparent to the users, and well communicated
  - Almost no lost jobs are reported due to node failures or file system issues
- Documentation and support
  - Information on the wiki is accurate and mostly up to date, but it could be more detailed and contain more examples
  - Cluster support is easy to reach and provides fast, accurate, and polite answers
  - There is a large demand for technical trainings and workshops
- Shareholder model
  - Our current shareholder model is well accepted and established
  - Most shares are financed through department/institute budgets
- Central cluster and computing infrastructures
  - For ETH researchers the most important infrastructures, next to Euler and Leonhard, are the ones at CSCS
  - Most users who use additional platforms next to Euler/Leonhard consider the ETH clusters equal to or better than the other resources
- Additional services offered by Scientific IT Services (SIS)
  - Roughly a third of the participants indicated interest in expertise and services that go beyond the basic support of the HPC team
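Since bsub appears throughout these findings, here is a minimal job-submission sketch for LSF, the batch system used on Euler and Leonhard at the time of the survey. The resource values and program names (my_program, my_parallel_program) are illustrative placeholders, not recommendations derived from the survey:

```
# Serial job: 4-hour run-time limit, 2048 MB of memory per core
# (placeholder values for illustration only)
bsub -W 4:00 -R "rusage[mem=2048]" ./my_program

# Parallel job on 24 cores with a 24-hour run-time limit
bsub -n 24 -W 24:00 mpirun ./my_parallel_program

# Interactive job (one of the lesser-known features listed above)
bsub -Is -W 1:00 bash

# Check the status and resource usage of a job with bbjobs,
# the wrapper mentioned above
bbjobs <job-id>
```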
Changes based on user feedback
Already implemented
- We redesigned the front page of the wiki to show more information
  - In particular, the front page now shows the 20 most recently installed software packages on Euler and Leonhard (this was requested by multiple users participating in the survey)
- The status of our clusters & services is now displayed at the top of every wiki page, which allows users to immediately see if a service is down
On our todo list
- The documentation about our systems and services needs to be augmented and improved overall
- We will add more examples to the wiki
- We will create a short summary of the "Using the batch system" documentation
Results
This section provides aggregated results for all questions in the survey. To visualize the answers, pie and donut charts are used for single-choice questions and bar charts for multiple-choice questions. As in the survey, questions are grouped by topic.
About participants
What is your role?
In which field are you working?
How important are computations for your work?
What kind of Euler/Leonhard user are you?
How many years of experience do you have in using a cluster/supercomputer?
How often do you use the Euler/Leonhard cluster?
What is your account status?
Application/software requirements
Which type of software do you use?
Which are the top 3 centrally installed software that you are using?
Are you happy with the centrally provided software?
Is it important for you to keep the older versions of a software?
Do you need any software that is not provided centrally?
Do you plan to use Singularity containers on Euler/Leonhard?
Are you using other container technologies?
Code development
How important are the following programming languages for your work?
Are you happy with the centrally provided tools for the code development?
Which parallelization libraries are you using?
Which scientific libraries are you using?
Which machine learning libraries/frameworks are you using?
Resource requirements
What type of computation are you running?
How many cores do you typically use per job?
What is the maximum memory that you need per node (GB)?
What is the typical runtime of your computations?
How happy are you with the performance of the cluster?
What kind of GPU computations do you typically run (if applicable)?
What model of GPU are you using on Leonhard (if applicable)?
How many GPUs per job do you typically use (if applicable)?
How do you see your future needs?
Using the batch system
How do you use the batch system on Euler/Leonhard?
Can you split long computations into shorter jobs?
Are you happy with the current configuration of the batch system?
Do you use the following features of the batch system?
System stability and availability
How happy are you with the stability and availability of the cluster?
How happy are you with the planned cluster maintenance?
How often have you lost jobs due to a problem on the cluster, e.g. node failure or file system issue?
Documentation and support
How often do you search for information on the wiki scicomp.ethz.ch?
How satisfied are you with the wiki?
How often did you use the following support channels? (times per year)
Are you happy with the support?
Would you like to have regular trainings on the following topics?
Shareholder model
If you had the choice, how would you prefer to get resources on Euler/Leonhard?
Central cluster and computing infrastructures
Which of the following computing infrastructures are the most important for your research?
How do you rate Euler/Leonhard in comparison with the other computing infrastructures used most frequently?