Tutorials
From ScientificComputing
Latest revision as of 16:21, 29 November 2022
For new users
Linux tutorials
Cluster tutorials
- Getting started with ETH clusters
  - Using the batch system
  - Third-party applications
- Getting started with GPUs
- Compiling an application
- Slurm/LSF Submission Line Advisor
- Parallel computing
- Rendering without display
For advanced users
Debugging and profiling
Batch system
- Transition from LSF to Slurm
- LSF to Slurm quick reference

The following articles are currently being adapted from LSF to Slurm; please use with caution:
- Job chaining (dependency conditions)
- Job arrays
- Hybrid (MPI+OpenMP) jobs
- GNU Parallel
- Node allocation for parallel jobs
- Working in multiple shareholder groups
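Several of the batch-system articles above build on the same Slurm mechanism: sbatch prints the ID of the submitted job, and that ID can be passed to a later submission via --dependency. A minimal sketch, in which the script names are hypothetical and a shell function stands in for sbatch so the snippet runs without a cluster:

```shell
# Job chaining sketch: sbatch prints "Submitted batch job <id>";
# capture the id and hand it to --dependency=afterok:<id>.
# A shell function stands in for sbatch so this runs without Slurm.
sbatch() { echo "Submitted batch job 12345"; }

jobid=$(sbatch preprocess.sh | awk '{print $4}')
# On a real cluster the second submission would be:
#   sbatch --dependency=afterok:$jobid postprocess.sh
echo "sbatch --dependency=afterok:${jobid} postprocess.sh"
```

With afterok the second job only starts if the first one finishes successfully; Slurm also accepts conditions such as afterany and afternotok, covered in the job-chaining article.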
Storage
- Too much space is used by your output files
- Using the local scratch
- Best practices guide for Lustre file system
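The local-scratch article above comes down to staging data on the fast disk local to the compute node. A sketch of a Slurm job script, under the assumption that TMPDIR points to the job's local scratch directory (file and program names are hypothetical):

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --tmp=2000                    # request ~2 GB of local scratch (in MB)

# Assumption: TMPDIR is set to the job's local scratch directory.
cp input.dat "$TMPDIR"/               # stage input onto the fast local disk
cd "$TMPDIR"
./solver input.dat                    # hypothetical program; heavy I/O stays local
cp results.out "$SLURM_SUBMIT_DIR"/   # copy results back before the job ends
```

Local scratch is wiped when the job ends, so anything worth keeping must be copied back in the script itself.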
Reference data
For software maintainers
- Setting up a software stack for a research group
- Creating a local module directory
- Unpacking RPM packages in user space
For IT Support Groups (ISG)
- Setting up a custom ETH group for a cluster shareholder
- Exporting an NFS file system to Euler
- Cluster IP ranges
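The NFS-export article listed above ultimately comes down to an entry in the server's /etc/exports. A sketch, assuming /data is the directory to share; the address range shown is a placeholder, not a real Euler range — use the actual networks from the "Cluster IP ranges" page:

```
# /etc/exports on the NFS server (sketch)
# 10.0.0.0/16 is a placeholder network, not a real cluster range
/data  10.0.0.0/16(rw,sync,no_subtree_check)
```

After editing the file, exportfs -ra reloads the export table without restarting the NFS server.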
Software tutorials
Centrally installed applications
- Turbomole
- Comsol MATLAB LiveLink
- Comsol files in the home directory
- Using Gaussian on Euler
- Using MATLAB's Parallel Computing Toolbox