C/C++


Definition

C and C++ are ISO-standardized, general-purpose, compiled programming languages that are widely used for systems programming, embedded systems, and high-performance applications.

Compilers on Euler

On Euler the following C/C++ compilers are available through modules:

Compiler                 Module command
gcc 12.2.0               module load stack/2024-06
gcc 8.5.0                module load stack/2024-04
Intel oneAPI 2023.2.0    module load stack/2024-06 intel-oneapi-compilers/2023.2.0

Please note that Intel oneAPI 2023.2.0 still provides both the classical compilers (icc, icpc) and the LLVM-based compilers (icx, icpx). It is the last release to include the classical compilers; newer releases ship only the LLVM-based ones.
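
As a quick orientation, both compiler families are invoked in the same way as gcc once the module is loaded. A minimal sketch (hello.c and hello.cpp stand for your own source files):

 module load stack/2024-06 intel-oneapi-compilers/2023.2.0
 icx  hello.c   -o hello      # LLVM-based C compiler
 icpx hello.cpp -o hello      # LLVM-based C++ compiler
 icc  hello.c   -o hello      # classical C compiler (deprecated)
 icpc hello.cpp -o hello      # classical C++ compiler (deprecated)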

The worldwide C/C++ community

C++ conferences
C++ reference page
Compiler explorer
Discord:
Reddit:
YouTube:

Example Programs

Hello World

Create a file hello.c with the following content:

 #include <stdio.h>
 
 int main() {
     printf("Hello, World!\n");
     return 0;
 }

Bring GCC 12.2.0 to your command line with

 module load stack/2024-06 gcc/12.2.0

Compile the program hello.c into an executable hello with

 gcc hello.c -o hello

then run the program with

 ./hello

and the program should print

 Hello, World!

to your terminal.
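
The same GCC module also provides g++ for C++. As a minimal sketch, the equivalent C++ program, saved for example as hello.cpp, looks like this:

 #include <iostream>
 
 int main() {
     std::cout << "Hello, World!" << std::endl;
     return 0;
 }

It is compiled and run analogously with

 g++ hello.cpp -o hello_cpp
 ./hello_cpp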

Multi-threading with OpenMP

OpenMP is an API for shared-memory parallel programming that lets your program run multiple threads concurrently.
Create a file hello_omp.c with the following content:

 #include <omp.h>
 #include <stdio.h>
 
 int main() {
     // the statement below is executed once by every thread in the parallel region
     #pragma omp parallel
     printf("Hello from thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
     return 0;
 }

Bring GCC 12.2.0 to your command line with

 module load stack/2024-06 gcc/12.2.0

Compile the program hello_omp.c into an executable hello_omp with

 gcc -fopenmp hello_omp.c -o hello_omp

then run the program with

 ./hello_omp

and the program should print something like this

 Hello from thread 0 of 8
 Hello from thread 3 of 8
 Hello from thread 1 of 8
 Hello from thread 4 of 8
 Hello from thread 2 of 8
 Hello from thread 7 of 8
 Hello from thread 6 of 8
 Hello from thread 5 of 8

to your terminal.
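
The OpenMP runtime typically starts one thread per available core. To set the number of threads explicitly, use the standard environment variable OMP_NUM_THREADS, for example

 OMP_NUM_THREADS=4 ./hello_omp

which should then report 4 threads.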

Communication via MPI

The Message Passing Interface (MPI) is a standard that allows multiple instances of your program to communicate with each other.
Create a file hello_mpi.c with the following content:

 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char** argv) {
     MPI_Init(&argc, &argv);
     
     int world_rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
     
     int world_size;
     MPI_Comm_size(MPI_COMM_WORLD, &world_size);
     
     printf("Hello from process %d out of %d!\n", world_rank, world_size);
     
     MPI_Finalize();
     return 0;
 }

Bring gcc and mpicc to your command line with

 module load stack/2024-06 gcc/12.2.0 openmpi/4.1.6

Compile the program hello_mpi.c into an executable hello_mpi with Open MPI's compiler wrapper mpicc

 mpicc hello_mpi.c -o hello_mpi

then submit a job to run the program with 4 tasks

 srun -n 4 ./hello_mpi

and the program should print something like this

 Hello from process 0 out of 4!
 Hello from process 2 out of 4!
 Hello from process 3 out of 4!
 Hello from process 1 out of 4!

to your terminal.
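
For anything beyond a quick test, you would normally submit a batch job instead of running srun interactively. A minimal sketch of a Slurm batch script for this program (the resource values are placeholders; adjust them to your needs):

 #!/bin/bash
 #SBATCH --ntasks=4           # number of MPI ranks
 #SBATCH --time=00:05:00      # wall-clock time limit
 
 module load stack/2024-06 gcc/12.2.0 openmpi/4.1.6
 srun ./hello_mpi

Save it, for example, as job.sh and submit it with sbatch job.sh.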

Targeting GPUs with CUDA

CUDA is a parallel computing platform and programming model that allows software to use NVIDIA GPUs.
Create a file hello_cuda.cu with the following content:

 #include <stdio.h>
 
 // kernel function that runs on the GPU
 __global__ void cuda_hello() {
   printf("Hello World from GPU!\n");
 }
 
 int main() {
   printf("Hello World from CPU!\n");
   cuda_hello<<<1,1>>>();     // launch the kernel with 1 block of 1 thread
   cudaDeviceSynchronize();   // wait for the GPU to finish before the program exits
   return 0;
 }

Bring nvcc to your command line with

 module load stack/2024-06 gcc/12.2.0 cuda

Compile the program hello_cuda.cu into an executable hello_cuda with

 nvcc hello_cuda.cu -o hello_cuda

then submit a job to run the program on a GPU node

 srun --gpus=1 ./hello_cuda

and the program should print

 Hello World from CPU!
 Hello World from GPU!

to your terminal.
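
In the launch configuration <<<1,1>>>, the first number is the number of thread blocks and the second the number of threads per block, so the example above runs a single GPU thread. As a minimal sketch of a larger launch (compiled and run with the same steps as above), each thread can identify itself through the built-in index variables:

 #include <stdio.h>
 
 __global__ void cuda_hello_many() {
     // blockIdx.x and threadIdx.x are built-in CUDA index variables
     printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
 }
 
 int main() {
     cuda_hello_many<<<2, 4>>>();   // 2 blocks x 4 threads per block = 8 messages
     cudaDeviceSynchronize();       // wait for the GPU to finish before the program exits
     return 0;
 }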