Compiling MPI applications on SCI-UC infrastructure

User Guide


1. Overview

The cluster provides architecture-optimized OpenMPI builds for each partition.
When you load the openmpi module on a worker node, you automatically get:

  • The correct CPU-specific build
  • Proper UCX / TCP support depending on the node
  • Matching compiler toolchain

You should always compile your MPI application inside the target partition to ensure binary compatibility and optimal performance.


2. General Compilation Workflow

Step 1 — Start an interactive session on the target partition

srun --partition=<partition_name> --pty bash
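
For example, to request an interactive shell with a few cores and a one-hour limit (the partition name is a placeholder; use one of the partitions listed by sinfo):

srun --partition=<partition_name> --cpus-per-task=4 --time=01:00:00 --pty bash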

Step 2 — Load OpenMPI

source /etc/profile.d/modules.sh   # make the module command available
module avail openmpi               # list the OpenMPI builds offered on this node
module load openmpi-4.1.7
ml                                 # shorthand for "module list": shows what is loaded

(Optional) Load additional libraries
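
For example, if your code needs an I/O or FFT library that is provided as a module (the module names below are only illustrative; check module avail for what actually exists on your partition):

module load hdf5
module load fftw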


Step 3 — Verify the environment

which mpicc
which mpirun
ompi_info | grep -E "prefix|pml|osc"

This confirms that you are using the correct architecture-optimized OpenMPI build.
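
If you want to check whether the UCX transport is present in the loaded build (typical for partitions with a fast interconnect), a quick filter of the component list is enough; the exact set of components depends on how the local build was configured:

ompi_info | grep -i ucx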


Step 4 — Compile your application

Use the OpenMPI compiler wrappers:

Language   Compiler
C          mpicc
C++        mpicxx
Fortran    mpifort
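
To see exactly what a wrapper does, the OpenMPI wrappers accept a --showme option that prints the underlying compiler invocation together with the include and link flags they add; the paths shown should match the loaded module:

mpicc --showme
mpicc --showme:compile
mpicc --showme:link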

3. Compilation Examples

3.1 Simple C MPI Program

mpicc -O3 my_mpi_program.c -o my_mpi_program
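
If you do not yet have a source file to try this with, the following minimal hello-world can serve as my_mpi_program.c (a throwaway sketch written as a shell heredoc; any MPI program of your own works equally well):

cat > my_mpi_program.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Each process reports its rank and the total number of ranks */
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
EOF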

3.2 C++ Program

mpicxx -O3 my_mpi_program.cpp -o my_mpi_program

3.3 Fortran Program

mpifort -O3 my_mpi_program.f90 -o my_mpi_program

3.4 With External Libraries

If your code depends on additional libraries:

mpicc -O3 my_code.c -o my_code -L/path/to/lib -lmylib
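
If the library also provides headers outside the default search path, or is installed somewhere the runtime linker will not find on the compute nodes, add an include path and an rpath as well (all paths below are placeholders):

mpicc -O3 my_code.c -o my_code \
    -I/path/to/include \
    -L/path/to/lib -Wl,-rpath,/path/to/lib \
    -lmylib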

4. Using Makefiles

4.1 Example Makefile

CC = mpicc
CFLAGS = -O3

all: my_app

my_app: my_app.c
    $(CC) $(CFLAGS) $< -o $@

clean:
    rm -f my_app

Note that the recipe lines (the indented commands) must be indented with a tab character, not spaces, otherwise make reports a "missing separator" error.

Build with:

make

4.2 Important Notes

  • Do not hardcode gcc, g++, etc.
  • Always use the MPI wrappers (mpicc, mpicxx, mpifort); see the sketch below this list.
  • The wrappers automatically:
      • link the correct MPI libraries
      • set the include paths
      • match the cluster toolchain
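
As a quick illustration of why the wrappers matter, compiling the same file with the plain system compiler will typically fail, because the MPI headers and libraries are not on the default search paths:

gcc -O3 my_app.c -o my_app      # likely fails: fatal error: mpi.h: No such file or directory
mpicc -O3 my_app.c -o my_app    # the wrapper adds the MPI include and link flags itself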

5. Where to Compile

Compile inside the same partition where you will run:

srun --partition=<partition_name> --pty bash

Avoid:

  • Compiling on the UI node
  • Compiling on a different architecture

Why?

  • Different CPU instruction sets (AVX2, AVX-512, etc.)
  • Different MPI optimizations
  • Possible runtime crashes or degraded performance
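
To check which instruction set extensions the node you are compiling on actually supports (and compare them with the node type you plan to run on), inspect the CPU flags; the exact flag names vary by CPU generation:

grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep avx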

6. Verifying the Binary

Check linked libraries:

ldd ./my_mpi_program

Check MPI linkage:

strings my_mpi_program | grep MPI
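
To confirm that the MPI symbols will be resolved against the OpenMPI library from the loaded module (and not some other copy), the dynamic symbol table and the ldd output can be cross-checked:

nm -D ./my_mpi_program | grep ' U MPI_'
ldd ./my_mpi_program | grep -i libmpi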

7. Running the Parallel Application

See the separate instructions on running parallel MPI applications.