Compiling Your Own OpenMPI with Spack
This guide explains how to compile a custom OpenMPI version using Spack on the cluster, either through your group's shared Spack installation or a personal one in your $HOME. This is useful when you need a specific OpenMPI version, custom build flags, or a configuration not provided by the system modules.
1. Prerequisites
Before you start, make sure:
- You know which partition/nodes you will be running your jobs on — this affects which architecture and network transport you should target.
- You have enough quota in your $HOME or on a scratch filesystem (Spack installations can occupy several GB).
Node reference
| Nodes | Architecture | Network |
|---|---|---|
| wn051–058 | cascadelake | 100GbE RoCE (RDMA) |
| geocean01–04 | icelake | 1GbE Ethernet |
| geocean05–08 | sapphirerapids | 1GbE Ethernet |
| wn061–064 | westmere | 1GbE Ethernet |
| wn065–067 | cascadelake | 1GbE Ethernet |
| citimac01–12 | westmere | 1GbE Ethernet |
| citimac13–25 | broadwell | 1GbE Ethernet |
| citimac26–29 | cascadelake | 1GbE Ethernet |
| citimac30–31 | zen4 | 1GbE Ethernet |
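If you are not sure which architecture a set of nodes uses, you can confirm it from a shell on one of them. A minimal sketch, assuming you have access to the relevant partition (the partition name is a placeholder) and, for the last command, that Spack is already activated as described in Section 2:
srun --partition=<partition> --pty bash   # interactive shell on a target node
lscpu | grep "Model name"                 # CPU model as reported by the OS
spack arch                                # architecture triplet as Spack sees it; the last field is the target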
2. Setting Up Spack
There are two possible Spack installations you can use, depending on your situation:
| | Group-shared Spack | Personal Spack |
|---|---|---|
| Location | /nfs/software/<group>/spack | $HOME/spack |
| Who can use it | All members of the group | Only you |
| Who installs packages | Any group member (shared build cache) | Only you |
| Disk usage | Shared across the group | Counts against your home quota |
| When to use | Your group already has it set up, or you want to share builds with colleagues | You need full control, or your group has no shared Spack |
If you are unsure, check whether your group already has a shared installation:
ls /nfs/software/<group>/spack
If the directory exists, start with Option A. If not, or if you prefer an isolated setup, go to Option B.
Option A — Group-shared Spack
2.A.1 Activate the shared Spack instance
source /nfs/software/<group>/spack/share/spack/setup-env.sh
Replace <group> with your actual group name (e.g. geocean, citimac).
To load it automatically in every session:
echo 'source /nfs/software/<group>/spack/share/spack/setup-env.sh' >> $HOME/.bashrc
2.A.2 Verify it is active
which spack
spack --version
The path should point to the shared installation, not a system-wide one.
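For illustration, which spack should print a path under the shared tree rather than a system location such as /usr/bin:
which spack
# -> /nfs/software/<group>/spack/bin/spack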
Note: With a shared Spack, any environment you create and any packages you install will be visible to — and can be reused by — other members of your group. Coordinate with your colleagues to avoid duplicate builds.
Option B — Personal Spack in $HOME
Use this if your group has no shared Spack, or if you want a completely independent setup.
2.B.1 Clone Spack into your home directory
git clone -c feature.manyFiles=true https://github.com/spack/spack.git $HOME/spack
Spack with a full package database can occupy several hundred MB just for the repository, plus the space taken by any packages you build. Make sure you have sufficient quota before proceeding.
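A quick way to check how much space you actually have before cloning. This is a sketch; whether user quotas are enabled, and which command reports them, depends on how the filesystem is configured on this cluster, so fall back to df if quota is unavailable:
quota -s          # per-user quota summary, if quotas are enabled
df -h $HOME       # free space on the filesystem backing your home directory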
2.B.2 Activate Spack in your current session
source $HOME/spack/share/spack/setup-env.sh
To load it automatically in every session:
echo 'source $HOME/spack/share/spack/setup-env.sh' >> $HOME/.bashrc
2.1 Register available compilers
Regardless of which option you chose, run this once after activating Spack:
spack compiler find
spack compilers # verify gcc appears in the list
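If the compiler you plan to build with comes from a module rather than the base system (the templates in Section 4 reference gcc@13.1.0), load that module before running spack compiler find, otherwise only the system gcc will be registered. A sketch; the module name below is an assumption, check module avail gcc for the real one:
module load gcc/13.1.0        # hypothetical module name
spack compiler find
spack compilers               # gcc@13.1.0 should now appear in the list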
2.2 Check system package versions
Spack needs to know the exact versions of some system packages (Slurm, PMIx, OpenSSL, etc.) so it can use them as external dependencies instead of rebuilding them. Run the following to check:
rpm -q slurm pmix perl openssl munge
pmix_info --version
openssl version
perl --version | head -1
Note: On RDMA nodes (wn051–058) only, also run:
ucx_info -v | grep "Library version"
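The versions reported by these commands are exactly what the externals: entries in the Section 4 templates must match; if your nodes report something different (for example a newer Slurm), update the corresponding spec: lines. As an illustrative sketch of the mapping (the release suffix shown is made up):
rpm -q slurm
# slurm-24.11.3-1.el8.x86_64   ->   spec: slurm@24.11.3 in spack.yaml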
3. Creating a Spack Environment
It is strongly recommended to use a dedicated Spack environment for each project or OpenMPI configuration. This keeps dependencies isolated and reproducible.
3.1 Create and activate the environment
spack env create my-openmpi
spack env activate my-openmpi
To verify the environment is active, run the command below (if you activate with spack env activate -p my-openmpi, your prompt will also show [my-openmpi]):
spack env status
3.2 Edit the environment configuration
spack config edit
This opens the spack.yaml file for your environment. Replace its contents with one of the templates in the next section.
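spack config edit uses your default editor; set EDITOR if you want a specific one. The direct file path shown below assumes a personal Spack clone in $HOME (Option B) and the default location for named environments:
EDITOR=nano spack config edit
# or edit the file directly:
# $HOME/spack/var/spack/environments/my-openmpi/spack.yaml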
4. Configuration Templates
Choose the template that matches the nodes where you will run your jobs.
4.1 TCP nodes (icelake — geocean01–04)
spack:
  specs:
  - "openmpi@4.1.7 schedulers=slurm fabrics=ucx +legacylaunchers +cxx ^pmix@4.2.9 target=icelake"
  - "ucx@1.17.0 ~verbs +cma ~xpmem target=icelake"
  packages:
    all:
      target: [icelake]
      compiler: [gcc@13.1.0]
    slurm:
      externals:
      - spec: slurm@24.11.3
        prefix: /usr
      buildable: false
    pmix:
      externals:
      - spec: pmix@4.2.9
        prefix: /usr
      buildable: false
    perl:
      externals:
      - spec: perl@5.26.3
        prefix: /usr
      buildable: false
    openssl:
      externals:
      - spec: openssl@1.1.1k
        prefix: /usr
      buildable: false
  view: false
  concretizer:
    unify: when_possible
  modules:
    default:
      enable:
      - tcl
      roots:
        # Option A — group-shared Spack: use a shared path so all group members can load the module
        tcl: /nfs/software/<group>/modulefiles/my-openmpi
        # Option B — personal Spack: use your home directory
        # tcl: $HOME/modulefiles/my-openmpi
      arch_folder: false
      tcl:
        hash_length: 0
        all:
          autoload: direct
        projections:
          all: '{name}-{version}/{compiler.name}-{compiler.version}'
        include:
        - openmpi
        - ucx
        exclude_implicits: true
Adapting to other TCP architectures: change every occurrence of icelake to your target (e.g. cascadelake, broadwell, westmere, sapphirerapids, zen4) and update the module root path accordingly.
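As a sketch of that adaptation, assuming a personal Spack clone in $HOME and that you edit the environment's spack.yaml directly (adjust the path for a group-shared installation):
sed -i 's/icelake/cascadelake/g' $HOME/spack/var/spack/environments/my-openmpi/spack.yaml
# then also rename the modules root, e.g. my-openmpi -> my-openmpi-cascadelake, if you keep one environment per target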
4.2 RDMA nodes (cascadelake — wn051–058)
On RDMA nodes, UCX is provided by the system MOFED installation and must not be rebuilt by Spack:
spack:
  specs:
  - "openmpi@4.1.7 fabrics=ucx schedulers=slurm +legacylaunchers +cxx ^pmix@4.2.9 target=cascadelake"
  packages:
    all:
      target: [cascadelake]
      compiler: [gcc@13.1.0]
    ucx:
      externals:
      - spec: "ucx@1.18.0"
        prefix: /usr
      buildable: false
    slurm:
      externals:
      - spec: "slurm@24.11.3"
        prefix: /usr
      buildable: false
    pmix:
      externals:
      - spec: "pmix@4.2.9"
        prefix: /usr
      buildable: false
    perl:
      externals:
      - spec: "perl@5.26.3"
        prefix: /usr
      buildable: false
    openssl:
      externals:
      - spec: "openssl@1.1.1k"
        prefix: /usr
      buildable: false
  view: false
  concretizer:
    unify: when_possible
  modules:
    default:
      enable:
      - tcl
      roots:
        # Option A — group-shared Spack: use a shared path so all group members can load the module
        tcl: /nfs/software/<group>/modulefiles/my-openmpi-rdma
        # Option B — personal Spack: use your home directory
        # tcl: $HOME/modulefiles/my-openmpi-rdma
      arch_folder: false
      tcl:
        hash_length: 0
        all:
          autoload: direct
        projections:
          all: '{name}-{version}/{compiler.name}-{compiler.version}'
        include:
        - openmpi
        - ucx
        exclude_implicits: true
5. Building OpenMPI
5.1 Resolve the dependency graph
spack concretize -f
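To see what the concretizer actually decided before building, you can also print the resolved dependency tree with install-status markers. This is a useful cross-check for the UCX verification in 5.2 as well, since spack find only lists packages that are already installed or registered:
spack spec -I openmpi     # full dependency tree; externals and already-installed packages are marked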
5.2 Verify UCX source (important!)
Before installing, confirm that Spack is using the right UCX:
spack find -lv ucx
- TCP nodes: you should see a Spack-generated hash (not [external]).
- RDMA nodes: you should see [external] ucx@x.x.x /usr.
If this is wrong, re-check your spack.yaml before proceeding.
5.3 Install
spack install 2>&1 | tee $HOME/spack-build.log
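Building OpenMPI and its dependencies can take a while. If you are on a login node with spare cores, you can raise the build parallelism (keep the count modest on shared login nodes):
spack install -j 4 2>&1 | tee $HOME/spack-build.log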
5.4 Generate modulefiles
spack module tcl refresh --delete-tree
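To confirm the modulefiles were written where you expect, list the module root you configured in Section 4 (pick the path matching your option):
find /nfs/software/<group>/modulefiles/my-openmpi -name '*openmpi*'   # Option A
find $HOME/modulefiles/my-openmpi -name '*openmpi*'                   # Option B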
6. Using Your Custom OpenMPI
6.1 Add your modulefiles to the module path
The path to use depends on which Spack option you chose in Section 2:
Option A — group-shared Spack:
module use /nfs/software/<group>/modulefiles/my-openmpi
Add to your .bashrc to make it permanent:
echo 'module use /nfs/software/<group>/modulefiles/my-openmpi' >> $HOME/.bashrc
Option B — personal Spack:
module use $HOME/modulefiles/my-openmpi
Add to your .bashrc to make it permanent:
echo 'module use $HOME/modulefiles/my-openmpi' >> $HOME/.bashrc
6.2 Load the module
module avail # should now list your openmpi
module load openmpi-4.1.7/gcc-13.1.0
6.3 Verify the installation
ompi_info | grep ucx # UCX transport must appear
ompi_info | grep -i pmix # PMIx must appear
which mpirun # must point to your Spack install, NOT /usr/bin
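As a final sanity check, compile and run a tiny MPI program. A minimal sketch, assuming you are inside an interactive allocation on one of your target nodes (for example via salloc -N 1 -n 2) with the module loaded; the file name is arbitrary:
cat > $HOME/mpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF
mpicc $HOME/mpi_hello.c -o $HOME/mpi_hello
srun -n 2 $HOME/mpi_hello     # each rank should print its rank and the total size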
7. Using Your OpenMPI in a Slurm Job
Add the following lines to your job script, using the block that matches your Spack setup from Section 2:
Option A — group-shared Spack:
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --partition=geocean
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=02:00:00
#SBATCH --mem=32G
source /nfs/software/<group>/spack/share/spack/setup-env.sh
module use /nfs/software/<group>/modulefiles/my-openmpi
module load openmpi-4.1.7/gcc-13.1.0
srun ./my_mpi_program
Option B — personal Spack:
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --partition=geocean
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --time=02:00:00
#SBATCH --mem=32G
source $HOME/spack/share/spack/setup-env.sh
module use $HOME/modulefiles/my-openmpi
module load openmpi-4.1.7/gcc-13.1.0
srun ./my_mpi_program
Tip: Always use srun instead of mpirun inside Slurm jobs; it integrates better with the scheduler's process management.
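If srun fails to launch your ranks, the usual suspect is the PMI plugin. You can list the plugins your Slurm installation offers and, if PMIx is not the default, request it explicitly (a sketch; the exact plugin name depends on how Slurm was built):
srun --mpi=list                     # show available PMI plugins
srun --mpi=pmix ./my_mpi_program    # request PMIx explicitly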