OpenFOAM
OpenFOAM support and limitations
OpenFOAM is a CFD framework that is popular across various engineering departments, so the HPC Team maintains a build of OpenFOAM. This means that you can expect at least one OpenFOAM build to be available in the cluster.
The official Docker images provided by the project ship with an ancient MPI that does not work well on our cluster, so we built our own images based on the MPI builds that are already available in the cluster. This has two consequences:
- The overhead of maintaining OpenFOAM builds is not negligible, so ideally we will only support one OpenFOAM version/build at a time.
- The builds that we maintain can fully exploit the low-latency interconnect available in the HPC cluster, so you can expect good parallel performance from them.
The HPC Team does not have expertise in the use of the CFD software itself; for that we refer you to other experts in your field. We encourage you to use Discourse for community discussions about engineering.
How to use OpenFOAM
The OpenFOAM builds are installed as Singularity container images (.sif files) on /hpcscratch/applications/openfoam/.
In general, to run a command in a Singularity container, you do:
singularity exec <container_image.sif> <command>
The container images mount your home directory by default, so any command you run in the container has access to your scratch space.
Therefore, you can just write your OpenFOAM script and run it with singularity exec as described above.
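For example, you can confirm that your files are visible inside the container by listing your home directory (using the image that is referenced later on this page):
singularity exec /hpcscratch/applications/openfoam/OpenFOAM7.sif ls $HOME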
Note that every OpenFOAM build expects your scripts to first source a config file that exists inside the container image, after which all OpenFOAM commands and environment variables become available.
The singularity run-help command will show which file this is:
singularity run-help /hpcscratch/applications/openfoam/OpenFOAM7.sif
OpenFOAM-7 with ParaView and OpenMPI 3.0.0
In order to load the OpenFOAM environment, you have to begin your scripts with:
source /home/openfoam/OpenFOAM-7/etc/bashrc
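As a minimal sketch, a wrapper script (the name check_env.sh is just an illustration) that sources the configuration and then calls an OpenFOAM utility would look like this:
#!/bin/bash
# Load the OpenFOAM environment provided inside the container
source /home/openfoam/OpenFOAM-7/etc/bashrc
# Any OpenFOAM command is now available, e.g. print the usage of blockMesh
blockMesh -help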
To run in parallel with N tasks, you simply launch N instances of singularity exec running your script.
In your script, you might need to add a -parallel option to your solver command to enable the parallel features that OpenFOAM provides.
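For illustration, running a hypothetical solver script on 4 tasks would take the following general form (the complete motorbike example below shows a real case):
srun -n 4 singularity exec /hpcscratch/applications/openfoam/OpenFOAM7.sif /path/to/solver_script.sh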
Motorbike example
The popular motorBike example that ships with OpenFOAM follows. It consists of two scripts:
- The first, premotor.sh, handles the serial part: domain decomposition.
- The second, motor.sh, handles the parallel case processing. This particular case needs 8 parallel processes to run.
premotor.sh:
#!/bin/bash
# Source OpenFOAM configuration
source /home/openfoam/OpenFOAM-7/etc/bashrc
echo $HOME
echo $FOAM_RUN
# motorBike example (serial) domain decomposition
cd $FOAM_RUN
cp -r $FOAM_TUTORIALS/incompressible/pisoFoam/LES/motorBike/motorBike .
cd motorBike
blockMesh
decomposePar -force
motor.sh:
#!/bin/bash
# Source OpenFOAM configuration
source /home/openfoam/OpenFOAM-7/etc/bashrc
echo $HOME
echo $FOAM_RUN
cd $FOAM_RUN
cd motorBike
# parallel processing
simpleFoam -parallel
Both scripts also need to be set as executable:
chmod ug+x premotor.sh motor.sh
The first script should be run with a single process:
srun -n 1 singularity exec /hpcscratch/applications/openfoam/OpenFOAM7.sif /path/to/premotor.sh
The second needs to be run with 8 processes:
srun -n 8 singularity exec /hpcscratch/applications/openfoam/OpenFOAM7.sif /path/to/motor.sh
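If you prefer to submit the parallel step as a batch job rather than an interactive srun, a minimal Slurm batch script sketch would look like the following (the job name, time limit and script path are assumptions that you should adapt to your own case):
#!/bin/bash
#SBATCH --job-name=motorbike
#SBATCH --ntasks=8
#SBATCH --time=01:00:00
# Launch 8 instances of the containerised solver script
srun singularity exec /hpcscratch/applications/openfoam/OpenFOAM7.sif /path/to/motor.sh
You would then submit this script with sbatch.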