Parallel jobs

HTCondor supports MPI and OpenMP jobs within a single node in the "Vanilla Universe". To run parallel jobs on CERN batch resources, simply request multiple CPUs in your job submission file, for example:

RequestCpus = 8
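
A complete minimal submit file might look like the following sketch; the executable name mpi_job.sh and the output file names are placeholders for your own job:

    universe     = vanilla
    executable   = mpi_job.sh
    output       = mpi_job.out
    error        = mpi_job.err
    log          = mpi_job.log
    request_cpus = 8
    queue

(Submit file keys are case-insensitive, so RequestCpus and request_cpus are equivalent.)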
The default MPI distributions on lxplus/batch are MPICH and OpenMPI. On the CC7 lxplus and batch nodes, Mvapich 2.3 and OpenMPI 3 are also installed locally. Alternatively, you may provide a path to your own MPI implementation (e.g. on AFS or CVMFS) with your job submission.
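
The wrapper script referenced in the submit file sketch above could look like the following minimal example. The binary name my_mpi_app and the CVMFS path in the comment are assumptions; substitute the setup for the MPI distribution you actually use:

    #!/bin/bash
    # mpi_job.sh: launch an MPI application on the CPUs allocated to this job.
    # If your MPI distribution is not in the default environment, put its
    # bin directory on PATH first, e.g. (hypothetical path):
    #   export PATH=/cvmfs/some/mpi/distribution/bin:$PATH

    # Start as many ranks as CPUs requested in the submit file (8 here).
    mpirun -np 8 ./my_mpi_app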

Please note that the HTCondor "Parallel Universe" is not supported at CERN. For those user communities who require MPI clusters spanning hundreds of cores or more, there is a dedicated HPC facility running SLURM. Please refer to Linux HPC for more information about our dedicated MPI clusters.

If you have MPI applications to run both on batch nodes with HTCondor and on the HPC SLURM cluster, you are encouraged to use Mvapich 2.3 or OpenMPI 3, as the SLURM cluster provides the same MPI distributions.

Please also note that there are orders of magnitude more servers and cores available for HTCondor than for HPC at CERN, so most users are better off running multiple jobs with a limited number of cores each under HTCondor than running on the Linux HPC facility.


Last update: November 26, 2019