This documentation covers the SLURM Linux HPC resource, which is dedicated to running multi-node jobs, typically MPI programs. If you are running parallel but single-node jobs, please use HTCondor instead. For more information on getting access to Linux HPC, see the Access section.
CERN's HPC infrastructure runs on SLURM. For a quickstart guide, see Quickstart. CERN's HPC cluster resources are detailed in Resources.
Throughout this documentation, the terms tasks and programs are used interchangeably. Similarly, partitions and queues can generally be regarded as synonyms.
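To illustrate the kind of multi-node MPI job this cluster is intended for, here is a minimal sketch of a SLURM batch script. The partition name (`batch-short`), node/task counts, time limit, and the `my_mpi_app` executable are placeholders, not actual CERN values; consult the Quickstart and Resources sections for the real partition names and limits.

```shell
#!/bin/bash
# Hypothetical example: run an MPI program across 2 nodes, 8 tasks per node.
#SBATCH --job-name=mpi-example
#SBATCH --partition=batch-short     # placeholder partition name
#SBATCH --nodes=2                   # multi-node: the main use case for this cluster
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:30:00             # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out          # job name and job ID in the output file name

# srun launches the tasks under SLURM's control; mpirun can also be used
# depending on how your MPI library is integrated with the cluster.
srun ./my_mpi_app
```

The script is submitted with `sbatch myscript.sh`, and its status can be checked with `squeue -u $USER`.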
Last update: November 26, 2019