Resources

Linux HPC provides different kinds of HPC resources to its users. These mainly consist of different compute queues (i.e. SLURM partitions), some of which provide different hardware capabilities. In addition, there are two separate parallel filesystems, which serve as scratch space for application I/O when loading or generating data.

Partitions

There are several partitions available in Linux HPC:

  • photon
  • inf-short
  • inf-long
  • qcd (restricted)
  • qcd-backfill
  • (cloud no longer available)

Short partitions are meant for shorter, more interactive runs. We recommend using the short partitions mostly for trying out your application and for basic performance or scalability testing. Long partitions are meant for heavier, longer-running jobs, once you are confident that your application works on a larger number of nodes or is stable enough to run for extended periods of time. This is just a guideline, however, and you may of course run on any partition. In particular, if one partition is filled up and another is free, you may submit to the partition with free resources, as long as your job fits its time limit.
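
For example, a minimal job submission script targeting a specific partition and time limit could look like the sketch below; the partition, resource values and the my_mpi_app executable are placeholders to adapt to your own job:

    #!/bin/bash
    #SBATCH --partition=inf-short     # any of the partitions listed above
    #SBATCH --time=01:00:00           # must fit within the partition's time limit
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=20      # adjust to the core count of the node type

    # my_mpi_app is a placeholder for your own executable
    srun ./my_mpi_app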

Note that nodes within each partition are homogeneous.
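
You can also check the partitions, their time limits and the (homogeneous) node characteristics yourself with SLURM; the output columns below are just one possible selection:

    # List partitions with time limit, node count, CPUs and memory per node
    sinfo -o "%P %l %D %c %m"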

Photon partition nodes (EL9)

  • CPU: 2x AMD EPYC 7302 16-Core Processor (total of 32 physical cores, no hyperthreading)
  • Memory: 512GB DDR4 3200MHz
  • Network:
    • Infiniband 100Gbps
    • Mellanox ConnectX-6 HCAs
  • Storage:
    • Hyperconverged CephFS for /hpcscratch (over 10GbE)
  • MPI:
    • OpenMPI-4
    • MVAPICH2-2.3
  • Operating System: RHEL9.2

Inf partition nodes (CC7)

  • CPU: 2x Intel(R) Xeon(R) CPU E5-2630 v4 (20 physical cores in total, 40 threads with hyperthreading)
  • Memory: 128GB DDR4 2400MHz (8x 16GiB 18ASF2G72PDZ-2G3B1 DIMMs)
  • Network:
    • Infiniband interconnect, Mellanox MT27500 ConnectX-3
    • Integrated Intel 10Gbit ethernet for storage interconnect, system services
  • Storage:
    • Hyperconverged CephFS for /hpcscratch (over 10GbE)
    • 960GB Intel S3520 SATA3 for local scratch
  • MPI:
    • OpenMPI-4
    • MVAPICH2-2.3
  • Operating System: CentOS7

Qcd partition nodes (CC7)

  • CPU: 2x Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (16 physical cores)
  • Memory: 64GB DDR4 2400MHz
  • Network:
    • Infiniband interconnect, Mellanox InfiniBand FDR
    • Integrated Intel 10Gbit ethernet for storage interconnect, system services
  • Storage:
    • Hyperconverged CephFS for /hpcscratch (over 10GbE)
    • 2x Intel DC3510 800GB SSDs for local system and scratch
  • MPI:
    • OpenMPI-4
    • MVAPICH2-2.3
  • Operating System: CentOS7

Please note that the qcd partition is restricted to users in the Theory QCD team, who use it for software development. The qcd-backfill partition allows other jobs to be submitted to these nodes, but they will be preempted and killed as soon as regular qcd jobs start on the node.

Cloud nodes

This was a set of AMD EPYC nodes that were available on Azure under the previous Microsoft contract, which was discontinued in spring 2022.

The cloud partition is no longer available, but such a possibility could be envisaged again in the future, depending on Cloud provider agreements and contracts.

MPI versions

We strongly recommend leaving behind OpenMPI-3 and using OpenMPI-4 instead, as it will provide much better performance. We will only support the latest mvapich2 and OpenMPI-4 versions, meaning older versions may be removed. If your application requires an older MPI version to work, please get in touch with us.
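
To see which MPI builds are installed on the node type you are targeting, you can query the module system; the exact module names and versions vary per partition, so substitute what module avail actually reports:

    # List the installed MPI modules
    module avail mpi

    # Load one of the listed builds, e.g. an OpenMPI-4 module
    module load mpi/openmpi/<version>    # substitute a version shown by 'module avail'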

Note that OpenMPI-3 and OpenMPI-4 are ABI compatible. Similarly, mvapich2/2.2 and mvapich2/2.3 are also ABI compatible. This means that if your application was compiled for one version, it should run without issues on the other ABI-compatible version.

For instance, let's say that your application was compiled with mvapich2/2.2. You may still run it on the Photon partition, where only mvapich2/2.3 is installed, provided you do module load mpi/mvapich2/2.3 as part of your job submission script, as in the sketch below.
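
A minimal job script illustrating this case (the resource values and the my_app binary, assumed here to have been built against mvapich2/2.2, are placeholders):

    #!/bin/bash
    #SBATCH --partition=photon
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=32     # Photon nodes have 32 physical cores

    # Load the ABI-compatible MVAPICH2 version installed on Photon
    module load mpi/mvapich2/2.3

    # my_app was built against mvapich2/2.2 but runs unchanged thanks to ABI compatibility
    srun ./my_app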

I/O Scratch spaces

I/O scratch spaces for project data are all based on CephFS. The main scratch space is /hpcscratch, which is where user home directories and project directories are located. Parallel programs are expected to perform I/O on this space, as no authentication tokens are required there. The /hpcscratch home and scratch area is a hyperconverged CephFS cluster, meaning that the compute nodes of the inf partitions also act as the storage nodes for /hpcscratch data.

This has two immediate consequences. First, I/O access is faster as the data is closer (especially compared to the "old" /hpcscratch), and the service is more resilient to datacenter network incidents. Second, if you are measuring the performance of highly CPU-optimized codes and run at 100% utilization, you may observe some noise or performance variability: another user's job doing intensive I/O may cause a CephFS process co-located with your job to compete for CPU. The same applies to I/O performance. For most users this will not be noticeable, but it may become visible.

For QCD users, the legacy /cephfs mount from the earlier QCD setup is also available.

Note that while running applications installed on AFS or EOS is fully supported, writing program outputs directly to AFS or EOS is not supported. Users are expected to transfer result files from the scratch space to EOS.
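
For example, a result file can be copied from the scratch space to EOS once the job has finished; the paths and EOS endpoint below are placeholders, assuming the standard xrdcp client is available:

    # Copy a result file from scratch to EOS (all paths are placeholders)
    xrdcp /hpcscratch/user/$USER/results/output.dat \
          root://eosuser.cern.ch//eos/user/<initial>/<username>/output.dat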

If your application does local caching or I/O on each worker node, it is recommended to use a local disk such as /tmp for that I/O, and only copy results and snapshots back to the shared scratch space, as in the sketch below.
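
A sketch of this pattern for a single node (the directory layout and the my_app binary are placeholders):

    # Stage I/O in node-local /tmp, then copy results back to the shared scratch space
    WORKDIR=$(mktemp -d /tmp/myjob.XXXXXX)

    # my_app is a placeholder; assume it writes its intermediate files into $WORKDIR
    ./my_app --workdir "$WORKDIR"

    # Keep only the final results on the shared CephFS scratch space
    mkdir -p /hpcscratch/user/$USER/myjob-results
    cp -r "$WORKDIR"/results /hpcscratch/user/$USER/myjob-results/
    rm -rf "$WORKDIR"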

IMPORTANT: Please note that while the scratch space provides redundancy to prevent data loss, there are NO BACKUPS.


Last update: October 13, 2023