Spool-only Schedds

As discussed in Data Flow, submission to the HTCondor schedds at CERN normally makes use of a shared filesystem, i.e. AFS. This is convenient, but shared filesystems also introduce instability. There are two main ways to avoid them: spool job submission (condor_submit -spool) and the xrootd file transfer plugin.

Spool submission in general is described here. It may be used with any CERN local schedd, but certain schedds accept only spool jobs and should therefore be especially stable. This makes them more suitable for users who have longer jobs and are willing to accept slightly tighter constraints.

Note

Shared filesystems are still available on the worker nodes. As always, we recommend using the local working directory for your jobs, but the workers that run jobs via this spool service are no different from any others.

Usage

The basics of using the spool schedds are fairly simple.

  • To select the schedds:

    module load lxbatch/spool

    Note that to switch back to the normal schedds, you need to run either:

    module unload lxbatch/spool or module load lxbatch/share

  • To submit to the spool schedds (a full example workflow is sketched after this list):

    condor_submit -spool <submit file> [...]
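
Putting the pieces together, a session might look like the following sketch. The submit file name is hypothetical; note that with -spool, input files are copied to the schedd at submission time, and output is held on the schedd until you fetch it with condor_transfer_data.

    # Select the spool-only schedds
    module load lxbatch/spool

    # Submit: input files are copied to the schedd now, so the job
    # does not depend on a shared filesystem afterwards
    condor_submit -spool job.sub

    # Monitor; completed jobs stay in the queue with their output
    # spooled on the schedd
    condor_q

    # Fetch the spooled output back to the current directory, then
    # remove the completed jobs from the queue
    condor_transfer_data -all
    condor_rm -constraint 'JobStatus == 4'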

Other constraints

  • A maximum of 100 jobs per submission (i.e. queue 100 or the equivalent)
  • A maximum of 500 total jobs per owner
  • A maximum of 1024 MB of input or output transfer to the schedds (the workers can transfer more); a submit file respecting these limits is sketched below
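
For illustration, a minimal submit file that respects these limits might look as follows; the executable, input file, and output names are hypothetical.

    # job.sub -- hypothetical spool-friendly submit file
    universe                = vanilla
    executable              = analyse.sh
    arguments               = $(ProcId)

    # Explicit file transfer: with -spool, these files are copied to
    # the schedd at submission time (keep the total under 1024 MB)
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    transfer_input_files    = config.yaml

    output                  = job.$(ProcId).out
    error                   = job.$(ProcId).err
    log                     = job.log

    # At most 100 jobs per submission on the spool schedds
    queue 100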

File transfer

For large or numerous input/output files, a better alternative to spool submission is probably the xrootd file transfer plugin. Please consider using it, either by configuring it in your submit file, as detailed here, or by making use of the new (experimental) EosSubmit schedds, which automatically use the xrootd plugin for EOS files.
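
As a rough sketch of the submit file approach (the EOS instance name is real, but the user paths are hypothetical), the plugin is driven by referencing root:// URLs in the standard file transfer settings:

    # Hypothetical fragment of a submit file: root:// URLs are handled
    # by the xrootd file transfer plugin on the worker node
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    transfer_input_files    = root://eosuser.cern.ch//eos/user/j/jdoe/data/input.root

    # Send all produced output files directly to this EOS directory
    output_destination      = root://eosuser.cern.ch//eos/user/j/jdoe/results/

This avoids moving large files through the schedd at all: transfers happen directly between the worker node and EOS.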

