Spool-only Schedds
As discussed in Data Flow, submission to the HTCondor schedds at CERN normally makes use of a shared filesystem, i.e. AFS. This is convenient, but shared filesystems also introduce instability. Spool submission, where files are transferred to the schedds and no shared filesystem is used, may therefore be preferable for users managing many jobs or producing large amounts of files.
Spool submission may be used with any CERN local schedd, but certain schedds accept only spool jobs and should therefore be especially stable. This makes them more suitable for users who have longer jobs and are willing to accept slightly tighter constraints.
Note
Shared filesystems are still available on the worker nodes. Bearing in mind that we recommend, as always, using the local working directory for your jobs, the workers that run jobs submitted via this spool service are no different from any others.
Usage
The basics for using the spool schedds are fairly simple.
- To select the spool schedds:

    ```
    module load lxbatch/spool
    ```

    To switch back to the normal schedds, either run:

    ```
    module unload lxbatch/spool
    ```

    or:

    ```
    module load lxbatch/share
    ```

    (A quick way to verify the selection is shown after this list.)
- To submit to the spool schedds:

    ```
    condor_submit -spool <submit file> [...]
    ```
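To verify which schedd the environment currently points at after loading or unloading the module, note that `condor_q` reports the name of the schedd it contacts in its output header. A minimal check (the schedd name shown is a placeholder):

```
module load lxbatch/spool
condor_q    # the header reports the schedd in use, e.g. "-- Schedd: <schedd name> : ..."
```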
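One practical consequence of `-spool` is that input files are copied to the schedd at submission time, and output files remain in the schedd's spool after the jobs complete until you fetch them with `condor_transfer_data`. The sketch below shows a full cycle; the submit file, script, and cluster id are hypothetical:

```
# hello.sub -- hypothetical submit file for a spool submission
executable              = hello.sh
arguments               = $(ProcId)
output                  = hello.$(ProcId).out
error                   = hello.$(ProcId).err
log                     = hello.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue 10
```

```
condor_submit -spool hello.sub   # prints the cluster id, e.g. "submitted to cluster 1234"
condor_q 1234                    # wait until the jobs reach the Completed state
condor_transfer_data 1234        # copy the spooled output back to the submit directory
```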
Other constraints
- A maximum of 100 jobs per submission (i.e. `queue 100` or the equivalent)
- A maximum of 500 total jobs per owner
- A maximum of 1024 MB of input or output transfer to the schedds (the workers can do more)
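As an illustration of the first limit, a single submission can enumerate up to 100 jobs in one cluster with a single `queue` statement; `$(ProcId)` then runs from 0 to 99. All file names below are hypothetical:

```
# batch.sub -- hypothetical submit file: one cluster of 100 jobs
executable = process.sh
arguments  = $(ProcId)
output     = out/job.$(ProcId).out
error      = out/job.$(ProcId).err
log        = batch.log
queue 100
```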
File transfer
Regarding this last constraint, please consider using file transfer as detailed here.
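Since the workers can move more data than the schedd spool accepts, one option is to ship large outputs directly from the worker, for example with HTCondor's URL-based output transfer. The fragment below is a sketch only; the EOS endpoint and path are hypothetical examples, and the linked page describes the recommended mechanism:

```
# Hypothetical fragment: send the output sandbox straight from the
# worker to EOS instead of spooling it back through the schedd
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output_destination      = root://eosuser.cern.ch//eos/user/j/jdoe/jobout/
queue
```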