ULHPC Slurm QOS 2.0
Quality of Service (QOS) is used to constrain or modify the characteristics that a job can have, for example to request a longer run time or a higher-priority queue for a given job.
To select a given QOS with a Slurm command, use the --qos <qos> option:
srun|sbatch|salloc|sinfo|squeue... [-p <partition>] --qos <qos> [...]
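As an illustration, a submission requesting an explicit QOS might look as follows; the batch partition name and the launcher.sh script are placeholders to adapt to your site, not values prescribed by this page:

```bash
# Interactive allocation under the default QOS of your account
salloc -p interactive

# Batch job explicitly requesting the 'high' priority QOS
sbatch -p batch --qos high launcher.sh
```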
The default QOS of your jobs depends on your account and affiliation; in most cases the --qos <qos> directive does not need to be set. We generally favor cross-partition QOS, mainly tied to a priority level (from low to urgent). A special preemptible QOS named besteffort exists for best-effort jobs.
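Since besteffort jobs run in a preemptible QOS, a common pattern (an assumption here, not a requirement stated above) is to make them requeueable so that preempted work is put back into the queue; launcher.sh is again a placeholder:

```bash
# Preemptible best-effort job, requeued automatically if it gets preempted
sbatch --qos besteffort --requeue launcher.sh
```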
Available QOS
QOS (partition) | Prio | GrpTRES | MaxTresPJ | MaxJobPU | MaxWall |
---|---|---|---|---|---|
besteffort (*) | 1 | | | 50 | |
low (*) | 10 | | | 2 | |
normal (*) | 100 | | | 50 | |
long (*) | 100 | node=6 | node=2 | 4 | 14-00:00:00 |
debug (interactive) | 150 | node=8 | | 10 | |
high (*) | 200 | | | 50 | |
urgent (*) | 1000 | | | 100 | |
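For instance, the table above caps each long job at node=2 (MaxTresPJ) and 14 days of wall time (MaxWall); a request that stays within those limits could look like this, with the partition name and script being placeholders:

```bash
# Two nodes for up to 14 days under the 'long' QOS
sbatch -p batch --qos long -N 2 --time=14-00:00:00 launcher.sh
```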
List QOS Limits
Use the sqos utility function to list the existing QOS limits.
List current ULHPC QOS limits with sqos:
$ sqos
# sacctmgr show qos format="name%20,preempt,priority,GrpTRES,MaxTresPerJob,MaxJobsPerUser,MaxWall,flags"
                Name    Preempt   Priority       GrpTRES       MaxTRES MaxJobsPU     MaxWall                Flags
-------------------- ---------- ---------- ------------- ------------- --------- ----------- --------------------
              normal besteffort        100                                    100                      DenyOnLimit
          besteffort                     1                                    300                        NoReserve
                 low besteffort         10                                      4                      DenyOnLimit
                high besteffort        200                                     50                      DenyOnLimit
              urgent besteffort       1000                                    100                      DenyOnLimit
               debug besteffort        150        node=8                       10                      DenyOnLimit
                long besteffort        100        node=6        node=2         4 14-00:00:00 DenyOnLimit,Partiti+
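The sqos helper is specific to the ULHPC environment; where it is not available, the underlying sacctmgr command shown in the comment above can be run directly. A minimal sketch of such a helper, assuming it is nothing more than a thin wrapper (the real implementation may differ):

```bash
# Minimal sqos-like wrapper around sacctmgr (assumed behaviour, see note above)
sqos() {
    sacctmgr show qos \
        format="name%20,preempt,priority,GrpTRES,MaxTresPerJob,MaxJobsPerUser,MaxWall,flags"
}
```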
What are the possible limits set on ULHPC QOS?
At the QOS level, the following elements are combined to define the resource limits of our QOS (an illustrative sacctmgr example is given after this list):
- Limits on Trackable RESources (TRES - a resource (cpu, node, etc.) tracked for usage or used to enforce limits against), in particular:
  - GrpTRES: the total count of TRES that can be used at any given time by jobs running in the QOS. If this limit is reached, new jobs will be queued but only allowed to run after resources have been relinquished from this group.
  - MaxTresPerJob: the maximum size in TRES (cpu, nodes, ...) any given job can have from the QOS.
- MaxJobsPerUser: the maximum number of jobs a user can have running at a given time.
- MaxWall[DurationPerJob]: the maximum wall clock time any individual job can run for in the given QOS.
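To make these limits concrete, here is a hedged illustration of how the long QOS limits listed in the table above could be expressed with sacctmgr; this is a sketch for understanding only, not the actual commands used on the ULHPC systems (QOS management requires Slurm administrator rights):

```bash
# Illustrative only: a QOS with the 'long' limits from the table above
sacctmgr add qos long
sacctmgr modify qos long set \
    Priority=100 \
    GrpTRES=node=6 \
    MaxTRESPerJob=node=2 \
    MaxJobsPerUser=4 \
    MaxWall=14-00:00:00
```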
As explained in the Limits section, there are basically three layers of Slurm limits (partition limits, association limits and QOS limits). From the least to the most prioritized, the hierarchy is as follows; an example of how to inspect these layers is given after the list:
- None (no limit)
- Partition limits
- Account associations: Root/Cluster -> Account (ascending the hierarchy) -> User
- Job/Partition QOS
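To see which of these layers actually applies to your jobs, the partition definitions and your own associations can be inspected directly; the field selection below is only a suggestion:

```bash
# Limits attached to each partition (MaxTime, MaxNodes, QOS, ...)
scontrol show partitions

# Limits attached to your account associations and the QOS you may use
sacctmgr show association where user=$USER \
    format="cluster,account,user%20,qos,maxjobs,maxwall,grptres"
```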