Job Accounting and Billing

See also: the ULHPC Usage Charging Policy and the ULHPC Resource Allocation Policy (PDF).

Billing rates

A Job is characterized (and thus billed) according to the following elements:

  • T_\text{exec}: Execution time (in hours) also called walltime
  • N_\text{Nodes}: number of computing nodes, and per node:
    • N_\text{cores}: number of CPU cores allocated per node
    • Mem: memory size allocated per node, in GB
    • N_\text{gpus}: number of GPUs allocated per node
  • associated weight factors \alpha_{cpu},\alpha_{mem},\alpha_{GPU} defined as TRESBillingWeights in Slurm Trackable RESources (TRES) accounting. They capture the resources consumed beyond just CPU cores and are taken into account in the fairshare factor. The following weights are formalized in the definition of the charging factor:
    • \alpha_{cpu}: normalized relative performance of a CPU processor core (reference: skylake, 73.6 GFlops/core)
    • \alpha_{mem}: inverse of the average available memory size per core
    • \alpha_{GPU}: weight per GPU accelerator
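For reference, such weights are typically configured per partition in slurm.conf through the TRESBillingWeights parameter. The excerpt below is purely illustrative (node lists are placeholders and the values are merely read off the weight table further down, not the actual ULHPC configuration); it only shows the shape such a definition takes:

# Illustrative slurm.conf excerpt -- placeholder values, NOT the actual ULHPC config
# Mem weights are expressed per GB ("G" suffix); 0.037 approximates 1/27
PartitionName=batch Nodes=<nodelist> TRESBillingWeights="CPU=1.0,Mem=0.25G"
PartitionName=gpu   Nodes=<nodelist> TRESBillingWeights="CPU=1.0,Mem=0.037G,GRES/gpu=50.0"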

Billing Rate and Service Units per Job

For a given job, the billing rate is defined from the configured TRESBillingWeights as follows:

B_\text{rate} = N_\text{Nodes}\times[\alpha_{cpu}\times N_\text{cores} + \alpha_{mem}\times Mem + \alpha_{gpu}\times N_\text{gpus}]

It follows that the number of service units associated to a given job is given by:

B_\text{rate}\times T_\text{exec} = N_\text{Nodes}\times[\alpha_{cpu}\times N_\text{cores} + \alpha_{mem}\times Mem + \alpha_{gpu}\times N_\text{gpus}]\times T_\text{exec}
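If you want to estimate the cost of a planned job before submission, the formula translates directly into a short shell snippet. This is only a rough sketch (the variable names are ours; authoritative figures always come from Slurm itself, e.g. via the sbill utility below):

# Rough service-unit estimate for a planned job (illustrative only)
nodes=2 cores=28 mem_gb=112 gpus=0     # allocation per node
a_cpu=1.0 a_mem=0.25 a_gpu=0           # weights of the target partition (see table below)
hours=24                               # expected walltime in hours
awk -v n=$nodes -v c=$cores -v m=$mem_gb -v g=$gpus \
    -v ac=$a_cpu -v am=$a_mem -v ag=$a_gpu -v t=$hours \
    'BEGIN { b = n*(ac*c + am*m + ag*g); printf "B_rate=%.2f  SU=%.2f\n", b, b*t }'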

You can quickly access the charging and billing rate of a given job from its Job ID <jobID> with the sbill utility:

$ sbill -h
Usage: sbill -j <jobid>
Display job charging / billing summary

$ sbill -j 2240777
# sacct -X --format=AllocTRES%60,Elapsed -j 2240777
                                                   AllocTRES    Elapsed
       ----------------------------------------------------- ----------
                         billing=448,cpu=224,mem=896G,node=8   11:35:51
       Total usage: 5195.68 SU (indicative price: 155.87€ HT)

Note: For a running job, you can also check the TRES=[...],billing=<Brate> output of scontrol show job <jobID>.
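For instance, to extract just the billing rate of a running job (this assumes the billing=<Brate> token mentioned above; the exact field layout may vary across Slurm versions):

# Show only the billing rate of a running job
scontrol show job <jobID> | grep -o 'billing=[0-9]*'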

Charge Weight Factors for 2021-2022

| Cluster | Node Type | CPU arch | Partition | #Cores/node | \mathbf{\alpha_{cpu}} | \mathbf{\alpha_{mem}} | \mathbf{\alpha_{GPU}} |
|---------|-----------|-----------|-----------|-------------|-----------------------|-----------------------|-----------------------|
| Aion | Regular | epyc | batch | 128 | 0.57 | \frac{1}{1.75} | 0 |
| Iris | Regular | broadwell | batch | 28 | 1.0* | \frac{1}{4} = 0.25 | 0 |
| Iris | Regular | skylake | batch | 28 | 1.0 | \frac{1}{4} = 0.25 | 0 |
| Iris | GPU | skylake | gpu | 28 | 1.0 | \frac{1}{27} | 50 |
| Iris | Large-Mem | skylake | bigmem | 112 | 1.0 | \frac{1}{27} | 0 |

In particular, interactive jobs are always free-of-charge.

2 regular skylake nodes on iris cluster

Continuous use of 2 regular skylake nodes (56 cores, 224 GB memory) on the iris cluster. Each node features 28 cores and 4 GB of RAM per core, i.e., 112 GB per node. It follows that for such an allocated job:

B_\text{rate} = 2 \text{ nodes} \times[\alpha_{cpu}\times 28 + \alpha_{mem}\times 4\times 28 + \alpha_{gpu}\times 0] = 2\times[(1.0+\frac{1}{4}\times 4)\times 28] = 112

Such a job running continuously for 30 days would then correspond to:

  • a total of B_\text{rate}\times T_\text{exec}= 112\times 30\text{ days}\times 24\text{ hours} =112\times 720 = 80640 SU.
  • if this job were billed, it would lead to 80640\text{ SU}\times 0.03\text{ €/SU} = 2419.20\text{ €} VAT excluded
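The numbers above can be reproduced with a one-line awk computation (a quick sanity check, not an official ULHPC tool):

awk 'BEGIN { b = 2*(1.0*28 + 0.25*112 + 0*0); printf "B_rate=%.2f  SU=%.1f  cost=%.2f EUR\n", b, b*720, b*720*0.03 }'
# B_rate=112.00  SU=80640.0  cost=2419.20 EUR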
2 regular epyc nodes on aion cluster

Continuous use of 2 regular epyc nodes (256 cores, 448 GB memory) on the aion cluster. Each node features 128 cores and 1.75 GB of RAM per core, i.e., 224 GB per node. It follows that for such an allocated job:

B_\text{rate} = 2 \text{ nodes} \times[\alpha_{cpu}\times 128 + \alpha_{mem}\times 1.75\times 128 + \alpha_{gpu}\times 0] = 2\times[(0.57+\frac{1}{1.75}\times 1.75)\times 128]=401.92

Such a job running continuously for 30 days would then correspond to:

  • a total of B_\text{rate}\times T_\text{exec}= 401.92 \times 30\text{ days}\times 24\text{ hours} =401.92\times 720 = 289382.4 SU
  • if this job were billed, it would lead to 289382.4\text{ SU}\times 0.03\text{ €/SU} = 8681.47\text{ €} VAT excluded
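The same quick check for this allocation:

awk 'BEGIN { b = 2*(0.57*128 + 224/1.75 + 0); printf "B_rate=%.2f  SU=%.1f  cost=%.2f EUR\n", b, b*720, b*720*0.03 }'
# B_rate=401.92  SU=289382.4  cost=8681.47 EUR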
1 GPU node (and its 4 GPUs) on iris cluster

Continuous use of 1 GPU node (28 cores, 4 GPUs, 756 GB memory) on the iris cluster. Each node features 28 cores, 4 GPUs and 27 GB of RAM per core, i.e., 756 GB per node. It follows that for such an allocated job:

B_\text{rate} = 1 \text{ node} \times[\alpha_{cpu}\times 28 + \alpha_{mem}\times 27\times 28 + \alpha_{gpu}\times 4] = 1\times[(1.0+\frac{1}{27}\times 27)\times 28 + 50.0\times 4]=256

Such a job running continuously for 30 days would then correspond to:

  • a total of B_\text{rate}\times T_\text{exec}= 256 \times 30\text{ days}\times 24\text{ hours} =256\times 720 = 184320 SU
  • if this job were billed, it would lead to 184320\text{ SU}\times 0.03\text{ €/SU} = 5529.60\text{ €} VAT excluded
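The same quick check for this allocation:

awk 'BEGIN { b = 1*(1.0*28 + 756/27 + 50*4); printf "B_rate=%.2f  SU=%.1f  cost=%.2f EUR\n", b, b*720, b*720*0.03 }'
# B_rate=256.00  SU=184320.0  cost=5529.60 EUR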
1 Large-Memory node on iris cluster

Continuous use of 1 Large-Memory node (112 cores, 3024 GB memory) on the iris cluster. Each node features 4\times 28=112 cores and 27 GB of RAM per core, i.e., 3024 GB per node. It follows that for such an allocated job:

B_\text{rate} = 1 \text{ node} \times[\alpha_{cpu}\times 112 + \alpha_{mem}\times 27\times 112 + \alpha_{gpu}\times 0] = 1\times[(1.0+\frac{1}{27}\times 27)\times 112]=224

Such a job running continuously for 30 days would then correspond to:

  • a total of B_\text{rate}\times T_\text{exec}= 224 \times 30\text{ days}\times 24\text{ hours} =224\times 720 = 161280 SU
  • if this job were billed, it would lead to 161280\text{ SU}\times 0.03\text{ €/SU} = 4838.40\text{ €} VAT excluded
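And the same quick check for this last allocation:

awk 'BEGIN { b = 1*(1.0*112 + 3024/27 + 0); printf "B_rate=%.2f  SU=%.1f  cost=%.2f EUR\n", b, b*720, b*720*0.03 }'
# B_rate=224.00  SU=161280.0  cost=4838.40 EUR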

Trackable RESources (TRES) Billing Weights

The above policy is in practice implemented through Slurm Trackable RESources (TRES) accounting and remains an important factor in the fairshare score calculation.

As explained in the ULHPC Usage Charging Policy, we set TRES for CPU, GPU, and Memory usage according to weights defined as follows:

| Weight | Description |
|--------|-------------|
| \alpha_{cpu} | Normalized relative performance of a CPU processor core (reference: skylake, 73.6 GFlops/core) |
| \alpha_{mem} | Inverse of the average available memory size per core |
| \alpha_{GPU} | Weight per GPU accelerator |

Each partition has its own weights (combined into TRESBillingWeights), which you can check with:

# /!\ ADAPT <partition> accordingly
scontrol show partition <partition>
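
For example, to display only the billing weights of the batch partition (the grep pattern assumes the TRESBillingWeights field appears in the output when it is configured):

scontrol show partition batch | grep -o 'TRESBillingWeights=[^ ]*'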
