# Partitions on M3
The nodes in the M3 cluster are categorised into different partitions. Each partition has a particular set of characteristics. For example, the `m3j` partition contains nodes with very high memory, and the `gpu` partition contains nodes with access to GPUs.
When submitting a SLURM job request, you can specify a partition with `--partition`. For example, to request a node in the `m3j` partition, you would do:

```bash
sbatch --partition=m3j my-job.sh
```
The tables below indicate which partitions are available to every M3 user. You may notice other partitions also exist on M3, but access to these is restricted. If you don't specify a partition, it defaults to the `comp` partition.
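If you prefer, the partition can also be set inside the job script itself using `#SBATCH` directives rather than command-line flags. Below is a minimal sketch of such a script; the job name and resource amounts are illustrative only.

```bash
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --partition=m3i        # omit this line to fall back to the default comp partition
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00

# The actual workload goes here
srun ./my-program
```

Submitting it is then simply `sbatch my-job.sh`, with no `--partition` flag needed on the command line.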
Additionally, you may specify multiple partitions at once. This can be useful if you just want any GPU and don't care what type it is, in which case you could do:

```bash
sbatch --partition=m3g,gpu --gres=gpu:1 my-job.sh
```
You may find the `show_cluster` command useful to see how busy each partition is at any given moment. This command will also help you understand the specifications of individual nodes where we have reported a partition as having "up to" some amount of resources.
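Alongside `show_cluster`, the standard SLURM commands can also report partition and node details. The examples below are illustrative; the format string and node name are arbitrary choices.

```bash
# M3 helper: summary of how busy each partition currently is
show_cluster

# Standard SLURM: list each node in the comp partition with its CPU count and memory (MB)
sinfo --partition=comp -N -o "%N %c %m"

# Full details for a single node (node name here is just an example)
scontrol show node m3a108
```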
## Compute partitions
| Name | Partition | Total nodes | Total cores | CPUs per node | Memory per node (GB) |
|---|---|---|---|---|---|
| High-Density CPUs | `m3i` | 45 | 810 | 18 | 181 |
| High-Density CPUs with High Memory | `m3j` | 11 | 198 | 18 | 373 |
| High-Density CPUs with Extra High Memory | `m3m` | 1 | 18 | 18 | 948 |
| Short Jobs | `short` | 2 | 36 | 18 | 181 |
| General Computation | `comp` | 79 | 1864 | Up to 96 | Up to 1532 |
## GPU partitions
**Warning:** You should never explicitly request the `desktop` partition, since it is reserved for use by STRUDEL desktops.
When you want a GPU, you must additionally specify the `--gres` parameter. This is done like so:

```bash
sbatch --partition=gpu --gres=gpu:1 my-job.sh
# Or if you want a specific kind of GPU
sbatch --partition=gpu --gres=gpu:A40:1 my-job.sh
```
Some more specific details about these partitions can be found in GPU Look-Up Tables.
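As a quick sanity check that the GPU you asked for was actually allocated, the job script can print the devices visible to it. The sketch below assumes an A40 request; the other resource amounts are illustrative.

```bash
#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:A40:1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=32G
#SBATCH --time=00:30:00

# SLURM normally restricts the job to its allocated GPU(s) via CUDA_VISIBLE_DEVICES
echo "CUDA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}"
nvidia-smi
```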
| GPU type | Partition | Total nodes | Total cores | CPUs per node | Memory per node (GB) | Total GPUs | GPUs per node |
|---|---|---|---|---|---|---|---|
| V100 | `m3g` | 19 | 342 | 18 | Up to 373 | 56 | Up to 3 |
| A100, T4, A40 | `gpu` | 20 | 552 | Up to 28 | Up to 1020 | 52 | Up to 8 |
| P4, T4, A40 | `desktop` | 28 | 682 | Up to 32 | Up to 1020 | 158 | Up to 8 |
## Restricted partitions
As already noted, some partitions on M3 are restricted and cannot be accessed by general users. These are:
| Partition | Who can access it? |
|---|---|
| | Hosts the special H100 GPU nodes on M3, currently just two nodes (QoS: m3h). Contact help to request access. |
| | Partition for standard jobs with a four-hour wall-time for the omics community |
| | Partition with high-RAM nodes for the omics community |
| | Intended for real-time processing of data collected from instruments |
| | Dedicated partition for Patrick Sexton's lab |
| | Partition dedicated to CCEMMP |
| | Users associated with the Hudson Institute of Medical Research |
| | Zongyuan's partnership node m3n000 |
| | FIT dedicated GPU nodes: m3u[000-008] (QoS: fitq) |
| | FIT dedicated CPU nodes: m3s[000-023], m3v[000-005] (QoS: fitqc) |
| | BDI dedicated nodes: m3a[108-109], m3u[020-022] (QoS: bdiq) |
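Where a QoS is noted above, jobs on that partition typically also need to request it explicitly with the `--qos` flag once access has been granted. A hedged example using the `fitq` QoS from the table; the partition name is a placeholder you would replace with the name provided when access is granted.

```bash
sbatch --partition=<restricted-partition> --qos=fitq my-job.sh
```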