Attention

This documentation is under active development, meaning that it can change over time as we refine it. Please email help@massive.org.au if you require assistance, or have suggestions to improve this documentation.

GPU Look-Up Tables#

These look-up tables provide an overview of key information about our GPUs, to assist you in choosing a GPU on M3 for your research. For a more detailed discussion of GPU selection, see Starter Guide: GPUs on M3; for more detailed hardware information, see About M3.

How do I choose a GPU?#

Some key things to consider when choosing the right GPU for your workload are:

  • How much GPU memory (RAM per GPU) your job needs.
  • Whether you need a single GPU or multiple GPUs on one server.
  • How long you are prepared to wait in the queue for a scarcer or more powerful GPU (one way to check how busy a partition is shown below).
  • Whether you need a batch (compute) GPU or an interactive desktop GPU, and whether your software requires a particular GPU.

More details can be found on the Starter Guide: GPUs on M3 page.
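If queue time matters for your decision, you can get a rough sense of how busy a GPU partition is from an M3 login node using standard Slurm commands. This is a minimal sketch; the partition name m3g is taken from the tables below, and the output only indicates current demand, not your actual wait time.

    # Show the state (idle, mixed, allocated, etc.) of the nodes in the m3g (V100) partition
    sinfo --partition=m3g

    # List jobs currently pending or running on that partition, to gauge demand
    squeue --partition=m3g --states=PENDING,RUNNING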

Look-Up Tables#

We have compute GPUs accessible via the queue or interactively, and GPUs specifically reserved for desktops. The tables are split accordingly.

Compute GPUs#

Each GPU below is listed with guidance on when to use it (Should I use this?), its hardware specifications (More details), and the Slurm directives needed to request it (QoS/Partition).

V100 16GB (Volta)

  Should I use this?

  • In single-GPU jobs these match the DGX GPUs on performance, but they are available for everyone to submit to.
  • There are long queue times for these.
  • In general, the wait time for these is justified by the performance.

  More details

  • 20 servers (nodes)
  • 3 V100 GPUs per server
  • 36 CPU cores per server
  • 16GB of RAM per GPU
  • 340-373GB of RAM per server

  QoS/Partition

  • #SBATCH --partition=m3g

V100 32GB (Volta)

  Should I use this?

  • In single-GPU jobs these match the DGX GPUs on performance, but they are available for everyone to submit to.
  • There are only 4 servers with 32GB V100s, so queue times are longer. If you don't need 32GB of GPU memory, consider using the 16GB variety instead - they're just as fast.

  More details

  • 4 servers (nodes)
  • 3 V100 GPUs per server
  • 36 CPU cores per server
  • 32GB of RAM per GPU
  • 340-373GB of RAM per server

  QoS/Partition

  • #SBATCH --partition=m3g
  • To specify that you need a 32GB V100, you also need to add: #SBATCH --constraint=V100-32G

T4 (Turing)

  Should I use this?

  • Successor to the P4 GPUs, with higher clock speeds and 16GB of GDDR6 RAM.
  • Also available as desktops via Strudel2 (see Desktop GPUs below).

  More details

  • 1 server (node)
  • 8 T4 GPUs per server
  • 52 CPU cores per server
  • 16GB of RAM per GPU
  • 1TB of RAM per server

  QoS/Partition

  • #SBATCH --partition=gpu

A40 (Ampere)

  Should I use this?

  • These GPUs are the second-newest on M3 and have the second-most GPU RAM.
  • Also available as desktops via Strudel2 (see Desktop GPUs below).

  More details

  • 2 servers (nodes)
  • 4 A40 GPUs per server
  • 52 CPU cores per server
  • 48GB of RAM per GPU
  • 1TB of RAM per server

  QoS/Partition

  • #SBATCH --partition=gpu

A100 (Ampere)

  Should I use this?

  • These GPUs are the newest on M3 and have the most GPU RAM.
  • You may find A100s on other nodes (e.g. bdi), but these may be reserved for specific user groups. See our partitions page for more details.

  More details

  • 13 servers (nodes)
  • 2 A100 GPUs per server
  • 56 CPU cores per server
  • 80GB of RAM per GPU
  • 1TB of RAM per server

  QoS/Partition

  • #SBATCH --partition=gpu --constraint=A100-80G

DGX (Volta)

  Should I use this?

  • THESE ARE BEING DEPRECATED SOON (some time after 08/May/2024). They will not be upgraded to Rocky 9.
  • These servers contain 8 GPUs each and are purpose-built for deep learning.
  • Use these when you require multiple GPUs on one server, or want to leverage the NVLink capabilities.
  • You must apply for access to the DGX.
  • Jobs submitted to the DGX must use a minimum of 4 GPUs.
  • They are a limited resource and thus have a lengthy queue time; they should be reserved for jobs that can demonstrate they scale across multiple GPUs.

  More details

  • 11 servers (nodes)
  • 8 V100 GPUs per server
  • 40 CPU cores per server
  • 32GB of RAM per GPU
  • 512GB of RAM per server

  QoS/Partition

  • #SBATCH --qos=dgx
  • #SBATCH --partition=dgx
  • You need to apply to use the DGX using the form on this page.
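To show how the QoS/Partition directives above fit into a complete submission, here is a minimal sketch of a batch script that requests a single 32GB V100. The job name, CPU, memory and time values are illustrative assumptions, and the --gres line uses generic Slurm GRES syntax; adjust these to your workload and confirm the exact GPU request syntax in the M3 documentation.

    #!/bin/bash
    #SBATCH --job-name=gpu-example       # hypothetical job name
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=6            # assumed CPU request; scale to your workload
    #SBATCH --mem=55G                    # assumed memory request
    #SBATCH --time=02:00:00              # assumed walltime
    #SBATCH --gres=gpu:1                 # generic Slurm syntax for one GPU; confirm the form used on M3
    #SBATCH --partition=m3g              # V100 partition, as in the table above
    #SBATCH --constraint=V100-32G        # only needed if you specifically require the 32GB V100

    # Replace this with your GPU application; nvidia-smi simply confirms which GPU was allocated
    nvidia-smi

Submit the script with sbatch (e.g. sbatch my-gpu-job.slurm); the same pattern applies to the other partitions and constraints listed above.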

Desktop GPUs#

We have some GPUs available through desktop sessions. As these are accessed via desktops rather than the batch queue, there is no QoS/Partition information for them; instead, select the GPU when setting up your desktop session, as described in the Strudel documentation.

P4 (Pascal)

  Should I use this?

  • This is the option most likely to meet your desktop needs.
  • Currently one of M3’s least powerful GPUs, but still sufficient for many activities, including testing your work before submitting to a more powerful GPU. Some visualisation software may require a P4.
  • There is typically minimal wait time to start a P4 desktop.

  More details

  • 54 desktops available
  • 1 P4 GPU per desktop
  • 6 CPU cores per desktop
  • 55GB of RAM per desktop

T4 (Turing)

  Should I use this?

  • Successor to the P4 GPUs, with higher clock speeds and 16GB of GDDR6 RAM.
  • Available in configurations of either 1 or 2 GPUs per desktop.
  • The dual-T4 desktop can be useful for testing multi-GPU workflows before queueing for more powerful GPUs.

  More details

  • 62 T4 GPUs available in total
  • 1 or 2 T4 GPUs per desktop
  • 6 or 13 CPU cores per desktop
  • 100GB or 225GB of RAM per desktop

A40 (Ampere)

  Should I use this?

  • Highest GPU RAM of the GPUs available on M3 (48GB).

  More details

  • 24 A40 desktops available
  • 1 A40 GPU per desktop
  • 13 CPU cores per desktop
  • 250GB of RAM per desktop
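Once a desktop session has started, you can confirm which GPU it was allocated from a terminal inside the desktop. This is a minimal check using the standard NVIDIA tool:

    # Show the GPU model, driver version and current memory usage for this session
    nvidia-smi

    # Or print just the GPU name and total memory
    nvidia-smi --query-gpu=name,memory.total --format=csv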