Attention

This documentation is under active development and may change over time as we refine it. Please email help@massive.org.au if you require assistance.

About M3

M3 is the third stage of the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE).

MASSIVE is a High Performance Computing (HPC) facility designed specifically to process complex data. Since 2010, MASSIVE has played a key role in driving discoveries across many disciplines including biomedical sciences, materials research, engineering and geosciences.

MASSIVE is pioneering high performance computing built on Monash’s specialist Research Cloud fabric. M3 was supplied by Dell, with a Mellanox low-latency network and NVIDIA GPUs.

System configuration

M3 consists of 87 nodes with the following configurations (a sample SLURM query of these partitions follows the list):

  • “Compute” nodes in 2 configurations
    • Standard Memory:
      • Number of nodes: 21
      • Number of cores per node: 24
      • Processor model: 2 x Intel Xeon CPU E5-2680 v3
      • Processor frequency: 2.50GHz, with max Turbo frequency 3.30GHz
      • Memory per node: 128GB RAM
      • Partition name: m3a
    • High Memory:
      • Number of nodes: 10
      • Number of cores per node: 24
      • Processor model: 2 x Intel Xeon CPU E5-2680 v3
      • Processor frequency: 2.50GHz, with max Turbo frequency 3.30GHz
      • Memory per node: 256GB RAM
      • Partition name: m3d
  • “GPU” nodes in 5 configurations
    • Desktops:
      • Number of desktop sessions: 32
      • Number of cores per desktop session: 3
      • Processor model: 2 x Intel Xeon CPU E5-2680 v3
      • Processor frequency: 2.50GHz, with max Turbo frequency 3.30GHz
      • GPU model: NVIDIA GRID K1
      • Number of GPUs per desktop session: 1
      • GPU cores: 192 CUDA cores
      • Memory per desktop session: 16GB RAM
      • Partition name: m3f
    • K80:
      • Number of desktop sessions: 26
      • Number of cores per desktop session: 12
      • Processor model: 1 x Intel Xeon CPU E5-2680 v3
      • Processor frequency: 2.50GHz, with max Turbo frequency 3.30GHz
      • GPU model: NVIDIA Tesla K80
      • Number of GPUs per desktop session: 2
      • GPU cores per card: 4,992 CUDA cores
      • Total GPU cores per desktop session: 9,984 CUDA cores
      • Memory per desktop session: 128GB RAM
      • Partition name: m3c
    • K80 with High-Density GPUs:
      • Number of nodes: 1
      • Number of cores per node: 24
      • Processor model: 2 x Intel Xeon CPU E5-2680 v3
      • Processor frequency: 2.50GHz, with max Turbo frequency 3.30GHz
      • GPU model: NVIDIA Tesla K80
      • Number of GPUs per node: 8
      • GPU cores per card: 4,992 CUDA cores
      • Total GPU cores per node: 39,936 CUDA cores
      • Memory per node: 256GB RAM
      • Partition name: m3e
    • P100:
      • Number of nodes: 10
      • Number of cores per node: 28
      • Processor model: 2 x Intel Xeon CPU E5-2680 v4
      • Processor frequency: 2.40GHz, with max Turbo frequency 3.30GHz
      • GPU model: NVIDIA Tesla P100
      • Number of GPUs per node: 2
      • GPU cores per card: 3,584 CUDA cores
      • Total GPU cores per node: 7,168 CUDA cores
      • Memory per node: 256GB RAM
      • Partition name: m3h
    • DGX1:
      • Number of nodes: 1
      • Number of cores per node: 40
      • Processor model: 2 x Intel Xeon CPU E5-2698 v4
      • Processor frequency: 2.20GHz, with max Turbo frequency 3.60GHz
      • GPU model: NVIDIA Tesla GP100
      • Number of GPUs per node: 8
      • GPU cores per card: 3,584 CUDA cores
      • Total GPU cores per node: 28,672 CUDA cores
      • Memory per node: 512GB RAM
      • Partition name: TBU
  • Total number of cores: 1,496
  • Total number of GPU cores: 1,736,704 CUDA cores
  • Total memory: 12 TByte
  • Storage capacity: 1.15 PByte Lustre parallel file system
  • Interconnect: 100 Gb/s Ethernet Mellanox Spectrum network
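
The per-partition layout above can be confirmed directly on the cluster with standard SLURM query commands. The sketch below is illustrative only: the partition name m3h is taken from the list above, the node name is hypothetical, and the exact columns returned will depend on the site's SLURM configuration.

    # List the nodes in a partition with their CPU count, memory (MB) and generic resources (GPUs)
    sinfo --partition=m3h --Node --format="%N %c %m %G"

    # Show the full definition of a partition, including limits and its node list
    scontrol show partition m3h

    # Show the detailed configuration of a single node (hypothetical node name)
    scontrol show node m3h000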

M3 utilises the SLURM scheduler to manage these resources, with the partitions listed above offering different configurations to suit a variety of computational requirements. Details about the partitions can be found here.
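
As an illustration of how a partition is selected at job submission time, the following is a minimal SLURM batch script sketch requesting the two Tesla P100 GPUs on a node in the m3h partition. The account, walltime, memory request and module name are placeholders and assumptions rather than site defaults; check the partition documentation and your project details before submitting.

    #!/bin/bash
    #SBATCH --job-name=p100-test     # assumed job name
    #SBATCH --account=projectID      # placeholder project/account ID
    #SBATCH --partition=m3h          # P100 partition from the list above
    #SBATCH --gres=gpu:2             # request both P100 GPUs on the node
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=14       # assumed: half of the 28 cores on an m3h node
    #SBATCH --mem=100G               # assumed memory request, within the 256GB per node
    #SBATCH --time=01:00:00          # assumed walltime

    # The module name and version are assumptions; use 'module avail' to list what is installed
    module load cuda

    # Report the GPUs visible to the job
    nvidia-smi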