M3 User Guide

Running jobs on M3

Launching jobs on M3 is controlled by Slurm, the Slurm Workload Manager, which allocates the compute nodes, resources, and time a user requests through command-line options and batch scripts. Submitting and running scripted jobs on the cluster is a straightforward procedure with three basic steps (a minimal example follows the list):

  • Setup and launch
  • Monitor progress
  • Retrieve and analyse results
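
To make these steps concrete, here is a minimal sketch of a batch job. The account, module, and script names are placeholders rather than real M3 values; the sections listed below cover the partitions, accounts, and software actually available.

    #!/bin/bash
    #SBATCH --job-name=example         # name shown by squeue and show_job
    #SBATCH --account=<project-id>     # placeholder; see Slurm Accounts below
    #SBATCH --ntasks=1                 # a single task
    #SBATCH --cpus-per-task=1          # one CPU core for that task
    #SBATCH --mem=4G                   # memory for the job
    #SBATCH --time=01:00:00            # walltime limit (HH:MM:SS)
    #SBATCH --output=example-%j.out    # %j expands to the job ID

    module load python                 # hypothetical module name; check with 'module avail'
    python my_script.py                # placeholder workload

Submit the script with sbatch, watch it with squeue (or show_job, described under Checking job status), and read the results from the --output file once it finishes:

    sbatch example.sh          # step 1: setup and launch
    squeue --user=$USER        # step 2: monitor progress
    cat example-<jobid>.out    # step 3: retrieve and analyse results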

For more details, please see:

  • Partitions Available
  • Other partitions
  • Checking the status of M3
    • The STATUS field explained
  • Slurm Accounts
    • Default accounts
    • Setting the account for a job
    • Questions about slurm accounts
  • Getting started with job submission scripts
  • Running Simple Batch Jobs
    • An example Slurm job script
    • Cancelling jobs
  • Running MPI Jobs
    • An example Slurm MPI job script
  • Running Multi-threading Jobs
    • An example Slurm Multi-threading job script
  • Running Interactive Jobs
    • Submitting an Interactive Job
    • How long do interactive jobs last?
    • Reconnecting to/Disconnecting from an Active Interactive Job
  • Running GPU Jobs
    • Running GPU Batch Jobs
    • Compiling your own CUDA or OpenCL codes for use on M3
  • Running Array Jobs
    • An example Slurm Array job script
  • QoS (Quality of Service)
    • How to run jobs with QoS
    • Explanation
  • Features & Constraints
    • An example of using constraints in a job script
    • Features Available
  • Checking job status
    • Method 1: show_job
    • Method 2: Slurm commands
  • Project Allocation
    • Project Space
    • Questions about allocation