.. HPC Documentation master file

.. image:: _static/massive-website-banner.png
   :width: 500px
   :alt: The MASSIVE logo
   :align: center
   :target: http://massive.org.au

.. _contact us: help@massive.org.au

****************************
Welcome to the M3 user guide
****************************

.. important:: **Cybersecurity Alert - 16 April 2024**

   A critical security vulnerability has been discovered in previous versions of:

   - PuTTY
   - FileZilla
   - WinSCP
   - TortoiseGit

   We strongly advise you to update to the latest versions.

   Source: https://thehackernews.com/2024/04/widely-used-putty-ssh-client-found.html

.. important:: **Upcoming Maintenance - 12-14 March 2024**

   Please be advised that MASSIVE M3 will be undergoing a three-day scheduled
   maintenance starting Tuesday the 12th of March 2024. Access to MASSIVE M3
   will not be available throughout this maintenance, as we will be conducting
   the following essential works:

   - upgrading the servers and clients for the Lustre /projects and /scratch
     file systems;
   - upgrading the system software on the M3 network switches; and
   - performing other critical updates.

   For any concerns or enquiries, please contact our helpdesk at
   help@massive.org.au.

.. important:: **Updates to Network Switches**

   Over the next few weeks (from 31 January 2024), we will be updating the
   network switches that underpin MASSIVE M3. Each switch takes a short time
   (several minutes) to update, but during this update, jobs that are
   sensitive to the network may be impacted. We will be placing batches of
   compute nodes on SLURM reservations to ensure that they are drained of
   jobs before the updates. The schedule of work and any potential impact to
   the service will be provided below.

   [Update: 9 Feb 2024] We have scheduled the first lot of compute nodes for
   this update on the 13th of February 2024. The work should take no more
   than a couple of hours to complete.

   [Update: 16 Feb 2024] Another 28 switches need to be updated, and we will
   be scheduling a downtime for this.
   This downtime will also be used to perform necessary updates to the Lustre
   file system, along with other updates.

.. important:: **Data Fluency Training - Note the date changes**

   We will be presenting two courses that might be of interest to new users.
   Please register your email at
   `Data Fluency <https://www.monash.edu/data-fluency/events>`_ to be sent
   details on the courses, and see https://www.monash.edu/data-fluency/events
   for links to these and other events.

   * Introduction to the Unix Shell (7th and 8th March 2024, half a day each)
   * Introduction to HPC (15th March 2024, 1 day)

.. important:: **Hardware Refresh Plan** -- Update: 5 Jul 2022

   Please be advised of the following M3 hardware refresh schedule. These
   servers are now coming to end-of-life and will be retired. While this will
   result in a reduction of total CPU and GPU capacity, retiring these
   servers is necessary to make room for new and faster compute nodes. The
   current round of procurement for new hardware is expected to include new
   remote desktop servers to replace those being decommissioned. In the
   interim, more desktops will be made available from our existing pool of
   GPUs. Note that only the Strudel Beta list of desktops will be updated to
   reflect the change in desktop options.
   +------------------+---------------------------+------------------------+
   | Compute Nodes    | Capability                | To be retired by       |
   +==================+===========================+========================+
   | ``m3f[000-031]`` | NVIDIA K1 GPU desktops    | **21 Jun 2022**        |
   +------------------+---------------------------+------------------------+
   | ``m3c[000-013]`` | NVIDIA K80 GPU desktops   | **18 Jul 2022**        |
   +------------------+---------------------------+                        |
   | ``m3e000``       | NVIDIA K80 GPU            |                        |
   +------------------+---------------------------+                        |
   | ``m3h005``       | NVIDIA P100 GPU desktop   |                        |
   +------------------+---------------------------+------------------------+
   | ``m3h[006-008]`` | NVIDIA P100 GPUs          | Late 2022              |
   |                  |                           | Exact date **TBC**     |
   +------------------+---------------------------+------------------------+

   We will be enabling the appropriate mechanisms (e.g., a SLURM
   ``reservation``) to ensure that these nodes are idle of running jobs
   prior to their retirement. Please check your job scripts to ensure they
   do not specify these nodes using ``--nodelist``.

.. important:: **Planned maintenance outages**

   We have scheduled quarterly outages for the M3 cluster so that we can
   communicate scheduled outages to our HPC users in advance. Where possible,
   we perform rolling upgrades with the cluster online. However, some
   upgrades require the cluster to be taken offline. These include:

   * system software upgrades;
   * network maintenance;
   * bug and security patches; and
   * hardware maintenance.

.. raw:: html
   :file: rss_events_feed.html

.. raw:: html
   :file: rss_news_feed.html

This site contains the documentation for the MASSIVE HPC systems. M3 is the
newest addition to the facility.

.. toctree::
   :maxdepth: 1
   :caption: Help and Support

   help-and-support
   scheduled-maintenance
   status
   M3/terms-of-use

.. _m3-docs:
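The hardware refresh notice above asks users to make sure their job scripts do not pin the retiring nodes with ``--nodelist``. A minimal sketch of such a check, assuming job scripts live under ``~/job-scripts`` (the path, the sample script, and its contents are illustrative, not part of M3 itself):

```shell
# Illustrative check: scan SLURM job scripts for hard-coded node lists that
# name soon-to-be-retired nodes (m3f*, m3c*, m3e000, m3h005-m3h008).
# The ~/job-scripts path and the sample script below are demo assumptions.
mkdir -p ~/job-scripts
cat > ~/job-scripts/train.sbatch <<'EOF'
#!/bin/bash
#SBATCH --nodelist=m3f012
#SBATCH --time=01:00:00
srun ./run.sh
EOF

# Print the names of scripts that pin a retiring node and need updating.
grep -l -E -- '--nodelist=.*(m3f|m3c|m3e000|m3h00[5-8])' ~/job-scripts/*.sbatch
```

Any script flagged this way can usually just drop the ``--nodelist`` line and let SLURM choose a suitable node.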
.. toctree::
   :maxdepth: 3
   :caption: Using M3

   M3/m3users
   M3/requesting-an-account
   M3/requesting-help-on-m3
   M3/connecting-to-m3
   M3/file-systems-on-M3
   M3/lustre
   M3/transferring-files
   M3/software/software
   M3/slurm/slurm-overview
   M3/lustre/lustre-quickstart
   M3/GPUs-on-M3

.. _communities:

.. toctree::
   :maxdepth: 3
   :caption: Communities

   MX2
   Eiger
   Machine Learning
   Neuroimaging
   Cryo EM
   Bioinformatics
   DGX
   Data Collections
   XNAT

.. _faqs_generic:

.. toctree::
   :maxdepth: 3
   :caption: FAQs

   FAQ/faq
   field-of-research-and-socioeconomic-objectives-codes
   FAQ/misc
   FAQ/cvl-cloudstor