This documentation is under active development and may change over time as we refine it. Please email email@example.com if you require assistance or have suggestions to improve this documentation.
This section covers workflows to assist the Neuroimaging Community.
Human Connectome Project Data Set¶
MASSIVE hosts a copy of the Human Connectome Project (HCP) data, 900- and 1200-subject releases. This data is restricted to M3 users who have registered their email with HCP (i.e. hold an HCP account) AND have accepted the HCP terms and conditions associated with the datasets.
The data is located at:
The following outlines the process of getting access to this data:
Human Connectome Project Steps for Access¶
Create an Account¶
Follow the “Get Data” link to: http://www.humanconnectome.org/data/
Select “Log in to ConnectomeDB”
Create an account and make sure that your email address matches the email associated with your MASSIVE account
You will receive an email to validate your account
MASSIVE Steps for Access¶
Request Access to HCPdata¶
If you have completed the Human Connectome Project steps above you can request access with this link: HCPData.
We will verify your MASSIVE email against the HCP site and grant access.
Accessing the Data¶
The data is available on M3 via /scratch/hcp and /scratch/hcp1200. You can also create symbolic links to these in your home directory using:

ln -s /scratch/hcp ~/hcp
ln -s /scratch/hcp1200 ~/hcp1200
Note: The Connectome Workbench is available via the MASSIVE Desktop. Requests for updates to this or other software can be directed to firstname.lastname@example.org
Using SLURM to submit a simple FSL job¶
Example data at
FIRST is a simple tool in the FSL library to segment sub-cortical regions of a structural image.
Each image usually takes about 20 minutes to process.
The typical output is a set of image masks for all sub-cortical regions, e.g. the hippocampus, amygdala, etc.
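As a sketch of a single-subject run (the input and output names below are placeholders, not from the example data), FIRST can be invoked through FSL's run_first_all wrapper:

```shell
# Hypothetical single-subject FIRST run; INPUT/OUTPUT names are placeholders.
INPUT=subj01_T1.nii.gz    # a T1-weighted structural image
OUTPUT=subj01_first       # base name for the segmentation outputs
# On M3, load FSL into your environment first (exact module name may differ).
# 'echo' prints the command for inspection; remove it to actually run FIRST.
echo run_first_all -i "${INPUT}" -o "${OUTPUT}"
```

run_first_all segments all sub-cortical structures by default; individual structures can be selected with its options if you only need, say, the hippocampus.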
Data and scripts¶
6 T1-weighted structural images
- Three files in the script directory:
  - Id.txt: base-name list of the imaging files (see the basename function for further help)
  - First_task.txt: resource and environment details for one fsl-first job. NOTE: you need to change DIR to the correct path of the input data.
  - Submit_template: loop to submit all six jobs.
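The submission loop might look like the following sketch (file names follow the list above; the DIR value and the exact sbatch invocation are assumptions, not the verbatim contents of Submit_template):

```shell
#!/bin/bash
# Hypothetical sketch of Submit_template: submit one fsl-first job per
# base name listed in Id.txt. DIR is a placeholder for your input data path.
DIR=/projects/your_project/data

# Create a small Id.txt for demonstration only; on M3 it already exists.
printf 'subj01\nsubj02\n' > Id.txt

while read -r id; do
    # Remove 'echo' to really submit; First_task.txt carries the job's
    # resource and environment details.
    echo sbatch --job-name="first_${id}" First_task.txt "${DIR}/${id}"
done < Id.txt
```

Each iteration submits an independent job, so all six images are processed in parallel rather than one after another.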
To run the job, simply do
Use

squeue -u <your_massive_username>

to check the status of your jobs.
Output log files:
Log files should appear in the script folder, recording all the logs of each job.
Output data should start to appear in the data folder.