Open your terminal and type:
+ssh <username>@graham.computecanada.ca # Graham login node
+
SSH key pairs are very useful to avoid typing your password every time.
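A minimal way to set this up from your local machine (a sketch, assuming an OpenSSH client; the key type and file names are just the defaults):

```bash
# generate a key pair on your local machine (accept the defaults or set a passphrase)
ssh-keygen -t ed25519
# copy the public key to the cluster; you will type your password one last time
ssh-copy-id <username>@graham.computecanada.ca
# from now on, ssh should log you in without asking for your account password
ssh <username>@graham.computecanada.ca
```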
There are pre-installed modules that you need to load before you can use them. To see all available modules: `module avail`
To load a module (you can put this in your .bashrc if you need the module all the time): `module load <module_name>`
Example: Check if git is available and load it
+module avail git
+module load apps/git/2.13.0
+
+scp <filename> <username>@graham.computecanada.ca:<PATH/TO/FILE>
+scp <username>@graham.computecanada.ca:<PATH/TO/FILE> <LocalPath>
+
+rsync <LocalPath/filename> <username>@graham.computecanada.ca:<PATH/TO/FILE>
+rsync <username>@graham.computecanada.ca:<PATH/TO/FILE> <LocalPath>
+
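For larger folders (e.g. a whole BIDS dataset), a sketch with common rsync flags; the dataset name and target path below are hypothetical and should be adapted:

```bash
# -a preserve permissions/timestamps, -v verbose, -z compress, --progress show progress
rsync -avz --progress my_bids_dataset/ \
    <username>@graham.computecanada.ca:~/scratch/my_bids_dataset/
```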
Install DataLad on your cluster:
+module load git-annex python/3
+virtualenv ~/venv_datalad
+source ~/venv_datalad/bin/activate
+pip install datalad
+
See here for more details.
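To check that the install worked (a quick optional sanity check):

```bash
source ~/venv_datalad/bin/activate
datalad --version
git annex version
```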
+Here is an example of a simple bash script:
+#!/bin/bash
+#SBATCH --time=00:05:00
+#SBATCH --account=def-flepore
+echo 'Hello, world!'
+sleep 20
+
In the cluster terminal, submit the script with:
+sbatch <name of the file>
+
+Example:
+sbatch simple.sh
+Submitted batch job 65869853
+
+Use squeue or sq to list jobs
+sq
+
+JOBID USER ACCOUNT NAME ST TIME_LEFT NODES CPUS TRES_PER_N MIN_MEM NODELIST (REASON)
+65869853 mmaclean def-flepore_cpu simple.sh PD 5:00 1 1 N/A 256M (Priority)
+
Use email notification to learn when your job starts and ends by adding the following at the top of your script:
+#SBATCH --mail-user=michele.maclean@umontreal.ca
+#SBATCH --mail-type=BEGIN
+#SBATCH --mail-type=END
+#SBATCH --mail-type=FAIL
+#SBATCH --mail-type=REQUEUE
+#SBATCH --mail-type=ALL
+
+scancel <jobid>
+scancel 65869853
+
By default the output is placed in a file named "slurm-", suffixed with the job ID number and ".out", e.g. slurm-65869853.out, in the directory from which the job was submitted. Having the job ID as part of the file name is convenient for troubleshooting. Files will be written wherever you specified in your bash script.
Use the `$SCRATCH` disk to run your scripts, because `$SCRATCH` is much faster than `$HOME`.
Use `diskusage_report` to check how much disk space you are using.
Download the containers for fmriprep and mriqc here:
ReproNim containers: https://github.com/ReproNim/containers
+mkdir parallel_analysis
+
+cd parallel_analysis
+datalad install https://github.com/ReproNim/containers.git
+
+datalad get containers/images/bids/bids-fmriprep--21.0.1.sing
+
You might need to unlock the container to be able to use it:
+datalad unlock containers/images/bids/bids-fmriprep--21.0.1.sing
+
Have your FreeSurfer license ready.
Here is an example script:
+#!/bin/bash
+
#-------------------------------------------
#SBATCH -J fmriprep
#SBATCH --account=def-flepore
#SBATCH --time=15:00:00
#SBATCH -n 1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=8G
#SBATCH --mail-user=michele.maclean@umontreal.ca
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=END
#SBATCH --mail-type=FAIL
#SBATCH --mail-type=REQUEUE
#SBATCH --mail-type=ALL
# ------------------------------------------

# fail whenever something is fishy
# -e exit immediately
# -x to get verbose logfiles
# -u to fail when using undefined variables
# https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html
# note: this must come after the #SBATCH lines, because Slurm stops reading
# directives at the first executable command
set -e -x -u -o pipefail
+
+source ~/venv_datalad/bin/activate
+
+module load git-annex/8.20200810
+module load freesurfer/5.3.0
+module load singularity/3.8
+
+cd $HOME || exit
+
+singularity run \
+ --cleanenv \
+ -B $HOME/scratch:/scratch \
+ -B $HOME/projects/def-flepore/mmaclean:/mmaclean \
+ $HOME/projects/def-flepore/mmaclean/parallel_analysis/containers/images/bids/bids-fmriprep--21.0.1.sing \
+ /mmaclean/raw \
+ /mmaclean/fmriprep-output \
+ participant \
+ --participant-label CTL01 \
+ --work-dir /scratch/work-fmriprep \
+ --fs-license-file /mmaclean/license/freesurfer.txt \
+ --output-spaces MNI152NLin2009cAsym T1w \
+ --skip_bids_validation \
+ --notrack \
+ --stop-on-first-crash
+
+
+Extra reference:
Interesting slides & other stuff:
fMRIPrep is a BIDS app, so there is a standard way to use it.
+See BIDSapps page : +https://remi-gau.github.io/bids_workshop/bids_apps.html
Through Docker: everything you do with Docker happens in your terminal.
For the cluster, you will use Singularity, which is quite similar overall.
Containers are somewhat like virtual machines: you need to tell them how the folders on your machine map onto folders inside the container.
Careful: by default, inside a Docker container you are the root user (admin), so you have to change the permissions so that you keep write access to the data that ends up on your computer. That needs to be done before running the conversion/analyses, e.g. by passing:
+--user "$(id -u):$(id -g)"
+
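For example, a sketch of an fMRIPrep call through Docker with that flag; the local paths and the fMRIPrep version are assumptions to adapt:

```bash
docker run -ti --rm \
    --user "$(id -u):$(id -g)" \
    -v /path/to/bids:/data:ro \
    -v /path/to/output:/out \
    -v /path/to/freesurfer/license.txt:/opt/freesurfer/license.txt:ro \
    nipreps/fmriprep:21.0.2 \
    /data /out participant \
    --participant-label 01 \
    --fs-license-file /opt/freesurfer/license.txt
```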
+fMRIprep has extensive documentation : +https://fmriprep.org/en/stable/
+Important
When using fMRIPrep, use the working directory option: it saves "temporary" analytical steps, so that if it crashes or stops you will not need to re-run everything from scratch; it will restart from the point where it crashed/was killed/stopped. There are a lot of checkpoints from which it can restart.
--work-dir /tmp
If you use it, it saves all the intermediate steps: that's a lot of data, so you can run out of space.
E.g. when participant 1 has finished, delete all the tmp files for participant 1.
Unless you want to try different options and don't want to start from scratch each time, then you can keep it; but once you've finished testing options etc., delete it all (see the cleanup sketch below).
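A minimal cleanup sketch, assuming the working directory used in the cluster script above (adapt the path to whatever you passed to `--work-dir`):

```bash
# remove the fMRIPrep working directory once a participant has finished
work_dir="$HOME/scratch/work-fmriprep"
rm -rf "${work_dir:?}"/*
```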
+Note that fmriprep runs with freesurfer.
+You will not have the data from each step of preprocessing.
+It will give you a .tsv file with the confounds
Other fMRIPrep options that are interesting:
+[--stop-on-first-crash]
Think about using the BIDS validator on your dataset beforehand, even though fMRIPrep runs a BIDS validation step by default (you can ask it to skip that step, but this is not recommended).
You can (should) directly copy/paste the "methods" section from the fMRIPrep outputs: it writes down every single step, option, and version you used for preprocessing.
+Example of a script of fmriprep on container :
+https://github.com/cpp-lln-lab/CPP_brewery/blob/master/remi/containers/code/run_fmriprep.sh
What's amazing is that you can just run it and not worry about it. You can ask it to send you emails when the job starts/stops/crashes…
There is no graphical user interface: you need to get familiar with working from the terminal only, and get used to some specific commands.
+You can use it anywhere in the world.
We should look at our own cluster specifications but globally it should be similar.
From your terminal window, connect with `ssh name@cluster`; it asks for your password (and you will need SSH keys). You are now on your cluster.
+For the CECI see the doc:
+https://support.ceci-hpc.be/doc/_contents/QuickStart/ConnectingToTheClusters/FromAUnixComputer.html
You need to load the things you need to use (each time). Depending on your cluster, modules will already be available, but you need to load them to use them.
Look at the available modules and load the ones you need: `module avail` lists the available modules; `module load` loads the module you want.
+Transferring files from your computer to your cluster
Secure copy: `scp`. To copy individual files/directories from your computer to the cluster, use `scp <filename>` followed by where you want to copy it (see the documentation above). This is for small files.
DataLad: the easiest way, because there is version control. You can pull the data from there to the cluster. You need to have DataLad on the cluster or install it (we should check that). Here is how to install it on the CECI cluster: https://github.com/cpp-lln-lab/CPP_HPC
+Submitting jobs
+Submitting jobs with slurm on the CECI Cluster :
+https://support.ceci-hpc.be/doc/_contents/QuickStart/SubmittingJobs/SlurmTutorial.html
If you want to write a script, you can use a command that opens an editor within the terminal.
Example: `nano simplejob.sh` (then submit it with `sbatch simplejob.sh`).
Example below from Michele’s HackMD
+
#!/bin/bash                      <- that's a general line you have to put
#SBATCH --time=00:05:00          <- specify the time
#SBATCH --account=def-flepore    <- specify the account
echo 'Hello, world!'             <- and your script…
sleep 20
+
+
`sq` gives you the information on the jobs that you have asked to run. But instead of that, you can just ask for emails, because `sq` is computationally greedy.
For this you put the following lines at the very top of your script
+ #SBATCH --mail-user=michele.maclean@umontreal.ca
+ #SBATCH --mail-type=BEGIN
+ #SBATCH --mail-type=END
+ #SBATCH --mail-type=FAIL
+ #SBATCH --mail-type=REQUEUE
+ #SBATCH --mail-type=ALL
+
+How to cancel a job :
+scancel <jobid>
By default, the outputs go to files called `slurm-<jobID>.out`.
**Scratch directory**
After a certain time, all the files there are automatically deleted. This gives you more space and means you don't need to worry about deleting your files yourself. You will get an email before that happens.
The scratch directory will be on the cluster, maybe in the team directory or yours, depending on the cluster.
For fMRIPrep you need to download the Singularity image of fMRIPrep: one that works is the ReproNim container. It has DataLad containers for fMRIPrep/MRIQC etc.
+https://github.com/ReproNim/containers
You have a container with everything that's needed to run fMRIPrep/MRIQC on your cluster.
You create a directory with mkdir (make directory), and install the containers from ReproNim. Online they have everything that you need, and you pull the images you need. You need DataLad installed on the cluster for that.
You can also use an image of fMRIPrep that you already have on your computer (if you have done that previously, but this is not recommended if you are beginning this process).
+datalad install https://github.com/ReproNim/containers.git
+
+datalad get containers/images/bids/bids-fmriprep--21.0.1.sing
+
+datalad unlock containers/images/bids/bids-fmriprep--21.0.1.sing
+
Depending on the cluster, "unlock" may or may not be needed.
For fMRIPrep & MRIQC you need a FreeSurfer license: it is user specific (and free). It is a .txt file.
+https://surfer.nmr.mgh.harvard.edu/registration.html
You copy it from your computer to the cluster with secure copy (the scp command).
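For example (the local path, cluster name and destination folder are hypothetical; make sure the destination folder exists first):

```bash
scp ~/Downloads/license.txt \
    lemaitre3:/home/ucl/irsp/<your_login>/license/
```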
+Writing a script
+Idea : have a basic script on github that we modify
-J: name of the job you want to run
--account: account you want to use
--time: time you request
-n: number of tasks
--cpus-per-task: how many CPUs you want to allocate per task
In the script, load the modules you need, so that you don't forget and have to load them manually beforehand. You need to load each module each time.
`singularity run` is the main line of the script.
`\` tells bash that the command continues on the next line.
The lines of code for fmriprep are the same, except that you use singularity.
What's important:
You need to mount the files you want to analyze into the container. `-B` mounts what's in your user space into the container; if you don't do that, it won't know where the files are, because it would be "stuck" inside the image.
Mount the scratch directory and the folders in your user space, and say which image of fMRIPrep you will use (in the script above, the user folder is mounted as `/mmaclean/`).
Launch it: `sbatch fmriprep.sh`
Running MRIQC: the same, but the image you need will be the MRIQC one.
The options are different but the lines are similar; a sketch is given below.
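A sketch of what that could look like, modelled on the fMRIPrep script above and the MRIQC call further down; the MRIQC image name/version under the ReproNim containers and the output folder are assumptions to check (e.g. with `ls containers/images/bids/`):

```bash
#!/bin/bash
#SBATCH -J mriqc
#SBATCH --account=def-flepore
#SBATCH --time=10:00:00
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=8G

module load singularity/3.8

# get (and if needed unlock) the MRIQC image first, as was done for fmriprep
singularity run --cleanenv \
    -B $HOME/projects/def-flepore/mmaclean:/mmaclean \
    $HOME/projects/def-flepore/mmaclean/parallel_analysis/containers/images/bids/bids-mriqc--0.16.1.sing \
    /mmaclean/raw \
    /mmaclean/mriqc-output \
    participant \
    --participant_label CTL01
```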
+CECI Cluster
+Main webpage : https://www.ceci-hpc.be/
+Documentation : +https://support.ceci-hpc.be/doc/
+Make a script: +https://www.ceci-hpc.be/scriptgen.html
+The cluster we will use is most likely Lemaître3
To connect to the cluster:
Here is the CECI documentation for creating an account and accessing the cluster. But please do check the folder where this Google Doc is saved for quick tips on creating an account.
Access the cluster: things changed in August 2020; if you had an account before this date you need to follow again all the steps explained here. It is very important that you follow all the steps in this link (e.g. get a private CECI key, configure SSH, …). They also provide troubleshooting tips.
Avoiding having to provide your password/passphrase every time you access the cluster:
Doing `ssh clustername` does not work in a vanilla way as before; you may need to enter the passphrase every time, or try this:
ssh-add -k ~/.ssh/id_rsa.ceci
ssh-add -l
ssh clustername
+
`clustername` should be replaced by the cluster you want to join, e.g. lemaitre3
Finally, if everything is working, the next time one logs in, the following should be fine:
+ssh lemaitre3
+
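If you prefer, the key and gateway can also go in your SSH config so that `ssh lemaitre3` just works. A sketch; the exact host names, login and key file should be taken from the CECI documentation / connection wizard (the values below are assumptions):

```
Host lemaitre3
    HostName lemaitre3.cism.ucl.ac.be
    User <your_ceci_login>
    IdentityFile ~/.ssh/id_rsa.ceci
    ProxyJump <your_ceci_login>@gwceci.cism.ucl.ac.be
```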
Issues connecting to the cluster (even after following steps 1 and 2):
Solution: CISM changed the server gwceci.cism.ucl.ac.be, so the old entry in your known_hosts file no longer matches.
+Go to: /Users/your_local_username/.ssh/known_hosts
Warning
On your local machine, go to the folder where you keep the .ssh folder, and in that folder open known_hosts. You can type the following in the terminal:
+atom ~/.ssh/known_hosts
+
Remove or comment out line 4 as shown below (the line number can vary depending on your known_hosts file).
Tip #1: Atom was used here, as evident from above; it is easy to edit and save with.
Tip #2: while using OpenConnect VPN from home, set the preferences to IPv4 (and not IPv6).
Then repeat step 2. You should be able to log in, and if you can, that's all! This will add another line to the known_hosts file, starting with gwceci.cism.ucl.ac.be, as can be seen above on line 6.
cd /home/ucl/irsp/
cd $HOME
cd $GLOBALSCRATCH
+
+pwd
+
+scp -r mydir/ clustername:cluster/dir/
+
+scp -r clustername:cluster/dir/ mydir/
+
Note: the commands above need to be entered in a terminal that is logged into the local machine, not in a terminal logged into the cluster.
Example of copying a file:
scp sub-x001_ses-001_T1w.nii \
    lemaitre3:/home/ucl/cosy/battal/RhythmCateg/RhythmCateg_Anat
+
First, one needs to create scripts, which should follow this format.
Then follow the above examples to carry your input folder into the cluster folder.
+Copy from your local to the cluster:
+scp -r MovieBlind_Anatomicals lemaitre3:/home/ucl/irsp/morezk/MovieBlind
+
+Copy from the cluster to the local
scp -r \
    lemaitre3:/home/ucl/irsp/morezk/MovieBlind/MovieBlind_Anatomicals/fs_output \
    ~/Desktop/
+
+Copy scripts
+scp -r run_reconall.slurm lemaitre3:/home/ucl/irsp/morezk/MovieBlind
+scp -r run_reconall_batch.slurm lemaitre3:/home/ucl/irsp/morezk/MovieBlind
+
+Lastly, when running scripts use sbatch instead of sh
+This is wrong:
+sh ./
+
+Try:
+sbatch submit.slurm
+
For more general information please see this link.
Some quick and helpful guides: Slurm workload manager
To check whether your job is R (RUNNING) or PD (PENDING), type:
+squeue
+
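To show only your own jobs, for example:

```bash
squeue -u $USER
```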
If one uses this website to generate their bash script, they will also get a notification email when the job is started and done, including the job ID.
How to read the slurm<jobID>.out files?
The .out files are error/log files.
cat slurm-<jobID>.out
+
To create the Singularity image:
singularity pull --name ~/sing_images/mriqc_0.15.2.sif \
    docker://poldracklab/mriqc:0.15.2
+
+To run singularity:
+ singularity run --cleanenv \
+ --bind ~/data/V5_high-res_pilot-1/raw:/data \
+ --bind ~/data/V5_high-res_pilot-1/derivatives/mriqc:/out \
+ ~/sing_images/mriqc_0.15.2.sif \
+ /data /out participant \
+ --participant_label pilot001
+
Lecture slides on Singularity: Packaging software in portable containers with Singularity
Recorded event (lecture): on Teams
+Command to navigate the folders and open text files in a terminal “gui”
+mc
+
There is cloud computing (what Remi was suggesting, virtual machines) and cluster computing. UCL only supports cluster computing for the moment.
To install new software: the cluster support team can install it for you.
https://nl.mathworks.com/products/compiler/matlab-runtime.html
https://en.wikibooks.org/wiki/SPM/Standalone
List of fMRIPrep versions available here: https://hub.docker.com/r/nipreps/fmriprep/tags/
+Latest (long term support) LTS version: 20.2.7
+Requirements
DataLad for version control and installing data
+VERSION=21.0.2
+singularity build ~/my_images/fmriprep-${VERSION}.simg \
+ docker://nipreps/fmriprep:${VERSION}
+
+datalad create --force -c yoda ~/my_analysis
+cd ~/my_analysis
+tree
+
+Folder structure
+├── CHANGELOG.md
+├── code
+│ └── README.md
+└── README.md
+
Adding a `derivatives` folder for output and cloning a raw BIDS dataset into the input folder. Using the MoAE SPM BIDS dataset from GIN: `git@gin.g-node.org:/SPM_datasets/spm_moae_raw.git`
mkdir derivatives
+mkdir inputs
+datalad install -d . --get-data -s git@gin.g-node.org:/SPM_datasets/spm_moae_raw.git inputs/raw
+tree
+
+Folder structure
+├── CHANGELOG.md
+├── code
+│ └── README.md
+├── derivatives <-- this is where the output data will go
+├── inputs
+│ └── raw <-- installed as a subdataset
+│ ├── CHANGES
+│ ├── dataset_description.json
+│ ├── participants.tsv
+│ ├── README
+│ ├── sub-01
+│ │ ├── anat
+│ │ │ └── sub-01_T1w.nii
+│ │ └── func
+│ │ ├── sub-01_task-auditory_bold.nii
+│ │ └── sub-01_task-auditory_events.tsv
+│ └── task-auditory_bold.json
+└── README.md
+
+Copy the freesurfer license into the code folder:
+cp ~/Dropbox/Software/Freesurfer/License/license.txt \
+ ~/my_analysis/code
+
Create a temporary dir to keep intermediate results: useful if fmriprep crashes, as it won't start from zero.
mkdir -p tmp/wdir
+
Add a `bids_filter_file.json` config file to help you define what fmriprep should consider as `bold` or as `T1w`.
The one below corresponds to the fMRIPrep default (also available inside this repo).
+See this part of the FAQ for more info:
+https://fmriprep.org/en/21.0.2/faq.html#how-do-I-select-only-certain-files-to-be-input-to-fMRIPrep
+{
+ "fmap": {
+ "datatype": "fmap"
+ },
+ "bold": {
+ "datatype": "func",
+ "suffix": "bold"
+ },
+ "sbref": {
+ "datatype": "func",
+ "suffix": "sbref"
+ },
+ "flair": {
+ "datatype": "anat",
+ "suffix": "FLAIR"
+ },
+ "t2w": {
+ "datatype": "anat",
+ "suffix": "T2w"
+ },
+ "t1w": {
+ "datatype": "anat",
+ "suffix": "T1w"
+ },
+ "roi": {
+ "datatype": "anat",
+ "suffix": "roi"
+ }
+}
+
+
Create a `singularity_run_fmriprep.sh` script in the code folder with the following content:
#!/bin/bash
+
+# to be called from the root of the YODA dataset
+
+# subject label passed as argument
+participant_label=$1
+
+# binds the root folder of the YODA dataset on your machine
+# onto the /my_analysis inside the container
+
# runs the container: ~/my_images/fmriprep-${VERSION}.simg
+
+# tweak the parameters below to your convenience
+
+# see here for more info: https://fmriprep.org/en/stable/usage.html#usage-notes
+
+VERSION=21.0.2
+nb_dummy_scans=0
+task="auditory"
+
+# https://fmriprep.org/en/21.0.2/spaces.html
+output_spaces="MNI152NLin6Asym T1w"
+
+
+singularity run --cleanenv \
+ --bind "$(pwd)":/my_analysis \
+ ~/my_images/fmriprep-${VERSION}.simg \
+ /my_analysis/inputs/raw \
+ /my_analysis/derivatives \
+ participant \
+ --participant-label ${participant_label} \
+ --fs-license-file /my_analysis/code/license.txt \
+ -w /my_analysis/tmp/wdir \
+ --dummy-scans ${nb_dummy_scans} \
+ --task-id ${task} \
+ --bids-filter-file /my_analysis/code/bids_filter_file.json \
+ --output-spaces ${output_spaces}
+
+Folder structure
+├── CHANGELOG.md
+├── code
+│ ├── bids_filter_file.json
+│ ├── license.txt
+│ ├── README.md
+│ └── singularity_run_fmriprep.sh
+├── derivatives
+├── inputs
+│ └── raw
+│ ├── CHANGES
+│ ├── dataset_description.json
+│ ├── participants.tsv
+│ ├── README
+│ ├── sub-01
+│ │ ├── anat
+│ │ │ └── sub-01_T1w.nii
+│ │ └── func
+│ │ ├── sub-01_task-auditory_bold.nii
+│ │ └── sub-01_task-auditory_events.tsv
+│ └── task-auditory_bold.json
+└── README.md
+
Pass the participant label as an argument:
+. code/singularity_run_fmriprep.sh 01
+
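If you want the new outputs recorded in the dataset history after the run, a minimal sketch (the commit message is up to you):

```bash
datalad save -m "fMRIPrep 21.0.2 outputs for sub-01" derivatives
```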
+<!--
+#!/bin/bash
+#-------------------------------------------
+#SBATCH -J fmriprep
+#SBATCH --account=def-flepore
+#SBATCH --time=3:00:00
+#SBATCH -n 1
+#SBATCH --cpus-per-task=4
+#SBATCH --mem-per-cpu=8G
+#SBATCH --mail-user=michele.maclean@umontreal.ca
+#SBATCH --mail-type=BEGIN
+#SBATCH --mail-type=END
+#SBATCH --mail-type=FAIL
+#SBATCH --mail-type=REQUEUE
+#SBATCH --mail-type=ALL
+# ------------------------------------------
+
+source ~/venv_datalad/bin/activate
+module load git-annex/8.20200810
+module load freesurfer/5.3.0
+module load singularity/3.8
+
+cd
+# run the fmriprep job with singularity
+singularity run --cleanenv /home/mmaclean/projects/def-flepore/mmaclean/parallel_analysis/containers/images/bids/bids-fmriprep--21.0.1.sing /home/mmaclean/projects/def-flepore/mmaclean/CVI-raw /home/mmaclean/projects/def-flepore/mmaclean/preprocessing participant --participant-label CTL17 --fs-license-file /home/mmaclean/projects/def-flepore/mmaclean/license/freesurfer.txt --skip_bids_validation --notrack
+``` -->
+
+
+## Datalad + fmriprep
+
+Example on how to run it locally
+
+### Folder structure
+
derivatives
├── env
│   ├── bin
│   ├── lib
│   └── share
├── derivatives <-- this is where the data will go
│   ├── fmriprep
│   └── freesurfer
└── raw <-- installed as a subdataset
    ├── code
    └── sub-01
+#### fMRIprep
+
+Install datalad & others in a virtual environment
+
+```bash
+virtualenv -p python3.8 env
+source env/bin/activate
+pip install datalad datalad-neuroimaging datalad-container
+
Get fmriprep (make sure singularity is installed)
+datalad containers-add fmriprep --url docker://nipreps/fmriprep:20.2.0
+
+Run fmriprep
+input_dir=`pwd`/raw
+output_dir=`pwd`/derivatives
+participant_label=01
+
+# the following will depend on where you keep your freesurfer license
+freesurfer_licence=~/Dropbox/Software/Freesurfer/License/license_1.txt
+
+datalad containers-run -m "fmriprep 01" \
+ --container-name fmriprep \
+ --input ${input_dir} \
+ --output ${output_dir} \
+ fmriprep \
+ $input_dir \
+ ${output_dir} \
+ participant \
+ --participant-label ${participant_label} \
+ -w /tmp --fs-license-file ${freesurfer_licence} \
+ --output-spaces T1w:res-native MNI152NLin2009cAsym:res-native
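Because `datalad containers-run` records the exact command in the dataset history, you can later inspect it or re-execute it; a quick sketch:

```bash
# show the run record that was just created
git log -n 1
# re-execute the recorded command (e.g. after updating the inputs)
datalad rerun HEAD
```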