Add dianne tutorials #734

Merged: 19 commits, Feb 12, 2024

Changes from 5 commits
9 changes: 8 additions & 1 deletion LICENSE
@@ -1,4 +1,4 @@
Copyright [2014-2019] [Heudiconv developers]
Copyright [2014-2023] [HeuDiConv developers]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -11,3 +11,10 @@ Copyright [2014-2019] [Heudiconv developers]
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


Some parts of the codebase/documentation are borrowed from other sources:

- HeuDiConv tutorial from https://bitbucket.org/dpat/neuroimaging_core_docs/src

Copyright 2023 Dianne Patterson
14 changes: 14 additions & 0 deletions README.rst
@@ -115,3 +115,17 @@ Docker image preparation being found in ``.github/workflows/release.yml``.
---------------------

- https://github.com/courtois-neuromod/ds_prep/blob/main/mri/convert/heuristics_unf.py


Support
-------

All bugs, concerns and enhancement requests for this software can be submitted here:
https://github.com/nipy/heudiconv/issues.

If you have a problem or would like to ask a question about how to use ``heudiconv``,
please submit a question to `NeuroStars.org <http://neurostars.org/tags/heudiconv>`_ with a ``heudiconv`` tag.
NeuroStars.org is a platform similar to StackOverflow but dedicated to neuroinformatics.

All previous ``heudiconv`` questions are available here:
http://neurostars.org/tags/heudiconv/
all good, again independent of the goal of the PR

12 changes: 12 additions & 0 deletions docs/commandline.rst
@@ -0,0 +1,12 @@
=============
CLI Reference
=============

``heudiconv`` processes DICOM files and converts them into user-defined
output paths.

.. argparse::
:ref: heudiconv.cli.run.get_parser
:prog: heudiconv
:nodefault:
:nodefaultconst:
312 changes: 312 additions & 0 deletions docs/custom-heuristic.rst

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions docs/heuristics.rst
@@ -1,6 +1,6 @@
=========
Heuristic
=========
========================
Heuristic File Reference
========================

The heuristic file controls how information about the DICOMs is used to convert
to a file system layout (e.g., BIDS). ``heudiconv`` includes some built-in
5 changes: 3 additions & 2 deletions docs/index.rst
Expand Up @@ -14,7 +14,8 @@ Contents

installation
changes
usage
heuristics
tutorials
heuristics
commandline
api

125 changes: 125 additions & 0 deletions docs/quickstart.rst
@@ -0,0 +1,125 @@
Quickstart
==========

HeuDiConv Hello World: using ``heuristic.py``

.. TODO convert to a datalad dataset
.. TODO ``datalad install https://osf.io/mqgzh/``
.. TODO delete any sequences of no interest prior to push; let's make the
   example ds only contain what is needed for these tutorials
.. TODO create a docker/podman section explaining how to use containers
   in lieu of ``heudiconv``; change the tutorials to ``heudiconv``, not
   container.
.. TODO convert bash script to docs

This section demonstrates how to use ``heudiconv`` with ``heuristic.py`` to convert DICOMs into the BIDS data structure.

* Download and unzip `sub-219_dicom.zip <https://osf.io/mqgzh/>`_. You will see a directory called MRIS.
* Under the MRIS directory is the *dicom* subdirectory. The session *itbs* is nested under the subject number *219*, and each DICOM sequence folder is nested under the session. You can delete sequence folders that are of no interest::

    dicom
    └── 219
        └── itbs
            ├── Bzero_verify_PA_17
            ├── DTI_30_DIRs_AP_15
            ├── Localizers_1
            ├── MoCoSeries_19
            ├── MoCoSeries_31
            ├── Post_TMS_restingstate_30
            ├── T1_mprage_1mm_13
            ├── field_mapping_20
            ├── field_mapping_21
            └── restingstate_18


* Pull the HeuDiConv Docker container to your machine::

    docker pull nipy/heudiconv

* From a Bash shell (no, zsh will not do), navigate to the MRIS directory and run the ``hdc_run.sh`` script for subject *219*, session *itbs*, like this::

    #!/bin/bash

    : <<COMMENTBLOCK
    This code calls the docker heudiconv tool to convert DICOMs into the BIDS data structure.
    It requires that you are in the parent directory of both the Dicom and Nifti directories
    AND that your Nifti directory contains a subdirectory called code with the conversion routine, e.g., heuristic.py, in it.
    See https://neuroimaging-core-docs.readthedocs.io/en/latest/pages/heudiconv.html.
    See also https://heudiconv.readthedocs.io/en/latest/
    COMMENTBLOCK


    # Exit if the number of arguments is less than 3
    if [ $# -lt 3 ]
    then
        echo "======================================================"
        echo "Three arguments are required:"
        echo "argument 1: name of conversion file in the Nifti/code directory, e.g., heuristic.py"
        echo "argument 2: name of subject dicom folder to convert"
        echo "argument 3: optional name of the session"
        echo "e.g., $0 heuristic.py 219 itbs"
        echo "output will be a BIDS directory under the Nifti folder"
        echo "This assumes you are running docker"
        echo "and have downloaded heudiconv: docker pull nipy/heudiconv"
        echo "It also assumes that you are running from the parent directory of both dicom and Nifti"
        echo "If you pass a session argument, your DICOMs are assumed to be nested under subject and then session"
        echo "Finally, note that the dicoms are assumed to be *.dcm files"
        echo "======================================================"
        exit 1
    fi

    # Define the three variables
    converter=${1}
    subject=${2}
    session=${3}

    echo "Nifti/code/${converter} will be used to convert subject ${subject} and session ${session} under dicom"
    echo "to BIDS output under Nifti/sub-${subject} and session ${session}"

    # This docker command assumes you are in the bound (-v) base directory, e.g., the unzipped MRIS directory (PWD).
    # dicom files are under dicom in a directory labeled with the subject number and session number (e.g., 219/itbs)
    # output (-o) will be placed in the directory labeled Nifti
    # The conversion file is in Nifti/code
    # dcm2niix is the engine that does the conversion
    # Note: {subject} and {session} in the -d template are heudiconv placeholders, not shell variables
    # --minmeta guarantees that meta-information in the dcms does not get inserted into the JSON sidecar.
    # This is good because the information is not needed but can overflow the JSON file, causing some BIDS apps to crash.

    docker run --rm -it -v ${PWD}:/base nipy/heudiconv:latest -d /base/dicom/{subject}/{session}/*/*.dcm -o /base/Nifti/ -f /base/Nifti/code/${converter} -s ${subject} -ss ${session} -c dcm2niix -b --minmeta --overwrite


.. TODO rm this command (note the args tho)
   ./hdc_run.sh heuristic1.py 219 itbs

* This should complete the conversion. After running, the *Nifti* directory will contain a BIDS-compliant subject directory::

    └── sub-219
        └── ses-itbs
            ├── anat
            ├── dwi
            ├── fmap
            └── func

* The following required BIDS text files are also created in the Nifti directory. Details for filling in these skeleton text files can be found under `tabular files <https://bids-specification.readthedocs.io/en/stable/02-common-principles.html#tabular-files>`_ in the BIDS specification::

    CHANGES
    README
    dataset_description.json
    participants.json
    participants.tsv
    task-rest_bold.json
* Next, visit the `bids validator <https://bids-standard.github.io/bids-validator/>`_.
* Click ``Choose File`` and then select the *Nifti* directory. There should be no errors (though there are a couple of warnings).

  .. Note:: Your files are not uploaded to the BIDS validator, so there are no privacy concerns!

* Look at the directory structure and files that were generated.
* When you are ready, remove everything that was just created::

    rm -rf Nifti/sub-* Nifti/.heudiconv Nifti/code/__pycache__ Nifti/*.json Nifti/*.tsv Nifti/README Nifti/CHANGES

* Now you know what the results should look like.
* In the following sections, you will build *heuristic.py* yourself so you can test different options and understand how to work with your own data.
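To preview what those later sections build, here is a minimal sketch in the style of a heudiconv heuristic file (the ``create_key``/``infotodict`` convention). The protocol-name matches below are assumptions based on the tutorial dataset's sequence folders; adapt them to your own series descriptions.

```python
# Minimal heuristic sketch following heudiconv's heuristic-file convention.
# The "mprage"/"restingstate" matches are assumptions from the tutorial data.

def create_key(template, outtype=("nii.gz",), annotation_classes=None):
    """Return a heudiconv key: (output template, output types, annotations)."""
    if not template:
        raise ValueError("Template must be a valid format string")
    return template, outtype, annotation_classes

def infotodict(seqinfo):
    """Map each DICOM series to the BIDS template it should be converted to."""
    t1w = create_key("sub-{subject}/{session}/anat/sub-{subject}_{session}_T1w")
    rest = create_key("sub-{subject}/{session}/func/sub-{subject}_{session}_task-rest_bold")
    info = {t1w: [], rest: []}
    for s in seqinfo:
        name = s.protocol_name.lower()
        if "mprage" in name:          # e.g. T1_mprage_1mm_13
            info[t1w].append(s.series_id)
        elif "restingstate" in name:  # e.g. restingstate_18
            info[rest].append(s.series_id)
    return info
```

Series that match nothing are simply skipped, which is how unwanted sequences (localizers, MoCo series) stay out of the BIDS output.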



128 changes: 128 additions & 0 deletions docs/reproin.rst
Original file line number Diff line number Diff line change
@@ -0,0 +1,128 @@
================
Reproin
================

If you don't want to modify a Python file as you did for *heuristic.py*, an alternative is to name your image sequences at the scanner using the *reproin* naming convention. Take some time to get the scanner protocol right, because that is the critical job for running *reproin*. Then a single Docker command converts your DICOMs to the BIDS data structure. There are more details about *reproin* in the Links section.

.. TODO new link ref:`Links <heudiconv_links>` section above.

* You should already have Docker installed and have downloaded HeuDiConv as described in Lesson 1.
* Download and unzip the phantom dataset: `reproin_dicom.zip <https://osf.io/4jwk5/>`_ generated here at the University of Arizona on our Siemens Skyra 3T with Syngo MR VE11c software on 2018_02_08.
* You should see a new directory *REPROIN*. This is a simple reproin-compliant dataset without sessions. Derived dwi images (ADC, FA etc.) that the scanner produced were removed.
* Change directory to *REPROIN*. The directory structure should look like this::

    REPROIN
    ├── data
    └── dicom
        └── 001
            └── Patterson_Coben\ -\ 1
                ├── Localizers_4
                ├── anatT1w_acqMPRAGE_6
                ├── dwi_dirAP_9
                ├── fmap_acq4mm_7
                ├── fmap_acq4mm_8
                ├── fmap_dirPA_15
                └── func_taskrest_16

* From the *REPROIN* directory, run this Docker command::

    docker run --rm -it -v ${PWD}:/base nipy/heudiconv:latest -f reproin --bids -o /base/data --files /base/dicom/001 --minmeta

* ``--rm`` means Docker should clean up after itself.
* ``-it`` means Docker should run interactively.
* ``-v ${PWD}:/base`` binds your current directory to ``/base`` inside the container. Alternatively, you could provide an **absolute path** to the *REPROIN* directory.
* ``nipy/heudiconv:latest`` identifies the Docker image to run (the latest version of heudiconv).
* ``-f reproin`` specifies the heuristic to use.
* ``-o /base/data/`` specifies the output directory *data*. If the output directory does not exist, it will be created.
* ``--files /base/dicom/001`` identifies the path to the DICOM files.
* ``--minmeta`` ensures that only the minimum necessary metadata gets added to the JSON sidecar. If there is a lot of meta-information in the DICOM header, the JSON file will not get swamped by it; fmriprep and mriqc are sensitive to this information overload and may crash, so ``--minmeta`` provides a layer of protection against such corruption.

That's it. Below we'll unpack what happened.

Output Directory Structure
===============================

*Reproin* produces a hierarchy of BIDS directories like this::

    data
    └── Patterson
        └── Coben
            ├── sourcedata
            │   └── sub-001
            │       ├── anat
            │       ├── dwi
            │       ├── fmap
            │       └── func
            └── sub-001
                ├── anat
                ├── dwi
                ├── fmap
                └── func


* The dataset is nested under two levels in the output directory: *Region* (Patterson) and *Exam* (Coben). *Tree* is reserved for other purposes at the UA research scanner.
* Although the Program *Patient* is not visible in the output hierarchy, it is important. If you have separate sessions, then each session should have its own Program name.
* **sourcedata** contains tarred gzipped (tgz) sets of DICOM images corresponding to each NIfTI image.
* **sub-001** contains the BIDS dataset.
* A hidden directory is also generated: *REPROIN/data/Patterson/Coben/.heudiconv*.

At the Scanner
====================

Here is this phantom dataset displayed in the scanner dot cockpit. The directory structure is defined at the top: *Patterson >> Coben >> Patient*
Says "Here" like there would be an image like those https://github.com/ReproNim/reproin/blob/master/docs/walkthrough-1.md#new-program
anything missing?


* *Region* = *Patterson*
* *Exam* = *Coben*
* *Program* = *Patient*



Reproin Scanner File Names
==============================

@yarikoptic This section needs review from someone who is more familiar with bids/reproin/mri

* For both BIDS and *reproin*, names are composed of an ordered series of key-value pairs. Each key and its value are joined with a dash ``-`` (e.g., ``acq-MPRAGE``, ``dir-AP``). These key-value pairs are joined to other key-value pairs with underscores ``_``. The exception is the modality label, which is discussed more below.
* *Reproin* scanner sequence names are simplified relative to the final BIDS output and generally conform to this scheme (but consult the `reference <https://github.com/nipy/heudiconv/blob/master/heudiconv/heuristics/reproin.py>`_ for additional options): ``sequence type-modality label`` _ ``session-session name`` _ ``task-task name`` _ ``acquisition-acquisition detail`` _ ``run-run number`` _ ``direction-direction label``::

    func-bold_ses-pre_task-faces_acq-1mm_run-01_dir-AP

* Each sequence name begins with the seqtype key. The seqtype key is the modality and corresponds to the name of the BIDS directory where the sequence belongs, e.g., ``anat``, ``dwi``, ``fmap`` or ``func``.
* The seqtype key is optionally followed by a dash ``-`` and a modality label value (e.g., ``anat-scout`` or ``anat-T2W``). Often, the modality label is not needed because there is a predictable default for most seqtypes:
* For **anat** the default modality is ``T1W``. Thus a sequence named ``anat`` will have the same output BIDS files as a sequence named ``anat-T1w``: *sub-001_T1w.nii.gz*.
* For **fmap** the default modality is ``epi``. Thus ``fmap_dir-PA`` will have the same output as ``fmap-epi_dir-PA``: *sub-001_dir-PA_epi.nii.gz*.
* For **func** the default modality is ``bold``. Thus, ``func-bold_task-rest`` will have the same output as ``func_task-rest``: *sub-001_task-rest_bold.nii.gz*.
* *Reproin* gets the subject number from the DICOM metadata.
* If you have multiple sessions, the session name does not need to be included in every sequence name in the program (i.e., Program= *Patient* level mentioned above). Instead, the session can be added to a single sequence name, usually the scout (localizer) sequence e.g. ``anat-scout_ses-pre``, and *reproin* will propagate the session information to the other sequence names in the *Program*. Interestingly, *reproin* does not add the localizer to your BIDS output.
* When our scanner exports the DICOM sequences, all dashes are removed. But don't worry, *reproin* handles this just fine.
* In the UA phantom reproin data, the subject was named ``01``. Horos reports the subject number as ``01`` but exports the DICOMS into a directory ``001``. If the data are copied to an external drive at the scanner, then the subject number is reported as ``001_001`` and the images are ``*.IMA`` instead of ``*.dcm``. *Reproin* does not care, it handles all of this gracefully. Your output tree (excluding *sourcedata* and *.heudiconv*) should look like this::

    .
    |-- CHANGES
    |-- README
    |-- dataset_description.json
    |-- participants.tsv
    |-- sub-001
    |   |-- anat
    |   |   |-- sub-001_acq-MPRAGE_T1w.json
    |   |   `-- sub-001_acq-MPRAGE_T1w.nii.gz
    |   |-- dwi
    |   |   |-- sub-001_dir-AP_dwi.bval
    |   |   |-- sub-001_dir-AP_dwi.bvec
    |   |   |-- sub-001_dir-AP_dwi.json
    |   |   `-- sub-001_dir-AP_dwi.nii.gz
    |   |-- fmap
    |   |   |-- sub-001_acq-4mm_magnitude1.json
    |   |   |-- sub-001_acq-4mm_magnitude1.nii.gz
    |   |   |-- sub-001_acq-4mm_magnitude2.json
    |   |   |-- sub-001_acq-4mm_magnitude2.nii.gz
    |   |   |-- sub-001_acq-4mm_phasediff.json
    |   |   |-- sub-001_acq-4mm_phasediff.nii.gz
    |   |   |-- sub-001_dir-PA_epi.json
    |   |   `-- sub-001_dir-PA_epi.nii.gz
    |   |-- func
    |   |   |-- sub-001_task-rest_bold.json
    |   |   |-- sub-001_task-rest_bold.nii.gz
    |   |   `-- sub-001_task-rest_events.tsv
    |   `-- sub-001_scans.tsv
    `-- task-rest_bold.json

* Note that despite all the different subject names (e.g., ``01``, ``001`` and ``001_001``), the subject is labeled ``sub-001``.
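The naming scheme described above can be sketched in a few lines of Python. This is an illustration of the convention, not heudiconv's actual *reproin* parser; the entity order and default-modality table come from the bullets above.

```python
# Illustrative sketch of reproin-style sequence names decomposing into BIDS
# entities (not heudiconv's actual implementation).

DEFAULT_MODALITY = {"anat": "T1w", "fmap": "epi", "func": "bold"}

def parse_reproin_name(seqname):
    """Split a reproin sequence name into (seqtype, modality, entities)."""
    head, *rest = seqname.split("_")
    # The first field is "seqtype" or "seqtype-modality"
    if "-" in head:
        seqtype, modality = head.split("-", 1)
    else:
        seqtype, modality = head, DEFAULT_MODALITY.get(head)
    # The remaining fields are key-value pairs joined by a dash
    entities = dict(field.split("-", 1) for field in rest)
    return seqtype, modality, entities

def bids_basename(subject, seqname):
    """Compose an output file name for the entities used in this tutorial."""
    _, modality, entities = parse_reproin_name(seqname)
    pieces = [f"sub-{subject}"]
    for key in ("ses", "task", "acq", "run", "dir"):  # tutorial entity order
        if key in entities:
            pieces.append(f"{key}-{entities[key]}")
    return "_".join(pieces + [modality])

print(bids_basename("001", "fmap_dir-PA"))          # sub-001_dir-PA_epi
print(bids_basename("001", "func-bold_task-rest"))  # sub-001_task-rest_bold
```

The two printed names reproduce the ``fmap`` and ``func`` default-modality examples from the bullets above.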


17 changes: 14 additions & 3 deletions docs/tutorials.rst
@@ -1,6 +1,17 @@
==============
User Tutorials
==============

==================
Tutorials
==================

.. toctree::

quickstart
custom-heuristic
reproin


External Tutorials
******************

Luckily(?), we live in an era of plentiful information. Below are some links to
other users' tutorials covering their experience with ``heudiconv``.