Commit

updated preprint
mwegrzyn committed Aug 6, 2018
1 parent a0cb67e commit 445918f
Showing 4 changed files with 5 additions and 5 deletions.
4 changes: 2 additions & 2 deletions preprint/ms_thoughtExperiment2.log
@@ -1,4 +1,4 @@
-This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian) (preloaded format=pdflatex 2017.4.4) 6 AUG 2018 11:20
+This is pdfTeX, Version 3.14159265-2.6-1.40.15 (TeX Live 2015/dev/Debian) (preloaded format=pdflatex 2017.4.4) 6 AUG 2018 11:41
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
@@ -1169,7 +1169,7 @@ Here is how much of TeX's memory you used:
e/texmf-dist/fonts/type1/urw/helvetic/uhvb8a.pfb></usr/share/texlive/texmf-dist
/fonts/type1/urw/helvetic/uhvr8a.pfb></usr/share/texlive/texmf-dist/fonts/type1
/urw/helvetic/uhvro8a.pfb>
-Output written on ms_thoughtExperiment2.pdf (17 pages, 7590621 bytes).
+Output written on ms_thoughtExperiment2.pdf (17 pages, 7590620 bytes).
PDF statistics:
492 PDF objects out of 1000 (max. 8388607)
417 compressed objects within 5 object streams
Binary file modified preprint/ms_thoughtExperiment2.pdf
Binary file modified preprint/ms_thoughtExperiment2.synctex.gz
6 changes: 3 additions & 3 deletions preprint/ms_thoughtExperiment2.tex
@@ -201,18 +201,18 @@ \subsection{Study design}

\subsection{Data acquisition}

-MRI data were collected using a 3T Siemens Verio scanner. A high-resolution MPRAGE structural scan was acquired with 192 sagittal slices (TR=1900 msec, TE=2.5 msec, 0.8mm slice thickness, 0.75x0.75 in-plane resolution), using a 32-channel head coil. Functional echo-planar images (EPI) were acquired with 21 axial slices oriented along the rostrum and splenium of the corpus callosum (slice thickness of 5 mm, in-plane resolution 2.4x2.4 mm), using a 12-channel head coil. To allow for audible instructions during scanning, a sparse temporal sampling strategy was used (TR=3000ms with 1800ms acquisition time and 1200ms pause between acquisitions). Excluding two dummy scans, a total of 253 volumes were collected for each run. The full raw data are available on OpenNeuro \href{https://openneuro.org/datasets/ds001419}{openneuro.org/datasets/ds001419}.
+MRI data were collected using a 3T Siemens Verio scanner. A high-resolution MPRAGE structural scan was acquired with 192 sagittal slices (TR=1900 msec, TE=2.5 msec, 0.8mm slice thickness, 0.75x0.75 in-plane resolution), using a 32-channel head coil. Functional echo-planar images (EPI) were acquired with 21 axial slices oriented along the rostrum and splenium of the corpus callosum (slice thickness of 5 mm, in-plane resolution 2.4x2.4 mm), using a 12-channel head coil. To allow for audible instructions during scanning, a sparse temporal sampling strategy was used (TR=3000ms with 1800ms acquisition time and 1200ms pause between acquisitions). Excluding two dummy scans, a total of 253 volumes were collected for each run. The full raw data are available on OpenNeuro (\href{https://openneuro.org/datasets/ds001419}{openneuro.org/datasets/ds001419}).
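As a quick arithmetic check on the sparse-sampling timing described above, the following sketch (in Python, the language used elsewhere in the manuscript) derives the run duration from the stated TR and volume count; no other values are assumed:

    # Run duration implied by the sparse-sampling parameters above.
    TR = 3.0               # seconds: 1.8 s acquisition + 1.2 s silent gap
    n_volumes = 253 + 2    # volumes per run, plus the two discarded dummy scans
    print("run duration: %.2f min" % (n_volumes * TR / 60.0))  # 12.75 min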

\subsection{Data preprocessing}

-Basic preprocessing was performed using SPM12 (www.fil. ion.ucl.ac.uk/spm). Functional images were motion corrected using the realign function. The structural image was co-registered to the mean image of the functional time series and then used to derive deformation maps using the segment function \citep{Ashburner_2005}. The deformation fields were then applied to all images (structural and functional) to transform them into MNI standard space and up-sample them to 2mm isomorphic voxel size. The full normalized fMRI time courses are available online ( \href{https://doi.org/10.6084/m9.figshare.5951563.v1}{doi.org/10.6084/ m9.figshare.5951563.v1}). All further preprocessing steps were carried out using Nilearn 0.2.5 \citep{Abraham_2014} in Python 2.7. To generate an activity map for each of the 75 blocks, each voxel's time course was z-transformed to have mean zero and standard deviation one. Time courses were detrended using a linear function and movement parameters were added as confounds. Then TRs were grouped into blocks using a simple boxcar design shifted by 2 TR (the expected shift in the hemodynamic response function) and averaged, to give one averaged image per block. These images were used for all further analyses and are available on NeuroVault (\href{https://neurovault.org/collections/3467}{neurovault.org/collections/3467}).
+Basic preprocessing was performed using SPM12 (www.fil.ion.ucl.ac.uk/spm). Functional images were motion corrected using the realign function. The structural image was co-registered to the mean image of the functional time series and then used to derive deformation maps using the segment function \citep{Ashburner_2005}. The deformation fields were then applied to all images (structural and functional) to transform them into MNI standard space and up-sample them to 2mm isotropic voxel size. The full normalized fMRI time courses are available online (\href{https://doi.org/10.6084/m9.figshare.5951563.v1}{doi.org/10.6084/m9.figshare.5951563.v1}). All further preprocessing steps were carried out using Nilearn 0.2.5 \citep{Abraham_2014} in Python 2.7. To generate an activity map for each of the 75 blocks, each voxel's time course was z-transformed to have mean zero and standard deviation one. Time courses were detrended using a linear function and movement parameters were added as confounds. Then TRs were grouped into blocks using a simple boxcar design shifted by 2 TR (the expected shift in the hemodynamic response function) and averaged, to give one averaged image per block. These images were used for all further analyses and are available on NeuroVault (\href{https://neurovault.org/collections/3467}{neurovault.org/collections/3467}).
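The per-block averaging described here reduces to masking, standardizing, and slicing the time series. A minimal sketch with Nilearn follows; the file names, block length, and block onsets are assumed placeholders, not the authors' actual pipeline code:

    import numpy as np
    from nilearn.input_data import NiftiMasker

    # z-score each voxel's time course, detrend linearly, and regress out
    # the motion parameters (assumed to be stored as a text file)
    masker = NiftiMasker(standardize=True, detrend=True)
    data = masker.fit_transform("func_run1_mni.nii.gz",
                                confounds="motion_run1.txt")  # (n_trs, n_voxels)

    HRF_SHIFT = 2          # boxcar shifted by 2 TRs for the hemodynamic lag
    TRS_PER_BLOCK = 10     # assumed block length within a 253-volume run

    block_onsets = range(0, 250, TRS_PER_BLOCK)  # assumed onsets in TRs
    block_maps = [data[on + HRF_SHIFT : on + HRF_SHIFT + TRS_PER_BLOCK].mean(axis=0)
                  for on in block_onsets]

    # back-project the averaged patterns into brain space, one image per block
    block_imgs = masker.inverse_transform(np.vstack(block_maps))

Images produced this way would correspond to the per-block maps shared on NeuroVault.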

\subsection{Data analysis}

Emulating the “common task framework” \citep{Liberman_2015,Donoho_2017}, the study's data were analyzed with regard to a clearly defined objective and a metric for evaluating success. In the “common task framework”, data for training are shared and used by different parties. The parties try to learn a prediction rule from the training data, which can be applied to a set of test data. Only after the predictions have been submitted are they evaluated against the test data. It can then be explored how different approaches to prediction compare to one another, given the same dataset and objective.
Accordingly, the first two fMRI runs (50 blocks total, 10 blocks per condition) of our study were used as a training set and the third fMRI run (25 blocks total, 5 blocks per condition) was used as the held-out test set. To ensure proper blinding of test data, the block order was randomly shuffled and the 25 blocks were then assigned letters from A to Y. The true labels of the blocks were only known by the first author (MW), who did not participate in making predictions for the test data. Fifteen of the authors formed four groups. Each group had to submit their predictions regarding the domain (e.g. “motor imagery”) and specific content (e.g. “tennis”) for each block in written form.
The authors making the predictions were all graduate students of psychology, enrolled in a project seminar at Bielefeld University. Only after all predictions were submitted were the true labels of the test blocks revealed.
-The groups were allowed to analyze the training and test data in any way they deemed fit, but all used a combination of the following methods: (i) Visual inspection with dynamic varying of thresholds using a software such as Mricron or FSLView. (ii) Voxel-wise correlation of brain maps from the training and the test set, to find the blocks which are most similar to each other. (iii) Voxel-wise correlations of brain maps with maps from NeuroSynth \citep{Yarkoni_2011}, to find the keywords from the NeuroSynth database whose posterior probability maps are most similar to the participant's activity patterns. The basic principles of these analyses are presented in the following sections of the manuscript. Full code is available online (\href{(https://doi.org/10.5281/zenodo.1323665}{(doi.org/10.5281/zenodo.1323665}).
+The groups were allowed to analyze the training and test data in any way they deemed fit, but all used a combination of the following methods: (i) Visual inspection with dynamic varying of thresholds using software such as Mricron or FSLView. (ii) Voxel-wise correlation of brain maps from the training and the test set, to find the blocks which are most similar to each other. (iii) Voxel-wise correlations of brain maps with maps from NeuroSynth \citep{Yarkoni_2011}, to find the keywords from the NeuroSynth database whose posterior probability maps are most similar to the participant's activity patterns. The basic principles of these analyses are presented in the following sections of the manuscript. Full code is available online (\href{https://doi.org/10.5281/zenodo.1323665}{doi.org/10.5281/zenodo.1323665}).
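Method (ii) treats each brain map as a vector of voxel values and matches a test block to the most similar condition template. A minimal sketch follows; the condition names, mask, and file names are placeholders, not the groups' actual code:

    import numpy as np
    from nilearn.input_data import NiftiMasker

    masker = NiftiMasker(mask_img="brain_mask_mni.nii.gz")  # assumed shared mask
    masker.fit()

    # average training map per condition (placeholder condition names/files)
    conditions = ["condition1", "condition2", "condition3",
                  "condition4", "condition5"]
    templates = {c: masker.transform("train_mean_%s.nii.gz" % c).ravel()
                 for c in conditions}

    # one unlabeled test block; the highest voxel-wise Pearson correlation
    # with a training template is the predicted condition
    test_block = masker.transform("test_block_A.nii.gz").ravel()
    scores = {c: np.corrcoef(test_block, t)[0, 1] for c, t in templates.items()}
    print(sorted(scores.items(), key=lambda kv: -kv[1]))

Method (iii) works the same way, with NeuroSynth posterior-probability maps taking the place of the training templates.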

\subsubsection*{Similarity of blocks}
For similarity analyses, Pearson correlations between the voxels of two brain images were computed. This was done either by correlating the activity maps of two individual blocks with each other, or by correlating an individual block with an average of all independent blocks belonging to the same condition.
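In vector form, the two variants described here differ only in what a block is correlated against; a minimal sketch with assumed toy arrays:

    import numpy as np

    def block_similarity(block_map, reference_blocks):
        """Pearson correlation between one block's voxel vector and the
        average of independent blocks from the same condition."""
        template = np.mean(reference_blocks, axis=0)  # shape: (n_voxels,)
        return np.corrcoef(block_map, template)[0, 1]

    # assumed toy data: 10 blocks of one condition, 50000 voxels each
    rng = np.random.RandomState(0)
    blocks = rng.randn(10, 50000)

    # block-to-block similarity, and leave-one-out block-to-average similarity
    print(np.corrcoef(blocks[0], blocks[1])[0, 1])
    print(block_similarity(blocks[0], blocks[1:]))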
