Vicarious_somatotopy


Overview

This repository contains the scripts for performing multi-source spectral connective field model fitting, as described in this preprint. Beyond the standard scientific Python libraries, the two main packages that drive the analyses are himalaya for model fitting and pycortex for surface utilities and manipulation.
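For orientation, the sketch below shows the kind of multi-feature-space ("banded") ridge fit that himalaya provides, which multi-source model fitting builds on. The arrays, shapes, and solver settings are invented for illustration; the repository's actual fitting code lives in vicsompy/modeling.py.

```python
# Illustrative only: a banded ridge fit with one regularization per feature
# space, using himalaya's GroupRidgeCV. All data here are random placeholders.
import numpy as np
from himalaya.ridge import GroupRidgeCV

rng = np.random.default_rng(0)
X_visual = rng.standard_normal((200, 50))  # hypothetical source-region features
X_somato = rng.standard_normal((200, 30))  # a second, hypothetical feature space
Y = rng.standard_normal((200, 10))         # hypothetical target time courses

# groups="input" tells himalaya to treat each array in the list as its own band.
model = GroupRidgeCV(groups="input", solver_params=dict(n_iter=10))
model.fit([X_visual, X_somato], Y)
print(model.score([X_visual, X_somato], Y))  # per-target variance explained
```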

⚙ Installation and Setup

This software has been tested on Rocky Linux 8.9 (Green Obsidian). Follow these steps to set it up:

1. Clone the repository:

```bash
git clone https://github.com/yourusername/Vicarious_somatotopy.git
cd Vicarious_somatotopy
```

2. Create a Python environment that replicates the one used to perform the analyses:

```bash
conda create -n testenv python=3.10.2
conda activate testenv
```

3. Install vicsompy, the package associated with this repository:

```bash
pip install -e .
```

This should recognise and install all the dependencies associated with the package, which are defined in the setup.cfg file. Full environment details are also contained in the environment.yml file.
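As an optional sanity check (not part of the repository; package names are taken from this README), you can confirm that the editable install and the two main dependencies resolved:

```python
# Optional check that the editable install worked; vicsompy has no documented
# version attribute, so its install location is printed instead.
from importlib.metadata import version

import vicsompy

print(vicsompy.__file__)               # should point into this repository
print("himalaya", version("himalaya"))
print("pycortex", version("pycortex"))
```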

4. Download the pycortex subject 'hcp_999999_draw_NH' and put it in your pycortex directory.
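If you are unsure where your pycortex directory is, here is a minimal sketch, assuming a standard pycortex configuration, for locating the filestore and confirming the subject is visible:

```python
# Locate the pycortex filestore and check that the downloaded subject is seen.
# Re-import cortex after copying the subject so the database is refreshed.
import cortex

print(cortex.database.default_filestore)           # copy hcp_999999_draw_NH here
print("hcp_999999_draw_NH" in cortex.db.subjects)  # True once it is in place
```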

5. Download the source region directory of surfaces, lookup tables, and masks for V1 and S1.

6. Change the following in the config/config.yml file:

```yaml
paths:
    in_base: "/tank/shared/2019/visual/hcp_{experiment}/" # Where are the HCP data stored?
    out_base: "/tank/hedger/DATA/vicsompy_outputs" # Where do you want the model fits to be output?
    plot_out: "/tank/hedger/scripts/Vicarious_somatotopy/results" # Where do you want the plots to be output?

source_regions:
    source_region_dir: "/tank/hedger/scripts/Sensorium/data" # Where are the source regions stored?
```
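For reference, a minimal sketch of how these values resolve once edited, assuming the config is plain YAML (the {experiment} placeholder is filled in at run time, e.g. with 'movie'):

```python
# Illustrative only: load the edited config and resolve the experiment path.
import yaml

with open("config/config.yml") as f:
    cfg = yaml.safe_load(f)

print(cfg["paths"]["in_base"].format(experiment="movie"))  # e.g. .../hcp_movie/
print(cfg["source_regions"]["source_region_dir"])
```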

Example data

Example data (the HCP average subject) can be found here. This directory can be put inside the directory you define as in_base in the yaml file described above. You can then run the following inside notebooks/HCP Fitting. Himalaya leverages tqdm, so a progress bar will indicate how long model fitting is expected to take.

```python
av_fit = analyse_subject('movie', '999999', analysis_name='TEST')
```

The expected output is a CSV file containing the following columns:

  • train_scores_modality_score: Within-set variance explained for the modality.
  • test_scores_modality_score: Out-of-set variance explained for the modality.
  • best_alphas: The ridge alphas for the given voxel.
  • spliced_params_param_modality: The connective field-derived quantification of the parameter for the modality (e.g. eccentricity_visual).
  • null_score_modality: The null (nonspatial) model score for the modality.
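A minimal sketch of inspecting that output with pandas; the file path below is hypothetical and depends on your out_base and analysis_name:

```python
# Illustrative only: summarise out-of-set scores from a (hypothetical) fit file.
import pandas as pd

fits = pd.read_csv("/tank/hedger/DATA/vicsompy_outputs/TEST.csv")  # hypothetical path
print(fits.filter(like="test_scores").describe())  # out-of-set variance explained
print(fits["best_alphas"].describe())              # per-voxel regularization
```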

🕑 Expected installation time

The expected installation time, inclusive of downloads and package installation, should be less than 30 minutes.

📒 Notebooks

  • The main notebook that drives the analysis is notebooks/HCP Fitting. 📘

  • Cortical flatmaps for each of the figures are produced in notebooks/Aggregate Plot and output to the results folder. 📘

📁 Configuration files ⚙

  • The parameters underlying these analyses are in config/config.yml.

  • The parameters driving the plots are in config/plot_config.yml.

🐍 Python scripts

  • vicsompy/subject.py: For loading in subject data. 📜

  • vicsompy/modeling.py: For performing the connective field modeling. 📜

  • vicsompy/aggregate.py: For aggregating outcomes. 📜

  • vicsompy/surface.py: For handling surface data. 📜

  • vicsompy/utils.py: Various utilities. 📜

  • vicsompy/vis.py: For plotting. 📜