Brainlit

This repository is a container of methods that Neurodata uses to expose their open-source code while it is in the process of being merged with larger scientific libraries such as scipy, scikit-image, or scikit-learn. Additionally, methods for computational neuroscience on brains too specific for a general scientific library can be found here, such as image registration software tuned specifically for large brain volumes.

Documentation


(Figure: Brainlight Features)

Motivation

The repository originated as the project of a team in Joshua Vogelstein's class, Neurodata, at Johns Hopkins University. The project focused on data science applied to the MouseLight data. It became apparent that the tools developed for the class would be useful for other groups doing data science on large data volumes. The repository can now be considered a "holding bay" for code developed by Neurodata for collaborators and researchers to use.

Installation

Operating Systems

Brainlit is compatible with Mac, Windows, and Unix systems.

Windows Subsystem for Linux 2

For Windows 10 users who prefer Linux functionality without the speed sacrifice of a virtual machine, Brainlit can be installed and run on WSL2. See the installation walkthrough here.

Environment

(optional; any Python >= 3.8 environment will suffice)

  • get conda
  • create a virtual environment: conda create --name brainlit python=3.8
  • activate the environment: conda activate brainlit

Install from PyPI

  • install brainlit: pip install brainlit

Install from source

  • clone the repo: git clone https://github.com/neurodata/brainlit.git
  • cd into the repo: cd brainlit
  • install brainlit: pip install -e .

For Windows users setting up a Conda environment:

Users may currently run into issues installing dependencies on Python 3.8. A couple of workarounds are available:

Use Python 3.7 - RECOMMENDED

  • Create a new environment using Python 3.7 instead: conda create --name brainlit3.7 python=3.7

  • Run pip install -e .; this should successfully install the brainlit module for Conda on Windows.

Other potential fixes

Potentially, gcc is missing; it is necessary for building wheels from Python 3.6 onwards.

  • Install gcc for Windows and run pip install brainlit -e . --no-cache-dir.

From Python 3.6 onwards, Windows handles wheels through the Microsoft Manifest Tool, which might also be missing.

How to use Brainlit

Data setup

The source data directory should have an octree data structure:

 data/
├── default.0.tif
├── transform.txt
├── 1/
│   ├── 1/, ..., 8/
│   └── default.0.tif
├── 2/ ... 8/
└── consensus-swcs (optional)
    ├── G-001.swc
    ├── G-002.swc
    └── default.0.tif
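As an illustration only, a quick sanity check that a local directory follows this layout might look like the following (plain Python, no brainlit API involved; the directory name is a placeholder):

from pathlib import Path

def check_octree_root(root_dir):
    """Loosely verify that root_dir matches the octree layout sketched above."""
    root = Path(root_dir)
    assert (root / "default.0.tif").exists(), "missing top-level default.0.tif"
    assert (root / "transform.txt").exists(), "missing transform.txt"
    for child in map(str, range(1, 9)):
        if not (root / child).is_dir():
            print(f"note: octant {child}/ not found (may be fine for partial data)")
    swc_dir = root / "consensus-swcs"
    if swc_dir.is_dir():
        print(f"{len(list(swc_dir.glob('*.swc')))} consensus SWC files found")

check_octree_root("data/")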

If your team wants to interact with cloud data, each member will need account credentials specified in ~/.cloudvolume/secrets/x-secret.json, where x is one of [aws, gc, azure], containing the ID and secret key for your cloud platform. We provide a template for aws in the repo for convenience.
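As a rough illustration, the snippet below writes an AWS secrets file to the location described above; the key names follow CloudVolume's aws-secret.json convention, and the placeholder values must be replaced with your own credentials:

import json
from pathlib import Path

# Directory CloudVolume reads secrets from (use gc-secret.json / azure-secret.json for other platforms).
secrets_dir = Path.home() / ".cloudvolume" / "secrets"
secrets_dir.mkdir(parents=True, exist_ok=True)

# Placeholder credentials -- replace with your own ID and secret key.
aws_secret = {
    "AWS_ACCESS_KEY_ID": "<your-access-key-id>",
    "AWS_SECRET_ACCESS_KEY": "<your-secret-access-key>",
}

with open(secrets_dir / "aws-secret.json", "w") as f:
    json.dump(aws_secret, f, indent=2)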

Create a session

Each user will start their scripts with approximately the same lines:

from brainlit.utils.ngl import NeuroglancerSession

session = NeuroglancerSession(url='file:///abc123xyz')

From here, any number of tools can be run, such as the visualization or annotation tools. Interactive demo.
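What follows depends on the tool; as a purely hypothetical sketch (the method name, arguments, and return value below are assumptions rather than confirmed API; see the interactive demo for the real calls):

from brainlit.utils.ngl import NeuroglancerSession

session = NeuroglancerSession(url='file:///abc123xyz')

# Hypothetical follow-up call: pull image data around vertex 10 of segment 2.
# The actual method name and signature may differ between brainlit versions.
result = session.pull_voxel(2, 10)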

Features

Registration

The registration subpackage is a facsimile of ARDENT, a pip-installable (pip install ardent) package for nonlinear image registration, wrapped in an object-oriented framework for ease of use. It is an implementation of the LDDMM algorithm with modifications, written by Devin Crowley and based on "Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer's disease." This paper extends an older LDDMM paper, "Computing large deformation metric mappings via geodesic flows of diffeomorphisms."

This is the more recent paper:

Tward, Daniel, et al. "Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer's disease." Frontiers in neuroscience 14 (2020).

https://doi.org/10.3389/fnins.2020.00052

This is the original LDDMM paper:

Beg, M. Faisal, et al. "Computing large deformation metric mappings via geodesic flows of diffeomorphisms." International journal of computer vision 61.2 (2005): 139-157.

https://doi.org/10.1023/B:VISI.0000043755.93987.aa

A tutorial is available in docs/notebooks/registration_demo.ipynb.
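As a very rough sketch of how an object-oriented LDDMM registration call tends to look in this style of wrapper (the ardent class and method names below are assumptions rather than confirmed API; the tutorial notebook above is the authoritative reference):

import numpy as np
import ardent

# Toy volumes standing in for a template atlas and a target brain image.
template = np.random.rand(64, 64, 64)
target = np.random.rand(64, 64, 64)

# Assumed workflow: fit a diffeomorphic transform, then apply it to the template.
transform = ardent.Transform()
transform.register(template, target)
deformed = transform.apply_transform(template)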

Core

The core brainlit package can be described by the diagram at the top of the readme:

(Push and Pull Data)

Brainlit uses the Seung Lab's CloudVolume package to push and pull data to and from the cloud or a local machine in an efficient and parallelized fashion. Interactive demo.
The only requirement is an account on a cloud service such as AWS S3, Azure, or Google Cloud.
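As a minimal sketch of the kind of CloudVolume call involved (the bucket path below is a made-up placeholder):

from cloudvolume import CloudVolume

# Hypothetical precomputed volume path -- substitute your own bucket or local path.
vol = CloudVolume("s3://my-bucket/my-brain-volume", mip=0, parallel=True)

# Pull a small 3D cutout via (x, y, z) slicing.
cutout = vol[256:512, 256:512, 100:132]
# vol[256:512, 256:512, 100:132] = cutout  # push it back (requires chunk-aligned writes)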

Loading data via a local filepath to an octree structure is also supported. Interactive demo.

Visualize

Brainlit supports many methods to visualize large data. Visualizing an entire dataset can be done via Google's Neuroglancer, which provides a web link as shown below.


Brainlit also has tools to visualize chunks of data as 2D slices or as a 3D model, as sketched below. Interactive demo.
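For example, with a viewer such as napari (assumed to be installed; the array below is random placeholder data rather than real brain imagery), viewing a chunk in 3D might look like:

import numpy as np
import napari

# Random placeholder chunk standing in for a pulled image volume.
chunk = np.random.rand(64, 64, 64)

# Open a 3D viewer and add the chunk as an image layer.
viewer = napari.Viewer(ndisplay=3)
viewer.add_image(chunk, name="chunk")
napari.run()  # start the event loop when running as a script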


Manually Segment

Brainlit includes a lightweight manual segmentation pipeline. This allows collaborators on a project to pull data from the cloud, create annotations, and push their annotations back up as a separate channel. Interactive demo.

Automatically and Semi-automatically Segment

Similar to the above pipeline, segmentations can be automatically or semi-automatically generated and pushed to a separate channel for viewing. Interactive demo.

Tests

Running tests can easily be done by moving to the root directory of the brainlit package and typing pytest tests or python -m pytest tests.
Running a specific test, such as test_upload.py, can be done simply with pytest tests/test_upload.py.

Common errors and troubleshooting

Contributing

Contribution guidelines can be found in CONTRIBUTING.md.

Credits

Thanks to the Neurodata team and the group in the Neurodata class which started the project. This project is currently managed by Tommy Athey and Bijan Varjavand.
