Copyright © 2024 by Melanie Tschiersch. All rights reserved.
This package provides an implementation of a recurrent neural network trainer and simulator in PyTorch. Networks can contain multiple neural populations with different connectivity types (all-to-all, sparse) that can be structured (tuned, low-rank).
Network weights can be trained in an unsupervised or supervised manner, just like vanilla RNNs in torch.
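To illustrate the sparse connectivity idea, here is a minimal sketch (with made-up sizes and a standard variance scaling, not NeuroFlame's actual implementation) of building a random sparse connectivity matrix and using torch's sparse format for the recurrent input:

```python
import torch

torch.manual_seed(0)

N = 1000  # number of neurons (illustrative)
p = 0.1   # connection probability for the sparse mask

# random sparse connectivity: each entry is nonzero with probability p
mask = (torch.rand(N, N) < p).float()
weights = torch.randn(N, N) / (p * N) ** 0.5  # a common variance scaling
J = mask * weights

# store in sparse COO format to save memory for large, very sparse matrices
J_sparse = J.to_sparse()

# recurrent input as a sparse matrix-vector product
rates = torch.rand(N)
recurrent_input = torch.sparse.mm(J_sparse, rates.unsqueeze(1)).squeeze(1)
```

For very sparse, large networks the COO product avoids materializing the full dense matrix in every forward pass.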
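Since training works like for vanilla RNNs in torch, the usual pattern applies. A generic supervised sketch using torch's built-in `nn.RNN` with toy sizes (this is not NeuroFlame code, just the standard torch loop):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy supervised setup: map random input sequences to target sequences
rnn = nn.RNN(input_size=5, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)

inputs = torch.randn(8, 20, 5)   # (batch, time, input_dim)
targets = torch.randn(8, 20, 1)  # (batch, time, output_dim)

opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

losses = []
for step in range(50):
    hidden, _ = rnn(inputs)      # (batch, time, hidden)
    out = readout(hidden)
    loss = loss_fn(out, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

The same loop applies to a NeuroFlame `Network`, swapping in its forward pass and a task-specific loss.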
For more information, see the notebooks in ./notebooks and the configuration files in ./conf.
- Following the release of PyTorch 2.3, NeuroFlame now includes support for SemiSparseTensor operations!
pip install -r requirements.txt
or alternatively using conda (I recommend mamba via miniforge, a fast C++ implementation of conda)
mamba install --file conda_requirements.txt
The full documentation can be found here.
Here is how to run a simulation:
```python
# import the network class
from src.network import Network

# define repository root
repo_root = '/'

# choose a config file
conf_file = './conf/conf_EI.yml'

# other parameters can be overridden with kwargs;
# kwargs can be any of the args in the config file
kwargs = {}

# initialize model
model = Network(conf_file, repo_root, **kwargs)

# run a forward pass
rates = model()
```
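The kwargs-override pattern above can be sketched as follows, as a self-contained toy in which a plain dict stands in for the parsed YAML config (key names here are hypothetical, not NeuroFlame's actual parameters):

```python
class ToyNetwork:
    """Minimal illustration of config-file defaults overridden by kwargs."""

    def __init__(self, config, **kwargs):
        # start from the config file's values...
        params = dict(config)
        # ...then let keyword arguments override any of them
        params.update(kwargs)
        for key, value in params.items():
            setattr(self, key, value)

# stands in for a parsed ./conf/*.yml file (hypothetical keys)
config = {'N_NEURON': 1000, 'DURATION': 10.0}

# override one parameter at construction time
model = ToyNetwork(config, DURATION=2.0)
```

This is what makes parameter sweeps convenient: the config file holds the defaults, and each run overrides only what changes.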
```
.
├── conf                 # contains configuration files in yaml format.
│   └── *.yml
├── notebooks            # contains ipython notebooks.
│   ├── setup.py
│   └── *.ipynb
├── org                  # contains org notebooks.
│   ├── doc/*.org
│   └── *.org
└── src                  # contains source code.
    ├── activation.py    # contains custom activation functions.
    ├── connectivity.py  # contains custom connectivity profiles.
    ├── decode.py
    ├── lif_network.py   # implementation of a LIF network.
    ├── lif_neuron.py
    ├── lr_utils.py      # utils for low rank networks.
    ├── network.py       # core of the project.
    ├── plasticity.py    # contains STP.
    ├── plot_utils.py
    ├── sparse.py        # utils for large sparse matrices.
    ├── stimuli.py       # contains custom stimuli for behavioral tasks.
    ├── train.py         # utils to train networks.
    └── utils.py
```
Network dynamics are described here.
The connectivities available in NeuroFlame are described here.
A detailed guide on running a network simulation, and on using NeuroFlame to efficiently run parallel simulations over different parameters, is here.
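The idea behind parallel simulations is to batch the parameter dimension, so one forward pass integrates many networks at once. A minimal sketch of that idea with a toy rate equation, dr/dt = -r + tanh(g·J·r), integrated for several gains g in a single batched loop (the dynamics and all names here are illustrative assumptions, not NeuroFlame's API):

```python
import torch

torch.manual_seed(0)

N = 100                                # neurons (illustrative)
gains = torch.tensor([0.5, 1.0, 1.5])  # one simulation per parameter value
J = torch.randn(N, N) / N ** 0.5       # shared random connectivity

dt, steps = 0.1, 200
rates = 0.1 * torch.rand(len(gains), N)  # batch dimension = parameter values

for _ in range(steps):
    # batched matmul: each row of `rates` evolves under its own gain
    drive = torch.tanh(gains[:, None] * (rates @ J.T))
    rates = rates + dt * (-rates + drive)
```

Batching parameters this way lets torch vectorize (and, on GPU, parallelize) the sweep instead of looping over simulations in Python.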
Here, I show how to train networks.
Here is a tutorial on balanced networks.
Here is a notebook showing how to use NeuroFlame to locate ring attractors in parameter space.
Here is a notebook showing how to use NeuroFlame to locate multistable balanced states in parameter space.
Here is a tutorial on STP with NeuroFlame.
Here is a tutorial on using different stimuli to make the model perform different behavioral tasks.
Here is a tutorial on how to get serial bias in a balanced network model.
Feel free to contribute.
MIT License Copyright (c) [2023] [A. Mahrach]