Merge pull request #11 from Saran-nns/dev
prerelease 0.3.1
Saran-nns authored Nov 1, 2020
2 parents d4d24c6 + 0dd2150 commit 46af134
Showing 6 changed files with 283 additions and 168 deletions.
13 changes: 13 additions & 0 deletions .github/labeler.yml
@@ -0,0 +1,13 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
name: "Pull Request Labeler"
on:
- pull_request_target

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/labeler@main
      with:
        repo-token: "${{ secrets.GITHUB_TOKEN }}"
13 changes: 13 additions & 0 deletions .github/workflows/labeler.yml
@@ -0,0 +1,13 @@
# This workflow will triage pull requests and apply a label based on the
# paths that are modified in the pull request.
name: "Pull Request Labeler"
on:
- pull_request_target

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/labeler@main
      with:
        repo-token: "${{ secrets.GITHUB_TOKEN }}"
94 changes: 44 additions & 50 deletions README.md

SORN is a class of neuro-inspired artificial networks built on the plasticity mechanisms of the biological brain; it mimics the neocortical circuits' ability to learn and adapt through neuroplasticity.

The network is developed as part of my Master's thesis at Universität Osnabrück, Germany. For ease of maintenance, the notebooks, use cases and the API (under development) are moved to https://github.com/Saran-nns/PySORN_0.1

[![Build Status](https://travis-ci.org/Saran-nns/sorn.svg?branch=master)](https://travis-ci.org/Saran-nns/sorn)
[![codecov](https://codecov.io/gh/Saran-nns/sorn/branch/master/graph/badge.svg)](https://codecov.io/gh/Saran-nns/sorn)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)

<h4 align="Left">SORN Reservoir and the evolution of synaptic efficacies</h4>
<a href="url"><img src="https://raw.githubusercontent.com/Saran-nns/PySORN_0.1/master/v0.1.0/doc/images/SORN1.png" height="320" width="430"></a> <a href="url"><img src="https://raw.githubusercontent.com/Saran-nns/PySORN_0.1/master/v0.1.0/doc/images/weights.png" height="375" width="425" ></a> <a href="url"><img src="https://raw.githubusercontent.com/Saran-nns/PySORN_0.1/master/v0.1.0/doc/images/networkxx.jpg" height="375" width="425" ></a>

<h4 align="center">Neural Connectome</h4>
<p align="center">
<a href="url"><img src="https://github.com/Saran-nns/PySORN_0.1/blob/master/v0.1.0/doc/images/neuralcorrelationall.png" height="450" width="450" ></a>
</p>

### To install the latest release:

```python
pip install sorn
```

The library is still in alpha stage, so you may also want to install the latest version directly from the repository:

```python
pip install git+https://github.com/Saran-nns/sorn
```

### Dependencies
SORN supports Python 3.5+ only.


### Usage:

#### Update Network configurations

There are two ways to update/configure the network parameters:

1. Navigate to home/conda/envs/ENVNAME/Lib/site-packages/sorn, or, if you are unsure about the directory of ```sorn```,

Run

```python
import sorn
sorn.__file__
```
to find the location of the sorn package

Then, update/edit the arguments in ```configuration.ini```


2. Pass the arguments with valid names (listed below; a short example follows the list). This will override the default values in ```configuration.ini```. The allowed ```kwargs``` are:
```Python
kwargs_ = ['ne', 'nu', 'network_type_ee', 'network_type_ei', 'network_type_ie', 'lambda_ee','lambda_ei', 'lambda_ie', 'eta_stdp','eta_inhib', 'eta_ip', 'te_max', 'ti_max', 'ti_min', 'te_min', 'mu_ip','sigma_ip']
```
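
For example, a minimal sketch of overriding a few of these defaults in a single call; the parameter values below are purely illustrative, not recommended settings:

```python
import numpy as np
from sorn import Simulator

# Any kwarg from the list above can be passed this way; the values here are illustrative only.
inputs = np.random.rand(10, 200)
matrices_dict, Exc_activity, Inh_activity, Rec_activity, num_active_connections = Simulator.simulate_sorn(
    inputs=inputs, phase='Plasticity', matrices=None, noise=True, time_steps=200,
    ne=200, nu=10, eta_stdp=0.004, mu_ip=0.1)
```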
#### Simulation: Plasticity Phase
The default ```ne, nu``` values are overridden by passing them when calling the ```simulate_sorn``` method.

```Python
# Import
import numpy as np
from sorn import Simulator

# Sample input
num_features = 10
time_steps = 200
inputs = np.random.rand(num_features, time_steps)

# To simulate the network:
matrices_dict, Exc_activity, Inh_activity, Rec_activity, num_active_connections = Simulator.simulate_sorn(inputs=inputs, phase='Plasticity', matrices=None, noise=True, time_steps=time_steps, ne=200, nu=num_features)

# To resume the simulation, load the matrices_dict from the previous simulation:
matrices_dict, Exc_activity, Inh_activity, Rec_activity, num_active_connections = Simulator.simulate_sorn(inputs=inputs, phase='Plasticity', matrices=matrices_dict, noise=True, time_steps=time_steps, ne=200, nu=num_features)
```
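
To reuse these matrices later, for example to resume a simulation in a new session or to seed the OpenAI gym example further below, one option is to pickle them. A minimal sketch, assuming the same filename that the gym example loads:

```python
import pickle

# Persist the network state and activity collections from the simulation above
with open('simulation_matrices.pkl', 'wb') as f:
    pickle.dump([matrices_dict, Exc_activity, Inh_activity, Rec_activity, num_active_connections], f)
```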

#### Training phase:

```Python
from sorn import Trainer

# Sample input: a single sample with the same number of features
inputs = np.random.rand(num_features, 1)

# SORN network is frozen during training phase
matrices_dict, Exc_activity, Inh_activity, Rec_activity, num_active_connections = Trainer.train_sorn(inputs = inputs, phase='Training', matrices=matrices_dict,nu=num_features, time_steps=1)
```
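
For more than one training sample, a hedged sketch of an online loop, continuing from the snippets above and assuming each sample is presented for a single time step while the returned matrices are carried forward between calls:

```python
# Illustrative only: present one sample per call and reuse the returned matrices.
training_set = np.random.rand(num_features, 50)   # 50 samples, one per column

for t in range(training_set.shape[1]):
    sample = training_set[:, t].reshape(num_features, 1)
    matrices_dict, Exc_activity, Inh_activity, Rec_activity, num_active_connections = Trainer.train_sorn(inputs=sample, phase='Training', matrices=matrices_dict, nu=num_features, time_steps=1)
```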

### Network Output Descriptions:
```matrices_dict``` - Dictionary of connection weights ('Wee','Wei','Wie'), excitatory network activity ('X'), inhibitory network activity ('Y') and threshold values ('Te','Ti')

```Exc_activity``` - Collection of excitatory network activity over the entire simulation period

```Inh_activity``` - Collection of inhibitory network activity over the entire simulation period

```Rec_activity``` - Collection of recurrent network activity over the entire simulation period

```num_active_connections``` - List of the number of active connections in the excitatory pool at each time step
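
A quick way to sanity-check these outputs after a run; the shapes are an assumption based on the plotting examples below, which convert the activity collections with ```np.asarray```:

```python
import numpy as np

print(matrices_dict.keys())              # 'Wee', 'Wei', 'Wie', 'X', 'Y', 'Te', 'Ti'
print(np.asarray(Exc_activity).shape)    # assumed: one row per simulated time step
print(np.asarray(Inh_activity).shape)
print(len(num_active_connections))       # one entry per time step
```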

### Sample use with OpenAI gym:
#### Cartpole balance problem
The example below keeps the default network parameters (none are overridden).

```python
# Imports

import utils.InitHelper as initializer
import pickle
from sorn.sorn import Simulator, Trainer
import gym

# Load the simulated network matrices
with open('simulation_matrices.pkl','rb') as f:
    sim_matrices,excit_states,inhib_states,recur_states,num_reservoir_conn = pickle.load(f)


# Training parameters

NUM_EPISODES = int(2e6)   # cast to int so it can be passed to range() below
# NUM_PLASTICITY_EPISODE and the gym environment (env) are set up in lines collapsed from this diff

for EPISODE in range(NUM_EPISODES):

    # Reset the environment at the start of each episode
    state = env.reset()[None, :]

    # Play the episode
    while True:

        if EPISODE < NUM_PLASTICITY_EPISODE:

            # Plasticity phase
            sim_matrices, excit_states, inhib_states, recur_states, num_reservoir_conn = Simulator.simulate_sorn(inputs=state, phase='Plasticity', matrices=sim_matrices, noise=False)

        else:
            # Training phase with frozen reservoir connectivity
            sim_matrices, excit_states, inhib_states, recur_states, num_reservoir_conn = Trainer.train_sorn(inputs=state, phase='Training', matrices=sim_matrices, noise=False)

        # Feed excit_states as input states to your RL algorithm; a simple policy gradient algorithm goes below
        # Sample a policy w.r.t. the excitatory states and take an action in the environment
        # ... (policy update and termination logic collapsed in this diff) ...

        break
```


### Sample Plotting functions

```Python
from sorn.utils import Plotter
# ... (additional Plotter examples collapsed in this diff) ...
Plotter.scatter_plot(spike_train = np.asarray(Exc_activity), savefig=False)
Plotter.raster_plot(spike_train = np.asarray(Exc_activity), savefig=False)
```

### Sample Statistical analysis functions

```Python
from sorn.utils import Statistics
# ... (additional Statistics examples collapsed in this diff) ...
Statistics.autocorr(firing_rates = [1,1,5,6,3,7],t= 2)
Statistics.fanofactor(spike_train= np.asarray(Exc_activity),neuron = 10,window_size = 10)
```

### The network is inspired by following articles:

Lazar, A. (2009). SORN: a Self-organizing Recurrent Neural Network. Frontiers in Computational Neuroscience, 3. https://doi.org/10.3389/neuro.10.023.2009

7 changes: 7 additions & 0 deletions sorn/__init__.py
@@ -0,0 +1,7 @@
from . import sorn
import logging

__author__ = "Saranraj Nambusubramaniyan"
__version__ = "0.2.10"

logging.basicConfig(level=logging.INFO)
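
Given this ```__init__.py```, importing the package also configures root logging at the INFO level; a small sketch of what it exposes:

```python
import logging
import sorn

print(sorn.__version__)   # "0.2.10" in this snapshot
print(sorn.__author__)

# logging.basicConfig(level=logging.INFO) has already run on import,
# so INFO-level messages are emitted by default.
logging.info("sorn imported from %s", sorn.__file__)
```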