edited intro.md and meta_science.md
thesamovar committed May 1, 2024
1 parent fdedcf5 commit 3b7c2ea
Showing 4 changed files with 21 additions and 16 deletions.
10 changes: 10 additions & 0 deletions paper/paper.bib
@@ -35,3 +35,13 @@ @article{Yin2019
volume = {9},
year = {2019},
}
@conference{Kluyver2016jupyter,
title = {Jupyter Notebooks -- a publishing format for reproducible computational workflows},
author = {Thomas Kluyver and Benjamin Ragan-Kelley and Fernando P{\'e}rez and Brian Granger and Matthias Bussonnier and Jonathan Frederic and Kyle Kelley and Jessica Hamrick and Jason Grout and Sylvain Corlay and Paul Ivanov and Dami{\'a}n Avila and Safia Abdalla and Carol Willing},
booktitle = {Positioning and Power in Academic Publishing: Players, Agents and Agendas},
editor = {F. Loizides and B. Schmidt},
organization = {IOS Press},
pages = {87--90},
year = {2016},
doi = {10.3233/978-1-61499-649-1-87},
}
2 changes: 0 additions & 2 deletions paper/paper.md
@@ -79,8 +79,6 @@ downloads:
```{include} sections/discussion.md
```

# References

# Appendices

In this section, each subsection contains the detailed results as written up by the author of those results.
14 changes: 6 additions & 8 deletions paper/sections/intro.md
@@ -1,11 +1,9 @@
Inspired by the success of endeavours like the Human Genome Project (ref) and CERN (ref), neuroscientists are increasingly initiating large-scale collaborations. Yet how best to structure these projects remains an open question (ref Zach). The largest efforts, e.g. the International Brain Laboratory (ref), The Blue Brain Project (ref) and the Human Brain Project (ref), bring together tens to hundreds of researchers across multiple laboratories. However, while these projects represent a step-change in scale, they retain a legacy structure resembling a consortium grant: participating laboratories collaborate and then make their data, methods and results available upon publication. As such, interested participants face a high barrier to entry: joining a participating laboratory, initiating a collaboration with the project, or awaiting publications. So how could these projects be structured differently?

One alternative is the benchmarking contest, in which participants compete to obtain the best score on a specific task. Such contests have driven progress in fields from machine learning (ref ImageNet) to protein folding (ref CASP), and have begun to enter neuroscience. For example, in Brain-Score (refs) participants submit models capable of completing a visual processing task, which are then ranked according to a quantitative metric. As participants can compete both remotely and independently, these contests offer a significantly lower barrier to entry. But they emphasise competition over collaboration, and critically they require a well-defined, quantifiable endpoint. In Brain-Score, this endpoint is a composite metric which describes the model's similarity to experimental data in terms of both behaviour and unit activity (refs). However, defining such endpoints for neuroscientific questions remains challenging.

Another alternative is the massively collaborative project, in which participants work together towards a common goal. For example, in the Polymath Project (refs), unsolved mathematical problems are posed, and participants then share comments, ideas and equations online as they collectively work towards solutions. Inspired by this approach, we founded COMOB (Collaborative Modelling of the Brain), an open-source movement which aims to tackle neuroscientific questions. Here, we share our experiences and results from our first project, in which we explored spiking neural network models of sound localization.



:::{attention}
Missing refs below, please add them, Marcus.
:::

Inspired by the success of endeavours like the [Human Genome Project](https://www.genome.gov/human-genome-project) and [CERN](https://home.cern/), neuroscientists are increasingly initiating large-scale collaborations. Yet how best to structure these projects remains an open question (ref Zach). The largest efforts, e.g. the [International Brain Laboratory](https://www.internationalbrainlab.com/), [The Blue Brain Project](https://www.epfl.ch/research/domains/bluebrain/) and the [Human Brain Project](https://www.humanbrainproject.eu), bring together tens to hundreds of researchers across multiple laboratories. However, while these projects represent a step-change in scale, they retain a legacy structure resembling a consortium grant: participating laboratories collaborate and then make their data, methods and results available upon publication. As such, interested participants face a high barrier to entry: joining a participating laboratory, initiating a collaboration with the project, or awaiting publications. So how could these projects be structured differently?

One alternative is the benchmarking contest, in which participants compete to obtain the best score on a specific task. Such contests have driven progress in fields from machine learning {cite:p}`10.1109/CVPR.2009.5206848` to [protein folding](https://predictioncenter.org/), and have begun to enter neuroscience. For example, in [Brain-Score](https://www.brain-score.org/) [@10.1101/407007;@10.1016/j.neuron.2020.07.040] participants submit models capable of completing a visual processing task, which are then ranked according to a quantitative metric. As participants can compete both remotely and independently, these contests offer a significantly lower barrier to entry. But they emphasise competition over collaboration, and critically they require a well-defined, quantifiable endpoint. In [Brain-Score](https://www.brain-score.org/), this endpoint is a composite metric which describes the model's similarity to experimental data in terms of both behaviour and unit activity. However, defining such endpoints for neuroscientific questions remains challenging.

Another alternative is the massively collaborative project, in which participants work together towards a common goal. For example, in the [Polymath Project](https://polymathprojects.org/), unsolved mathematical problems are posed, and participants then share comments, ideas and equations online as they collectively work towards solutions. Inspired by this approach, we founded [COMOB (Collaborative Modelling of the Brain)](https://comob-project.github.io/), an open-source movement which aims to tackle neuroscientific questions. Here, we share our experiences and results from our first project, in which we explored spiking neural network models of sound localization.
11 changes: 5 additions & 6 deletions paper/sections/meta_science.md
@@ -1,17 +1,16 @@
## Infrastructure
### GitHub
Like many open-source efforts, our public GitHub repository (https://github.com/comob-project/snn-sound-localization) was the heart of our project. This provided us with three main benefits. First, it made joining the project as simple as cloning and committing to the repository. Second, it allowed us to collaborate asynchronously; that is, we could complete work in our own time and then share it with the group. Third, it allowed us to track contributions to the project. Measured in this way, x individuals contributed to the project. However, interpreting this number is challenging, as these contributions vary significantly in size, and participants who worked in pairs or small groups often contributed under a single username. For those interested in pursuing a similar project, our repository template, which we structured as follows, is available here: link.
* A *research* folder held Jupyter Notebooks and markdown files which participants added to and edited.
* A *web* folder contained code which built our repository into a website.
* A *paper* folder held the markdown files which we compiled into this paper.
Like many open-source efforts, [our public GitHub repository](https://github.com/comob-project/snn-sound-localization) was the heart of our project. This provided us with three main benefits. First, it made joining the project as simple as cloning and committing to the repository. Second, it allowed us to collaborate asynchronously; that is, we could complete work in our own time and then share it with the group. Third, it allowed us to track contributions to the project. Measured in this way, 26 individuals contributed to the project. However, interpreting this number is challenging, as these contributions vary significantly in size, and participants who worked in pairs or small groups often contributed under a single username.

For those interested in pursuing a similar project, our repository can easily be used as a template. It consists of a collection of documents written in Markdown and executable [Jupyter Notebooks](https://jupyter.org/) {cite:p}`Kluyver2016jupyter` containing all the code for the project. Each time the repository is updated, GitHub automatically builds these documents and notebooks into a website, so that the current state of the project can be seen by simply navigating to the [project website](https://comob-project.github.io/snn-sound-localization). We used [MyST Markdown](https://mystmd.org/) to automate this process with minimal effort. This paper itself was written using these tools and was publicly visible throughout the project write-up.
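To illustrate the kind of automation described above, a GitHub Actions workflow can rebuild and publish a MyST site on every push. The sketch below is an assumption-laden example, not the repository's actual configuration: the action versions, Node version and build output path are all guesses.

```yaml
# Hypothetical sketch of a MyST site-deploy workflow; the repository's
# actual workflow, action versions and paths may differ.
name: deploy-site
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm install -g mystmd   # the MyST Markdown CLI
      - run: myst build --html       # build documents and notebooks into HTML
      - uses: actions/upload-pages-artifact@v3
        with:
          path: _build/html
      - uses: actions/deploy-pages@v4
```

With something like this in place, pushing a commit is all a contributor needs to do for their work to appear on the project website.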

### Workflow
Our project grew out of a tutorial at the Cosyne conference (2022), for which we provided video lectures and code online (ref). Participants joining the project were encouraged to review this material and then work through an introductory Jupyter Notebook containing Python code, figures and markdown text, which taught them how to train a spiking neural network to perform a sound localisation task. Participants were then directed to our website, where we maintained a list of open scientific and technical questions for inspiration; for example, how does the use of different neuron models impact network performance, and can we learn input delays with gradient descent? Then, with a proposed or novel question in hand, participants were free to approach their work as they wished. In practice, much like a "typical" research project, most work was conducted individually, shared at monthly online meetings and then iteratively improved upon. For example, several early career researchers tackled questions full-time as their dissertation or thesis work and benefited from external input at monthly workshops. In the following two sections we discuss what worked well with our workflow, and how future efforts could be improved.
Our project grew out of a tutorial at the [Cosyne conference](https://www.cosyne.org/) (2022), for which [we provided video lectures and code online](https://neural-reckoning.github.io/cosyne-tutorial-2022/) {cite:p}`10.5281/zenodo.7044500`. Participants joining the project were encouraged to review this material and then work through an introductory Jupyter Notebook containing Python code, figures and markdown text, which taught them how to train a spiking neural network to perform a sound localisation task. Participants were then directed to our website, where we maintained a list of open scientific and technical questions for inspiration; for example, how does the use of different neuron models impact network performance, and can we learn input delays with gradient descent? Then, with a proposed or novel question in hand, participants were free to approach their work as they wished. In practice, much like a "typical" research project, most work was conducted individually, shared at monthly online meetings and then iteratively improved upon. For example, several early career researchers tackled questions full-time as their dissertation or thesis work and benefited from external input at monthly workshops. In the following two sections we discuss what worked well with our workflow, and how future efforts could be improved.
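To give a flavour of the underlying task, sound localisation from interaural time differences (ITDs) can be caricatured with a Jeffress-style bank of delayed coincidence detectors. The sketch below is illustrative only, in plain Python rather than the project's actual training code; the spike trains, time constant and threshold are invented for the example.

```python
# Illustrative sketch only (not the project's actual code): a bank of leaky
# integrate-and-fire (LIF) coincidence detectors, each with a different
# internal delay on one ear's input, estimates the interaural time
# difference (ITD). All constants below are invented for the example.

def lif_response(left, right, delay, tau=2.0, threshold=1.9):
    """Spike count of a LIF detector that delays the left ear's input."""
    v, n_spikes = 0.0, 0
    for t in range(max(len(left), len(right))):
        v *= 1 - 1 / tau                  # membrane leak
        if 0 <= t - delay < len(left):
            v += left[t - delay]          # delayed left-ear spikes
        if t < len(right):
            v += right[t]                 # right-ear spikes
        if v >= threshold:                # coincident inputs cross threshold
            n_spikes += 1
            v = 0.0                       # reset after a spike
    return n_spikes

def estimate_itd(left, right, max_delay=5):
    """The internal delay whose detector fires most estimates the ITD."""
    return max(range(max_delay + 1),
               key=lambda d: lif_response(left, right, d))

# A sound from the left reaches the right ear 3 time steps later.
left = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0]
right = [0, 0, 0] + left[:-3]
print(estimate_itd(left, right))  # → 3
```

The introductory notebook's networks instead *learn* such a mapping with gradient descent, but the coincidence-detection structure above is the classical starting point the task is built around.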

## A neuroscientific sandbox
By providing models which used spiking neurons to transform sensory inputs into behavioural outputs, participants were free to explore in virtually any direction they wished, much like an open-world or sandbox video game. Indeed, over the course of the project we explored the full sensory-motor transformation, from manipulating the nature of the input signals to perturbing unit activity and assessing network behaviour. Consequently, our code forms an excellent basis for teaching, as concepts from across neuroscience can be introduced and then implemented in class. For example, one could introduce how optogenetics can be used to assess the relationship between neural activity and behaviour, and students can then implement and explore this themselves *in silico*. Similarly, an introduction to different neuron models can be followed by an exercise in which students code and study how each alters network behaviour. In the longer term, extending our code and written material into a full introductory neuroscience course remains an exciting future direction.

Beyond providing teaching and hands-on research experience, the project also offered many opportunities for participants to improve their "soft" scientific skills. For early career researchers (undergraduate and master's students) these included learning how to work with Git, collaborate with researchers from diverse countries and career stages, and contribute to a scientific publication. For later career researchers (PhD students and postdocs) the project provided many supervision and leadership opportunities. For example, during online workshops, later career participants were encouraged to lead small groups focussed on tackling specific questions.

## Directing play in the sandbox
While our sandbox design offered several advantages (discussed above), its open nature did present two challenges. Our first challenge was standardising work across participants; for example, ensuring that everyone used the same code and hyperparameters. Along these lines, future projects would benefit from having participants dedicated to maintaining the code base and standardising participants' work. Our second challenge was the project's exploratory nature. While this appealed to many participants, the lack of a clear goal or end-point may have been off-putting to others. For future efforts, one alternative would be to define clear goals a priori; however, in the worst case this could simply reduce to a to-do list passed from more senior to more junior researchers. A more appealing alternative would be to structure the project in clearly defined phases. For example, early months spent reviewing literature could be followed by a period of proposals and question refinement, before a final stretch of research. Alternatively, one could begin by collecting project proposals and allowing participants to vote on a project to pursue.
While our sandbox design offered several advantages (discussed above), the open nature of the process did present two challenges. Our first challenge was standardising work across participants; for example, ensuring that everyone used the same code and hyperparameters. Along these lines, future projects would benefit from having participants dedicated to maintaining the code base and standardising participants' work. Our second challenge was the project's exploratory nature. While this appealed to many participants, the lack of a clear goal or end-point may have been off-putting to others. For future efforts, one alternative would be to define clear goals a priori; however, if handled carelessly, this runs the risk of reducing to a to-do list passed from more senior to more junior researchers. A more appealing alternative could be to structure the project in clearly defined phases. For example, early months spent reviewing literature could be followed by a period of proposals and question refinement, before a final stretch of research. Alternatively, one could begin by collecting project proposals and allowing participants to vote on a project to pursue.
