CodeNeuro held its second annual conference in San Francisco during November 20-21, 2015. The conference featured an excellent curation of new tools for collecting and processing neural data.
Below is a brief description of each tool that was presented and how it relates to the rest of the neurotechnology sphere.
Light Field Microscopy, Fiber Photometry (Logan Grosenick)
What it does: Provides the ability to optically record and perturb dynamics in long-range, genetically-specified projections, and to observe inter-regional and inter-laminar population dynamics at cellular resolution across large volumes of scattering tissue. Such capabilities are necessary to understand how behaviorally-relevant dynamics emerge from cellular activity across the intact brain at multiple spatial scales.
What it means: Complex animal behavior results from the coordinated activity of neurons across large volumes of tissue. Our understanding of large neural systems and their dynamics is circumscribed by available recording and intervention technologies.
Thunder (from the Freeman Lab)
What it does: “Thunder is a library for analyzing large-scale neural data. It’s fast to run, easy to develop for, and can be used interactively. It is built on Spark, a new framework for cluster computing. Thunder includes utilities for loading and saving different formats, classes for working with distributed spatial and temporal data, and modular functions for time series analysis, factorization, and model fitting.”
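Thunder's central abstraction is a collection of keyed time series with analyses expressed as functions mapped independently over every series, which is what lets Spark distribute the work. The sketch below illustrates that map-over-series pattern in plain Python; it is a hypothetical stand-in, not the actual Thunder or Spark API.

```python
# Conceptual sketch of the "distributed series" pattern Thunder uses:
# each record is a (key, time series) pair, and analyses are functions
# mapped independently over every series. (Hypothetical stdlib stand-in,
# not the actual Thunder/Spark API.)
from statistics import mean, pstdev

def zscore(series):
    """Standardize one time series to zero mean, unit variance."""
    m, s = mean(series), pstdev(series)
    return [(v - m) / s for v in series] if s else [0.0] * len(series)

def series_map(records, fn):
    """Apply fn to every (key, series) record; in Thunder this map
    would run in parallel across a Spark cluster."""
    return {key: fn(series) for key, series in records.items()}

# Two toy voxel traces keyed by (x, y) coordinate.
records = {(0, 0): [1.0, 2.0, 3.0], (0, 1): [4.0, 4.0, 4.0]}
standardized = series_map(records, zscore)
```

Because each series is processed independently, the same analysis code scales from a laptop to a cluster simply by swapping the execution backend.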
What it means: Thunder is one of the foremost examples (along with OpenWorm) of efforts to leverage the energy in the open-source community to contribute to neural data processing.
TrueNorth Neuromorphic Chip (IBM)
What it does: These systems can efficiently process high-dimensional, noisy sensory data in real time, while consuming orders of magnitude less power than conventional computer architectures. They could also be built into devices (smartphones, perhaps implanted medical devices?) where computation is constrained by power and speed.
See also: an earlier criticism of TrueNorth’s architecture.
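The power savings come from computing with spikes: hardware neurons integrate input, fire only when a threshold is crossed, and stay silent otherwise. Here is a minimal leaky integrate-and-fire neuron illustrating that event-driven style; the parameter values are arbitrary, and this is not TrueNorth's actual neuron model.

```python
# Minimal leaky integrate-and-fire neuron, the kind of spiking unit
# neuromorphic chips implement in hardware. (Illustrative sketch;
# parameters are arbitrary, not TrueNorth's actual neuron model.)
def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Integrate inputs with leak; emit a spike (1) and reset on threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # fire...
            v = 0.0               # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input still fires periodically as charge builds.
train = lif_spikes([0.4] * 10)    # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because output events are sparse, downstream circuits sit idle most of the time, which is where the orders-of-magnitude power advantage over clocked architectures comes from.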
Collaborative Research in Computational Neuroscience (Fatma Imamoglu)
What it is: Open-access, high-quality datasets from physiological recordings of sensory systems, memory systems, and eye movement data. Useful for testing computational models of the brain and new analysis methods. (Current datasets include visual, auditory, motor, somatosensory, and prefrontal cortex, as well as thalamus, hippocampus, retina, and LGN.)
What it means: To enable concerted efforts in understanding the brain, experimental data and other resources, such as stimuli and analysis tools, should be widely shared by researchers all over the world. To serve this purpose, the website provides a marketplace and discussion forum for sharing tools and data in neuroscience. (More info on the aims and scope of the project was published in Neuroinformatics.)
Dat (Max Ogden)
What it does: “The high level goal of the dat project is to build a streaming interface between every database and file storage backend in the world. By building tools to build and share data pipelines we aim to bring to data a style of collaboration similar to what git brings to source code.”
What it means: Streamlined tools for sharing datasets will make it easier to perform new analyses and to reproduce the results of published findings.
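The git analogy rests on content addressing: a dataset version is identified by the hash of its contents, so identical data deduplicates automatically and collaborators can tell at a glance whether they are in sync. The sketch below illustrates that idea; it is a conceptual toy, not dat's actual storage format or API.

```python
# Sketch of the content-addressed versioning idea behind dat: each
# version of a dataset is identified by a hash of its contents, so
# collaborators can sync and deduplicate the way git does for source
# code. (Conceptual illustration only, not dat's actual format.)
import hashlib
import json

def version_id(rows):
    """Hash a dataset (a list of dict rows) into a stable version id."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

store = {}  # version id -> rows, like git's object store

def commit(rows):
    vid = version_id(rows)
    store[vid] = rows           # identical data hashes to the same id,
    return vid                  # so re-committing it stores nothing new

v1 = commit([{"subject": 1, "rt": 0.42}])
v2 = commit([{"subject": 1, "rt": 0.42}, {"subject": 2, "rt": 0.39}])
```

Because ids are derived from content rather than assigned, two labs that independently commit the same dataset end up with the same version id, which is exactly the property that makes git-style collaboration possible for data.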
See also: OpenfMRI, another project for sharing fMRI data.
Want to know more? Check out our post on why open science is important.