
List of Resources

A collaborative list of resources for Computational Neuroscience.

Interesting Papers / Articles / Blog Posts:

Contents

Information Theory

  • Foundational 1948 paper in the field of information theory by Claude Shannon, A Mathematical Theory of Communication. It may be helpful to watch this Khan Academy video describing the work (from a Markov chain perspective) before diving into the paper. A short entropy sketch in Python follows this list.

    Details!

    This work developed the concepts of information entropy and redundancy, and introduced the term bit (which Shannon credited to John Tukey) as a unit of information. It was also in this paper that the Shannon–Fano coding technique was proposed – a technique developed in conjunction with Robert Fano.
    Shannon's article laid out the basic elements of communication:

    • An information source that produces a message
    • A transmitter that operates on the message to create a signal which can be sent through a channel
    • A channel, which is the medium over which the signal, carrying the information that composes the message, is sent
    • A receiver, which transforms the signal back into the message intended for delivery
    • A destination, which can be a person or a machine, for whom or which the message is intended

    More on Shannon and his contributions to computer science, entropy, information theory, signal detection, etc.

  • Ian Goodfellow's (developer of GANs) book chapter on Information Theory from a Deep Learning Perspective

    Details!

    Goodfellow is best known for inventing generative adversarial networks (GANs). He is also the lead author of the textbook Deep Learning. At Google, he developed a system enabling Google Maps to automatically transcribe addresses from photos taken by Street View cars and demonstrated security vulnerabilities of machine learning systems.
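
As a quick companion to the Shannon entry above, a minimal NumPy sketch (toy distributions, not taken from any of the linked resources) of computing the entropy of a discrete distribution in bits:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete probability distribution (or vector of counts)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()      # normalize, so raw counts can be passed in
    p = p[p > 0]         # 0 * log2(0) is taken to be 0, so drop zero entries
    return -np.sum(p * np.log2(p))

# A fair coin carries 1 bit per flip; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.47
```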

Entropy

Noise

Brain Oscillations

Causality

  • Quasi-experimental causality in neuroscience and behavioural research

    Details!

    In many scientific domains, causality is the key question. For example, in neuroscience, we might ask whether a medication affects perception, cognition or action. Randomized controlled trials are the gold standard to establish causality, but they are not always practical. The field of empirical economics has developed rigorous methods to establish causality even when randomized controlled trials are not available. Here we review these quasi-experimental methods and highlight how neuroscience and behavioural researchers can use them to do research that can credibly demonstrate causal effects.

  • A Causal Network Analysis of Neuromodulation in the Cortico-Subcortical Limbic Network applied to neurons.

    Details!

    Neural decoding and neuromodulation technologies hold great promise for treating mood and other brain disorders in next-generation therapies that manipulate functional brain networks. Here we perform a novel causal network analysis to decode multiregional communication in the primate mood processing network and determine how neuromodulation, short-burst tetanic microstimulation (sbTetMS), alters multiregional network communication. The causal network analysis revealed a mechanism of network excitability that regulates when a sender stimulation site communicates with receiver sites. Decoding network excitability from neural activity at modulator sites predicted sender-receiver communication, whereas sbTetMS neuromodulation temporarily disrupted sender-receiver communication. These results reveal specific network mechanisms of multiregional communication and suggest a new generation of brain therapies that combine neural decoding to predict multiregional communication with neuromodulation to disrupt multiregional communication.

  • Advancing functional connectivity research from association to causation

    Details!

    Cognition and behavior emerge from brain network interactions, such that investigating causal interactions should be central to the study of brain function. Approaches that characterize statistical associations among neural time series—functional connectivity (FC) methods—are likely a good starting point for estimating brain network interactions. Yet only a subset of FC methods (‘effective connectivity’) is explicitly designed to infer causal interactions from statistical associations. Here we incorporate best practices from diverse areas of FC research to illustrate how FC methods can be refined to improve inferences about neural mechanisms, with properties of causal neural interactions as a common ontology to facilitate cumulative progress across FC approaches. We further demonstrate how the most common FC measures (correlation and coherence) reduce the set of likely causal models, facilitating causal inferences despite major limitations. Alternative FC measures are suggested to immediately start improving causal inferences beyond these common FC measures.

  • VIDEO A series of talks on causality in machine learning and how to think causally with machine learning. Frontiers in Machine Learning: Big Ideas in Causality and Machine Learning

    Details!

    Causal relationships are stable across distribution shifts. Models based on causal knowledge have the potential to generalize to unseen domains and offer counterfactual predictions: how do outcomes change if a certain feature is changed in the real world? In recent years, machine learning methods based on causal reasoning have led to advances in out-of-domain generalization, fairness and explanation, and robustness to data selection biases. In this session, we discuss big ideas at the intersections of causal inference and machine learning towards building stable predictive models and discovering causal insights from data.

Dimensionality

General Dimensionality

  • Towards the neural population doctrine

    Details!

    We detail four areas of the field where the joint analysis of neural populations has significantly furthered our understanding of computation in the brain: correlated variability, decoding, neural dynamics, and artificial neural networks.

  • SVD and PCA explained. A handout walking through the math behind both and a few other topics (regression, covariance, etc.). A short NumPy sketch follows this list.

    Details!

    This handout is a review of some basic concepts in linear algebra. For a detailed introduction, consult a linear algebra text. Linear Algebra and its Applications by Gilbert Strang (Harcourt, Brace, Jovanovich, 1988) is excellent.
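
As a companion to the SVD/PCA handout above, a minimal NumPy sketch (random toy data, hypothetical variable names) showing how PCA falls out of the SVD of a mean-centered data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))              # toy data: 500 samples x 10 features

Xc = X - X.mean(axis=0)                     # mean-center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

components = Vt                             # principal axes (one per row)
scores = Xc @ Vt.T                          # projections onto the axes (equals U * S)
explained_var = S**2 / (X.shape[0] - 1)     # variance captured by each component

print(explained_var / explained_var.sum())  # fraction of variance per component
```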

Non-Linear Dimensionality Reduction

  • Using t-SNE. An interactive guide on how to use t-SNE effectively

    Details! Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading. By exploring how it behaves in simple cases, we can learn to use it more effectively.
  • Perform non-linear dimensionality reduction with Isomap and LLE in Python from scratch

  • Isomap tutorial in Python (a short scikit-learn sketch comparing PCA, Isomap, and t-SNE follows this list)

  • Looking at different non-linear dimensionality reduction methods: Iterative Non-linear Dimensionality Reduction with Manifold Sculpting.

    Details!

    Many algorithms have been recently developed for reducing dimensionality by projecting data onto an intrinsic non-linear manifold. Unfortunately, existing algorithms often lose significant precision in this transformation. Manifold Sculpting is a new algorithm that iteratively reduces dimensionality by simulating surface tension in local neighborhoods. We present several experiments that show Manifold Sculpting yields more accurate results than existing algorithms with both generated and natural data-sets. Manifold Sculpting is also able to benefit from both prior dimensionality reduction efforts.

  • Using manifolds/ dimensionality reduction on sleep data. The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep

    Details!

    We characterize and directly visualize manifold structure in the mammalian head direction circuit, revealing that the states form a topologically nontrivial one-dimensional ring. The ring exhibits isometry and is invariant across waking and rapid eye movement sleep. This result directly demonstrates that there are continuous attractor dynamics and enables powerful inference about mechanism.

  • A Global Geometric Framework for Nonlinear Dimensionality Reduction

    Details!

    Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.
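
To complement the t-SNE and Isomap entries above, a minimal scikit-learn sketch (the S-curve data set is just a stand-in, and the hyperparameters are illustrative) comparing a linear embedding with two non-linear ones:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE

X, color = make_s_curve(n_samples=1000, random_state=0)   # points on a 3-D manifold

embeddings = {
    "PCA (linear)": PCA(n_components=2).fit_transform(X),
    "Isomap": Isomap(n_neighbors=10, n_components=2).fit_transform(X),
    "t-SNE": TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, Y) in zip(axes, embeddings.items()):
    ax.scatter(Y[:, 0], Y[:, 1], c=color, s=5)
    ax.set_title(name)
plt.show()
```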

Modeling

General Modeling

  • A guide to applying machine learning for neural decoding.

    Details!

    Description: This tutorial describes how to effectively apply these algorithms for typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus.

  • Stochastic dynamics as a principle of brain function

    Details!

    We show that in a finite-sized cortical attractor network, this can be an advantage, for it leads to probabilistic behavior that is advantageous in decision-making, by preventing deadlock, and is important in signal detectability. We show how computations can be performed through stochastic dynamical effects, including the role of noise in enabling probabilistic jumping across barriers in the energy landscape describing the flow of the dynamics in attractor networks. The results obtained in neurophysiological studies of decision-making and signal detectability are modelled by the stochastical neurodynamics of integrate-and-fire networks of neurons with probabilistic neuronal spiking. We describe how these stochastic neurodynamical effects can be analyzed, and their importance in many aspects of brain function, including decision-making, memory recall, short-term memory, and attention.

  • A How-to-Model Guide for Neuroscience. Steps on how to go about posing questions that models can answer.

  • Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks

  • Preprint on Neural Network Poisson Models for Behavioural and Neural Spike Train Data by Dayan's group.

  • Maximum likelihood estimation for neural data slide deck by Jonathan Pillow. A walk-through of the concept and derivation (a toy Poisson MLE sketch follows this list).

  • A Short Introduction to Bayesian Neural Networks

    Details!

    With the rising success of deep neural networks, their reliability in terms of robustness (for example, against various kinds of adversarial examples) and confidence estimates becomes increasingly important. Bayesian neural networks promise to address these issues by directly modeling the uncertainty of the estimated network weights. In this article, I want to give a short introduction of training Bayesian neural networks, covering three recent approaches.
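
As a companion to the Pillow slide deck above, a minimal sketch (simulated data, SciPy's general-purpose optimizer rather than any code from the deck) of maximum likelihood estimation for a toy linear-nonlinear-Poisson neuron:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate a toy LNP neuron: firing rate = exp(stimulus @ w_true), Poisson spike counts.
n_trials, n_features = 2000, 5
X = rng.normal(size=(n_trials, n_features))    # stimulus on each trial
w_true = rng.normal(size=n_features)
spikes = rng.poisson(np.exp(X @ w_true))

def neg_log_likelihood(w):
    """Poisson negative log-likelihood, dropping the constant log(y!) term."""
    log_rate = X @ w
    return np.sum(np.exp(log_rate) - spikes * log_rate)

w_hat = minimize(neg_log_likelihood, x0=np.zeros(n_features)).x
print(np.round(w_true, 2))
print(np.round(w_hat, 2))   # maximum likelihood estimate, should be close to w_true
```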

Optimization for Modeling

Bayesian Modeling

  • Probabilistic population codes for Bayesian decision making (a toy population-decoding sketch follows this entry)

    Details! We present a neural model of decision making that can perform both evidence accumulation and action selection optimally. More specifically, we show that, given a Poisson-like distribution of spike counts, biological neural networks can accumulate evidence without loss of information through linear integration of neural activity, and can select the most likely action through attractor dynamics. This holds for arbitrary correlations, any tuning curves, continuous and discrete variables, and sensory evidence whose reliability varies over time. Our model predicts that the neurons in the lateral intraparietal cortex involved in evidence accumulation encode, on every trial, a probability distribution which predicts the animal’s performance.
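
Not the model from the paper, but a toy sketch of the underlying idea (independent Poisson neurons with Gaussian tuning curves, flat prior) showing how a posterior over a 1-D stimulus can be read out from a single trial of spike counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 neurons with Gaussian tuning curves over a 1-D stimulus, independent Poisson spiking.
stim_grid = np.linspace(-10, 10, 201)
preferred = np.linspace(-10, 10, 20)
gain, width = 10.0, 2.0
tuning = gain * np.exp(-0.5 * ((stim_grid[:, None] - preferred[None, :]) / width) ** 2)

true_stim = 3.0
rates = gain * np.exp(-0.5 * ((true_stim - preferred) / width) ** 2)
counts = rng.poisson(rates)                       # spike counts on one trial

# Posterior over the stimulus with a flat prior: p(s | r) is proportional to
# the product over neurons of Poisson(r_i | f_i(s)).
log_post = counts @ np.log(tuning.T + 1e-12) - tuning.sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print(stim_grid[np.argmax(post)])                 # posterior peak, near true_stim
```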

Linear and Non-Linear Systems

Markov Processes

Control Theory

  • The coordination of movement: optimal feedback control and beyond

    Details! Optimal control theory and its more recent extension, optimal feedback control theory, provide valuable insights into the flexible and task-dependent control of movements. Here, we focus on the problem of coordination, defined as movements that involve multiple effectors (muscles, joints or limbs). Optimal control theory makes quantitative predictions concerning the distribution of work across multiple effectors. Optimal feedback control theory further predicts variation in feedback control with changes in task demands and the correlation structure between different effectors. We highlight two crucial areas of research, hierarchical control and the problem of movement initiation, that need to be developed for an optimal feedback control theory framework to characterise movement coordination more fully and to serve as a basis for studying the neural mechanisms involved in voluntary motor control.

Machine Learning

General Machine Learning

  • Recommended reading: Relational inductive biases, deep learning, and graph networks

    Details!

    Artificial intelligence (AI) has undergone a renaissance recently, making major progress in key domains such as vision, language, control, and decision-making. This has been due, in part, to cheap data and cheap compute resources, which have fit the natural strengths of deep learning. However, many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches. In particular, generalizing beyond one's experiences--a hallmark of human intelligence from infancy--remains a formidable challenge for modern AI. The following is part position paper, part review, and part unification. We argue that combinatorial generalization must be a top priority for AI to achieve human-like abilities, and that structured representations and computations are key to realizing this objective. Just as biology uses nature and nurture cooperatively, we reject the false choice between "hand-engineering" and "end-to-end" learning, and instead advocate for an approach which benefits from their complementary strengths. We explore how using relational inductive biases within deep learning architectures can facilitate learning about entities, relations, and rules for composing them. We present a new building block for the AI toolkit with a strong relational inductive bias--the graph network--which generalizes and extends various approaches for neural networks that operate on graphs, and provides a straightforward interface for manipulating structured knowledge and producing structured behaviors. We discuss how graph networks can support relational reasoning and combinatorial generalization, laying the foundation for more sophisticated, interpretable, and flexible patterns of reasoning. As a companion to this paper, we have released an open-source software library for building graph networks, with demonstrations of how to use them in practice.

  • Proto-value Functions: A Laplacian Framework for Learning Representation and Control in Markov Decision Processes

  • The why, how, and when of representations for complex systems

    Details!

    Complex systems thinking is applied to a wide variety of domains, from neuroscience to computer science and economics. The wide variety of implementations has resulted in two key challenges: the progenation of many domain-specific strategies that are seldom revisited or questioned, and the siloing of ideas within a domain due to inconsistency of complex systems language. In this work we offer basic, domain-agnostic language in order to advance towards a more cohesive vocabulary. We use this language to evaluate each step of the complex systems analysis pipeline, beginning with the system and data collected, then moving through different mathematical formalisms for encoding the observed data (i.e. graphs, simplicial complexes, and hypergraphs), and relevant computational methods for each formalism. At each step we consider different types of *dependencies*; these are properties of the system that describe how the existence of one relation among the parts of a system may influence the existence of another relation. We discuss how dependencies may arise and how they may alter interpretation of results or the entirety of the analysis pipeline. We close with two real-world examples using coauthorship data and email communications data that illustrate how the system under study, the dependencies therein, the research question, and choice of mathematical representation influence the results. We hope this work can serve as an opportunity of reflection for experienced complexity scientists, as well as an introductory resource for new researchers.

  • Reconciling modern machine-learning practice and the classical bias–variance trade-off

  • A deep learning framework for neuroscience

    Details!

    Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.

Autoencoders

  • Variational autoencoders used with dimensionality reduction. VAE-SNE: a deep generative model for simultaneous dimensionality reduction and clustering (a minimal autoencoder sketch follows this list)

    Details!

    Description: We introduce a method for both dimension reduction and clustering called VAE-SNE (variational autoencoder stochastic neighbor embedding). Our model combines elements from deep learning, probabilistic inference, and manifold learning to produce interpretable compressed representations while also readily scaling to tens-of-millions of observations. Unlike existing methods, VAE-SNE simultaneously compresses high-dimensional data and automatically learns a distribution of clusters within the data --- without the need to manually select the number of clusters. This naturally creates a multi-scale representation, which makes it straightforward to generate coarse-grained descriptions for large subsets of related observations and select specific regions of interest for further analysis.

  • Encoders for timeseries. Deep reconstruction of strange attractors from time series

    Details!

    Experimental measurements of physical systems often have a limited number of independent channels, causing essential dynamical variables to remain unobserved. However, many popular methods for unsupervised inference of latent dynamics from experimental data implicitly assume that the measurements have higher intrinsic dimensionality than the underlying system---making coordinate identification a dimensionality reduction problem. Here, we study the opposite limit, in which hidden governing coordinates must be inferred from only a low-dimensional time series of measurements. Inspired by classical techniques for studying the strange attractors of chaotic systems, we introduce a general embedding technique for time series, consisting of an autoencoder trained with a novel latent-space loss function. We show that our technique reconstructs the strange attractors of synthetic and real-world systems better than existing techniques, and that it creates consistent, predictive representations of even stochastic systems. We conclude by using our technique to discover dynamical attractors in diverse systems such as patient electrocardiograms, household electricity usage, and eruptions of the Old Faithful geyser---demonstrating diverse applications of our technique for exploratory data analysis.

  • Inferring single-trial neural population dynamics using sequential auto-encoders

    Details!

    Neuroscience is experiencing a revolution in which simultaneous recording of thousands of neurons is revealing population dynamics that are not apparent from single-neuron responses. This structure is typically extracted from data averaged across many trials, but deeper understanding requires studying phenomena detected in single trials, which is challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. We introduce latent factor analysis via dynamical systems, a deep learning method to infer latent dynamics from single-trial neural spiking data. When applied to a variety of macaque and human motor cortical datasets, latent factor analysis via dynamical systems accurately predicts observed behavioral variables, extracts precise firing rate estimates of neural dynamics on single trials, infers perturbations to those dynamics that correlate with behavioral choices, and combines data from non-overlapping recording sessions spanning months to improve inference of underlying dynamics.
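
The entries above describe fairly elaborate models (VAE-SNE, LFADS); as a much smaller starting point, a minimal PyTorch autoencoder sketch (toy data, hypothetical layer sizes) that compresses 50-D observations into a 3-D code:

```python
import torch
import torch.nn as nn

# Toy data: 1000 samples of 50-D "activity" that actually lives near a 3-D subspace.
torch.manual_seed(0)
latent = torch.randn(1000, 3)
data = latent @ torch.randn(3, 50) + 0.1 * torch.randn(1000, 50)

class AutoEncoder(nn.Module):
    def __init__(self, n_in=50, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 16), nn.ReLU(), nn.Linear(16, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 16), nn.ReLU(), nn.Linear(16, n_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)   # reconstruction error
    loss.backward()
    optimizer.step()

print(loss.item())              # final reconstruction error
codes = model.encoder(data)     # 3-D compressed representation of the data
```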

Reinforcement Learning

  • See the book Reinforcement Learning by Sutton and Barto (a tabular Q-learning sketch follows this list)

  • How teaching AI to be curious helps machines learn for themselves

  • Deep active inference as variational policy gradients

    Details!

    Active Inference is a theory arising from theoretical neuroscience which casts action and planning as Bayesian inference problems to be solved by minimizing a single quantity — the variational free energy. The theory promises a unifying account of action and perception coupled with a biologically plausible process theory. However, despite these potential advantages, current implementations of Active Inference can only handle small policy and state–spaces and typically require the environmental dynamics to be known. In this paper we propose a novel deep Active Inference algorithm that approximates key densities using deep neural networks as flexible function approximators, which enables our approach to scale to significantly larger and more complex tasks than any before attempted in the literature. We demonstrate our method on a suite of OpenAIGym benchmark tasks and obtain performance comparable with common reinforcement learning baselines. Moreover, our algorithm evokes similarities with maximum-entropy reinforcement learning and the policy gradients algorithm, which reveals interesting connections between the Active Inference framework and reinforcement learning.

  • Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition

  • Deep Reinforcement Learning and Its Neuroscientific Implications

    Details!
    The emergence of powerful artificial intelligence (AI) is defining new research directions in neuroscience. To date, this research has focused largely on deep neural networks trained using supervised learning in tasks such as image classification. However, there is another area of recent AI work that has so far received less attention from neuroscientists but that may have profound neuroscientific implications: deep reinforcement learning (RL). Deep RL offers a comprehensive framework for studying the interplay among learning, representation, and decision making, offering to the brain sciences a new set of research tools and a wide range of novel hypotheses. In the present review, we provide a high-level introduction to deep RL, discuss some of its initial applications to neuroscience, and survey its wider implications for research on brain and behavior, concluding with a list of opportunities for next-stage research.
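
As a companion to the Sutton and Barto reference above, a minimal tabular Q-learning sketch (the chain environment is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy chain environment: states 0..4, actions left/right, reward only on reaching state 4.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) toward the bootstrapped target
        target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = next_state

print(np.round(Q, 2))   # moving right should dominate in every state
```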
    

General Neuroscience

  • Classic must-read neuroscience papers suggested by SfN (Society for Neuroscience). Broken down by topic.

  • The Cost of Cortical Computation, a breakdown of the energy cost of neural firing/spiking.

  • Computational Neuroscience: Mathematical and Statistical Perspectives

    Details!
    Mathematical and statistical models have played important roles in neuroscience, especially by describing the electrical activity of neurons recorded individually, or collectively across large networks. As the field moves forward rapidly, new challenges are emerging. For maximal effectiveness, those working to advance computational neuroscience will need to appreciate and exploit the complementary strengths of mechanistic theory and the statistical paradigm.
    

Books

Datasets

  • A list of open datasets that span EEG, MEG, ECoG, and LFP.

  • A large list of BCI resources including datasets, tutorials, papers, books etc.

  • The TUH EEG Corpus, a collection of several EEG datasets with accompanying resources. Requires filling out a form to download the data.

  • Project Tycho, named after Tycho Brahe. The project aims to share reliable, massive neural and behavioral data for understanding brain mechanisms.

    Details!

    Tycho Brahe was a Danish nobleman, astronomer, and writer known for his accurate and comprehensive astronomical observations. He was born in the then Danish peninsula of Scania. Tycho was well known in his lifetime as an astronomer, astrologer, and alchemist.

  • PhysioNet is a large database of different types of data, most of which can be easily downloaded.

  • Open Neuro, an initiative to encourage the sharing of neuroscience data.

Videos

  • Gradients of Brain Organization Workshop.

    Details!

    Description: Recent years have seen a rise of new methods and applications to study smooth spatial transitions — or gradients — of brain organization. Identification and analysis of cortical gradients provides a framework to study brain organization across species, to examine changes in brain development and aging, and to more generally study the interrelation between brain structure, function and cognition. We will bring together outstanding junior and senior scientists to discuss the challenges and opportunities afforded by this emerging perspective.

  • The 2017 Fisher Awards and Lecture given by Robert Kass

Jobs

List of job boards that update often and have neuro-related jobs.

  • Neuromodec gathers jobs in the fields of neuromodulation, engineering, neuroscience, and mental health.

  • Researchgate job board, usually listing jobs in academia around the world.

  • For vision and vision-related jobs and posts (industry and academia), sign up for the Vision List Mailing List. Note that the job board on their main site is not updated often, but researchers frequently send out job notifications through the mailing list.

Memes
