Local Neocortical Circuitry
We study neurons and their synaptic connections within volumes of ~500 μm³ in murine neocortex, primarily using multiphoton or miniscope imaging of multineuronal dynamics. Because most connections in neocortex are made between neighboring neurons, this is the scale over which (1) most excitatory and inhibitory interactions take place and (2) synaptic connections strengthen or weaken according to the relative spike timing of pre- and postsynaptic neurons, meaning that this is also the scale at which Hebbian learning occurs. Consequently, local circuitry is the scale at which the majority of the algorithms underlying cortical computation are implemented, and therefore the ideal scale at which to reveal the rules that govern information processing in neocortex.
Network science draws on statistics, physics, and computer science to study large complex systems. A complex system is made tractable by reducing it to a weight matrix based on its connections, or on more abstract interactions such as correlations. It is then possible to quantitatively compare and contrast networks, probe them for specific structural features, model information transmission, or employ them as encoding models. We summarize the statistical dependencies in spiking activity between pairs of neocortical neurons across a population as weighted, directed networks, using a variety of algorithmic approaches.
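As a minimal sketch of this reduction (the toy spike-count matrix, lag, and threshold below are all illustrative choices, not our actual pipeline), pairwise lagged correlations can be collected into a weighted, directed adjacency matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spike-count matrix: n_neurons x n_timebins (stand-in for real data)
n_neurons, n_bins = 20, 1000
counts = rng.poisson(0.5, size=(n_neurons, n_bins)).astype(float)

def lagged_correlation_graph(counts, lag=1, threshold=0.05):
    """Weighted, directed functional network: edge i -> j is the Pearson
    correlation between neuron i's activity and neuron j's activity `lag`
    bins later, kept only if it exceeds `threshold`."""
    x = counts[:, :-lag]             # sources at time t
    y = counts[:, lag:]              # targets at time t + lag
    xz = (x - x.mean(1, keepdims=True)) / x.std(1, keepdims=True)
    yz = (y - y.mean(1, keepdims=True)) / y.std(1, keepdims=True)
    corr = xz @ yz.T / x.shape[1]    # corr[i, j] = corr(x_i(t), y_j(t+lag))
    np.fill_diagonal(corr, 0.0)      # no self-edges
    return np.where(corr > threshold, corr, 0.0)

W = lagged_correlation_graph(counts)
print(W.shape)   # weighted, directed adjacency matrix, one row per neuron
```

The lag makes the graph directed (who leads whom), and the threshold sparsifies it; both are free parameters that real analyses must justify.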
Higher-order patterns in both structure and activity have been reported to be intrinsic features of neocortex (Yu, 2011) and may be key to a fundamental understanding of neuronal networks. Synaptic connectivity displays a prevalence of specific triplet motifs (Song, 2005) and cliques of neurons (Reimann, 2017). Propagating activity in real neuronal networks has small-world characteristics and elevated clustering, and is dominated by specific triplet motifs (Chambers, 2016). Triplet correlations may also enhance coding (Cayco-Gajic, 2015) and are tied to the prediction of responses in primary visual cortex (Dechery, 2018). For these reasons, we analyze weighted and directed triplet motifs (Fagiolo, 2007) in many studies in the lab.
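An unweighted version of this motif counting can be sketched with matrix products (the random adjacency matrix below is a stand-in for measured connectivity; Fagiolo's 2007 treatment extends such counts to weighted, directed graphs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary directed adjacency for a small random network (illustrative)
n = 50
A = (rng.random((n, n)) < 0.1).astype(int)
np.fill_diagonal(A, 0)

def triplet_motif_counts(A):
    """Counts of two directed triplet motifs via matrix products:
    cyclic triangles (i->j->k->i) and feed-forward triangles
    (i->j, j->k, i->k)."""
    A2 = A @ A
    cyclic = int(np.trace(A2 @ A)) // 3   # each 3-cycle is counted 3x
    feedforward = int((A2 * A).sum())     # path i->j->k closed by edge i->k
    return cyclic, feedforward

cyc, ff = triplet_motif_counts(A)
print(cyc, ff)
```

Comparing such counts against degree-matched null models is what turns raw counts into evidence of motif over- or under-representation.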
Single Trial Circuit Representation of Sensory Input
Perception and behavior take place in real time, so any encoding model of stimulus representations in cortex must encompass single-trial responses. Sensory neurons exhibit substantial trial-to-trial variability, yet the population robustly encodes sensory information in single trials. Understanding representation and computation in sensory cortices thus requires investigating how single-trial activity is explained by (1) shared variability between neurons, (2) trial-averaged responses to stimuli, and (3) global fluctuations in population activity. Pairwise correlations that arise during circuit activity can be summarized as weights in a functional network. Using these weights in a simple linear encoding model, we can near-optimally predict moment-to-moment neuronal activity: the neighbors of a neuron, both tuned and untuned, predict individual neuron activity on a single-trial basis and also recapitulate the average tuning properties of tuned neurons. Moreover, we identified a specific triplet motif that improved predictions of single-trial responses; only by studying the functional network were we able to find this emergent signature of informative correlations. We have also used these measured, rather than fitted, weights as coupling coefficients in a generalized linear model (GLM) encoding model, allowing us to account for stimulus and brain state. We find not only that trial-averaged tuning properties and global fluctuations are insufficient to explain single trials, but further that a neuron's single-trial dynamics are entirely determined by the coactivity of its functional group (i.e., its neighbors in the functional graph). With this result, we can systematically interrogate the functional group itself to identify which subsets of neurons are most informative for prediction.
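A bare-bones illustration of the linear encoding idea, assuming a toy activity matrix and using clipped correlations as a stand-in for measured functional weights, predicts one neuron's time course from a weighted average of its functional neighbors:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy population activity: n_neurons x n_timebins (stand-in for imaging data)
n, t = 30, 500
activity = rng.poisson(1.0, size=(n, t)).astype(float)

def predict_from_neighbors(activity, W, target):
    """Predict one neuron's time course as the W-weighted average of its
    functional neighbors' activity; W plays the role of measured (not
    fitted) functional weights."""
    w = W[:, target].copy()
    w[target] = 0.0                       # exclude the neuron itself
    return w @ activity / w.sum()

# Illustrative weights: clipped correlations as a stand-in for measured ones
W = np.corrcoef(activity)
np.fill_diagonal(W, 0.0)
W = np.clip(W, 0.0, None)

pred = predict_from_neighbors(activity, W, target=0)
print(pred.shape)   # one predicted time course, same length as the recording
```

The key design point is that the weights are measured from coactivity rather than fit to minimize prediction error, so good predictions are evidence that the functional graph itself carries information.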
By building variants of the GLM, we find that the prediction-relevant units in a neuron’s functional group tend to (1) be highly correlated with the neuron, (2) be coactive within a 30 ms time window, and (3) share tuning properties. Then, using a Bayesian decoder, we decode single trials more accurately with knowledge of the functional group than when treating neurons as independent units, suggesting that functional groups are relevant to sensory computation. Using the GLM and functional graph framework, we have shown that sensory cortex is organized into computationally relevant functional groups which accurately capture single-trial dynamics without the need for trial-averaged tuning curves. Thus, predicting neuronal activity at any moment requires both knowledge of external variables (here, the visual stimulus) and knowledge of the functional interrelationships between neurons. Of course, we have only begun to scratch the surface of the question of single-trial representations.
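The independent-units baseline in such a decoding comparison can be sketched as a naive Bayes (independent Poisson) decoder on toy data; the rates, neuron count, and trial count below are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 2 stimuli, 100 trials each, 15 neurons with stimulus-dependent rates
n_neurons, n_trials = 15, 100
rates = rng.uniform(1.0, 5.0, size=(2, n_neurons))    # tuning per stimulus
trials = np.stack([rng.poisson(rates[s], size=(n_trials, n_neurons))
                   for s in (0, 1)])                  # (2, trials, neurons)

def poisson_naive_bayes(counts, rates):
    """Independent-neuron Bayesian decoder: log-likelihood of each stimulus
    under independent Poisson firing (the constant log k! term is dropped
    because it does not depend on the stimulus)."""
    loglik = counts @ np.log(rates).T - rates.sum(1)  # (trials, stimuli)
    return loglik.argmax(1)

correct = 0
for s in (0, 1):
    correct += (poisson_naive_bayes(trials[s], rates) == s).sum()
acc = correct / (2 * n_trials)
print(acc)
```

A functional-group-aware decoder would replace the independence assumption with coupling terms; the comparison of interest is how much accuracy that coupling adds over this baseline.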
The Role of Structure in Sustained Activity
Neocortical architecture supports many complex functions, yet before any of these can occur, signals must travel within circuits and between regions. Activity propagation is the most basic function that arises from the structure of synaptic connectivity in the brain. Given that synapses are weak and unreliable, and that connections are simultaneously sparse and recurrent, stable activity propagation is highly non-trivial. We seek to understand how neocortical networks sustain activity using low-rate, sparsely connected spiking neural network models. Our models are constructed to closely approximate the statistics of activity in neocortex, yet low-rate network simulations sometimes spontaneously cease to perpetuate spikes. To understand how failed simulations differ from sustained ones, we examine how the representation of higher-order interactions, specifically triplet motifs, evolves in the network over time. This project builds on work that began in the lab with Chambers & MacLean (2015). Finally, we seek to verify our modeling results in vivo using two-photon calcium imaging of activity in a large neuronal population in layer 2/3 of mouse primary visual cortex.
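A minimal illustration of this kind of simulation (parameters are arbitrary stand-ins, not our model's) is a sparsely connected leaky integrate-and-fire network whose population spike count either perpetuates or dies out:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse, recurrent LIF network: n units, ~10% connectivity, weak synapses
n, steps, dt = 200, 500, 1.0            # dt in ms
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
W = (rng.random((n, n)) < 0.1) * rng.uniform(0.0, 0.1, size=(n, n))
np.fill_diagonal(W, 0.0)                # no autapses

v = rng.uniform(0.0, 1.2, size=n)       # random start; some units above threshold
spiking = v >= v_thresh                 # initial kick to the network
spike_counts = np.zeros(steps)
for t in range(steps):
    # leaky integration plus recurrent input from the previous step's spikes
    v += dt / tau * (-v) + W @ spiking
    spiking = v >= v_thresh
    v[spiking] = v_reset
    spike_counts[t] = spiking.sum()

# A "failed" run shows up as spike_counts falling to zero and staying there
print(int(spike_counts.sum()))
```

In our actual work the question is not merely whether activity survives, but how motif statistics of the active subnetwork differ between runs that sustain and runs that fail.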
Training Spiking Neural Networks
Several groups have recently succeeded in training spiking neural networks (SNNs) to perform benchmark tasks previously achieved only by rate-based artificial neural networks. Trained SNNs are a promising avenue for studying brain computation, particularly in concert with simultaneous recording of hundreds of neurons in awake brains. Together they enable rigorous comparison between multineuronal data and trained models, and our lab is versed in both spiking models and in vivo analyses. We are building multilayered spiking networks that mirror the laminar structure of neocortex and training them on visual benchmark tasks. The networks are designed specifically with the structural connectivity statistics of neocortex in mind, and we aim to train the models with the learning rules of neocortex as well. We are also using two-photon microscopy to image multineuronal activity in layer 2/3 of mouse primary visual cortex, to compare with and inform the trained SNNs. In so doing, we seek to understand how the organization and learning rules of neocortex are suited to spike-based computation.
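The core trick behind several of these training methods, replacing the zero-almost-everywhere derivative of the spike threshold with a smooth surrogate, can be illustrated on a toy single-step task (this sketch is not our model or any published benchmark network):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy surrogate-gradient illustration: one layer of threshold units trained
# so that every unit spikes for a fixed binary input pattern. The hard
# threshold has zero gradient almost everywhere, so the update uses a
# boxcar surrogate derivative in its place.
n_in, n_out, lr = 40, 5, 0.05
x = (rng.random(n_in) < 0.3).astype(float)   # binary input "spikes"
W = rng.uniform(0.0, 0.1, size=(n_out, n_in))
thresh = 1.0

for _ in range(50):
    v = W @ x                                # membrane potential (one step)
    spikes = (v >= thresh).astype(float)     # non-differentiable threshold
    err = 1.0 - spikes                       # target: every unit spikes
    surrogate = (np.abs(v - thresh) < 1.0).astype(float)  # boxcar dH/dv
    W += lr * np.outer(err * surrogate, x)   # surrogate-gradient update

print(int(spikes.sum()))
```

Real surrogate-gradient training unrolls this logic through time (backpropagation through the membrane dynamics); the single-step version above isolates only the threshold-derivative substitution.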