Frontiers in Neural Circuits · October 2015

Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in Neocortex

Jeff Hawkins, Subutai Ahmad

Numenta, Inc, Redwood City, California, United States of America

[paraphrase]

Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.

Excitatory neurons in the neocortex have thousands of excitatory synapses. The proximal synapses, those closest to the cell body, have a relatively large effect on the likelihood of a cell generating an action potential. However, a majority of the synapses are distal, or far from the cell body. The activation of a single distal synapse has little effect at the soma, and for many years it was hard to imagine how the thousands of distal synapses could play an important role in determining a cell's responses. We now know that dendrite branches are active processing elements. The activation of several distal synapses within close spatial and temporal proximity can lead to a local dendritic NMDA spike and consequently a significant and sustained depolarization of the soma. This has led some researchers to suggest that dendritic branches act as independent pattern recognizers. Yet, despite the many advances in understanding the active properties of dendrites, it remains a mystery why neurons have so many synapses and what their precise role is in memory and cortical processing.
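The idea of a dendritic branch as an independent pattern recognizer can be sketched in a few lines: a segment samples a subset of a sparse activity pattern, and an NMDA spike occurs when enough of its synapses are coincidentally active. The population size, synapse count, and threshold below are illustrative assumptions, not values taken from the text.

```python
import random

random.seed(0)

N_CELLS = 2048          # size of the sparse input population (assumed)
SEGMENT_SYNAPSES = 20   # synapses on one dendritic segment (assumed)
NMDA_THRESHOLD = 10     # coincident active synapses needed for a dendritic spike (assumed)

def make_segment(pattern):
    """A segment forms synapses onto a random subset of a stored pattern's cells."""
    return set(random.sample(sorted(pattern), SEGMENT_SYNAPSES))

def segment_active(segment, active_cells):
    """A dendritic NMDA spike occurs when enough of the segment's synapses see activity."""
    return len(segment & active_cells) >= NMDA_THRESHOLD

# A sparse pattern: 40 active cells out of 2048.
pattern = set(random.sample(range(N_CELLS), 40))
segment = make_segment(pattern)

print(segment_active(segment, pattern))    # the stored pattern triggers a spike

# An unrelated sparse pattern almost never overlaps 10 of the 20 synapses,
# which is why sparse codes make this detection robust.
unrelated = set(random.sample(range(N_CELLS), 40))
print(segment_active(segment, unrelated))
```

Because the threshold (10) is well below the synapse count (20), the segment still fires when some of the pattern's cells are missing, which is the noise robustness described above.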

Lacking a theory of why neurons have active dendrites, almost all artificial neural networks, such as those used in deep learning and spiking neural networks, use artificial neurons without active dendrites and with unrealistically few synapses, strongly suggesting that they are missing key functional aspects of real neural tissue. If we want to understand how the neocortex works and build systems that work on the same principles as the neocortex, we need an understanding of how biological neurons use their thousands of synapses and active dendrites. Of course, neurons cannot be understood in isolation. We also need a complementary theory of how networks of neurons, each with thousands of synapses, work together towards a common purpose.

In this paper we introduce such a theory. First, we show how a typical pyramidal neuron with active dendrites and thousands of synapses can recognize hundreds of unique patterns of cellular activity. We show that a neuron can recognize hundreds of patterns even in the presence of large amounts of noise and variability, as long as overall neural activity is sparse. Next we introduce a neuron model where the inputs to different parts of the dendritic tree serve different purposes. In this model the patterns recognized by a neuron's distal synapses are used for prediction. Each neuron learns to recognize hundreds of patterns that often precede the cell becoming active. The recognition of any one of these learned patterns acts as a prediction by depolarizing the cell without directly causing an action potential. Finally, we show how a network of neurons with this property will learn and recall sequences of patterns. The network model relies on depolarized neurons firing quickly and inhibiting other nearby neurons, thus biasing the network's activation towards its predictions. Through simulation we illustrate that the sequence memory network exhibits numerous desirable properties such as on-line learning, multiple simultaneous predictions, and robustness.
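The activation rule described above, where depolarized (predicted) cells fire first and inhibit their neighbors, can be sketched as follows. The grouping of cells into "columns" sharing feedforward input, and all names and sizes, are illustrative assumptions for this sketch.

```python
def activate(columns_with_input, predicted_cells, cells_per_column=32):
    """Return the set of (column, cell) pairs active this time step.

    columns_with_input: columns receiving feedforward drive now.
    predicted_cells: (column, cell) pairs depolarized by the previous step.
    """
    active = set()
    for col in columns_with_input:
        winners = {(col, c) for c in range(cells_per_column)
                   if (col, c) in predicted_cells}
        if winners:
            # Predicted cells fire quickly and inhibit the rest of the column,
            # so the input is represented only in its predicted context.
            active |= winners
        else:
            # Unanticipated input: every cell in the column fires ("bursting").
            active |= {(col, c) for c in range(cells_per_column)}
    return active

# Column 0 contained a correctly predicted cell; column 1 did not.
print(len(activate({0, 1}, {(0, 5)})))  # 1 + 32 = 33 active cells
```

This competitive step is what biases the network towards its predictions: when input matches a prediction, activity stays sparse; when it does not, the burst of all cells signals an unexpected input.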

Given the similarity of neurons throughout the neocortex and the importance of sequence memory for inference and behavior, we propose that sequence memory is a property of neural tissue throughout the neocortex and thus represents a new and important unifying principle for understanding how the neocortex works.

[end of paraphrase]