Bayesian Inference in Brain Functionality


Bayesian probability theory provides a framework for modeling how an observer should combine information from multiple cues and from prior knowledge about objects in the world to make perceptual inferences. (Doya, et al.; Bayesian Brain, 189)

The idea that an iterative algorithm is carried out in the thalamocortical loop has received experimental confirmation in observed oscillations. (Arbib, Handbook of Brain Theory; Mumford; Thalamus, 982)

Bayesian statistical procedures have been applied to the idea of actively creating, from memory, synthetic patterns that match the current stimulus as closely as possible. (Mumford; Thalamus, 983)


Research Study — Bayesian inference with probabilistic population codes

Research Study — Posterior Parietal Cortex represents Sensory History

Research Study — Brain Regions Involved in Decision-Making

Research Study — Prefrontal Cortex Human Reasoning

Research study — Bayesian Models in the Mind — our minds make inferences that appear to go far beyond the input data, which are sparse, noisy, and ambiguous.

Research study — Probabilistic Reasoning by Neurons — neurons in the lateral intraparietal area (LIP) have been shown to accumulate sensory information in a way that mimics a statistical decision process in the form of a log likelihood ratio (logLR).
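The accumulation described above can be sketched as a sequential probability ratio test: a running sum of per-sample logLRs is compared against a decision bound. A minimal sketch, assuming two Gaussian hypotheses for the evidence and a hypothetical evidence stream (all numbers illustrative, not from the study):

```python
def log_likelihood_ratio(x, mu_a=1.0, mu_b=-1.0, sigma=2.0):
    """logLR of one noisy evidence sample under two Gaussian hypotheses
    (equal variance, so the normalizing terms cancel)."""
    return (-((x - mu_a) ** 2) + (x - mu_b) ** 2) / (2 * sigma ** 2)

def accumulate_to_bound(samples, bound=3.0):
    """Running sum of logLRs until a decision bound is crossed
    (the sequential probability ratio test)."""
    total = 0.0
    for t, x in enumerate(samples, start=1):
        total += log_likelihood_ratio(x)
        if abs(total) >= bound:
            return ("A" if total > 0 else "B", t)
    return ("undecided", len(samples))

# Hypothetical evidence stream favoring hypothesis A (mean +1).
evidence = [0.5, 2.0, -1.0, 1.5, 3.0, 0.8, 1.2]
choice, n = accumulate_to_bound(evidence)
print(choice, n)  # the bound is reached on the fifth sample
```

With equal-variance Gaussians the logLR of each sample is linear in x, which is why a simple running sum of neural activity can implement the computation.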

Research study — Hippocampus Association Memory for Decisions — the hippocampus contributes to an automatic assessment of value, perhaps performing a function similar to Bayesian inference about value.

Research Study — Perception, Face Recognition


Brain operates as a Reality Emulator (Llinás; I of the Vortex, 13)

New actions are planned using knowledge of sensory information and past experiences. (Ratey; User's Guide to Brain, 165)

We often use semantic memory successfully by inferring the right answer. (Baddeley; Memory, 117)

Schemas provide the basis for us to draw inferences as we read or listen. (Baddeley; Memory, 129)

Because of the indeterminacy of rewards and risks, a decision is often the result of subjective probabilistic estimates of both reward and risk. (Fuster; Prefrontal Cortex, 191)

The estimates of both reward and risk may be unconscious, in which case the choice may be called intuitive, based on a so-called "gut feeling." Such a decision is emotionally biased. (Fuster; Prefrontal Cortex, 191)

In reality, there is no purely rational or purely emotional decision, as both reason and emotion play a role in all decisions. (Fuster; Prefrontal Cortex, 191)

Experiments that have quantitatively tested the prediction of Bayesian models of cue integration have largely supported the hypothesis that human observers are "Bayes optimal" in their interpretation of image data. (Doya, et al.; Bayesian Brain, 204)
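The standard quantitative prediction tested in these cue-integration experiments is inverse-variance weighting: each Gaussian cue is weighted by its reliability. A minimal sketch with hypothetical visual and haptic estimates (the numbers are illustrative, not from the cited experiments):

```python
def fuse_gaussian_cues(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two independent Gaussian cues:
    each cue's mean is weighted by its inverse variance."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # fused variance is smaller than either cue's
    return mu, var

# Hypothetical cues: vision says 10.0 cm (var 1.0), touch says 11.0 cm (var 4.0).
mu, var = fuse_gaussian_cues(10.0, 1.0, 11.0, 4.0)
print(mu, var)  # 10.2 0.8
```

The fused estimate sits closer to the more reliable cue, and its variance is lower than either input's, which is the signature of Bayes-optimal integration these experiments look for.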

We try to infer people's beliefs and desires from what they do, and try to predict what they will do from our guesses about their beliefs and desires. (Pinker; How the Mind Works, 30)

The mind is a “neural computer” fitted by natural selection with combinatorial algorithms for causal and probabilistic reasoning about plants, animals, objects, and people. (Pinker; How the Mind Works, 524)

Perception is a process of inference, an analysis of probabilities.  (Levitin; Your Brain on Music, 99)

Brain always tries to use the quickest appropriate pathway for the situation at hand. (Baars, Essential Sources in Scientific Consciousness; Crick & Koch; Consciousness and Neuroscience, 40)

A first and very important step in many pattern recognition and information processing tasks is the identification or construction of a reasonably small set of important features in which the essential information for the task is concentrated. (Arbib, Handbook of Brain Theory; Ritter; Self-Organizing Maps, 846)

Association is the most natural form of neural network computation. Neural networks can be thought of as pattern associators, which link an input pattern with the most appropriate output pattern. (Anderson; Associative Networks, 102)

Most scholars emphasize how the collective Gestalt-like traits of the brain and its networks are critical to understanding perception for consciousness. (Koch; Quest for Consciousness, 311)

Noise inherent in brain activity has a number of advantages by making the dynamics stochastic, which allows for many remarkable features of the brain, including creativity, probabilistic decision making, stochastic resonance, unpredictability, conflict resolution, symmetry breaking, allocation to discrete categories, and many of the important memory properties. (Rolls & Deco; Noisy Brain, 80)

A classical example of Bayesian inference is the Kalman filter, which has been extensively used in engineering, communication, and control over the past few decades. (Doya, et al.; Bayesian Brain, xi)
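The Kalman filter mentioned above repeatedly performs Bayesian updating: predict, then blend the prediction with a new measurement in proportion to their uncertainties. A minimal one-dimensional sketch tracking a constant quantity through noisy readings (all numbers hypothetical):

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict-update cycle of a 1-D Kalman filter (random-walk model).
    x, p: prior state estimate and its variance; z: new measurement;
    q: process noise variance; r: measurement noise variance."""
    # Predict: the state may drift, so uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1.0 - k) * p
    return x, p

# Hypothetical noisy readings of a quantity near 5.0, starting from a vague belief.
x, p = 0.0, 100.0
for z in [4.8, 5.3, 4.9, 5.1, 5.0]:
    x, p = kalman_step(x, p, z)
print(x, p)
```

After a few measurements the estimate converges near 5.0 and the posterior variance shrinks well below the measurement noise, the same prediction-correction cycle proposed for the thalamocortical loop.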

Humor and Bayesian Estimation

Via probabilities and associations already incorporated into the strengths and proximities in the network, the spreading activation has the capacity to take on the functional structure of a particular instantiation of a frame, with chains of nested conditional probabilities. (Hurley, Dennett, Adams; Inside Jokes, 102)

Music and Bayesian Estimation

As music unfolds, the brain constantly updates its estimates of when new beats will occur, takes satisfaction in matching a mental beat with a real-world one, and takes delight when a skillful musician violates that expectation in an interesting way. (Levitin; Your Brain on Music, 187)



Nature Neuroscience, published online 22 October 2006

Bayesian inference with probabilistic population codes

Wei Ji Ma, Jeffrey M. Beck, Peter E. Latham & Alexandre Pouget

Department of Brain and Cognitive Sciences, Meliora Hall, University of Rochester, Rochester, New York 14627, USA. Gatsby Computational Neuroscience Unit, 17 Queen Square, London WC1N 3AR, UK.


Recent psychophysical experiments indicate that humans perform near-optimal Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes' rule. At first sight, it would seem that the high variability in the responses of cortical neurons would make it difficult to implement such optimal statistical inference in cortical circuits. We argue that, in fact, this variability implies that populations of neurons automatically represent probability distributions over the stimulus, a type of code we call probabilistic population codes. Moreover, we demonstrate that the Poisson-like variability observed in cortex reduces a broad class of Bayesian inference to simple linear combinations of populations of neural activity. These results hold for arbitrary probability distributions over the stimulus, for tuning curves of arbitrary shape and for realistic neuronal variability.

Virtually all computations performed by the nervous system are subject to uncertainty, and taking this into account is critical for making inferences about the outside world. For instance, imagine hiking in a forest and having to jump over a stream. To decide whether or not to jump, you could compute the width of the stream and compare it to your internal estimate of your jumping ability. If, for example, you can jump 2 m and the stream is 1.9 m wide, then you might choose to jump. The problem with this approach, of course, is that you ignored the uncertainty in the sensory and motor estimates. If you can jump 2 ± 0.4 m and the stream is 1.9 ± 0.5 m wide, jumping over it is very risky—and even life-threatening if it is filled with, say, piranhas.
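The stream example can be made concrete: if jumping ability and stream width are independent Gaussian estimates, the probability of clearing the stream is determined by the distribution of their difference. A sketch using the paragraph's numbers:

```python
import math

def p_success(mu_jump, sd_jump, mu_gap, sd_gap):
    """P(jump distance > gap width) for independent Gaussian estimates.
    The difference is Gaussian with mean mu_jump - mu_gap and
    variance sd_jump**2 + sd_gap**2."""
    mu = mu_jump - mu_gap
    sd = math.sqrt(sd_jump ** 2 + sd_gap ** 2)
    # Standard normal CDF expressed via the error function.
    return 0.5 * (1.0 + math.erf(mu / (sd * math.sqrt(2.0))))

# Point estimates alone (2.0 m vs 1.9 m) suggest the jump clears.
# With the stated uncertainties, success is barely better than a coin flip.
print(round(p_success(2.0, 0.4, 1.9, 0.5), 2))  # 0.56
```

A 0.1 m margin that looks safe as a point estimate yields only about a 56% chance of success once both uncertainties are taken into account, which is the paragraph's point.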

Behavioral studies have confirmed that human observers not only take uncertainty into account in a wide variety of tasks, but do so in a way that is nearly optimal (where 'optimal' is used in a Bayesian sense). This has two important implications. First, neural circuits must represent probability distributions. For instance, in our example, the width of the stream could be represented in the brain by a Gaussian distribution with mean 1.9 m and s.d. 0.5 m. Second, neural circuits must be able to combine probability distributions nearly optimally, a process known as Bayesian inference.

Although it is clear experimentally that human behavior is nearly Bayes-optimal in a wide variety of tasks, very little is known about the neural basis of this optimality. In particular, we do not know how probability distributions are represented in neuronal responses, nor how neural circuits implement Bayesian inference. At first sight, it would seem that cortical neurons are not well suited to this task, as their responses are highly variable: the spike count of cortical neurons in response to the same sensory variable (such as the direction of motion of a visual stimulus) or motor command varies greatly from trial to trial, typically with Poisson-like statistics. It is critical to realize, however, that variability and uncertainty go hand in hand: if neuronal variability did not exist, that is, if neurons were to fire in exactly the same way every time you saw the same object, then you would always know with certainty what object was presented. Thus, uncertainty about the width of the river in the above example is intimately related to the fact that neurons in the visual cortex do not fire in exactly the same way every time you see a river that is 2 m wide. This variability is partly due to internal noise (like stochastic neurotransmitter release), but the potentially more important component arises from the fact that rivers of the same width can look different, and thus give rise to different neuronal responses, when viewed from different distances or vantage points.

Neural variability, then, is not incompatible with the notion that humans can be Bayes-optimal; on the contrary, as we have just seen, neural variability is expected when subjects experience uncertainty. What is not clear, however, is exactly how optimal inference is achieved given the particular type of noise—Poisson-like variability—observed in the cortex. Here we show that Poisson-like variability makes a broad class of Bayesian inferences particularly easy. Specifically, this variability has a unique property: it allows neurons to represent probability distributions in a format that reduces optimal Bayesian inference to simple linear combinations of neural activities.
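The linear-combination claim can be illustrated numerically. Assuming independent Poisson neurons with dense Gaussian tuning curves (so the summed tuning term is roughly constant over stimuli and can be dropped from the log likelihood), the posterior computed from the sum of two population responses matches the normalized product of the two individual posteriors. All tuning parameters and spike counts below are hypothetical:

```python
import math

S = range(20)   # stimulus values
N = 20          # one neuron preferring each value

def f(i, s, gain=10.0, width=2.0):
    """Hypothetical Gaussian tuning curve of neuron i at stimulus s."""
    return gain * math.exp(-((s - i) ** 2) / (2 * width ** 2))

def log_posterior(r):
    """Posterior over s (flat prior) for independent Poisson counts r.
    With sum_i f_i(s) ~ constant, log p(r|s) = sum_i r_i log f_i(s) + const."""
    lp = [sum(r[i] * math.log(f(i, s)) for i in range(N)) for s in S]
    m = max(lp)  # subtract the max before exponentiating, for stability
    z = [math.exp(v - m) for v in lp]
    total = sum(z)
    return [v / total for v in z]

# Two hypothetical population responses (spike counts) to the same stimulus.
r1 = [0, 0, 0, 1, 2, 5, 8, 9, 7, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0]
r2 = [0, 0, 0, 0, 1, 4, 9, 10, 6, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Optimal cue combination: normalized product of the two posteriors...
combined = [a * b for a, b in zip(log_posterior(r1), log_posterior(r2))]
norm = sum(combined)
combined = [v / norm for v in combined]

# ...equals the posterior from simply SUMMING the two activity patterns.
summed = log_posterior([a + b for a, b in zip(r1, r2)])
print(max(abs(a - b) for a, b in zip(combined, summed)))
```

The two posteriors agree to numerical precision: under Poisson-like statistics, multiplying probability distributions reduces to adding spike counts, which is the linear operation the authors argue cortex can perform.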

Our notion of probabilistic population codes offers a new perspective on the role of Poisson-like variability. The presence of such variability throughout the cortex suggests that the entire cortex represents probability distributions, not just estimates, which is precisely what would be expected from a Bayesian perspective. We propose that these distributions are collapsed onto estimates only when decisions are needed, a process that may take place in motor cortex or in subcortical structures. Notably, our previous work shows that attractor dynamics in these decision networks could perform this step optimally by computing maximum a posteriori estimates.

(end of paraphrase)



Return to — Reentry and Recursion

Return to — Working Memory

Link to — Brain Functions as a Reality Emulator

Link to — Scientific Thought Patterns begin in Infancy

Further discussion — Covington Theory of Consciousness