
Dopamine Prediction-Error Circuitry

 

Nature 525, 243–246 (10 September 2015)

Arithmetic and local circuitry underlying dopamine prediction errors

Neir Eshel, Michael Bukwich, Vinod Rao, Vivian Hemmelder, Ju Tian & Naoshige Uchida

Center for Brain Science, Department of Molecular and Cellular Biology, Harvard University, Cambridge, Massachusetts 02138, USA

[Paraphrase]

Dopamine neurons are thought to facilitate learning by comparing actual and expected reward. Despite two decades of investigation, little is known about how this comparison is made. To determine how dopamine neurons calculate prediction error, we combined optogenetic manipulations with extracellular recordings in the ventral tegmental area while mice engaged in classical conditioning. Here we demonstrate, by manipulating the temporal expectation of reward, that dopamine neurons perform subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain. Furthermore, selectively exciting and inhibiting neighbouring GABA (γ-aminobutyric acid) neurons in the ventral tegmental area reveals that these neurons are a source of subtraction: they inhibit dopamine neurons when reward is expected, causally contributing to prediction-error calculations. Finally, bilaterally stimulating ventral tegmental area GABA neurons dramatically reduces anticipatory licking to conditioned odours, consistent with an important role for these neurons in reinforcement learning. Together, our results uncover the arithmetic and local circuitry underlying dopamine prediction errors.

Associative learning depends on comparing predictions with outcomes. When outcomes match predictions, learning is not required. When outcomes violate predictions, animals must update their predictions to reflect experience. Dopamine neurons are thought to promote this process by encoding reward prediction error, or the difference between the reward an animal receives and the reward it expected to receive.
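In the standard reinforcement-learning sketch of this idea (textbook notation, not equations taken from the paper), the prediction error is δ = r − V, where r is the reward received and V is the reward predicted, and learning updates the prediction as V ← V + α·δ with learning rate α. When the outcome matches the prediction, δ = 0 and the prediction is left unchanged; when it does not, the prediction shifts toward what actually happened.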

Despite extensive study, how dopamine neurons calculate prediction error remains largely unknown. Theories of reinforcement learning predict that dopamine neurons perform subtraction, simply calculating actual reward minus predicted reward (or, in temporal difference models, the value of the current state minus the value of the previous state). However, dopamine neurons could also perform division, an equally fundamental and arguably more common neural computation. The arithmetic underlying prediction errors has never been investigated.
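To make the two candidate computations concrete (illustrative notation, not the paper's own equations): a subtractive code reports δ = r − V, whereas a divisive code reports something like δ = r / (V + k), with k a constant that keeps the denominator from vanishing when no reward is expected. In the temporal-difference version of the subtractive scheme, the error at time t is δ_t = r_t + γ·V(s_t) − V(s_t−1): any reward delivered plus the (discounted) value of the current state, minus the value of the previous state.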

To probe how dopamine neurons calculate prediction error, we recorded from the ventral tegmental area (VTA) while mice (n = 5) performed a classical conditioning task with two interleaved trial types. In roughly half of the trials, we delivered reward unexpectedly, in the absence of any cue.

Our study provides the first direct evidence for the arithmetic of dopamine prediction errors. Subtraction is an ideal process for prediction-error coding because it maintains a faithful separation between expected and unexpected rewards, even at the extremes of reward size. Indeed, most, if not all, models of reinforcement learning have used subtraction to compute prediction error. However, although cortical pyramidal neurons appear capable of subtracting GABA input, and modelling studies have explored the biophysics of this process, surprisingly few examples of subtraction have been observed in natural settings in vivo. Our finding that reward expectation reduces dopamine reward responses in a purely subtractive manner sheds light on how such a computation can emerge from a network of neurons, and may provide a framework for other prediction-related processes in the brain.
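A toy numerical example (ours, for illustration; not data from the paper) shows why. Suppose a cue sets up an expectation worth V = 2. Under subtraction, an unexpected reward of size r drives a response proportional to r, while the same reward when expected drives r − 2, so the gap between the two responses is 2 whether r is 1 or 100. Under division by the expectation, the expected-reward response would be roughly r / 2, and the gap r − r/2 = r/2 grows with reward size, distorting the comparison at the extremes. A constant offset across all reward sizes is consistent with the purely subtractive effect described above.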

[End of Paraphrase]

 
