Dopamine neurons are thought to promote learning by signaling prediction errors, that is, the difference between actual and expected outcomes. Whether these signals are sufficient for associative learning, however, remains untested. A recent study used optogenetics in a classic behavioral paradigm to confirm the role of dopamine prediction errors in learning.

To adapt to their environment, animals must learn which stimuli to approach and which to avoid. This type of learning has a rich pedigree in psychology, from the slobbering of Pavlov's dogs to the pecking of Skinner's pigeons. Associative learning is more complicated than simply pairing a cue with an outcome, however. In the 1960s, Kamin [1] reported that if stimulus A already predicts an outcome, then combining A with a new stimulus X fails to trigger an association between X and the outcome (Figure 1). This phenomenon, called blocking, revealed a previously overlooked component of learning: surprise. For associations to form, the outcome must be different than expected. Indeed, this difference between actual and predicted outcome – known as prediction error – has since appeared in a bewildering array of studies, ranging from animal learning to computer science.

Figure 1. Dopamine prediction errors in associative learning. In the classic 'blocking' paradigm, an animal learns to associate a cue (here the sound A) with an outcome (a chunk of meat for a hungry dog). After many pairings of cue and outcome, the animal mounts a conditioned response (salivation) to the cue. Next, the original cue (A) is combined with another cue (the light X), and this compound cue (AX) is paired with a reward. Finally, the animal is presented with the new cue (X) alone. If simple pairing of cue and reward were sufficient for learning, the animal should respond to X much as it responded to A. But it does not: learning about X has been 'blocked' by A. This effect demonstrates the importance of predictions in associative learning – when a reward is fully predicted (in this case by cue A), further learning does not occur. What is the neural mechanism for blocking? Dopamine neurons are known to signal prediction errors: they respond to unexpected rewards more than to expected rewards.
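Blocking falls out naturally of error-correcting learning models such as Rescorla and Wagner's [6]: once cue A fully predicts the reward, the prediction error on compound trials is near zero, so cue X gains no strength. A minimal Python sketch of this update illustrates the effect; the learning rate and trial counts are illustrative assumptions, not values from any study.

```python
# Minimal Rescorla-Wagner simulation of the blocking effect.
# V[cue] holds the associative strength of each cue; on every trial the
# prediction error is delta = reward - (summed strength of cues present),
# and each present cue is updated by alpha * delta.

ALPHA = 0.3  # learning rate (illustrative value)

def train(V, cues, reward, n_trials, alpha=ALPHA):
    for _ in range(n_trials):
        delta = reward - sum(V[c] for c in cues)  # prediction error
        for c in cues:
            V[c] += alpha * delta
    return V

V = {"A": 0.0, "X": 0.0}
train(V, ["A"], reward=1.0, n_trials=50)       # phase 1: A alone -> reward
train(V, ["A", "X"], reward=1.0, n_trials=50)  # phase 2: compound AX -> reward

# A approaches 1.0; X stays near 0.0 because delta ~ 0 in phase 2: blocked.
print(round(V["A"], 2), round(V["X"], 2))
```

Because the reward is already fully predicted by A when the compound is introduced, the error term that would drive learning about X is absent.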

How are prediction errors encoded in the brain? In the 1990s, Schultz and colleagues recorded from midbrain dopamine (DA) neurons in monkeys and found that these neurons are excited by unexpected reward, unaltered by expected reward, and inhibited when expected reward is omitted [2]. In other words, they encode a bidirectional reward prediction error, exactly as predicted by theory. Furthermore, this error signal varies with reward magnitude, probability, and timing, and develops in lock-step with behavior, all pointing to a possible contribution of dopamine to conditioning. Correlation does not imply causation, however; does this prediction error signal actually trigger learning? More specifically, is blocking (the lack of association with X) caused by the reduced DA response when stimulus A already predicts reward? With cell-type-specific, temporally precise methods to manipulate neuronal activity, this prediction can now be tested. Over the past five years, several groups have used channelrhodopsin-2 (ChR2), a light-gated cation channel, to demonstrate that activating DA neurons can reinforce behavior [3]. The importance of the prediction error response, however, remained unclear. In a recent paper, Steinberg et al. [4] combined optogenetics with the blocking paradigm to fill this gap.

Steinberg et al. [4] established the blocking effect in a three-phase task. In the first phase, rats were trained to associate auditory cue A with sucrose delivery in a reward port. In the second phase, visual cue X was presented together with auditory cue A, again followed by sucrose. Finally, in the third phase (test phase), cue X was presented alone, in the absence of sucrose. The amount of time the rats spent in the reward port during the cue was used as the conditioned response. As expected, rats showed almost no response to cue X in the test phase – cue A had blocked learning to X. How did this happen? Steinberg et al. hypothesized that blocking occurred because DA neurons failed to respond to the fully predicted outcome. If this is true, then stimulating DA neurons during the reward should unblock learning, causing rats to respond to cue X. This is exactly what they found.
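The logic of the unblocking prediction can be sketched by extending the same error-correcting update with a stimulation term: if optogenetic activation adds a positive component to the prediction error at reward time, cue X can acquire strength even though the reward is already predicted by A. The `stim` bonus below is a hypothetical stand-in for the laser, not a quantity measured by Steinberg et al.

```python
# Sketch of the unblocking logic under a Rescorla-Wagner-style update:
# DA stimulation at reward time is modeled as an additive bonus to the
# prediction error on compound (AX) trials.  All values are illustrative.

ALPHA = 0.3  # learning rate (illustrative value)

def train(V, cues, reward, n_trials, stim=0.0, alpha=ALPHA):
    for _ in range(n_trials):
        delta = reward + stim - sum(V[c] for c in cues)
        for c in cues:
            V[c] += alpha * delta
    return V

blocked = {"A": 1.0, "X": 0.0}          # after phase 1, A fully predicts reward
train(blocked, ["A", "X"], 1.0, 50)     # no stimulation: delta ~ 0, X stays blocked

unblocked = {"A": 1.0, "X": 0.0}
train(unblocked, ["A", "X"], 1.0, 50, stim=0.5)  # 'laser' restores delta > 0

# X remains near 0 without stimulation but gains strength with it.
print(round(blocked["X"], 2), round(unblocked["X"], 2))
```

In this toy model, stimulation reinstates the error signal that blocking had eliminated, so learning about X proceeds – mirroring the behavioral unblocking the authors observed.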

Taking advantage of transgenic rats that expressed Cre recombinase under the control of the tyrosine hydroxylase promoter, Steinberg et al. [4] expressed ChR2 specifically in DA neurons in the ventral tegmental area. They then used a laser to stimulate DA neurons whenever the rats received sucrose. As expected, these rats – but not their wild-type littermates – showed a conditioned response to cue X. This result indicates that blocking – the diminished power of a predicted reward to cause associations – is due to a reduced DA response, providing strong evidence that DA regulates prediction error-based learning.

In a complementary experiment, Steinberg et al. [4] also tested whether the dip in DA neuron activity that occurs when the reward is smaller than expected is necessary to weaken the association between stimulus and outcome. After training the rats to associate a cue with sucrose reward, sucrose was either replaced with water (a less rewarding outcome) or omitted altogether. In this case, DA neurons usually reduce their firing and the animal stops responding to the cue. But when Steinberg et al. [4] activated DA neurons during the presentation of the new, worse outcome, rats maintained their response to the cue for a significantly longer period of time. Together with previous studies that inhibited DA neurons to cause avoidance learning [5], these results support the conclusion that the DA dip is involved in learning from worse-than-predicted outcomes. It remains to be established whether the brief dip (<500 ms) that DA neurons generally exhibit is sufficient for learning, as previous studies used longer periods of DA inhibition.
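The same toy model captures this complementary result: omitting the reward makes the prediction error negative (the DA 'dip'), which extinguishes the association, whereas cancelling the negative error preserves responding. Clamping the error at zero below is a crude, assumed stand-in for stimulating DA neurons at outcome time, not the authors' analysis.

```python
# Sketch of learning from worse-than-expected outcomes.  When the reward is
# omitted, delta = 0 - v is negative and associative strength v decays
# (extinction).  Clamping delta at zero models cancelling the DA dip by
# stimulation; the response is then maintained.  Values are illustrative.

ALPHA = 0.3  # learning rate (illustrative value)

def extinguish(v, n_trials, clamp=False, alpha=ALPHA):
    for _ in range(n_trials):
        delta = 0.0 - v              # reward omitted: negative prediction error
        if clamp:
            delta = max(delta, 0.0)  # 'stimulation' cancels the dip
        v += alpha * delta
    return v

print(round(extinguish(1.0, 20), 2))              # ~0.0: response extinguished
print(round(extinguish(1.0, 20, clamp=True), 2))  # 1.0: response maintained
```

In this sketch, removing the negative error term is sufficient to abolish extinction, paralleling the rats that kept responding when DA neurons were stimulated during the worse outcome.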

Steinberg et al.'s [4] results may also speak to a longstanding debate in the field about the origin of blocking. In particular, does blocking occur because an unsurprising outcome cannot support conditioning [1,6] or because animals' attention is drawn to already learned cues [7,8]? In other words, is the crucial component the cue or the outcome? The fact that DA stimulation at the time of reward is sufficient to cause unblocking suggests that it may be the outcome, rather than the cue, that explains blocking. Indeed, because DA neurons respond similarly to the single cue A and the compound AX [9], it is unlikely that DA cue responses can control attention selectively to one cue vs. the other. For a stronger test of this hypothesis, future studies may wish to temporally separate the cue from the outcome and test whether DA stimulation is more effective at one time than the other (note that in Steinberg et al. [4], the cue was still present as the rats received their rewards). Experimenters could also measure the animals' attention to stimulus X (e.g., their orienting response) to ensure that attention is not altered by stimulation.

Knowing that DA prediction-error signaling can cause learning raises a new set of questions. How do DA neurons calculate prediction error [10]? Which of their projection targets are crucial for learning, and how does DA facilitate this learning? What other pathways might the brain use in parallel? We now have the tools to answer all of these questions, and finally connect venerable psychological theories with the 'black box' inside the brain.

Acknowledgments
NE was supported by NIGMS T32GM007753 and the Sackler Scholar Programme in Psychobiology. Supported in part by NIH grant 5R01MH095953.

References
1. Kamin, L. Selective association and conditioning.
2. Schultz, W. et al. A neural substrate of prediction and reward.
3. Steinberg, E.E. and Janak, P.H. Establishing causality for dopamine in neural function and behavior with optogenetics.
4. Steinberg, E.E. et al. A causal link between prediction errors, dopamine neurons and learning.
5. Tan, K.R. et al. GABA neurons of the VTA drive conditioned place aversion.
6. Rescorla, R.A. and Wagner, A.R. A theory of Pavlovian conditioning: variations in the effectiveness of reinforcement and nonreinforcement.
7. Mackintosh, N.J. A theory of attention: variations in the associability of stimuli with reinforcement.
8. Pearce, J.M. and Hall, G. A model for Pavlovian learning: variations in the effectiveness of conditioned but not of unconditioned stimuli.
9. Waelti, P. et al. Dopamine responses comply with basic assumptions of formal learning theory.
10. Cohen, J.Y. et al. Neuron-type-specific signals for reward and punishment in the ventral tegmental area.

DOI: https://doi.org/10.1016/j.tics.2013.06.010. Copyright © 2013 Elsevier Ltd. Published by Elsevier Inc. All rights reserved.